Paper: Convolutional Normalizing Flows (arXiv:1711.02255)
Authors: Guoqing Zheng, Yiming Yang, Jaime Carbonell
Categories: cs.LG | Comment: ICML 2018 Workshop on Theoretical Foundations and Applications of Deep Generative Models | Published: 2017-11-07 | Updated: 2018-07-09 | Source: http://arxiv.org/pdf/1711.02255
# Convolutional Normalizing Flows
# Guoqing Zheng 1 Yiming Yang 1 Jaime Carbonell 1
# Abstract
Bayesian posterior inference is prevalent in various machine learning problems. Variational inference provides one way to approximate the posterior distribution, however its expressive power is limited and so is the accuracy of the resulting approximation. Recently, there has been a trend of using neural networks to approximate the variational posterior distribution due to the flexibility of neural network architectures. One way to construct a flexible variational distribution is to warp a simple density into a complex one by normalizing flows, where the resulting density can be analytically evaluated. However, there is a trade-off between the flexibility of a normalizing flow and the computation cost of an efficient transformation. In this paper, we propose a simple yet effective architecture of normalizing flows, ConvFlow, based on convolution over the dimensions of the random input vector. Experiments on synthetic and real world posterior inference problems demonstrate the effectiveness and efficiency of the proposed method.
# 1. Introduction
Posterior inference is the key to Bayesian modeling, where we are interested in how our belief over the variables of interest changes after observing a set of data points. Predictions can also benefit from Bayesian modeling, as every prediction is equipped with a confidence interval representing how sure the prediction is. Compared to the maximum a posteriori estimator of the model parameters, which is a point estimator, the posterior distribution provides richer information about the model parameters, hence enabling more justified predictions.
Among the various inference algorithms for posterior estimation, variational inference (VI) and Markov chain Monte Carlo (MCMC) are the two most widely used. It is well known that MCMC suffers from slow mixing time, though asymptotically the samples from the chain are distributed according to the true posterior. VI, on the other hand, facilitates faster inference, since it optimizes an explicit objective function whose convergence can be measured and controlled, and it has been widely used in many Bayesian models, such as Latent Dirichlet Allocation (Blei et al., 2003). However, one drawback of VI is that it makes strong assumptions about the shape of the posterior, for example that the posterior can be decomposed into multiple independent factors. Though faster convergence can be achieved by parameter learning, the approximation accuracy is largely limited.

1School of Computer Science, Carnegie Mellon University, Pittsburgh PA, USA. Correspondence to: Guoqing Zheng <gzheng@cs.cmu.edu>.

Presented at the ICML 2018 Workshop on Theoretical Foundations and Applications of Deep Generative Models. Copyright 2018 by the author(s).
The above drawbacks stimulate interest in richer function families for approximating posteriors while maintaining acceptable learning speed. Specifically, neural networks are one such model family with large modeling capacity and efficient learning. (Rezende & Mohamed, 2015) proposed normalizing flows, where a neural network is set up to learn an invertible transformation from one known distribution, which is easy to sample from, to the true posterior. Model learning is achieved by minimizing the KL divergence between the empirical distribution of the generated samples and the true posterior. Once properly trained, the model generates samples close to the true posterior, so that Bayesian predictions become possible. Other methods based on modeling random variable transformations, but with different formulations, have also been explored, including NICE (Dinh et al., 2014), the Inverse Autoregressive Flow (Kingma et al., 2016), and Real NVP (Dinh et al., 2016).
One key requirement for normalizing flows to work is computing the determinant of the Jacobian of the transformation, and in order to maintain fast Jacobian computation, either a very simple function is used as the transformation, such as the planar flow in (Rezende & Mohamed, 2015), or complex tweaking of the transformation layer is required. Alternatively, in this paper we propose a simple and yet effective architecture of normalizing flows, based on convolution over the random input vector. Due to the nature of convolution, a bijective mapping between the input and output vectors can be easily established; meanwhile, the determinant of the convolution Jacobian can be computed in linear time. We further propose to incorporate dilated convolution (Yu & Koltun, 2015; Oord et al., 2016a) to model long range interactions among the input dimensions. The resulting convolutional normalizing flow, which we term Convolutional Flow (ConvFlow), is simple and yet effective in warping simple densities to match complex ones.
The remainder of this paper is organized as follows: We briefly review the principles of normalizing flows in Section 2, and then present our proposed normalizing flow architecture based on convolution in Section 3. Empirical evaluations and analysis on both synthetic and real world data sets are carried out in Section 4, and we conclude this paper in Section 5.
# 2. Preliminaries
# 2.1. Transformation of random variables

Given a random variable z ∈ R^d with density p(z), consider a smooth and invertible function f : R^d → R^d operated on z. Let z' = f(z) be the resulting random variable; the density of z' can be evaluated as

p(z') = p(z) |det ∂f^{-1}/∂z'| = p(z) |det ∂f/∂z|^{-1}    (1)

thus

log p(z') = log p(z) − log |det ∂f/∂z|    (2)

# 2.2. Normalizing flows

Normalizing flows successively transform z_0 with a series of transformations {f_1, f_2, ..., f_K} to construct arbitrarily complex densities for z_K = f_K ∘ f_{K−1} ∘ ... ∘ f_1(z_0) as

log p(z_K) = log p(z_0) − Σ_{k=1}^{K} log |det ∂f_k/∂z_{k−1}|    (3)

Hence the complexity lies in computing the determinant of the Jacobian matrix. Without further assumptions about f, the general complexity for that is O(d^3), where d is the dimension of z. In order to accelerate this, (Rezende & Mohamed, 2015) proposed the following family of transformations, termed planar flow:

f(z) = z + u h(w^T z + b)    (4)

where w ∈ R^d, u ∈ R^d, b ∈ R are parameters and h(·) is a univariate non-linear function with derivative h'(·). For this family of transformations, the determinant of the Jacobian matrix can be computed as

det ∂f/∂z = det(I + u ψ(z)^T) = 1 + u^T ψ(z)    (5)

where ψ(z) = h'(w^T z + b) w. The computation cost of the determinant is hence reduced from O(d^3) to O(d).

Applying f to z can be viewed as feeding the input variable z to a neural network with only one single hidden unit followed by a linear output layer which has the same dimension as the input layer. Obviously, because of the bottleneck caused by the single hidden unit, the capacity of the family of transformed densities is limited.

# 3. A new transformation unit

In this section, we first propose a general extension to the above mentioned planar normalizing flow, and then propose a restricted version of it, which turns out to be convolution over the dimensions of the input random vector.

# 3.1. Normalizing flow with d hidden units

Instead of having a single hidden unit as in planar flow, consider d hidden units in the process. We denote the weights associated with the edges from the input layer to the output layer as W ∈ R^{d×d} and the vector adjusting the magnitude of each dimension of the hidden layer activation as u, and the transformation is defined as

f(z) = u ⊙ h(Wz + b)    (6)

where ⊙ denotes point-wise multiplication. The Jacobian matrix of this transformation is

∂f/∂z = diag(u ⊙ h'(Wz + b)) W    (7)

det ∂f/∂z = det[diag(u ⊙ h'(Wz + b))] det(W)    (8)

As det(diag(u ⊙ h'(Wz + b))) can be computed in linear time, the complexity of computing the above transformation lies in computing det(W). Essentially the planar flow restricts W to be a vector of length d instead of a matrix; however, we can relax that restriction while still maintaining linear complexity of the determinant computation, based on the simple fact that the determinant of a triangular matrix is just the product of its diagonal entries.

# 3.2. Convolutional Flow
Since a normalizing flow with a fully connected layer may not be bijective, and generally requires O(d^3) computations for the determinant of the Jacobian even if it is, we propose to use 1-d convolution to transform random vectors.
Figure 1: (a) Illustration of 1-D convolution, where the dimensions of the input/output variable are both 8 (the input vector is padded with 0), the width of the convolution filter is 3 and dilation is 1; (b) A block of ConvFlow layers stacked with different dilations.

Figure 1(a) illustrates how 1-d convolution is performed over an input vector and outputs another vector. We propose to perform a 1-d convolution on an input random vector z, followed by a non-linearity and necessary post operations after activation to generate an output vector. Specifically,
f(z) = z + u ⊙ h(conv(z, w))    (9)

where w ∈ R^k is the parameter of the 1-d convolution filter (k is the convolution kernel width), conv(z, w) is the 1-d convolution operation shown in Figure 1(a), h(·) is a monotonic non-linear activation function¹, ⊙ denotes point-wise multiplication, and u ∈ R^d is a vector adjusting the magnitude of each dimension of the activation from h(·). We term this normalizing flow Convolutional Flow (ConvFlow).

¹Examples of valid h(x) include all conventional activations, including sigmoid, tanh, softplus, rectifier (ReLU), leaky rectifier (Leaky ReLU) and exponential linear unit (ELU).
ConvFlow enjoys the following properties
• Bijectivity can be easily achieved with the standard and fast 1-d convolution operator if proper padding and a monotonic activation function with bounded gradients are adopted (minor care is needed to guarantee strict invertibility; see Appendix A for details);
• Due to local connectivity, the Jacobian determinant of ConvFlow takes only O(d) computation, independent of the convolution kernel width k, since

∂f/∂z = I + diag(w_1 u ⊙ h'(conv(z, w)))    (10)

where w_1 denotes the first element of w. For example, for the illustration in Figure 1(a), the Jacobian matrix of the 1-d convolution conv(z, w) is

∂conv(z, w)/∂z =
[ w_1 w_2 w_3
      w_1 w_2 w_3
          w_1 w_2 w_3
              w_1 w_2 w_3
                  w_1 w_2 w_3
                      w_1 w_2 w_3
                          w_1 w_2
                              w_1 ]    (11)

which is a triangular matrix whose determinant can be easily computed;

• ConvFlow is much simpler than previously proposed variants of normalizing flows. The total number of parameters of one ConvFlow layer is only d + k, where generally k < d, which is particularly efficient for high dimensional cases. Notice that the number of parameters in the planar flow in (Rezende & Mohamed, 2015) is 2d, and one layer of Inverse Autoregressive Flow (IAF) (Kingma et al., 2016) and Real NVP (Dinh et al., 2016) require even more parameters. In Section 3.3, we discuss the key differences of ConvFlow from IAF in detail.

A series of K ConvFlows can be stacked to generate complex output densities. Further, since convolutions are only visible to inputs from adjacent dimensions, we propose to incorporate dilated convolution (Yu & Koltun, 2015; Oord et al., 2016a) into the flow to accommodate interactions among dimensions that are far apart. Figure 1(b) presents a block of 3 stacked ConvFlows with different dilations for each layer. A larger receptive field is achieved without increasing the number of parameters. We term this a ConvBlock.

From the block of ConvFlow layers presented in Figure 1(b), it is easy to verify that dimension i (1 ≤ i ≤ d) of the output vector only depends on succeeding dimensions, but not preceding ones. In other words, dimensions with larger indices tend to end up getting little warping compared to the ones with smaller indices. Fortunately, this can be easily resolved by a Revert Layer, which simply outputs a reversed version of its input vector. Specifically, a Revert Layer g operates as

g(z) := g([z_1, z_2, ..., z_d]^T) = [z_d, z_{d−1}, ..., z_1]^T    (12)

It is easy to verify that a Revert Layer is bijective and that the Jacobian of g is a d × d matrix with 1s on its anti-diagonal and 0 elsewhere, so log |det ∂g/∂z| is 0. Therefore, we can append a Revert Layer after each ConvBlock to accommodate warping for dimensions with larger indices without additional computation cost for the Jacobian, as follows

f(z) = (Revert ∘ ConvBlock) ∘ ... ∘ (Revert ∘ ConvBlock)(z)    (13)

with K repetitions of ConvBlock + Revert.
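As a concrete illustration of Eqs. (9)-(12), the following PyTorch sketch implements one ConvFlow layer and a Revert layer. It is a minimal sketch based on the equations above rather than the authors' released code: the class and variable names are our own, the padding convention (zeros appended so that output dimension i depends only on dimensions i and later, matching the triangular Jacobian in Eq. (11)) is one consistent choice, tanh is used as the activation, and the invertibility adjustment of Appendix A is omitted.

```python
import torch
import torch.nn.functional as F
from torch import nn

class ConvFlow(nn.Module):
    """One ConvFlow layer: f(z) = z + u * h(conv1d(z, w)), Eq. (9)."""
    def __init__(self, dim, kernel_size=3, dilation=1):
        super().__init__()
        self.kernel_size, self.dilation = kernel_size, dilation
        self.w = nn.Parameter(0.01 * torch.randn(kernel_size))  # 1-d filter, k parameters
        self.u = nn.Parameter(0.01 * torch.randn(dim))          # per-dimension scale, d parameters
        self.b = nn.Parameter(torch.zeros(1))

    def forward(self, z):
        # z: (batch, dim). Zero-pad on the right so output i only sees z_i, z_{i+1}, ...
        pad = (self.kernel_size - 1) * self.dilation
        z_pad = F.pad(z.unsqueeze(1), (0, pad))                      # (batch, 1, dim + pad)
        conv = F.conv1d(z_pad, self.w.view(1, 1, -1),
                        dilation=self.dilation).squeeze(1) + self.b  # (batch, dim)
        h = torch.tanh(conv)
        out = z + self.u * h
        # Triangular Jacobian: diagonal is 1 + w_1 * u_i * h'(conv_i), Eq. (10); h' = 1 - tanh^2.
        diag = 1 + self.w[0] * self.u * (1 - h ** 2)
        logdet = torch.log(torch.abs(diag) + 1e-8).sum(dim=1)        # O(d) per sample
        return out, logdet

class Revert(nn.Module):
    """Reverses the dimension order, Eq. (12); volume preserving (log-det = 0)."""
    def forward(self, z):
        return torch.flip(z, dims=[1]), torch.zeros(z.shape[0], device=z.device)
```

Stacking several such layers with increasing dilation, and appending a Revert after each block, gives the ConvBlock + Revert composition of Eq. (13).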
# 3.3. Connection to Inverse Autoregressive Flow

Inspired by the idea of constructing complex tractable densities from simpler ones with bijective transformations, different variants of the original normalizing flow (NF) (Rezende & Mohamed, 2015) have been proposed. Perhaps the one most related to ConvFlow is Inverse Autoregressive Flow (Kingma et al., 2016), which employs autoregressive transformations over the input dimensions to construct output densities. Specifically, one layer of IAF works as follows

f(z) = μ(z) + σ(z) ⊙ z    (14)

where

[μ(z), σ(z)] ← AutoregressiveNN(z)    (15)

are outputs from an autoregressive neural network over the dimensions of z. There are two drawbacks of IAF compared to the proposed ConvFlow:

• The autoregressive neural network over input dimensions in IAF is represented by a Masked Autoencoder (Germain et al., 2015), which generally requires O(d^2) parameters per layer, where d is the input dimension, while each layer of ConvFlow is much more parameter efficient, needing only k + d parameters (k is the kernel size of the 1-d convolution and k < d).

• More importantly, due to the coupling of σ(z) and z in the IAF transformation, in order to make the computation of the overall Jacobian determinant det ∂f/∂z linear in d, the Jacobian of the autoregressive NN transformation is assumed to be strictly triangular (equivalently, the Jacobian determinants of μ and σ w.r.t. z are both always 0; this is achieved by letting the ith dimension of μ and σ depend only on dimensions 1, 2, ..., i − 1 of z). In other words, the mappings from z onto μ(z) and σ(z) via the autoregressive NN are always singular, no matter how their parameters are updated, and because of this, μ and σ will only be able to cover a subspace of the input space that z belongs to, which is obviously less desirable for a normalizing flow.² Though these singular transforms in the autoregressive NN are somewhat mitigated by their final coupling with the input z, IAF still performs slightly worse in empirical evaluations than ConvFlow, as no singular transform is involved in ConvFlow.

• Lastly, despite the similar nature of modeling the variable dimensions in an autoregressive manner, ConvFlow is much more efficient, since the computation involving the flow weights w and the input z is carried out by fast native 1-d convolutions, whereas IAF in its simplest form needs to maintain a masked feed forward network (if not maintaining an RNN). A similar idea of using convolution operators for efficient modeling of data dimensions is also adopted by PixelCNN (Oord et al., 2016b).

²Since the singular transformations will only lead to subspace coverage of the resulting variables μ and σ, one could try to alleviate the subspace issue by modifying IAF to set both μ and σ as free parameters to be learned; the resulting normalizing flow is exactly a version of planar flow as proposed in (Rezende & Mohamed, 2015).

# 4. Experiments

We test the performance of the proposed ConvFlow in two settings, one on synthetic data to infer an unnormalized target density and the other on density estimation for handwritten digits and characters.
# 4.1. Synthetic data
We conduct experiments using the proposed ConvFlow to approximate an unnormalized target density of z with dimension 2 such that p(z) ∝ exp(−U(z)). We adopt the same set of energy functions U(z) as in (Rezende & Mohamed, 2015) for a fair comparison, reproduced below
U_1(z) = 1/2 [(‖z‖ − 2)/0.4]^2 − log( e^{−1/2 [(z_1 − 2)/0.6]^2} + e^{−1/2 [(z_1 + 2)/0.6]^2} )

U_2(z) = 1/2 [(z_2 − w_1(z))/0.4]^2

where w_1(z) = sin(2π z_1 / 4). The target densities of z are plotted in the leftmost column of Figure 2, and we test whether the proposed ConvFlow can transform a two dimensional standard Gaussian to the target density by minimizing the KL divergence

KL(q_K(z_K) ‖ p(z)) = E_{z_K}[log q_K(z_K)] − E_{z_K}[log p(z_K)]
                    = E_{z_0}[log q_0(z_0)] − E_{z_0}[ Σ_{k=1}^{K} log |det ∂f_k/∂z_{k−1}| ] − E_{z_0}[log p(z_K)] + const    (16)
where all expectations are evaluated with samples taken from q_0(z_0). We use a 2-d standard Gaussian as q_0(z_0), and we test different numbers of ConvBlocks stacked together in this task. Each ConvBlock in this case consists of a ConvFlow layer with kernel size 2 and dilation 1, followed by another ConvFlow layer with kernel size 2 and dilation 2. A Revert Layer is appended after each ConvBlock, and the tanh activation function is adopted by ConvFlow. The Autoregressive NN in IAF is implemented as a two layer masked fully connected neural network (Germain et al., 2015).
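To make Eq. (16) concrete, here is a minimal sketch of fitting a flow stack to an unnormalized 2-d target p(z) ∝ exp(−U(z)) by stochastic minimization of the sampled KL. It reuses the ConvFlow and Revert modules sketched in Section 3.2; the energy shown is a simple ring-shaped placeholder rather than necessarily the exact U(z) of Figure 2, and the optimizer settings are illustrative.

```python
import math
import torch

# One ConvBlock (kernel size 2, dilations 1 and 2) followed by a Revert layer, as described above.
flows = torch.nn.ModuleList([ConvFlow(dim=2, kernel_size=2, dilation=1),
                             ConvFlow(dim=2, kernel_size=2, dilation=2),
                             Revert()])
opt = torch.optim.Adam(flows.parameters(), lr=1e-3)

def energy(z):
    # Placeholder ring-shaped energy U(z); substitute the target of interest.
    return 0.5 * (((z ** 2).sum(dim=1).sqrt() - 2.0) / 0.4) ** 2

for step in range(20000):
    z = torch.randn(512, 2)                                     # z_0 ~ q_0 = N(0, I)
    log_q = -0.5 * (z ** 2).sum(dim=1) - math.log(2 * math.pi)  # log q_0(z_0), d = 2
    for flow in flows:
        z, logdet = flow(z)
        log_q = log_q - logdet                                  # change of variables, Eq. (3)
    loss = (log_q + energy(z)).mean()                           # KL(q_K || p) up to a constant
    opt.zero_grad(); loss.backward(); opt.step()
```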
Figure 2: (a) True density; (b) Density learned by IAF (16 layers); (c) Density learned by ConvFlow (8 blocks, each block consisting of 2 layers).

Experimental results are shown in Figure 2 for IAF (middle column) and ConvFlow (right column) approximating the target density (left column). Even with 16 layers, IAF puts most of the density on one mode, confirming our analysis of the singular transform problem in IAF: as the data dimension is only two, the subspace modeled by μ(z) and σ(z) in Eq. (14) lies in a 1-d space, i.e., a straight line, which is shown in the middle column. The effect of the singular transform on IAF will be less severe for higher dimensions. With 8 ConvBlocks (each block consisting of two 1-d convolution layers), ConvFlow already approximates the target density quite well, despite a minor underestimate of the density around the boundaries.

# 4.2. Handwritten digits and characters

# 4.2.1. SETUPS

To test the proposed ConvFlow for variational inference we use the standard benchmark datasets MNIST³ and OMNIGLOT⁴ (Lake et al., 2013). Our method is general and can be applied to any formulation of the generative model p_θ(x, z); for simplicity and fair comparison, in this paper we focus on densities defined by stochastic neural networks, i.e., a broad family of flexible probabilistic generative models with parameters defined by neural networks. Specifically, we consider the following two families of generative models

G1 : p_θ(x, z) = p_θ(z) p_θ(x|z)    (17)
G2 : p_θ(x, z_1, z_2) = p_θ(z_1) p_θ(z_2|z_1) p_θ(x|z_2)    (18)

where p(z) and p(z_1) are the priors defined over z and z_1 for G1 and G2, respectively. All other conditional densities are specified with their parameters θ defined by neural networks, therefore ending up with two stochastic neural networks. This network could have any number of layers; however, in this paper, we focus on the ones with only one and two stochastic layers, i.e., G1 and G2, to conduct a fair comparison with previous methods on similar network architectures, such as VAE, IWAE and Normalizing Flows.

We use the same network architectures for both G1 and G2 as in (Burda et al., 2015), specifically:

G1 : A single Gaussian stochastic layer z with 50 units. In between the latent variable z and the observation x there are two deterministic layers, each with 200 units;

G2 : Two Gaussian stochastic layers z_1 and z_2 with 50 and 100 units, respectively. Two deterministic layers with 200 units connect the observation x and latent variable z_2, and two deterministic layers with 100 units are in between z_2 and z_1.

Here a Gaussian stochastic layer consists of two fully connected linear layers, one outputting the mean and the other the logarithm of the diagonal covariance. All other deterministic layers are fully connected with tanh non-linearity. Bernoulli observation models are assumed for both MNIST and OMNIGLOT. For MNIST, we employ the static binarization strategy of (Larochelle & Murray, 2011), while dynamic binarization is employed for OMNIGLOT.

The inference networks q(z|x) for G1 and G2 have architectures similar to the generative models, with details in (Burda et al., 2015). ConvFlow is used to warp the output of the inference network q(z|x), assumed to be Gaussian conditioned on the input x, to match complex true posteriors. Our baseline models include VAE (Kingma & Welling, 2013), IWAE (Burda et al., 2015) and Normalizing Flows (Rezende & Mohamed, 2015). Since our proposed method involves adding more layers to the inference network, we also include another enhanced version of VAE with more deterministic layers added to its inference network, which we term VAE+.⁵ With the same VAE architectures, we also test the ability to construct complex variational posteriors with IAF and ConvFlow, respectively. All models are implemented in PyTorch. Parameters of both the variational distribution and the generative distribution of all models are optimized with Adam (Kingma & Ba, 2014) for 2000 epochs, with a fixed learning rate of 0.0005 and exponential decay rates for the 1st and 2nd moments of 0.9 and 0.999, respectively.
³Data downloaded from http://www.cs.toronto.edu/~larocheh/public/datasets/binarized_mnist/
⁴Data downloaded from https://github.com/yburda/iwae/raw/master/datasets/OMNIGLOT/chardata.mat
⁵VAE+ adds more layers before the stochastic layer of the inference network, while the proposed method adds convolutional flow layers after the stochastic layer.
Batch normalization (Ioffe & Szegedy, 2015) and linear annealing of the KL divergence term between the variational posterior and the prior are employed for the first 200 epochs, as they have been shown to help training multi-layer stochastic neural networks (Sønderby et al., 2016). Code to reproduce all reported results will be made publicly available.
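As an illustration of how ConvFlow warps the inference network's output inside the variational objective for G1, the sketch below computes a flow-warped ELBO for binarized inputs with the annealed KL weight β just mentioned. The encoder/decoder modules, variable names, and the exact annealing schedule are our own assumptions for illustration, not the authors' implementation.

```python
import math
import torch

def flow_elbo(x, encoder, decoder, flows, beta):
    # Gaussian inference network q(z0 | x): encoder returns mean and log-variance.
    mu, logvar = encoder(x)
    eps = torch.randn_like(mu)
    z = mu + torch.exp(0.5 * logvar) * eps                             # reparameterized z0
    log_q = (-0.5 * (eps ** 2 + logvar + math.log(2 * math.pi))).sum(dim=1)
    for flow in flows:                                                 # warp q(z0|x) with ConvFlow layers
        z, logdet = flow(z)
        log_q = log_q - logdet
    log_prior = (-0.5 * (z ** 2 + math.log(2 * math.pi))).sum(dim=1)   # p(z) = N(0, I)
    log_like = torch.distributions.Bernoulli(logits=decoder(z)).log_prob(x).sum(dim=1)
    # beta is the KL weight, annealed linearly from 0 to 1 over the first 200 epochs.
    return (log_like + beta * (log_prior - log_q)).mean()
```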
For inference models with a latent variable z of 50 dimensions, a ConvBlock consists of the following ConvFlow layers

[ConvFlow(kernel size = 5, dilation = 1), ConvFlow(kernel size = 5, dilation = 2), ConvFlow(kernel size = 5, dilation = 4), ConvFlow(kernel size = 5, dilation = 8), ConvFlow(kernel size = 5, dilation = 16), ConvFlow(kernel size = 5, dilation = 32)]    (19)

and for inference models with a latent variable z of 100 dimensions, a ConvBlock consists of the following ConvFlow layers

[ConvFlow(kernel size = 5, dilation = 1), ConvFlow(kernel size = 5, dilation = 2), ConvFlow(kernel size = 5, dilation = 4), ConvFlow(kernel size = 5, dilation = 8), ConvFlow(kernel size = 5, dilation = 16), ConvFlow(kernel size = 5, dilation = 32), ConvFlow(kernel size = 5, dilation = 64)]    (20)

A Revert layer is appended after each ConvBlock, and leaky ReLU with a negative slope of 0.01 is used as the activation function in ConvFlow. For IAF, the autoregressive neural network is implemented as a two layer masked fully connected neural network.

# 4.2.2. GENERATIVE DENSITY ESTIMATION

For MNIST, models are trained and tuned on the 60,000 training and validation images, and the estimated log-likelihood on the test set with 128 importance weighted samples is reported. Table 1 presents the performance of all models, when the generative model is assumed to be from both G1 and G2.

Table 1: MNIST test set NLL with generative models G1 and G2 (lower is better, K is the number of ConvBlocks)

| MNIST (static binarization) | − log p(x) on G1 | − log p(x) on G2 |
|---|---|---|
| VAE (Burda et al., 2015) | 88.37 | 85.66 |
| IWAE (IW = 50) (Burda et al., 2015) | 86.90 | 84.26 |
| VAE+NF (Rezende & Mohamed, 2015) | - | ≤ 85.10 |
| VAE+ (K = 1) | 88.20 | 85.41 |
| VAE+ (K = 4) | 88.08 | 85.26 |
| VAE+ (K = 8) | 87.98 | 85.16 |
| VAE+IAF (K = 1) | 87.70 | 85.03 |
| VAE+IAF (K = 2) | 87.30 | 84.74 |
| VAE+IAF (K = 4) | 87.02 | 84.55 |
| VAE+IAF (K = 8) | 86.62 | 84.26 |
| VAE+ConvFlow (K = 1) | 86.91 | 85.45 |
| VAE+ConvFlow (K = 2) | 86.40 | 85.37 |
| VAE+ConvFlow (K = 4) | 84.78 | 81.64 |
| VAE+ConvFlow (K = 8) | 83.89 | 81.21 |
| IWAE+ConvFlow (K = 8, IW = 50) | 79.78 | 79.11 |

Firstly, VAE+ achieves higher log-likelihood estimates than vanilla VAE due to the added layers in the inference network, implying that a better posterior approximation is learned (which is still assumed to be Gaussian). Second, we observe that VAE with ConvFlow achieves much better density estimates than VAE+, which confirms our expectation that warping the variational distribution with convolutional flows enforces the resulting variational posterior to match the true non-Gaussian posterior. Also, adding more blocks of convolutional flows to the network brings the variational posterior further close to the true posterior. We also observe that VAE with Inverse Autoregressive Flows (VAE+IAF) improves over VAE and VAE+, due to its modeling of complex densities; however, the improvements are not as significant as with ConvFlow. The limited improvement might be explained by our analysis of the singular transformation and subspace issue in IAF. Lastly, combining convolutional normalizing flows with multiple importance weighted samples, as shown in the last row of Table 1, achieves further improvement in the test set log-likelihood. Overall, the method combining ConvFlow and importance weighted samples achieves the best NLL in both settings, outperforming IWAE significantly by 7.1 nats on G1 and 5.7 nats on G2. Notice that ConvFlow combined with IWAE achieves an NLL of 79.11, comparable to the best published result of 79.10, achieved by PixelRNN (Oord et al., 2016b) with a much more sophisticated architecture. It is also about 0.8 nats better than the best IAF result of 79.88 reported in (Kingma et al., 2016), which demonstrates the representative power of ConvFlow compared to IAF.⁶

⁶The results in (Kingma et al., 2016) are not directly comparable, as they are achieved with a much more sophisticated VAE architecture and a much higher dimension of latent code (d = 1920 for the best NLL of 79.88). However, in this paper, we only assume a relatively simple VAE architecture composed of fully connected layers and a relatively low dimension of latent codes, 50 or 100, depending on the generative model in VAE. One could expect the performance of ConvFlow to improve even further if a similarly complex VAE architecture and higher dimension of latent codes were used.

Results on OMNIGLOT are presented in Table 2, where similar trends can be observed as on MNIST. One observation different from MNIST is that the gain from IWAE+ConvFlow over IWAE is not as large as it is on MNIST, which could be explained by the fact that OMNIGLOT is a more difficult set compared to MNIST, as there are 1600 different types of symbols in the dataset with roughly 20 samples per type. Again on OMNIGLOT we observe that IAF with VAE improves over VAE and VAE+, while not performing as well as ConvFlow.

# 4.2.3. LATENT CODE VISUALIZATION

We visualize the inferred latent codes z of 5000 digits in the MNIST test set with respect to their true class labels in Figure 3, from different models, with tSNE (Maaten & Hinton, 2008). We observe that on generative model G2, all three models are able to infer latent codes of the digits consistent with their true classes. However, VAE and VAE+IAF both show disconnected clusters of latent codes from the same class (e.g., digits 0 and digits 1). Latent codes inferred by VAE for digits 3 and 5 tend to mix with each other. Overall, VAE equipped with ConvFlow produces clearly separable latent codes for different classes while also maintaining high in-class density (notably for digit classes 0, 1, 2, 7, 8, 9, as shown in the rightmost figure).

Figure 3: Left: VAE, Middle: VAE+IAF, Right: VAE+ConvFlow. (best viewed in color)
# 4.2.4. GENERATION
After the models are trained, generative samples can be obtained by feeding z ∼ N(0, I) to the learned generative model G1 (or z_2 ∼ N(0, I) to G2). Since higher log-likelihood estimates are obtained on G2, Figure 4 shows three sets of random generative samples from our proposed method trained with G2 on both MNIST and OMNIGLOT, compared to real samples from the training sets. We observe that the generated samples are visually consistent with the training data.
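A minimal sketch of this sampling procedure under G1 (function names and dimensions are illustrative):

```python
import torch

@torch.no_grad()
def generate(decoder, n=16, latent_dim=50):
    z = torch.randn(n, latent_dim)       # z ~ N(0, I); for G2, sample z_2 and decode through both layers
    probs = torch.sigmoid(decoder(z))    # Bernoulli means for each pixel
    return torch.bernoulli(probs)        # binary samples (or return probs for grayscale display)
```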
# 5. Conclusions
This paper presents a simple and yet effective architecture to compose normalizing flows based on 1-d convolution on the input vectors. ConvFlow takes advantage of the effective computation of convolution to warp a simple density to a possibly complex target density, while maintaining as few parameters as possible. To further accommodate long range interactions among the dimensions, dilated convolution is incorporated into the framework without increasing model computational complexity. A Revert Layer is used to maximize the opportunity for all dimensions to get as much warping as possible. Experimental results on inferring target complex densities and density estimation for generative modeling on real world handwritten digits data demonstrate the strong performance of ConvFlow. In particular, density estimates on MNIST show significant improvements over state-of-the-art methods, validating the power of ConvFlow in warping multivariate densities. It remains an interesting question to see how ConvFlows can be directly combined with powerful observation models such as PixelRNN to further advance generative modeling with tractable density evaluation. We hope to address these challenges in future work.
Table 2: OMNIGLOT test set NLL with generative models G1 and G2 (lower is better, K is number of ConvBlocks)
| OMNIGLOT | − log p(x) on G1 | − log p(x) on G2 |
|---|---|---|
| VAE (Burda et al., 2015) | 108.22 | 106.09 |
| IWAE (IW = 50) (Burda et al., 2015) | 106.08 | 104.14 |
| VAE+ (K = 1) | 108.30 | 106.30 |
| VAE+ (K = 4) | 108.31 | 106.48 |
| VAE+ (K = 8) | 108.31 | 106.05 |
| VAE+IAF (K = 1) | 107.31 | 105.78 |
| VAE+IAF (K = 2) | 106.93 | 105.34 |
| VAE+IAF (K = 4) | 106.69 | 105.56 |
| VAE+IAF (K = 8) | 106.33 | 105.00 |
| VAE+ConvFlow (K = 1) | 106.42 | 105.33 |
| VAE+ConvFlow (K = 2) | 106.08 | 104.85 |
| VAE+ConvFlow (K = 4) | 105.21 | 104.30 |
| VAE+ConvFlow (K = 8) | 104.86 | 103.49 |
| IWAE+ConvFlow (K = 8, IW = 50) | 104.21 | 103.02 |
Figure 4: Training data and generated samples. (a) MNIST training data; (b)-(d) random samples from IWAE-ConvFlow (K = 8); (e) OMNIGLOT training data; (f)-(h) random samples from IWAE-ConvFlow (K = 8).
# References
Blei, David M., Ng, Andrew Y., and Jordan, Michael I. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993-1022, 2003.
Burda, Yuri, Grosse, Roger, and Salakhutdinov, Ruslan. Importance weighted autoencoders. arXiv preprint arXiv:1509.00519, 2015.
Maaten, Laurens van der and Hinton, Geoffrey. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(Nov):2579-2605, 2008.
Oord, Aaron van den, Dieleman, Sander, Zen, Heiga, Simonyan, Karen, Vinyals, Oriol, Graves, Alex, Kalchbrenner, Nal, Senior, Andrew, and Kavukcuoglu, Koray. WaveNet: A generative model for raw audio. arXiv preprint arXiv:1609.03499, 2016a.
Dinh, Laurent, Krueger, David, and Bengio, Yoshua. Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516, 2014.
Oord, Aaron van den, Kalchbrenner, Nal, and Kavukcuoglu, Koray. Pixel recurrent neural networks. arXiv preprint arXiv:1601.06759, 2016b.
Dinh, Laurent, Sohl-Dickstein, Jascha, and Bengio, Samy. Density estimation using Real NVP. arXiv preprint arXiv:1605.08803, 2016.
Germain, Mathieu, Gregor, Karol, Murray, Iain, and Larochelle, Hugo. MADE: Masked autoencoder for distribution estimation. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pp. 881-889, 2015.

Ioffe, Sergey and Szegedy, Christian. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, pp. 448-456, 2015.
Kingma, Diederik and Ba, Jimmy. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Rezende, Danilo Jimenez and Mohamed, Shakir. Variational inference with normalizing flows. In Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, pp. 1530-1538, 2015.

Sønderby, Casper Kaae, Raiko, Tapani, Maaløe, Lars, Sønderby, Søren Kaae, and Winther, Ole. Ladder variational autoencoders. In Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, pp. 3738-3746, 2016.

Yu, Fisher and Koltun, Vladlen. Multi-scale context aggregation by dilated convolutions. arXiv preprint arXiv:1511.07122, 2015.
Kingma, Diederik P and Welling, Max. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.

Kingma, Diederik P., Salimans, Tim, Józefowicz, Rafal, Chen, Xi, Sutskever, Ilya, and Welling, Max. Improving variational autoencoders with inverse autoregressive flow. In Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, pp. 4736-4744, 2016.

Lake, Brenden M., Salakhutdinov, Ruslan, and Tenenbaum, Joshua B. One-shot learning by inverting a compositional causal process. In Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013, Lake Tahoe, Nevada, United States, pp. 2526-2534, 2013.

Larochelle, Hugo and Murray, Iain. The neural autoregressive distribution estimator. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, AISTATS 2011, Fort Lauderdale, USA, April 11-13, 2011, pp. 29-37, 2011.

# A. Conditions for Invertibility

The ConvFlow proposed in Section 3 is invertible as long as every term on the main diagonal of the Jacobian specified in Eq. (10) is non-zero, i.e., for all i = 1, 2, ..., d,

w_1 u_i h'(conv(z, w))_i + 1 ≠ 0    (21)

where u_i is the i-th entry of the scaling vector u. When using h(x) = tanh(x), since h'(x) = 1 − tanh^2(x) ∈ (0, 1], a sufficient condition for invertibility is to ensure w_1 u_i > −1. Thus a new scaling vector u' can be created from the free parameter u to satisfy the condition as

u'_i = u_i                          if w_1 = 0
u'_i = −1/w_1 + softplus(u_i)       if w_1 > 0    (22)
u'_i = −1/w_1 − softplus(u_i)       if w_1 < 0

where softplus(x) = log(1 + exp(x)). The above sufficient condition works readily for other non-linearity functions h, including sigmoid, softplus, rectifier (ReLU), leaky rectifier (Leaky ReLU) and exponential linear unit (ELU), as all their gradients are bounded in [0, 1].
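A minimal sketch of the reparameterization in Eq. (22), assuming the reconstruction above; it maps a free parameter u to a constrained u' with w_1 u'_i > −1, so every diagonal term of Eq. (10) stays non-zero when h' is bounded in (0, 1]:

```python
import torch
import torch.nn.functional as F

def constrained_u(u_free: torch.Tensor, w1: torch.Tensor) -> torch.Tensor:
    """Return u' with w1 * u'_i > -1 elementwise (Eq. 22); u_free is unconstrained."""
    if w1.item() == 0.0:
        return u_free
    if w1.item() > 0.0:
        return -1.0 / w1 + F.softplus(u_free)
    return -1.0 / w1 - F.softplus(u_free)
```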
"id": "1511.07122"
} |
Paper: Routing Networks: Adaptive Selection of Non-linear Functions for Multi-Task Learning (arXiv:1711.01239)
Authors: Clemens Rosenbaum, Tim Klinger, Matthew Riemer
Categories: cs.LG, cs.CV, cs.NE | Comment: Under Review at ICLR 2018 | Published: 2017-11-03 | Updated: 2017-12-31 | Source: http://arxiv.org/pdf/1711.01239
ROUTING NETWORKS: ADAPTIVE SELECTION OF NON-LINEAR FUNCTIONS FOR MULTI-TASK LEARNING
Clemens Rosenbaum College of Information and Computer Sciences University of Massachusetts Amherst 140 Governors Dr., Amherst, MA 01003 cgbr@cs.umass.edu
Tim Klinger & Matthew Riemer IBM Research AI 1101 Kitchawan Rd, Yorktown Heights, NY 10598 {tklinger,mdriemer}@us.ibm.com
# ABSTRACT
Multi-task learning (MTL) with neural networks leverages commonalities in tasks to improve performance, but often suffers from task interference which reduces the benefits of transfer. To address this issue we introduce the routing network paradigm, a novel neural network and training algorithm. A routing network is a kind of self-organizing neural network consisting of two components: a router and a set of one or more function blocks. A function block may be any neural network, for example a fully-connected or a convolutional layer. Given an input the router makes a routing decision, choosing a function block to apply and passing the output back to the router recursively, terminating when a fixed recursion depth is reached. In this way the routing network dynamically composes different function blocks for each input. We employ a collaborative multi-agent reinforcement learning (MARL) approach to jointly train the router and function blocks. We evaluate our model against cross-stitch networks and shared-layer baselines on multi-task settings of the MNIST, mini-imagenet, and CIFAR-100 datasets. Our experiments demonstrate a significant improvement in accuracy, with sharper convergence. In addition, routing networks have nearly constant per-task training cost while cross-stitch networks scale linearly with the number of tasks. On CIFAR-100 (20 tasks) we obtain cross-stitch performance levels with an 85% reduction in training time.
# 1 INTRODUCTION
Multi-task learning (MTL) is a paradigm in which multiple tasks must be learned simultaneously. Tasks are typically separate prediction problems, each with their own data distribution. In an early formulation of the problem, (Caruana, 1997) describes the goal of MTL as improving generalization performance by "leveraging the domain-specific information contained in the training signals of related tasks." This means a model must leverage commonalities in the tasks (positive transfer) while minimizing interference (negative transfer). In this paper we propose a new architecture for MTL problems called a routing network, which consists of two trainable components: a router and a set of function blocks. Given an input, the router selects a function block from the set, applies it to the input, and passes the result back to the router, recursively up to a fixed recursion depth. If the router needs fewer iterations then it can decide to take a PASS action which leaves the current state unchanged. Intuitively, the architecture allows the network to dynamically self-organize in response to the input, sharing function blocks for different tasks when positive transfer is possible, and using separate blocks to prevent negative transfer.
The architecture is very general, allowing many possible router implementations. For example, the router can condition its decision on both the current activation and a task label or just one or the other. It can also condition on the depth (number of router invocations), filtering the function module choices to allow layering. In addition, it can condition its decision for one instance on what was historically decided for other instances, to encourage re-use of existing functions for improved compression. The function blocks may be simple fully-connected neural network layers or whole networks, as long as the dimensionality of each function block allows composition with the previous function block choice. They needn't even be the same type of layer. Any neural network or part of a network can be "routed" by adding its layers to the set of function blocks, making the architecture applicable to a wide range of problems. Because the routers make a sequence of hard decisions, which are not differentiable, we use reinforcement learning (RL) to train them. We discuss the training algorithm in Section 3.1, but one way we have modeled this as an RL problem is to create a separate RL agent for each task (assuming task labels are available in the dataset). Each such task agent learns its own policy for routing instances of that task through the function blocks.
To evaluate we have created a "routed" version of the convnet used in (Ravi & Larochelle, 2017) and use three image classification datasets adapted for MTL learning: a multi-task MNIST dataset that we created, a Mini-imagenet data split as introduced in (Vinyals et al., 2016), and CIFAR-100 (Krizhevsky, 2009), where each of the 20 label superclasses are treated as different tasks.¹ We conduct extensive experiments comparing against cross-stitch networks (Misra et al., 2016) and the popular strategy of joint training with layer sharing as described in (Caruana, 1997). Our results indicate a significant improvement in accuracy over these strong baselines with a speedup in convergence and often orders of magnitude improvement in training time over cross-stitch networks.
# 2 RELATED WORK
Work on multi-task deep learning (Caruana, 1997) traditionally includes significant hand design of neural network architectures, attempting to find the right mix of task-specific and shared parameters. For example, many architectures share low-level features like those learned in shallow layers of deep convolutional networks or word embeddings across tasks and add task-specific architectures in later layers. By contrast, in routing networks, we learn a fully dynamic, compositional model which can adjust its structure differently for each task.
Routing networks share a common goal with techniques for automated selective transfer learning using attention (Rajendran et al., 2017) and learning gating mechanisms between representations (Stollenga et al., 2014), (Misra et al., 2016), (Ruder et al., 2017). In the latter two papers, experiments are performed on just 2 tasks at a time. We consider up to 20 tasks in our experiments and compare directly to (Misra et al., 2016).
Our work is also related to mixtures of experts architectures (Jacobs et al., 1991), (Jordan & Jacobs, 1994) as well as their modern attention based (Riemer et al., 2016) and sparse (Shazeer et al., 2017) variants. The gating network in a typical mixtures of experts model takes in the input and chooses an appropriate weighting for the output of each expert network. This is generally implemented as a soft mixture decision as opposed to a hard routing decision, allowing the choice to be differentiable. Although the sparse and layer-wise variant presented in (Shazeer et al., 2017) does save some computational burden, the proposed end-to-end differentiable model is only an approximation and doesn't model important effects such as exploration vs. exploitation tradeoffs, despite their impact on the system. Mixtures of experts have recently been considered in the transfer learning setting (Aljundi et al., 2016); however, the decision process is modelled by an autoencoder-reconstruction-error-based heuristic and is not scaled to a large number of tasks.
In the use of dynamic representations, our work is also related to single task and multi-task models that learn to generate weights for an optimal neural network (Ha et al., 2016), (Ravi & Larochelle, 2017), (Munkhdalai & Yu, 2017). While these models are very powerful, they have trouble scaling to deep models with a large number of parameters (Wichrowska et al., 2017) without tricks to simplify the formulation. In contrast, we demonstrate that routing networks can be applied to create dynamic network architectures for architectures like convnets by routing some of their layers.
Our work extends an emerging line of recent research focused on automated architecture search. In this work, the goal is to reduce the burden on the practitioner by automatically learning black box algorithms that search for optimal architectures and hyperparameters. These include techniques based on reinforcement learning (Zoph & Le, 2017), (Baker et al., 2017), evolutionary algorithms (Miikkulainen et al., 2017), approximate random simulations (Brock et al., 2017), and adaptive growth (Cortes et al., 2016). To the best of our knowledge we are the first to apply this idea to multi-task learning. Our technique can learn to construct a very general class of architectures without the need for human intervention to manually choose which parameters will be shared and which will be kept task-specific.

¹All dataset splits and the code will be released with the publication of this paper.
Also related to our work is the literature on minimizing computation cost for single-task problems by conditional routing. These include decisions trained with REINFORCE (Denoyer & Gallinari, 2014), (Bengio et al., 2015), (Hamrick et al., 2017), Q-Learning (Liu & Deng, 2017), and actor-critic methods (McGill & Perona, 2017). Our approach differs however in the introduction of several novel elements. Specifically, our work explores the multi-task learning setting, it uses a multi-agent reinforcement learning training algorithm, and it is structured as a recursive decision process.
There is a large body of related work which focuses on continual learning, in which tasks are presented to the network one at a time, potentially over a long period of time. One interesting recent paper in this setting, which also uses the notion of routes ("paths"), but uses evolutionary algorithms instead of RL, is Fernando et al. (2017).
While a routing network is a novel artificial neural network formulation, the high-level idea of task specific "routing" as a cognitive function is well founded in biological studies and theories of the human brain (Gurney et al., 2001), (Buschman & Miller, 2010), (Stocco et al., 2010).
# 3 ROUTING NETWORKS
Figure 1: Routing (forward) example. The router recursively selects function blocks f13, f21, and f32 for input (v, t), producing ŷ = f32(f21(f13(v, t))).
A routing network consists of two components: a router and a set of function blocks, each of which can be any neural network layer. The router is a function which selects from among the function blocks given some input. Routing is the process of iteratively applying the router to select a sequence of function blocks to be composed and applied to the input vector. This process is illustrated in Figure 1. The input to the routing network is an instance to be classified (v, t), where v ∈ R^d is a representation vector of dimension d and t is an integer task identifier. The router is given v, t and a depth (=1), the depth of the recursion, and selects from among a set of function block choices available at depth 1, {f13, f12, f11}, picking f13, which is indicated with a dashed line. f13 is applied to the input (v, t) to produce an output activation. The router again chooses a function block from those available at depth 2 (if the function blocks are of different dimensions then the router is constrained to select dimensionally matched blocks to apply) and so on. Finally the router chooses a function block from the last (classification) layer function block set and produces the classification ŷ.
Algorithm 1 gives the routing procedure in detail. The algorithm takes as input a vector v, task label t and maximum recursion depth n. It iterates n times, choosing a function block on each iteration and applying it to produce an output representation vector. A special PASS action (see Appendix Section 7.2 for details) just skips to the next iteration. Some experiments don't require a task label and in that case we just pass a dummy value. For simplicity we assume the algorithm has access to the router function and function blocks and don't include them explicitly in the input. The router decision function router : R^d × Z+ × Z+ → {1, 2, ..., k, PASS} (for d the input representation dimension and k the number of function blocks) maps the current representation v, task label t ∈ Z+, and current depth i ∈ Z+ to the index of the function block to route next in the ordered set of function blocks.
Algorithm 1: Routing Algorithm
input : x, t, n: x ∈ R^d, d the representation dim; t integer task id; n max depth
output: v, the vector result of applying the composition of the selected functions to the input x

    v ← x
    for i in 1...n do
        a ← router(v, t, i)
        if a ≠ PASS then
            v ← function_block_a(v)
    return v
If the routing network is run for d invocations then we say it has depth d. For N function blocks, a routing network run to a depth d can select from N^d distinct trainable functions (the paths in the network). Any neural network can be represented as a routing network by adding copies of its layers as routing network function blocks. We can group the function blocks for each network layer and constrain the router to pick from layer 0 function blocks at depth 0, layer 1 blocks at depth 1, and so on. If the number of function blocks differs from layer to layer in the original network, then the router may accommodate this by, for example, maintaining a separate decision function for each depth.
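The sketch below is one possible (hypothetical) PyTorch instantiation of this routing scheme: per-depth sets of function blocks, a PASS sentinel, and a forward pass following Algorithm 1 that also records the (state, action) trace used later for router training. All names are illustrative, and the router is any callable returning a block index or PASS.

```python
import torch
from torch import nn

PASS = -1  # sentinel action: leave the representation unchanged at this depth

class RoutingNetwork(nn.Module):
    def __init__(self, dim, num_blocks, max_depth):
        super().__init__()
        # One set of candidate function blocks per depth (layered routing).
        self.blocks = nn.ModuleList([
            nn.ModuleList([nn.Sequential(nn.Linear(dim, dim), nn.ReLU())
                           for _ in range(num_blocks)])
            for _ in range(max_depth)])
        self.max_depth = max_depth

    def forward(self, v, task, router):
        trace = []                               # (state, action) pairs for RL training
        for depth in range(self.max_depth):
            a = router(v, task, depth)           # int in {0, ..., num_blocks - 1} or PASS
            trace.append(((v.detach(), task, depth), a))
            if a != PASS:
                v = self.blocks[depth][a](v)
        return v, trace
```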
# 3.1 ROUTER TRAINING USING RL
Algorithm 2: Router-Trainer: Training of a Routing Network
input: A dataset D of samples (v, t, y), v the input representation, t an integer task label, y a ground-truth target label

    for each sample s = (v, t, y) ∈ D do
        1. Do a forward pass through the network, applying Algorithm 1 to sample s. Store a trace T = (S, A, R, r_final), where S = sequence of visited states (s_i); A = sequence of actions taken (a_i); R = sequence of immediate action rewards (r_i) for action a_i; and the final reward r_final. Take the last output as the network's prediction ŷ; the final reward r_final is +1 if the prediction ŷ is correct, -1 if not.
        2. Compute the loss L(ŷ, y) between prediction ŷ and ground truth y and backpropagate along the function blocks on the selected route to train their parameters.
        3. Use the trace T to train the router using the desired RL training algorithm.
We can view routing as an RL problem in the following way. The states of the MDP are the triples (v, t, i), where v ∈ R^d is a representation vector (initially the input), t is an integer task label for v, and i is the depth (initially 1). The actions are function block choices (and PASS) in {1, ..., k, PASS} for k the number of function blocks. Given a state s = (v, t, i), the router makes a decision about which action to take. For the non-PASS actions, the state is then updated s' = (v', t, i + 1) and the process continues. The PASS action produces the same representation vector again but increments the depth, so s' = (v, t, i + 1). We train the router policy using a variety of RL algorithms and settings which we will describe in detail in the next section.
Regardless of the RL algorithm applied, the router and function blocks are trained jointly. For each instance we route the instance through the network to produce a prediction ŷ. Along the way we record a trace of the states s_i and the actions a_i taken, as well as an immediate reward r_i for action a_i. When the last function block is chosen, we record a final reward which depends on the prediction ŷ and the true label y.
Figure 2: Training (backward) example. The loss L(ŷ, y) for ŷ = f32(f21(f13(v, t))) is backpropagated through the selected function blocks f32, f21, f13, while the routing decisions a1, a2, a3 receive rewards r1, r2, r3 and the final reward r_final.
We train the selected function blocks using SGD/backprop. In the example of Figure 1 this means computing gradients for f32, f21 and f13. We then use the computed trace to train the router using an RL algorithm. The high-level procedure is summarized in Algorithm 2 and illustrated in Figure 2. To keep the presentation uncluttered we assume the RL training algorithm has access to the router function, function blocks, loss function, and any specific hyper-parameters such as discount rate needed for the training, and don't include them explicitly in the input.
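A simplified sketch of one joint training step (Algorithm 2) for a single sample, using a REINFORCE-style router update. The router_policy object with sample_action/log_prob methods, the optimizers, the assumption that the final routed block produces class logits, and the absence of a baseline are all illustrative simplifications relative to the variants discussed below.

```python
import torch
import torch.nn.functional as F

def train_step(net, router_policy, x, task, y, opt_blocks, opt_router):
    # 1. Route the sample (Algorithm 1) and keep the (state, action) trace.
    logits, trace = net(x, task, router_policy.sample_action)

    # 2. Backprop the task loss through the function blocks on the chosen route.
    loss = F.cross_entropy(logits, y)
    opt_blocks.zero_grad()
    loss.backward()
    opt_blocks.step()

    # 3. Final reward: +1 if the prediction is correct, -1 otherwise (single sample here).
    r_final = 1.0 if logits.argmax(dim=1).item() == y.item() else -1.0

    # 4. REINFORCE: push up the log-probability of the taken actions, weighted by the return.
    pg_loss = sum(-r_final * router_policy.log_prob(state, action)
                  for state, action in trace)
    opt_router.zero_grad()
    pg_loss.backward()
    opt_router.step()
```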
# 3.1.1 REWARD DESIGN
A routing network uses two kinds of rewards: immediate action rewards r_i given in response to an action a_i, and a final reward r_final, given at the end of the routing. The final reward is a function of the network's performance. For the classification problems focused on in this paper, we set it to +1 if the prediction was correct (ŷ = y), and −1 otherwise. For other domains, such as regression domains, the negative loss (−L(ŷ, y)) could be used.
We experimented with an immediate reward that encourages the router to use fewer function blocks when possible. Since the number of function blocks per layer needed to maximize performance is not known ahead of time (we just take it to be the same as the number of tasks), we wanted to see whether we could achieve comparable accuracy while reducing the number of function blocks ever chosen by the router, allowing us to reduce the size of the network after training. We experimented with two such rewards, multiplied by a hyper-parameter ρ ∈ [0, 1]: the average number of times that block was chosen by the router historically, and the average historical probability of the router choosing that block. We found no significant difference between the two approaches and use the average probability in our experiments. We evaluated the effect of ρ on final performance and report the results in Figure 12 in the appendix. We see there that generally ρ = 0.0 (no collaboration reward) or a small value works best and that there is relatively little sensitivity to the choice in this range.
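The reward bookkeeping described above can be sketched as follows; rho is the collaboration-reward weight, avg_prob[depth][a] tracks the historical average probability of choosing block a at that depth, and the discounted-return helper with its gamma parameter is our own illustration of how per-decision returns could be assembled from the trace.

```python
def immediate_reward(avg_prob, depth, action, rho):
    # Collaboration reward: encourage re-use of blocks the router has historically preferred.
    return rho * avg_prob[depth][action]

def final_reward(prediction_correct):
    return 1.0 if prediction_correct else -1.0

def returns(immediate_rewards, r_final, gamma=1.0):
    # Per-decision return: discounted sum of later immediate rewards plus the final reward.
    out, running = [], r_final
    for r in reversed(immediate_rewards):
        running = r + gamma * running
        out.append(running)
    return list(reversed(out))
```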
3.1.2 RL ALGORITHMS
Figure 3: Task-based routing, in three variants: (a) Single, (b) Per-Task, (c) Dispatched. (value, task) is the input, consisting of value, the partial evaluation of the previous function block (or the input x), and the task label task. a_i is a routing agent; a_d is a dispatching agent.
To train the router we evaluate both single-agent and multi-agent RL strategies. Figure 3 shows three variations which we consider. In Figure 3(a) there is just a single agent which makes the routing decision. This is trained using either policy-gradient (PG) or Q-Learning in our experiments. Figure 3(b) shows a multi-agent approach. Here there are a fixed number of agents and a hard rule which assigns the input instance to an agent responsible for routing it. In our experiments we create one agent per task and use the input task label as an index to the agent responsible for routing that instance. Figure 3(c) shows a multi-agent approach in which there is an additional agent, denoted αd and called a dispatching agent, which learns to assign the input to an agent instead of using a fixed rule. For both of these multi-agent scenarios we additionally experiment with a MARL algorithm called Weighted Policy Learner (WPL).
We experiment with storing the policy both as a table and in the form of an approximator. The tabular representation has the invocation depth as its row dimension and the function block as its column dimension, with the entries containing the probability of choosing a given function block at a given depth. The approximator representation can consist of either one MLP that is passed the depth (represented in 1-hot), or a vector of d MLPs, one for each decision/depth.
Both the Q-Learning and Policy Gradient algorithms are applicable with tabular and approximation-function policy representations. We use REINFORCE (Williams, 1992) to train both the approximation-function and tabular representations. For Q-Learning the table stores the Q-values in the entries. We use vanilla Q-Learning (Watkins, 1989) to train the tabular representation and train the approximators to minimize the ℓ2 norm of the temporal difference error.
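A sketch of the tabular Q-Learning variant; in the multi-agent setting one such table would be kept per task agent. The epsilon-greedy exploration and the hyper-parameter values are assumptions of this sketch.

```python
import numpy as np

class TabularQRouter:
    """Q-table with depths as rows and function blocks as columns."""
    def __init__(self, num_layers, num_blocks, lr=0.1, gamma=0.99, eps=0.1):
        self.q = np.zeros((num_layers, num_blocks))
        self.lr, self.gamma, self.eps = lr, gamma, eps

    def select(self, depth):
        if np.random.rand() < self.eps:          # epsilon-greedy exploration
            return int(np.random.randint(self.q.shape[1]))
        return int(self.q[depth].argmax())

    def update(self, depth, action, reward, next_depth, done):
        # Vanilla Q-Learning temporal-difference update.
        target = reward if done else reward + self.gamma * self.q[next_depth].max()
        self.q[depth, action] += self.lr * (target - self.q[depth, action])
```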
Implementing the router decision policy using multiple agents turns the routing problem into a stochastic game, which is a multi-agent extension of an MDP. In stochastic games multiple agents interact in the environment and the expected return for any given policy may change without any action on that agentâs part. In this view incompatible agents need to compete for blocks to train, since negative transfer will make collaboration unattractive, while compatible agents can gain by
sharing function blocks. The agents' (locally) optimal policies will correspond to the game's Nash equilibrium.2
For routing networks, the environment is non-stationary since the function blocks are being trained as well as the router policy. This makes the training considerably more difficult than in the single-agent (MDP) setting. We have experimented with single-agent policy gradient methods such as REINFORCE but find they are less well adapted to the changing environment and to changes in other agents' behavior, which may degrade their performance in this setting.
One MARL algorithm specifically designed to address this problem, and which has also been shown to converge in non-stationary environments, is the Weighted Policy Learner (WPL) algorithm (Abdallah & Lesser, 2006), shown in Algorithm 3. WPL is a PG algorithm designed to dampen oscillation and push the agents to converge more quickly. This is done by scaling the gradient of the expected return for an action a according to the probability of taking that action π(a) (if the gradient is positive) or 1 − π(a) (if the gradient is negative). Intuitively, this has the effect of slowing down the learning rate when the policy is moving away from a Nash equilibrium strategy and increasing it when it approaches one. The full WPL algorithm is shown in Algorithm 3. It is assumed that the historical average return R̄_i for each action a_i is initialized to 0 before the start of training. The function simplex-projection projects the updated policy values to make it a valid probability distribution. The projection is defined as clip(π)/Σ clip(π), where clip(x) = max(0, min(1, x)). The states S in the trace are not used by the WPL algorithm.
Algorithm 3: Weighted Policy Learner
input: A trace T = (S, A, R, r_final); n, the maximum depth; R̄, the historical average returns (initialized to 0 at the start of training); γ, the discount factor; and λ, the policy learning rate
output: An updated router policy π
for each action a_i ∈ A do
    Compute the return: R_i ← γ^(n−i) r_final + Σ_{j=i}^{n} γ^(j−i) r_j
    Update the average return: R̄_i ← (1 − λ) R̄_i + λ R_i
    Compute the gradient: Δ(a_i) ← R_i − R̄_i
    Update the policy:
        if Δ(a_i) < 0 then Δ(a_i) ← Δ(a_i) · π(a_i)
        else Δ(a_i) ← Δ(a_i) · (1 − π(a_i))
π ← simplex-projection(π + λ Δ)
Details, including convergence proofs and more examples giving the intuition behind the algorithm, can be found in (Abdallah & Lesser, 2006). A longer explanation of the algorithm can be found in Section 7.4 in the appendix. The WPL update is defined only for the tabular setting; it is future work to adapt it to work with function approximators.
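For concreteness, a Python sketch of one WPL update following our reading of Algorithm 3; the exact discounting and learning-rate details are assumptions of this reconstruction rather than a verified transcription.

```python
import numpy as np

def wpl_update(policy, avg_return, trace, gamma, lr):
    """policy[i] and avg_return[i] are arrays over actions at depth i."""
    states, actions, rewards, r_final = trace   # states are unused by WPL
    n = len(actions)
    for i, a in enumerate(actions):
        # Discounted return for the action taken at depth i (our reading).
        R = gamma ** (n - i) * r_final + sum(
            gamma ** (j - i) * rewards[j] for j in range(i, n))
        # Historical average return for this action.
        avg_return[i][a] = (1 - lr) * avg_return[i][a] + lr * R
        delta = R - avg_return[i][a]
        # Scale by pi(a) or 1 - pi(a) to dampen oscillation.
        delta *= policy[i][a] if delta < 0 else (1 - policy[i][a])
        policy[i][a] += lr * delta
        # Simplex projection: clip to [0, 1] and renormalise.
        clipped = np.clip(policy[i], 0.0, 1.0)
        policy[i] = clipped / (clipped.sum() + 1e-8)
    return policy, avg_return
```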
As we have described it, the training of the router and function blocks is performed independently after computing the loss. We have also experimented with adding the gradients from the router choices Δ(ai) to those for the function blocks which produce their input. We found no advantage but leave a more thorough investigation for future work.
# 4 QUANTITATIVE RESULTS
We experiment with three datasets: multi-task versions of MNIST (MNIST-MTL) (Lecun et al., 1998), Mini-Imagenet (MIN-MTL) (Vinyals et al., 2016) as introduced by (Ravi & Larochelle, 2017), and CIFAR-100 (CIFAR-MTL) (Krizhevsky, 2009) where we treat the 20 superclasses as tasks. In the binary MNIST-MTL dataset, the task is to differentiate instances of a given class c from non-instances. We create 10 tasks and for each we use 1k instances of the positive class c and 1k each of the remaining 9 negative classes for a total of 10k instances per task during training, which we then test on 200 samples per task (2k samples in total). MIN-MTL is a smaller version of ImageNet (Deng et al., 2009) which is easier to train in reasonable time periods. For mini-ImageNet we randomly choose 50 labels and create tasks from 10 disjoint random subsets of 5 labels each chosen from these. Each label has 800 training instances and 50 testing instances â so 4k training and 250 testing instances per task. For all 10 tasks we have a total of 40k training instances. Finally,
2A Nash equilibrium is a set of policies, one for each agent, where each agent's expected return will be lower if that agent unilaterally changes its policy.
CIFAR-100 has coarse and fine labels for its instances. We follow existing work (Krizhevsky, 2009), creating one task for each of the 20 coarse labels and including 500 instances for each of the corresponding fine labels. There are 20 tasks, each with 2.5k training instances and 500 testing instances. All results are reported on the test set and are averaged over 3 runs. The data are summarized in Table 1.
Each of these datasets has interesting characteristics which challenge the learning in different ways. CIFAR-MTL is a "natural" dataset whose tasks correspond to human categories. MIN-MTL is randomly generated so will have less task coherence. This makes positive transfer more difficult to achieve and negative transfer more of a problem. And MNIST-MTL, while simple, has the difficult property that the same instance can appear with different labels in different tasks, causing interference. For example, in the "0 vs other digits" task, "0" appears with a positive label, but in the "1 vs other digits" task it appears with a negative label.
Our experiments are conducted on a convnet architecture (SimpleConvNet) which appeared recently in (Ravi & Larochelle, 2017). This model has 4 convolutional layers, each consisting of a 3x3 convolution and 32 filters, followed by batch normalization and a ReLU. The convolutional layers are followed by 3 fully connected layers, with 128 hidden units each. Our routed version of the network routes the 3 fully connected layers, and for each routed layer we supply one randomly initialized function block per task in the dataset. When we use neural net approximators for the router agents, they are always 2-layer MLPs with a hidden dimension of 64. A state (v, t, i) is encoded for input to the approximator by concatenating v with a 1-hot representation of t (if used). That is, encoding(s) = concat(v, one_hot(t)).
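As a small illustration, the state encoding used for the approximators is simply the concatenation above (a sketch, with NumPy standing in for whatever tensor library is used):

```python
import numpy as np

def encode_state(v, task, num_tasks):
    # encoding(s) = concat(v, one_hot(t))
    one_hot = np.zeros(num_tasks, dtype=np.float32)
    one_hot[task] = 1.0
    return np.concatenate([v, one_hot])
```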
Dataset      # Training   # Testing
CIFAR-MTL    50k          10k
MIN-MTL      40k          2.5k
MNIST-MTL    100k         2k

Table 1: Dataset training and testing splits
We did a parameter sweep to find the best learning rate and ρ value for each algorithm on each dataset. We use ρ = 0.0 (no collaboration reward) for CIFAR-MTL and MIN-MTL and ρ = 0.3 for MNIST-MTL. The learning rate is initialized to 10−2 and annealed by dividing by 10 every 20 epochs. We tried both regular SGD as well as Adam (Kingma & Ba, 2014), but chose SGD as it resulted in marginally better performance. The SimpleConvNet has batch normalization layers but we use no dropout.
For one experiment, we dedicate a special âPASSâ action to allow the agents to skip layers dur- ing training which leaves the current state unchanged (routing-all-fc recurrent/+PASS). A detailed description of the PASS action is provided in the Appendix in Section 7.2.
All data are presented in Table 2 in the Appendix.
In the first experiment, shown in Figure 4, we compare different RL training algorithms on CIFAR-MTL. We compare five algorithms: MARL:WPL; a single agent REINFORCE learner with a separate approximation function per layer; an agent-per-task REINFORCE learner which maintains a separate approximation function for each layer; an agent-per-task Q learner with a separate approximation function per layer; and an agent-per-task Q learner with a separate table for each layer. The best performer is the WPL algorithm, which outperforms the nearest competitor, tabular Q-Learning, by about 4%. We can see that (1) the WPL algorithm works better than a similar vanilla PG, which has trouble learning; (2) having multiple agents works better than having a single agent; and (3) the tabular versions, which just use the task and depth to make their predictions, work better here than the approximation versions, which additionally use the representation vector to predict the next action.
The next experiment compares the best performing algorithm, WPL, against other routing approaches, including the already introduced REINFORCE: single agent (for which WPL is not applicable). All of these algorithms route the fully-connected layers of the SimpleConvNet using the layering approach we discussed earlier. To make the next comparison clear, we rename MARL:WPL to routing-all-fc in Figure 5 to reflect the fact that it routes all the fully connected layers of the SimpleConvNet, and rename REINFORCE: single agent to routing-all-fc single agent. We compare against several other approaches. One approach, routing-all-fc-recurrent/+PASS, has the same setup as routing-all-fc, but does not constrain the router to pick only from layer 0 function blocks at depth 0, etc. It is allowed to choose any function block from two of the layers (since the first two routed layers
Figure 4: Influence of the RL algorithm on CIFAR-MTL. Detailed descriptions of the implementation of each approach can be found in the Appendix in Section 7.3.

Figure 5: Comparison of Routing Architectures on CIFAR-MTL. Implementation details of each approach can be found in the Appendix in Section 7.3.
have identical input and output dimensions; the last is the classification layer). Another approach, soft-mixture-fc, is a soft version of the router architecture. This soft version uses the same function blocks as the routed version, but replaces the hard selection with a trained softmax attention (see the discussion below on cross-stitch networks for the details). We also compare against the single agent architecture shown in Figure 3(a), called routing-all-fc single agent, and the dispatched architecture shown in Figure 3(c), called routing-all-fc dispatched. Neither of these approached the performance of the per-task agents. The best performer by a large margin is routing-all-fc, the fully routed WPL algorithm.
We next compare routing-all-fc on different domains against the cross-stitch networks of Misra et al. (2016) and two challenging baselines: task speciï¬c-1-fc and task speciï¬c-all-fc, described below.
Cross-stitch networks (Misra et al., 2016) are a kind of linear-combination model for multi-task learning. They maintain one model per task with a shared input layer, and "cross stitch" connection layers, which allow sharing between tasks. Instead of selecting a single function block in the next layer to route to, a cross-stitch network routes to all the function blocks simultaneously, with the input for a function block i in layer l given by a linear combination of the activations computed by all the function blocks of layer l−1. That is: input_{l,i} = Σ_j w^l_{ij} v_{l−1,j}, for learned weights w^l_{ij} and layer l−1 activations v_{l−1,j}. For our experiments, we add a cross-stitch layer to each of the routed layers of SimpleConvNet. We additionally compare to a similar "soft routing" version, soft-mixture-fc, in Figure 5. Soft routing uses a softmax to normalize the weights used to combine the activations of previous layers, and it shares parameters for a given layer so that w^l_{ij} = w^l_{i'j} for all i, i', l.
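A sketch of a cross-stitch connection as described above (PyTorch, illustrative only); the soft-routing variant would normalise the weights with a softmax and share them across target blocks within a layer.

```python
import torch
import torch.nn as nn

class CrossStitch(nn.Module):
    """Mixes the activations of all blocks of the previous layer:
    input_{l,i} = sum_j w[i, j] * v_{l-1, j}."""
    def __init__(self, num_blocks):
        super().__init__()
        self.weights = nn.Parameter(torch.eye(num_blocks))

    def forward(self, activations):              # list of tensors v_{l-1, j}
        stacked = torch.stack(activations, dim=0)           # (J, batch, dim)
        mixed = torch.einsum('ij,jbd->ibd', self.weights, stacked)
        return [mixed[i] for i in range(mixed.shape[0])]
```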
Figure 6: Results on domain CIFAR-MTL
# Figure 7: Results on domain MIN-MTL (mini ImageNet)
The task-speciï¬c-1-fc baseline has a separate last fully connected layer for each task and shares the rest of the layers for all tasks. The task speciï¬c-all-fc baseline has a separate set of all the fully con- nected layers for each task. These baseline architectures allow considerable sharing of parameters but also grant the network private parameters for each task to avoid interference. However, unlike routing networks, the choice of which parameters are shared for which tasks, and which parameters are task-private is made statically in the architecture, independent of task.
The results are shown in Figures 6, 7, and 8. In each case the routing net routing-all-fc performs consistently better than the cross-stitch networks and the baselines. On CIFAR-MTL, the routing net beats cross-stitch networks by 7% and the next closest baseline task-speciï¬c-1-fc by 11%. On MIN-MTL, the routing net beats cross-stitch networks by about 2% and the nearest baseline task- speciï¬c-1-fc by about 6%. We surmise that the results are better on CIFAR-MTL because the task instances have more in common whereas the MIN-MTL tasks are randomly constructed, making sharing less proï¬table.
On MNIST-MTL the random baseline is 90%. We experimented with several learning rates but were unable to get the cross-stitch networks to train well here. Routing nets beats the cross-stitch networks by 9% and the nearest baseline (task-speciï¬c-all-fc) by 3%. The soft version also had trouble learning on this dataset.
In all these experiments routing makes a signiï¬cant difference over both cross-stitch networks and the baselines and we conclude that a dynamic policy which learns the function blocks to compose on a per-task basis yields better accuracy and sharper convergence than simple static sharing baselines or a soft attention approach.
In addition, router training is much faster. On CIFAR-MTL, for example, training time on a stable compute cluster was reduced from roughly 38 hours to 5.6, an 85% improvement. We have conducted a set of scaling experiments to compare the training computation of routing networks and cross-stitch networks trained with 2, 3, 5, and 10 function blocks. The results are shown in the appendix in Figure 15. Routing networks consistently perform better than cross-stitch networks and the baselines across all these problems. Adding function blocks has no apparent effect on the computation involved in training routing networks on a dataset of a given size. On the other hand, cross-stitch networks have a soft routing policy that scales computation linearly with the number of function blocks. Because the soft policy backpropagates through all function blocks and the hard routing policy only backpropagates through the selected block, the hard policy can much more easily scale to many-task learning scenarios that require many diverse types of functional primitives.
To explore why the multi-agent approach seems to do better than the single-agent, we manually compared their policy dynamics for several CIFAR-MTL examples. For these experiments ρ = 0.0, so there is no collaboration reward which might encourage less diversity in the agent choices. In the cases we examined we found that the single agent often chose just 1 or 2 function blocks at each depth, and then routed all tasks to those. We suspect that there is simply too little signal available to the agent in the early, random stages, and once a bias is established its decisions suffer from a lack of diversity.
The routing network on the other hand learns a policy which, unlike the baseline static models, partitions the network quite differently for each task, and also achieves considerable diversity in its choices as can be seen in Figure 11. This ï¬g- ure shows the routing decisions made over the whole MNIST MTL dataset. Each task is la- beled at the top and the decisions for each of the three routed layers are shown below. We believe that because the routing network has separate policies for each task, it is less sen- sitive to a bias for one or two function blocks and each agent learns more independently what works for its assigned task.
Figure 8: Results on domain MNIST-MTL
Figure 10: The Probabilities of all Agents of taking Block 7 for the ï¬rst 100 samples of each task (totalling 1000 samples) of MNIST-MTL
Figure 9: The Policies of all Agents for the ï¬rst function block layer for the ï¬rst 100 samples of each task of MNIST-MTL
# 5 QUALITATIVE RESULTS
To better understand the agent interaction we have created several views of the policy dynamics. First, in Figure 9, we chart the policy over time for the ï¬rst decision. Each rectangle labeled Ti on the left represents the evolution of the agentâs policy for that task. For each task, the horizontal axis is number of samples per task and the vertical axis is actions (decisions). Each vertical slice shows the probability distribution over actions after having seen that many samples of its task, with darker shades indicating higher probability. From this picture we can see that, in the beginning, all task agents have high entropy. As more samples are processed each agent develops several candidate function blocks to use for its task but eventually all agents converge to close to 100% probability for one particular block. In the language of games, the agents ï¬nd a pure strategy for routing.
In the next view of the dynamics, we pick one particular function block (block 7) and plot the probability, for each agent, of choosing that block over time. The horizontal axis is time (sample) and the vertical axis is the probability of choosing block 7. Each colored curve corresponds to a different task agent. Here we can see that there is considerable oscillation over time until two agents, pink and green, emerge as the "victors" for the use of block 7 and each assign close to 100% probability for choosing it in routing their respective tasks. It is interesting to see that the eventual winners, pink and green, emerge early on as strongly interested in block 7. We have noticed this pattern in the analysis of other blocks and speculate that the agents who want to use the block are being pulled away from their early Nash equilibrium as other agents try to train the block away.
Figure 11: An actual routing map for MNIST-MTL.
Finally, in Figure 11 we show a map of the routing for MNIST-MTL. Here tasks are at the top and each layer below represents one routing decision. Conventional wisdom has it that networks will beneï¬t from sharing early, using the ï¬rst layers for common representations, diverging later to accommodate differences in the tasks. This is the setup for our baselines. It is interesting to see
that this is not what the network learns on its own. Here we see that the agents have converged on a strategy which ï¬rst uses 7 function blocks, then compresses to just 4, then again expands to use 5. It is not clear if this is an optimal strategy but it does certainly give improvement over the static baselines.
# 6 FUTURE WORK
We have presented a general architecture for routing and multi-agent router training algorithm which performs signiï¬cantly better than cross-stitch networks and baselines and other single-agent ap- proaches. The paradigm can easily be applied to a state-of-the-art network to allow it to learn to dynamically adjust its representations.
As described in the section on Routing Networks, the state space to be learned grows exponentially with the depth of the routing, making it challenging to scale the routing to deeper networks in their entirety. It would be interesting to try hierarchical RL techniques (Barto & Mahadevan (2003)) here.
Our most successful experiments have used the multi-agent architecture with one agent per task, trained with the Weighted Policy Learner algorithm (Algorithm 3). Currently this approach is tabular but we are investigating ways to adapt it to use neural net approximators.
We have also tried routing networks in an online setting, training over a sequence of tasks for few shot learning. To handle the iterative addition of new tasks we add a new routing agent for each and overï¬t it on the few shot examples while training the function modules with a very slow learning rate. Our results so far have been mixed, but this is a very useful setting and we plan to return to this problem.
# REFERENCES
Sherief Abdallah and Victor Lesser. Learning the task allocation game. In Proceedings of the ï¬fth international joint conference on Autonomous agents and multiagent systems, pp. 850â857. ACM, 2006. URL http://dl.acm.org/citation.cfm?id=1160786.
Rahaf Aljundi, Punarjay Chakravarty, and Tinne Tuytelaars. Expert gate: Lifelong learning with a network of experts. arXiv preprint arXiv:1611.06194, 2016.
Bowen Baker, Otkrist Gupta, Nikhil Naik, and Ramesh Raskar. Designing neural network architec- tures using reinforcement learning. ICLR, 2017.
Andrew G. Barto and Sridhar Mahadevan. Recent advances in hierarchical reinforcement learning. Discrete Event Dynamic Systems, 13(4):341â379, 2003. URL http://link.springer. com/article/10.1023/A:1025696116075.
Emmanuel Bengio, Pierre-Luc Bacon, Joelle Pineau, and Doina Precup. Conditional computation in neural networks for faster models. CoRR, abs/1511.06297, 2015. URL http://arxiv.org/ abs/1511.06297.
Andrew Brock, Theodore Lim, James M. Ritchie, and Nick Weston. SMASH: one-shot model archi- tecture search through hypernetworks. CoRR, abs/1708.05344, 2017. URL http://arxiv. org/abs/1708.05344.
Timothy J Buschman and Earl K Miller. Shifting the spotlight of attention: evidence for discrete computations in cognition. Frontiers in human neuroscience, 4, 2010.
Rich Caruana. Multitask learning. Machine Learning, 28(1):41â75, Jul 1997. ISSN 1573-0565. doi: 10.1023/A:1007379606734. URL https://doi.org/10.1023/A:1007379606734.
Corinna Cortes, Xavi Gonzalvo, Vitaly Kuznetsov, Mehryar Mohri, and Scott Yang. Adanet: Adap- tive structural learning of artiï¬cial neural networks. arXiv preprint arXiv:1607.01097, 2016.
J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A Large-Scale Hierarchical Image Database. In CVPR09, 2009.
Ludovic Denoyer and Patrick Gallinari. Deep sequential neural network. arXiv preprint arXiv:1410.0510, 2014.
Chrisantha Fernando, Dylan Banarse, Charles Blundell, Yori Zwols, David Ha, Andrei A. Rusu, Alexander Pritzel, and Daan Wierstra. Pathnet: Evolution channels gradient descent in super neural networks. CoRR, abs/1701.08734, 2017. URL http://arxiv.org/abs/1701. 08734.
Kevin Gurney, Tony J Prescott, and Peter Redgrave. A computational model of action selection in the basal ganglia. i. a new functional anatomy. Biological cybernetics, 84(6):401â410, 2001.
David Ha, Andrew Dai, and Quoc V Le. Hypernetworks. arXiv preprint arXiv:1609.09106, 2016.
Jessica B Hamrick, Andrew J Ballard, Razvan Pascanu, Oriol Vinyals, Nicolas Heess, and Peter W Battaglia. Metacontrol for adaptive imagination-based optimization. ICLR, 2017.
Robert A Jacobs, Michael I Jordan, Steven J Nowlan, and Geoffrey E Hinton. Adaptive mixtures of local experts. Neural computation, 3(1):79â87, 1991.
Michael I Jordan and Robert A Jacobs. Hierarchical mixtures of experts and the em algorithm. Neural computation, 6(2):181â214, 1994.
Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014. URL http://arxiv.org/abs/1412.6980.
Alex Krizhevsky. Learning multiple layers of features from tiny images. 2009.
Yann Lecun, Lon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. In Proceedings of the IEEE, pp. 2278â2324, 1998.
Lanlan Liu and Jia Deng. Dynamic deep neural networks: Optimizing accuracy-efï¬ciency trade-offs by selective execution. arXiv preprint arXiv:1701.00299, 2017.
Mason McGill and Pietro Perona. Deciding how to decide: Dynamic routing in artiï¬cial neural networks. International Conference on Machine Learning, 2017.
Risto Miikkulainen, Jason Liang, Elliot Meyerson, Aditya Rawal, Dan Fink, Olivier Francon, Bala Raju, Arshak Navruzyan, Nigel Duffy, and Babak Hodjat. Evolving deep neural networks. arXiv preprint arXiv:1703.00548, 2017.
Ishan Misra, Abhinav Shrivastava, Abhinav Gupta, and Martial Hebert. Cross-stitch networks for In Proceedings of the IEEE Conference on Computer Vision and Pattern multi-task learning. Recognition, pp. 3994â4003, 2016.
Tsendsuren Munkhdalai and Hong Yu. Meta networks. International Conference on Machine Learn- ing, 2017.
Janarthanan Rajendran, P. Prasanna, Balaraman Ravindran, and Mitesh M. Khapra. ADAAPT: attend, adapt, and transfer: Attentative deep architecture for adaptive policy transfer from multiple sources in the same domain. ICLR, abs/1510.02879, 2017. URL http://arxiv.org/abs/ 1510.02879.
Sachin Ravi and Hugo Larochelle. Optimization as a model for few-shot learning. ICLR, 2017.
Matthew Riemer, Aditya Vempaty, Flavio Calmon, Fenno Heath, Richard Hull, and Elham Khabiri. Correcting forecasts with multifactor neural attention. In International Conference on Machine Learning, pp. 3010â3019, 2016.
Sebastian Ruder, Joachim Bingel, Isabelle Augenstein, and Anders Søgaard. Sluice networks: Learning what to share between loosely related tasks. arXiv preprint arXiv:1705.08142, 2017.
Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, and Jeff Dean. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. ICLR, 2017.
Andrea Stocco, Christian Lebiere, and John R Anderson. Conditional routing of information to the cortex: A model of the basal ganglias role in cognitive coordination. Psychological review, 117 (2):541, 2010.
Marijn F Stollenga, Jonathan Masci, Faustino Gomez, and Juergen Schmidhuber. Deep networks with internal selective attention through feedback connections. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger (eds.), Advances in Neural Information Pro- cessing Systems 27, pp. 3545â3553. Curran Associates, Inc., 2014.
Oriol Vinyals, Charles Blundell, Timothy P. Lillicrap, Koray Kavukcuoglu, and Daan Wierstra. Matching networks for one shot learning. CoRR, abs/1606.04080, 2016. URL http://arxiv. org/abs/1606.04080.
Christopher John Cornish Hellaby Watkins. Learning from delayed rewards. PhD thesis, Kingâs College, Cambridge, 1989.
Olga Wichrowska, Niru Maheswaranathan, Matthew W Hoffman, Sergio Gomez Colmenarejo, Misha Denil, Nando de Freitas, and Jascha Sohl-Dickstein. Learned optimizers that scale and generalize. arXiv preprint arXiv:1703.04813, 2017.
Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229â256, 1992. ISSN 0885-6125.
Barret Zoph and Quoc V Le. Neural architecture search with reinforcement learning. ICLR, 2017.
# 7 APPENDIX
7.1 IMPACT OF RHO
Figure 12: Influence of the "collaboration reward" ρ on CIFAR-MTL. The architecture is routing-all-fc with WPL routing agents.
Figure 13: Comparison of per-task training cost for cross-stitch and routing networks. We add a function block per task and normalize the training time per epoch by dividing by the number of tasks to isolate the effect of adding function blocks on computation.
7.2 THE PASS ACTION
When routing networks, some resulting sets of function blocks can be applied repeatedly. While there might be other constraints, the prevalent one is dimensionality: input and output dimensions need to match. Applied to the SimpleConvNet architecture used throughout the paper, this means that, of the fc layers (convolution → 48), (48 → 48), (48 → #classes), the middle transformation can be applied an arbitrary number of times. In this case, the routing network becomes fully recurrent and the PASS action is applicable. This allows the network to shorten the recursion depth.
7.3 OVERVIEW OF IMPLEMENTATIONS
We have tested 9 different implementation variants of the routing architectures. The architectures are summarized in Tables 3 and 4. The columns are:
#Agents refers to how many agents are used to implement the router. In most of the experiments, each router consists of one agent per task. However, as described in 3.1, there are implementations with 1 and #tasks + 1 agents.
Epoch                                  1    5   10   20   50  100
RL (Figure 4)
  REINFORCE: approx                   20   20   20   20   20   20
  Qlearning: approx                   20   20   20   20   24   25
  Qlearning: table                    20   36   47   50   55   55
  MARL-WPL: table                     31   53   57   58   60   60
arch (Figure 5)
  routing-all-fc                      31   53   57   58   60   60
  routing-all-fc recursive            31   43   45   48   48   46
  routing-all-fc dispatched           20   23   28   37   42   41
  soft mixture-all-fc                 20   24   27   30   32   35
  routing-all-fc single agent         20   23   33   42   44   44
CIFAR (Figure 6)
  routing-all-fc                      31   53   57   58   60   60
  task specific-all-fc                21   29   33   36   42   42
  task specific-1-fc                  27   34   39   42   48   49
  cross stitch-all-fc                 26   37   42   49   52   53
MIN (Figure 7)
  routing-all-fc                      34   54   57   55   58   57
  task specific-all-fc                22   30   37   43   47   48
  task specific-1fc                   29   38   43   46   51   51
  cross-stitch-all-fc                 29   41   48   53   56   55
MNIST (Figure 8)
  routing-all-fc                      90   90   98   99   99   99
  task specific-all-fc                90   91   94   95   95   96
  task specific-1fc                   90   90   91   92   93   95
  soft mixture-all-fc                 90   90   90   90   90   90
  cross-stitch-all-fc                 90   90   90   90   90   90

Table 2: Numeric results (in % accuracy) for Figures 4 through 8
Figure 15: Results on the first n tasks of CIFAR-MTL: (a) first 2 tasks, (b) first 3 tasks, (c) first 5 tasks, (d) first 10 tasks.
Name         Num Agents   Policy Representation                          Part of State = (v, t, d) Used
MARL:WPL     Num Tasks    Tabular (num layers x num function blocks)     t, d
REINFORCE    Num Tasks    Vector (num layers) of approx functions        v, t, d
Q-Learning   Num Tasks    Vector (num layers) of approx functions        v, t, d
Q-Learning   Num Tasks    Tabular (num layers x num function blocks)     t, d

Table 3: Implementation details for Figure 4. All approx functions are 2 layer MLPs with a hidden dim of 64.
Name                          Num Agents      Policy Representation                                     Part of State = (v, t, d) Used
routing-all-fc                Num Tasks       Tabular (num layers x num function blocks)                t, d
routing-all-fc non-layered    Num Tasks       Tabular (num layers x num function blocks)                t, d
soft-routing-all-fc           Num Tasks       Vector (num layers) of approx functions                   v, t, d
dispatched-routing-all-fc     Num Tasks + 1   Vector (num layers) of approx functions + dispatcher      v, t, d
single-agent-routing-all-fc   1               Vector (num layers) of approx functions                   v, t, d

Table 4: Implementation details for Figure 5. All approx functions are 2 layer MLPs with a hidden dim of 64.
Policy Representation There are two dominant representation variations, as described in Section 3.1. In the first, the policy is stored as a table. Since the table needs to store values for each of the different layers of the routing network, it is of size num layers x num actions. In the second, the policy is represented as a vector of MLPs with a hidden layer of dimension 64, one for each layer of the routing network. In this case the input to the MLP is the representation vector v concatenated with a one-hot representation of the task identifier.
Policy Input describes which parts of the state are used in the decision of the routing action. For tabular policies, the task is used to index the agent responsible for handling that task. Each agent then uses the depth as a row index into the table. For approximation-based policies, there are two variations. For the single-agent case, the depth is used to index an approximation function which takes as input concat(v, one-hot(t)). For the multi-agent (non-dispatched) case, the task label is used to index the agent and then the depth is used to index the corresponding approximation function for that depth, which is given concat(v, one-hot(t)) as input. In the dispatched case, the dispatcher is given concat(v, one-hot(t)) and predicts an agent index. That agent uses the depth to find the approximation function for that depth, which is then given concat(v, one-hot(t)) as input.
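As an illustration, agent selection in the per-task and dispatched variants could look like the following sketch; the dispatcher and agent interfaces are assumptions, not the paper's code.

```python
def select_routing_action(state, agents, dispatcher=None):
    """state = (v, t, depth); agents is a list of per-task routing agents."""
    v, t, depth = state
    if dispatcher is None:
        agent = agents[t]                        # fixed rule: index by task label
    else:
        agent = agents[dispatcher.select(v, t)]  # learned dispatching agent
    return agent.select(v, t, depth)
```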
7.4 EXPLANATION OF THE WEIGHTED POLICY LEARNER (WPL) ALGORITHM
The WPL algorithm is a multi-agent policy gradient algorithm designed to help dampen policy oscillation and encourage convergence. It does this by slowly scaling down the learning rate for an agent after a gradient change in that agent's policy. It determines when there has been a gradient change by using the difference between the immediate reward and the historical average reward for the action taken. Depending on the sign of the gradient, the algorithm is in one of two scenarios. If the gradient is positive then it is scaled by 1 − π(ai). Over time, if the gradient remains positive it will cause π(ai) to increase, and so 1 − π(ai) will go to 0, slowing the learning. If the gradient is negative then it is scaled by π(ai). Here again, if the gradient remains negative over time it will cause π(ai) to decrease, eventually to 0, slowing the learning again. Slowing the learning after gradient changes dampens the policy oscillation and helps drive the policies towards convergence.
{
"id": "1701.00299"
} |
1711.00740 | Learning to Represent Programs with Graphs | Learning tasks on source code (i.e., formal languages) have been considered
recently, but most work has tried to transfer natural language methods and does
not capitalize on the unique opportunities offered by code's known syntax. For
example, long-range dependencies induced by using the same variable or function
in distant locations are often not considered. We propose to use graphs to
represent both the syntactic and semantic structure of code and use graph-based
deep learning methods to learn to reason over program structures.
In this work, we present how to construct graphs from source code and how to
scale Gated Graph Neural Networks training to such large graphs. We evaluate
our method on two tasks: VarNaming, in which a network attempts to predict the
name of a variable given its usage, and VarMisuse, in which the network learns
to reason about selecting the correct variable that should be used at a given
program location. Our comparison to methods that use less structured program
representations shows the advantages of modeling known structure, and suggests
that our models learn to infer meaningful names and to solve the VarMisuse task
in many cases. Additionally, our testing showed that VarMisuse identifies a
number of bugs in mature open-source projects. | http://arxiv.org/pdf/1711.00740 | Miltiadis Allamanis, Marc Brockschmidt, Mahmoud Khademi | cs.LG, cs.AI, cs.PL, cs.SE | Published in ICLR 2018. arXiv admin note: text overlap with
arXiv:1705.07867 | null | cs.LG | 20171101 | 20180504 |
Published as a conference paper at ICLR 2018
# LEARNING TO REPRESENT PROGRAMS WITH GRAPHS
Miltiadis Allamanis Microsoft Research Cambridge, UK miallama@microsoft.com
# Marc Brockschmidt Microsoft Research Cambridge, UK mabrocks@microsoft.com
Mahmoud Khademiâ Simon Fraser University Burnaby, BC, Canada mkhademi@sfu.ca
# ABSTRACT
Learning tasks on source code (i.e., formal languages) have been considered recently, but most work has tried to transfer natural language methods and does not capitalize on the unique opportunities offered by code's known semantics. For example, long-range dependencies induced by using the same variable or function in distant locations are often not considered. We propose to use graphs to represent both the syntactic and semantic structure of code and use graph-based deep learning methods to learn to reason over program structures. In this work, we present how to construct graphs from source code and how to scale Gated Graph Neural Networks training to such large graphs. We evaluate our method on two tasks: VARNAMING, in which a network attempts to predict the name of a variable given its usage, and VARMISUSE, in which the network learns to reason about selecting the correct variable that should be used at a given program location. Our comparison to methods that use less structured program representations shows the advantages of modeling known structure, and suggests that our models learn to infer meaningful names and to solve the VARMISUSE task in many cases. Additionally, our testing showed that VARMISUSE identifies a number of bugs in mature open-source software projects.
# INTRODUCTION
The advent of large repositories of source code as well as scalable machine learning methods naturally leads to the idea of âbig codeâ, i.e., largely unsupervised methods that support software engineers by generalizing from existing source code (Allamanis et al., 2017). Currently, existing deep learning models of source code capture its shallow, textual structure, e.g. as a sequence of tokens (Hindle et al., 2012; Raychev et al., 2014; Allamanis et al., 2016), as parse trees (Maddison & Tarlow, 2014; Bielik et al., 2016), or as a ï¬at dependency networks of variables (Raychev et al., 2015). Such models miss out on the opportunity to capitalize on the rich and well-deï¬ned semantics of source code. In this work, we take a step to alleviate this by including two additional signal sources in source code: data ï¬ow and type hierarchies. We do this by encoding programs as graphs, in which edges represent syntactic relationships (e.g. âtoken before/afterâ) as well as semantic relationships (âvariable last used/written hereâ, âformal parameter for argument is called streamâ, etc.). Our key insight is that exposing these semantics explicitly as structured input to a machine learning model lessens the requirements on amounts of training data, model capacity and training regime and allows us to solve tasks that are beyond the current state of the art.
We explore two tasks to illustrate the advantages of exposing more semantic structure of programs. First, we consider the VARNAMING task (Allamanis et al., 2014; Raychev et al., 2015), in which given some source code, the âcorrectâ variable name is inferred as a sequence of subtokens. This requires some understanding of how a variable is used, i.e., requires reasoning about lines of code far
âWork done as an intern in Microsoft Research, Cambridge, UK.
var clazz=classTypes["Root"].Single() as JsonCodeGenerator.ClassType; Assert.NotNull(clazz); var first=classTypes["RecClass"].Single() as JsonCodeGenerator.ClassType; Assert.NotNull( clazz ); Assert.Equal("string", first.Properties["Name"].Name); Assert.False(clazz.Properties["Name"].IsArray);
Figure 1: A snippet of a detected bug in RavenDB, an open-source C# project. The code has been slightly simplified. Our model detects correctly that the variable used in the highlighted (yellow) slot is incorrect. Instead, first should have been placed at the slot. We reported this problem, which was fixed in PR 4138.
apart in the source ï¬le. Secondly, we introduce the variable misuse prediction task (VARMISUSE), in which the network aims to infer which variable should be used in a program location. To illustrate the task, Figure 1 shows a slightly simpliï¬ed snippet of a bug our model detected in a popular open-source project. Speciï¬cally, instead of the variable clazz, variable first should have been used in the yellow highlighted slot. Existing static analysis methods cannot detect such issues, even though a software engineer would easily identify this as an error from experience.
To achieve high accuracy on these tasks, we need to learn representations of program semantics. For both tasks, we need to learn the semantic role of a variable (e.g., âis it a counter?â, âis it a ï¬lename?â). Additionally, for VARMISUSE, learning variable usage semantics (e.g., âa ï¬lename is needed hereâ) is required. This âï¬ll the blank elementâ task is related to methods for learning distributed representations of natural language words, such as Word2Vec (Mikolov et al., 2013) and GLoVe (Pennington et al., 2014). However, we can learn from a much richer structure such as data ï¬ow information. This work is a step towards learning program representations, and we expect them to be valuable in a wide range of other tasks, such as code completion (âthis is the variable you are looking forâ) and more advanced bug ï¬nding (âyou should lock before using this objectâ).
To summarize, our contributions are: (i) We deï¬ne the VARMISUSE task as a challenge for machine learning modeling of source code, that requires to learn (some) semantics of programs (cf. section 3). (ii) We present deep learning models for solving the VARNAMING and VARMISUSE tasks by modeling the codeâs graph structure and learning program representations over those graphs (cf. section 4). (iii) We evaluate our models on a large dataset of 2.9 million lines of real-world source code, showing that our best model achieves 32.9% accuracy on the VARNAMING task and 85.5% accuracy on the VARMISUSE task, beating simpler baselines (cf. section 5). (iv) We document practical relevance of VARMISUSE by summarizing some bugs that we found in mature open-source software projects (cf. subsection 5.3). Our implementation of graph neural networks (on a simpler task) can be found at https://github.com/Microsoft/gated-graph-neural-network-samples and the dataset can be found at https://aka.ms/iclr18-prog-graphs-dataset.
# 2 RELATED WORK
Our work builds upon the recent field of using machine learning for source code artifacts (Allamanis et al., 2017). For example, Hindle et al. (2012); Bhoopchand et al. (2016) model the code as a sequence of tokens, while Maddison & Tarlow (2014); Raychev et al. (2016) model the syntax tree structure of code. All works on language models of code find that predicting variable and method identifiers is one of the biggest challenges in the task.
Closest to our work is the work of Allamanis et al. (2015) who learn distributed representations of variables using all their usages to predict their names. However, they do not use data ï¬ow information and we are not aware of any model that does so. Raychev et al. (2015) and Bichsel et al. (2016) use conditional random ï¬elds to model a variety of relationships between variables, AST elements and types to predict variable names and types (resp. to deobfuscate Android apps), but without considering the ï¬ow of data explicitly. In these works, all variable usages are deterministically known beforehand (as the code is complete and remains unmodiï¬ed), as in Allamanis et al. (2014; 2015).
Our work is remotely related to work on program synthesis using sketches (Solar-Lezama, 2008) and automated code transplantation (Barr et al., 2015). However, these approaches require a set of speciï¬cations (e.g. input-output examples, test suites) to complete the gaps, rather than statistics learned from big code. These approaches can be thought as complementary to ours, since we learn to statistically complete the gaps without any need for speciï¬cations, by learning common variable usage patterns from code.
Neural networks on graphs (Gori et al., 2005; Li et al., 2015; Defferrard et al., 2016; Kipf & Welling, 2016; Gilmer et al., 2017) adapt a variety of deep learning methods to graph-structured input. They have been used in a series of applications, such as link prediction and classiï¬cation (Grover & Leskovec, 2016) and semantic role labeling in NLP (Marcheggiani & Titov, 2017). Somewhat related to source code is the work of Wang et al. (2017) who learn graph-based representations of mathematical formulas for premise selection in theorem proving.
# 3 THE VARMISUSE TASK
Detecting variable misuses in code is a task that requires understanding and reasoning about program semantics. To successfully tackle the task one needs to infer the role and function of the program elements and understand how they relate. For example, given a program such as Fig. 1, the task is to automatically detect that the marked use of clazz is a mistake and that first should be used instead. While this task resembles standard code completion, it differs signiï¬cantly in its scope and purpose, by considering only variable identiï¬ers and a mostly complete program.
Task Description We view a source code file as a sequence of tokens t0 . . . tN = T, in which some tokens tλ0, tλ1, . . . are variables. Furthermore, let Vt ⊂ V refer to the set of all type-correct variables in scope at the location of t, i.e., those variables that can be used at t without raising a compiler error. We call a token tλ where we want to predict the correct variable usage a slot. We define a separate task for each slot tλ: Given t0 . . . tλ−1 and tλ+1, . . . , tN, correctly select tλ from Vtλ. For training and evaluation purposes, a correct solution is one that simply matches the ground truth, but note that in practice, several possible assignments could be considered correct (i.e., when several variables refer to the same value in memory).
# 4 MODEL: PROGRAMS AS GRAPHS
In this section, we discuss how to transform program source code into program graphs and learn representations over them. These program graphs not only encode the program text but also the semantic information that can be obtained using standard compiler tools.
Gated Graph Neural Networks Our work builds on Gated Graph Neural Networks (Li et al., 2015) (GGNN) and we summarize them here. A graph G = (V, E, X) is composed of a set of nodes V, node features X, and a list of directed edge sets E = (E1, . . . , EK) where K is the number of edge types. We annotate each v ∈ V with a real-valued vector x(v) ∈ RD representing the features of the node (e.g., the embedding of a string label of that node).
We associate every node v with a state vector h^(v), initialized from the node label x^(v). The sizes of the state vector and feature vector are typically the same, but we can use larger state vectors through padding of node features. To propagate information throughout the graph, "messages" of type k are sent from each v to its neighbors, where each message is computed from its current state vector as m_k^(v) = f_k(h^(v)). Here, f_k can be an arbitrary function; we choose a linear layer in our case. By computing messages for all graph edges at the same time, all states can be updated at the same time. In particular, a new state for a node v is computed by aggregating all incoming messages as m̃^(v) = g({m_k^(u) | there is an edge of type k from u to v}). g is an aggregation function, which we implement as elementwise summation. Given the aggregated message m̃^(v) and the current state vector h^(v) of node v, the state of the next time step h'^(v) is computed as h'^(v) = GRU(m̃^(v), h^(v)), where GRU is the recurrent cell function of a gated recurrent unit (GRU) (Cho et al., 2014).
Figure 2: Examples of graph edges used in program representation. (a) Simplified syntax graph for line 2 of Fig. 1, where blue rounded boxes are syntax nodes, black rectangular boxes syntax tokens, blue edges Child edges and double black edges NextToken edges. (b) Data flow edges for (x1, y2) = Foo(); while (x3 > 0) x4 = x5 + y6 (indices added for clarity), with red dotted LastUse edges, green dashed LastWrite edges and dash-dotted purple ComputedFrom edges.
The dynamics defined by the above equations are repeated for a fixed number of time steps. Then, we use the state vectors from the last time step as the node representations.1
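A minimal sketch of one GGNN propagation round as summarized above (PyTorch; the edge-list representation and module layout are illustrative assumptions, not the authors' implementation):

```python
import torch
import torch.nn as nn

class GGNNLayer(nn.Module):
    """Per-edge-type linear messages, elementwise-sum aggregation, GRU update."""
    def __init__(self, hidden_dim, num_edge_types):
        super().__init__()
        self.msg_fns = nn.ModuleList(
            nn.Linear(hidden_dim, hidden_dim) for _ in range(num_edge_types))
        self.gru = nn.GRUCell(hidden_dim, hidden_dim)

    def forward(self, h, adjacency):
        # h: (num_nodes, hidden_dim); adjacency[k]: list of (src, dst) pairs.
        agg = torch.zeros_like(h)
        for k, edges in enumerate(adjacency):
            if not edges:
                continue
            src = torch.tensor([s for s, _ in edges])
            dst = torch.tensor([d for _, d in edges])
            messages = self.msg_fns[k](h[src])     # m_k = f_k(h(u))
            agg.index_add_(0, dst, messages)       # sum of incoming messages
        return self.gru(agg, h)                    # GRU state update
```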
Program Graphs We represent program source code as graphs and use different edge types to model syntactic and semantic relationships between different tokens. The backbone of a program graph is the program's abstract syntax tree (AST), consisting of syntax nodes (corresponding to nonterminals in the programming language's grammar) and syntax tokens (corresponding to terminals). We label syntax nodes with the name of the nonterminal from the program's grammar, whereas syntax tokens are labeled with the string that they represent. We use Child edges to connect nodes according to the AST. As this does not induce an order on children of a syntax node, we additionally add NextToken edges connecting each syntax token to its successor. An example of this is shown in Fig. 2a.
To capture the ï¬ow of control and data through a program, we add additional edges connecting different uses and updates of syntax tokens corresponding to variables. For such a token v, let DR(v) be the set of syntax tokens at which the variable could have been used last. This set may contain several nodes (for example, when using a variable after a conditional in which it was used in both branches), and even syntax tokens that follow in the program code (in the case of loops). Similarly, let DW (v) be the set of syntax tokens at which the variable was last written to. Using these, we add LastRead (resp. LastWrite) edges connecting v to all elements of DR(v) (resp. DW (v)). Additionally, whenever we observe an assignment v = expr , we connect v to all variable tokens occurring in expr using ComputedFrom edges. An example of such semantic edges is shown in Fig. 2b.
We extend the graph to chain all uses of the same variable using LastLexicalUse edges (independent of data ï¬ow, i.e., in if (...) { ... v ...} else { ... v ...}, we link the two oc- currences of v). We also connect return tokens to the method declaration using ReturnsTo edges (this creates a âshortcutâ to its name and type). Inspired by Rice et al. (2017), we connect arguments in method calls to the formal parameters that they are matched to with FormalArgName edges, i.e., if we observe a call Foo(bar) and a method declaration Foo(InputStream stream), we connect the bar token to the stream token. Finally, we connect every token corresponding to a variable to enclosing guard expressions that use the variable with GuardedBy and Guarded- ByNegation edges. For example, in if (x > y) { ... x ...} else { ... y ...}, we add a GuardedBy edge from x (resp. a GuardedByNegation edge from y) to the AST node corresponding to x > y.
Finally, for all types of edges we introduce their respective backwards edges (transposing the adjacency matrix), doubling the number of edges and edge types. Backwards edges help with propagating information faster across the GGNN and make the model more expressive.
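As an illustration, the purely syntactic part of the graph construction, together with the transposed backward edge types, could be sketched as follows; the AST and token interfaces (children, id) are assumptions of this sketch.

```python
def build_syntactic_edges(ast_root, tokens):
    """Child edges from the AST, NextToken edges over the token sequence,
    plus one transposed backward edge type per forward type."""
    edges = {'Child': [], 'NextToken': []}
    stack = [ast_root]
    while stack:
        node = stack.pop()
        for child in node.children:                       # assumed AST interface
            edges['Child'].append((node.id, child.id))
            stack.append(child)
    for left, right in zip(tokens, tokens[1:]):
        edges['NextToken'].append((left.id, right.id))
    for name in list(edges):                              # add backward edge types
        edges[name + 'Backward'] = [(dst, src) for src, dst in edges[name]]
    return edges
```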
1Graph Convolutional Networks (GCN) (Kipf & Welling, 2016; Schlichtkrull et al., 2017) would be a simpler replacement for GGNNs. They correspond to the special case of GGNNs in which no gated recurrent units are used for state updates and the number of propagation steps per GGNN layer is ï¬xed to 1. Instead, several layers are used. In our experiments, GCNs generalized less well than GGNNs.
Leveraging Variable Type Information We assume a statically typed language and that the source code can be compiled, and thus each variable has a (known) type τ(v). To use it, we define a learnable embedding function r(τ) for known types and additionally define an "UNKTYPE" embedding for all unknown/unrepresented types. We also leverage the rich type hierarchy that is available in many object-oriented languages. For this, we map a variable's type τ(v) to the set of its supertypes, i.e. τ*(v) = {τ : τ(v) implements type τ} ∪ {τ(v)}. We then compute the type representation r*(v) of a variable v as the element-wise maximum of {r(τ) : τ ∈ τ*(v)}. We chose the maximum here, as it is a natural pooling operation for representing partial ordering relations (such as type lattices). Using all types in τ*(v) allows us to generalize to unseen types that implement common supertypes or interfaces. For example, List<K> has multiple concrete types (e.g. List<int>, List<string>). Nevertheless, these types implement a common interface (IList) and share common characteristics. During training, we randomly select a non-empty subset of τ*(v), which ensures training of all known types in the lattice. This acts both like a dropout mechanism and allows us to learn a good representation for all types in the type lattice.
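A sketch of the type representation r*(v) as an element-wise maximum over the embeddings of the type and its supertypes; the embedding lookup (a dict of tensors here) and the supertype query are assumed interfaces.

```python
import torch

def type_representation(var_type, supertypes_of, type_embedding):
    tau_star = supertypes_of(var_type) | {var_type}       # tau*(v)
    vectors = [type_embedding.get(tau, type_embedding['UNKTYPE'])
               for tau in tau_star]
    return torch.stack(vectors, dim=0).max(dim=0).values  # element-wise max
```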
Initial Node Representation To compute the initial node state, we combine information from the textual representation of the token and its type. Concretely, we split the name of a node representing a token into subtokens (e.g. classTypes will be split into two subtokens class and types) on camelCase and pascal_case. We then average the embeddings of all subtokens to retrieve an embedding for the node name. Finally, we concatenate the learned type representation r*(v), computed as discussed earlier, with the node name representation, and pass it through a linear layer to obtain the initial representations for each node in the graph.
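A sketch of the node-name part of this computation, splitting an identifier on camelCase/pascal_case and averaging the subtoken embeddings; the subtoken embedding lookup is an assumed interface.

```python
import re
import torch

def name_representation(name, subtoken_embedding):
    parts = re.findall(r'[A-Z]?[a-z]+|[A-Z]+(?![a-z])|\d+', name.replace('_', ' '))
    subtokens = [p.lower() for p in parts]          # e.g. classTypes -> class, types
    vectors = [subtoken_embedding(s) for s in subtokens]
    return torch.stack(vectors, dim=0).mean(dim=0)  # average of subtoken embeddings
```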
Programs Graphs for VARNAMING Given a program and an existing variable v, we build a program graph as discussed above and then replace the variable name in all corresponding variable tokens by a special <SLOT> token. To predict a name, we use the initial node labels computed as the concatenation of learnable token embeddings and type embeddings as discussed above, run GGNN propagation for 8 time steps2 and then compute a variable usage representation by averaging the representations for all <SLOT> tokens. This representation is then used as the initial state of a one-layer GRU, which predicts the target name as a sequence of subtokens (e.g., the name inputStreamBuffer is treated as the sequence [input, stream, buffer]). We train this graph2seq architecture using a maximum likelihood objective. In section 5, we report the accuracy for predicting the exact name and the F1 score for predicting its subtokens.
Program Graphs for VARMISUSE To model VARMISUSE with program graphs we need to modify the graph. First, to compute a context representation c(t) for a slot t where we want to predict the used variable, we insert a new node v<SLOT> at the position of t, corresponding to a âholeâ at this point, and connect it to the remaining graph using all applicable edges that do not depend on the chosen variable at the slot (i.e., everything but LastUse, LastWrite, LastLexicalUse, and GuardedBy edges). Then, to compute the usage representation u(t, v) of each candidate variable v at the target slot, we insert a âcandidateâ node vt,v for all v in Vt, and connect it to the graph by inserting the LastUse, LastWrite and LastLexicalUse edges that would be used if the variable were to be used at this slot. Each of these candidate nodes represents the speculative placement of the variable within the scope.
Using the initial node representations, concatenated with an extra bit that is set to one for the candidate nodes vt,v, we run GGNN propagation for 8 time steps.2 The context and usage representation are then the final node states of the nodes, i.e., c(t) = h(v<SLOT>) and u(t, v) = h(vt,v). Finally, the correct variable usage at the location is computed as arg maxv W [c(t), u(t, v)] where W is a linear layer that uses the concatenation of c(t) and u(t, v). We train using a max-margin objective.
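A rough sketch of the scoring and training objective (random vectors stand in for the learned GGNN states, and the hinge form of the max-margin objective is an assumption, since the exact formulation is not spelled out here):

import numpy as np

HIDDEN_DIM = 64
w_score = np.random.randn(2 * HIDDEN_DIM)   # the linear layer W over [c(t), u(t, v)]

def score(context_state, usage_state):
    return float(w_score @ np.concatenate([context_state, usage_state]))

def predict_variable(context_state, candidate_states):
    scores = {v: score(context_state, u) for v, u in candidate_states.items()}
    return max(scores, key=scores.get), scores

def max_margin_loss(scores, correct, margin=1.0):
    # Hinge on the gap between the correct candidate and the best-scoring incorrect one.
    best_wrong = max(s for v, s in scores.items() if v != correct)
    return max(0.0, margin - scores[correct] + best_wrong)

c_t = np.random.randn(HIDDEN_DIM)
cands = {"path": np.random.randn(HIDDEN_DIM), "baseDirectory": np.random.randn(HIDDEN_DIM)}
prediction, scores = predict_variable(c_t, cands)
loss = max_margin_loss(scores, correct="path")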
4.1 IMPLEMENTATION
Using GGNNs for sets of large, diverse graphs requires some engineering effort, as efficient batching is hard in the presence of diverse shapes. An important observation is that large graphs are normally very sparse, and thus a representation of edges as an adjacency list would usually be advantageous to reduce memory consumption. In our case, this can be easily implemented using a sparse tensor
2 We found fewer steps to be insufficient for good results and more propagation steps to not help substantially.
representation, allowing large batch sizes that exploit the parallelism of modern GPUs efficiently. A second key insight is to represent a batch of graphs as one large graph with many disconnected components. This just requires appropriate pre-processing to make node identities unique. As this makes batch construction somewhat CPU-intensive, we found it useful to prepare minibatches on a separate thread. Our TensorFlow (Abadi et al., 2016) implementation scales to 55 graphs per second during training and 219 graphs per second during test-time using a single NVidia GeForce GTX Titan X with graphs having on average 2,228 (median 936) nodes and 8,350 (median 3,274) edges and 8 GGNN unrolling iterations, all 20 edge types (forward and backward edges for 10 original edge types) and the size of the hidden layer set to 64. The number of types of edges in the GGNN contributes proportionally to the running time. For example, a GGNN run for our ablation study using only the two most common edge types (NextToken, Child) achieves 105 graphs/second during training and 419 graphs/second at test time with the same hyperparameters. Our (generic) implementation of GGNNs is available at https://github.com/Microsoft/gated-graph-neural-network-samples, using a simpler demonstration task.
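The batching idea can be sketched as follows (a simplified stand-in for the TensorFlow pipeline described above, keeping edges as a plain list of typed triples rather than sparse tensors):

import numpy as np

def batch_graphs(graphs):
    # Represent a batch of graphs as one large graph with disconnected components:
    # node ids of each graph are shifted by a running offset so they remain unique,
    # and edges stay in a sparse list of (source, edge_type, target) triples.
    node_states, edges, offset = [], [], 0
    for g in graphs:     # g: {"nodes": [init_state, ...], "edges": [(src, edge_type, tgt), ...]}
        node_states.extend(g["nodes"])
        edges.extend((s + offset, e, t + offset) for s, e, t in g["edges"])
        offset += len(g["nodes"])
    return np.stack(node_states), edges

g1 = {"nodes": [np.zeros(64)] * 3, "edges": [(0, "Child", 1), (1, "NextToken", 2)]}
g2 = {"nodes": [np.zeros(64)] * 2, "edges": [(0, "Child", 1)]}
states, edge_list = batch_graphs([g1, g2])   # 5 nodes total; g2's edge becomes (3, "Child", 4)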
5 EVALUATION
Dataset We collected a dataset for the VARMISUSE task from open source C# projects on GitHub. To select projects, we picked the top-starred (non-fork) projects in GitHub. We then filtered out projects that we could not (easily) compile in full using Roslyn3, as we require a compilation to extract precise type information for the code (including those types present in external libraries). Our final dataset contains 29 projects from a diverse set of domains (compilers, databases, . . . ) with about 2.9 million non-empty lines of code. A full table is shown in Appendix D.
For the task of detecting variable misuses, we collect data from all projects by selecting all variable usage locations, filtering out variable declarations, where at least one other type-compatible replacement variable is in scope. The task is then to infer the correct variable that originally existed in that location. Thus, by construction there is at least one type-correct replacement variable, i.e. picking it would not raise an error during type checking. In our test datasets, at each slot there are on average 3.8 type-correct alternative variables (median 3, σ = 2.6).
From our dataset, we selected two projects as our development set. From the rest of the projects, we selected three projects for UNSEENPROJTEST to allow testing on projects with completely unknown structure and types. We split the remaining 23 projects into train/validation/test sets in the proportion 60-10-30, splitting along files (i.e., all examples from one source file are in the same set). We call the test set obtained like this SEENPROJTEST.
Baselines For VARMISUSE, we consider two bidirectional RNN-based baselines. The local model (LOC) is a simple two-layer bidirectional GRU run over the tokens before and after the target location. For this baseline, c(t) is set to the slot representation computed by the RNN, and the usage context of each variable u(t, v) is the embedding of the name and type of the variable, computed in the same way as the initial node labels in the GGNN. This baseline allows us to evaluate how important the usage context information is for this task. The flat dataflow model (AVGBIRNN) is an extension to LOC, where the usage representation u(t, v) is computed using another two-layer bidirectional RNN run over the tokens before/after each usage, and then averaging over the computed representations at the variable token v. The local context, c(t), is identical to LOC. AVGBIRNN is a significantly stronger baseline that already takes some structural information into account, as the averaging over all variable usages helps with long-range dependencies. Both models pick the variable that maximizes c(t)^T u(t, v).
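In code, the baselines' decision rule is simply a dot product between the slot representation and each candidate's usage representation (random vectors are used here purely for illustration):

import numpy as np

def pick_variable(c_t, usage_reps):
    # LOC / AVGBIRNN decision rule: argmax over candidates v of c(t)^T u(t, v).
    scores = {v: float(np.dot(c_t, u)) for v, u in usage_reps.items()}
    return max(scores, key=scores.get)

c_t = np.random.randn(64)
candidates = {"size": np.random.randn(64), "length": np.random.randn(64)}
choice = pick_variable(c_t, candidates)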
For VARNAMING, we replace LOC by AVGLBL, which uses a log-bilinear model for 4 left and 4 right context tokens of each variable usage, and then averages over these context representations (this corresponds to the model in Allamanis et al. (2015)). We also test AVGBIRNN on VARNAMING, which essentially replaces the log-bilinear context model by a bidirectional RNN.
Table 1: Evaluation of models. SEENPROJTEST refers to the test set containing projects that have files in the training set, UNSEENPROJTEST refers to projects that have no files in the training data. Results averaged over two runs.

                             SEENPROJTEST                          UNSEENPROJTEST
                             LOC     AVGLBL   AVGBIRNN   GGNN      LOC     AVGLBL   AVGBIRNN
VARMISUSE  Accuracy (%)      50.0    —        73.7       85.5      28.9    —        60.2
VARMISUSE  PR AUC            0.788   —        0.941      0.980     0.611   —        0.895
VARNAMING  Accuracy (%)      —       36.1     42.9       53.6      —       22.7     23.4
VARNAMING  F1                —       44.0     50.1       65.8      —       30.6     32.0
Table 2: Ablation study for the GGNN model on SEENPROJTEST for the two tasks.
Ablation Description                                      Accuracy (%)
                                                          VARMISUSE    VARNAMING
Standard Model (reported in Table 1)                      85.5         53.6
Only NextToken, Child, LastUse, LastWrite edges           80.6         31.2
Only semantic edges (all but NextToken, Child)            78.4         52.9
Only syntax edges (NextToken, Child)                      55.3         34.3
Node Labels: Tokens instead of subtokens                  85.6         34.5
Node Labels: Disabled                                     84.3         31.8
5.1 QUANTITATIVE EVALUATION
Table 1 shows the evaluation results of the models for both tasks.4 As LOC captures very little information, it performs relatively badly. AVGLBL and AVGBIRNN, which capture information from many variable usage sites, but do not explicitly encode the rich structure of the problem, still lag behind the GGNN by a wide margin. The performance difference is larger for VARMISUSE, since the structure and the semantics of code are far more important within this setting.
Generalization to new projects Generalizing across a diverse set of source code projects with different domains is an important challenge in machine learning. We repeat the evaluation using the UNSEENPROJTEST set stemming from projects that have no files in the training set. The right side of Table 1 shows that our models still achieve good performance, although it is slightly lower compared to SEENPROJTEST. This is expected since the type lattice is mostly unknown in UNSEENPROJTEST.
We believe that the dominant problem in applying a trained model to an unknown project (i.e., domain) is the fact that its type hierarchy is unknown and the used vocabulary (e.g. in variables, method and class names, etc.) can differ substantially.
Ablation Study To study the effect of some of the design choices for our models, we have run some additional experiments and show their results in Table 2. First, we varied the edges used in the program graph. We find that restricting the model to syntactic information has a large impact on performance on both tasks, whereas restricting it to semantic edges seems to mostly impact performance on VARMISUSE. Similarly, the ComputedFrom, FormalArgName and ReturnsTo edges give a small boost on VARMISUSE, but greatly improve performance on VARNAMING. As evidenced by the experiments with the node label representation, syntax node and token names seem to matter little for VARMISUSE, but naturally have a great impact on VARNAMING.
5.2 QUALITATIVE EVALUATION
Figure 3 illustrates the predictions that GGNN makes on a sample test snippet. The snippet recursively searches for the global directives file by gradually descending into the root folder. Reasoning about the correct variable usages is hard, even for humans, but the GGNN correctly predicts the variable
3http://roslyn.io 4Sect. A additionally shows ROC and precision-recall curves for the GGNN model on the VARMISUSE task.
bool TryFindGlobalDirectivesFile(string baseDirectory, string fullPath, out string path) {
    baseDirectory1 = baseDirectory2.TrimEnd(Path.DirectorySeparatorChar);
    var directivesDirectory = Path.GetDirectoryName(fullPath3)
        .TrimEnd(Path.DirectorySeparatorChar);
    while (directivesDirectory4 != null && directivesDirectory5.Length >= baseDirectory6.Length) {
        path7 = Path.Combine(directivesDirectory8, GlobalDirectivesFileName9);
        if (File.Exists(path10)) return true;
        directivesDirectory11 = Path.GetDirectoryName(directivesDirectory12)
            .TrimEnd(Path.DirectorySeparatorChar);
    }
    path13 = null;
    return false;
}
1: path:59%, baseDirectory:35%, fullPath:6%, GlobalDirectivesFileName:1% 2: baseDirectory:92%, fullPath:5%, GlobalDirectivesFileName:2%, path:0.4% 3: fullPath:88%, baseDirectory:9%, GlobalDirectivesFileName:2%, path:1% 4: directivesDirectory:86%, path:8%, baseDirectory:2%, GlobalDirectivesFileName:1%, fullPath:0.1% 5: directivesDirectory:46%, path:24%, baseDirectory:16%, GlobalDirectivesFileName:10%, fullPath:3% 6: baseDirectory:64%, path:26%, directivesDirectory:5%, fullPath:2%, GlobalDirectivesFileName:2% 7: path:99%, directivesDirectory:1%, GlobalDirectivesFileName:0.5%, baseDirectory:7e-5, fullPath:4e-7 8: fullPath:60%, directivesDirectory:21%, baseDirectory:18%, path:1%, GlobalDirectivesFileName:4e-4 9: GlobalDirectivesFileName:61%, baseDirectory:26%, fullPath:8%, path:4%, directivesDirectory:0.5% 10: path:70%, directivesDirectory:17%, baseDirectory:10%, GlobalDirectivesFileName:1%, fullPath:0.6% 11: directivesDirectory:93%, path:5%, GlobalDirectivesFileName:1%, baseDirectory:0.1%, fullPath:4e-5% 12: directivesDirectory:65%, path:16%, baseDirectory:12%, fullPath:5%, GlobalDirectivesFileName:3% 13: path:97%, baseDirectory:2%, directivesDirectory:0.4%, fullPath:0.3%, GlobalDirectivesFileName:4e-4
Figure 3: VARMISUSE predictions on slots within a snippet of the SEENPROJTEST set for the ServiceStack project. Additional visualizations are available in Appendix B. The underlined tokens are the correct tokens. The model has to select among a number of string variables at each slot, where all of them represent some kind of path. The GGNN accurately predicts the correct variable usage in 11 out of the 13 slots reasoning about the complex ways the variables interact among them.
public ArraySegment<byte> ReadBytes(int length){ int size = Math.Min(length, _len - _pos); var buffer = EnsureTempBuffer( length ); var used = Read(buffer, 0, size);
Figure 4: A bug found (yellow) in RavenDB open-source project. The code unnecessarily ensures that the buffer is of size length rather than size (which our model predicts as the correct variable here).
usages at all locations except two (slots 1 and 8). As a software engineer is writing the code, it is imaginable that she may make a mistake misusing one variable in the place of another. Since all variables are string variables, no type errors will be raised. As the probabilities in Fig. 3 suggest, most potential variable misuses can be flagged by the model, yielding valuable warnings to software engineers. Additional samples with comments can be found in Appendix B.
Furthermore, Appendix C shows samples of pairs of code snippets that share similar representations as computed by the cosine similarity of the usage representation u(t, v) of GGNN. The reader can notice that the network learns to group variable usages that share semantic similarities together. For example, checking for null before the use of a variable yields similar distributed representations across code segments (Sample 1 in Appendix C).
5.3 DISCOVERED VARIABLE MISUSE BUGS
We have used our VARMISUSE model to identify likely locations of bugs in RavenDB (a document database) and Roslyn (Microsoft's C# compiler framework). For this, we manually reviewed a sample of the top 500 locations in both projects where our model was most confident about choosing a variable differing from the ground truth, and found three bugs in each of the projects.
Figs. 1,4,5 show the issues discovered in RavenDB. The bug in Fig. 1 was possibly caused by copy-pasting, and cannot be easily caught by traditional methods. A compiler will not warn about
if (IsValidBackup(backupFilename) == false) {
    output("Error:" + backupLocation + " doesn't look like a valid backup");
    throw new InvalidOperationException(
        backupLocation + " doesn't look like a valid backup");
Figure 5: A bug found (yellow) in the RavenDB open-source project. Although backupFilename is found to be invalid by IsValidBackup, the user is notified that backupLocation is invalid instead.
unused variables (since first is used) and virtually nobody would write a test testing another test. Fig. 4 shows an issue that, although not critical, can lead to increased memory consumption. Fig. 5 shows another issue arising from a non-informative error message. We privately reported three additional bugs to the Roslyn developers, who have fixed the issues in the meantime (cf. https://github.com/dotnet/roslyn/pull/23437). One of the reported bugs could cause a crash in Visual Studio when using certain Roslyn features.
Finding these issues in widely released and tested code suggests that our model can be useful during the software development process, complementing classic program analysis tools. For example, one usage scenario would be to guide the code reviewing process to locations a VARMISUSE model has identified as unusual, or use it as a prior to focus testing or expensive code analysis efforts.
# 6 DISCUSSION & CONCLUSIONS
Although source code is well understood and studied within other disciplines such as programming language research, it is a relatively new domain for deep learning. It presents novel opportunities compared to textual or perceptual data, as its (local) semantics are well-defined and rich additional information can be extracted using well-known, efficient program analyses. On the other hand, integrating this wealth of structured information poses an interesting challenge. Our VARMISUSE task exposes these opportunities, going beyond simpler tasks such as code completion. We consider it as a first proxy for the core challenge of learning the meaning of source code, as it requires to probabilistically refine standard information included in type systems.
# REFERENCES
Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.
Miltiadis Allamanis, Earl T Barr, Christian Bird, and Charles Sutton. Learning natural coding conventions. In Foundations of Software Engineering (FSE), 2014.
Miltiadis Allamanis, Earl T Barr, Christian Bird, and Charles Sutton. Suggesting accurate method and class names. In Foundations of Software Engineering (FSE), 2015.
Miltiadis Allamanis, Hao Peng, and Charles Sutton. A convolutional attention network for extreme summarization of source code. In International Conference on Machine Learning (ICML), pp. 2091–2100, 2016.
Miltiadis Allamanis, Earl T Barr, Premkumar Devanbu, and Charles Sutton. A survey of machine learning for big code and naturalness. arXiv preprint arXiv:1709.06182, 2017.
Earl T Barr, Mark Harman, Yue Jia, Alexandru Marginean, and Justyna Petke. Automated software transplantation. In International Symposium on Software Testing and Analysis (ISSTA), 2015.
Al Bessey, Ken Block, Ben Chelf, Andy Chou, Bryan Fulton, Seth Hallem, Charles Henri-Gros, Asya Kamsky, Scott McPeak, and Dawson Engler. A few billion lines of code later: using static analysis to find bugs in the real world. Communications of the ACM, 53(2):66–75, 2010.
Avishkar Bhoopchand, Tim Rocktäschel, Earl Barr, and Sebastian Riedel. Learning Python code suggestion with a sparse pointer network. arXiv preprint arXiv:1611.08307, 2016.
Benjamin Bichsel, Veselin Raychev, Petar Tsankov, and Martin Vechev. Statistical deobfuscation of android applications. In Conference on Computer and Communications Security (CCS), 2016.
Pavol Bielik, Veselin Raychev, and Martin Vechev. PHOG: probabilistic model for code. In International Conference on Machine Learning (ICML), 2016.
Kyunghyun Cho, Bart van Merriënboer, Dzmitry Bahdanau, and Yoshua Bengio. On the properties of neural machine translation: Encoder–decoder approaches. Syntax, Semantics and Structure in Statistical Translation, 2014.
Michaël Defferrard, Xavier Bresson, and Pierre Vandergheynst. Convolutional neural networks on graphs with fast localized spectral filtering. In Neural Information Processing Systems (NIPS), pp. 3844–3852, 2016.
Justin Gilmer, Samuel S. Schoenholz, Patrick F. Riley, Oriol Vinyals, and George E. Dahl. Neural message passing for quantum chemistry. arXiv preprint arXiv:1704.01212, 2017.
Marco Gori, Gabriele Monfardini, and Franco Scarselli. A new model for learning in graph domains. In IEEE International Joint Conference Neural Networks (IJCNN). IEEE, 2005.
Aditya Grover and Jure Leskovec. node2vec: Scalable feature learning for networks. In International Conference on Knowledge Discovery and Data Mining (SIGKDD), pp. 855–864. ACM, 2016.
Abram Hindle, Earl T Barr, Zhendong Su, Mark Gabel, and Premkumar Devanbu. On the naturalness of software. In International Conference on Software Engineering (ICSE), 2012.
Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907, 2016.
Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard Zemel. Gated graph sequence neural networks. In International Conference on Learning Representations (ICLR), 2015.
Chris J Maddison and Daniel Tarlow. Structured generative models of natural source code. In International Conference on Machine Learning (ICML), 2014.
Diego Marcheggiani and Ivan Titov. Encoding sentences with graph convolutional networks for semantic role labeling. In ACL, 2017.
Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In Neural Information Processing Systems (NIPS), 2013.
Jeffrey Pennington, Richard Socher, and Christopher D Manning. GloVe: Global vectors for word representation. In EMNLP, 2014.
Veselin Raychev, Martin Vechev, and Eran Yahav. Code completion with statistical language models. In Programming Languages Design and Implementation (PLDI), pp. 419–428, 2014.
Veselin Raychev, Martin Vechev, and Andreas Krause. Predicting program properties from Big Code. In Principles of Programming Languages (POPL), 2015.
Veselin Raychev, Pavol Bielik, and Martin Vechev. Probabilistic model for code with decision trees. In Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA), 2016.
Andrew Rice, Edward Aftandilian, Ciera Jaspan, Emily Johnston, Michael Pradel, and Yulissa Arroyo-Paredes. Detecting argument selection defects. Proceedings of the ACM on Programming Languages, 1(OOPSLA):104, 2017.
Michael Schlichtkrull, Thomas N. Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, and Max Welling. Modeling relational data with graph convolutional network. arXiv preprint arXiv:1703.06103, 2017.
Armando Solar-Lezama. Program synthesis by sketching. University of California, Berkeley, 2008.
Mingzhe Wang, Yihe Tang, Jian Wang, and Jia Deng. Premise selection for theorem proving by deep graph embedding. In Advances in Neural Information Processing Systems, pp. 2783–2793, 2017.
(a) Precision-Recall Curve (b) Receiver Operating Characteristic (ROC) Curve
Figure 6: Precision-Recall and ROC curves for the GGNN model on VARMISUSE. Note that the y axis starts from 50%.
Table 3: Performance of GGNN model on VARMISUSE per number of type-correct, in-scope candidate variables. Here we compute the performance of the full GGNN model that uses subtokens.
# of candidates                       2      3      4      5      6 or 7   8+
Accuracy on SEENPROJTEST (%)          91.6   84.5   81.8   78.6   75.1     77.5
Accuracy on UNSEENPROJTEST (%)        85.7   77.1   75.7   69.0   71.5     62.4
# A PERFORMANCE CURVES
Figure 6 shows the ROC and precision-recall curves for the GGNN model. As the reader may observe, setting a false positive rate to 10% we get a true positive rate5 of 73% for the SEENPROJTEST and 69% for the unseen test. This suggests that this model can be practically used at a high precision setting with acceptable performance.
# B VARMISUSE PREDICTION SAMPLES
Below we list a set of samples from our SEENPROJTEST projects with comments about the model performance. Code comments and formatting may have been altered for typesetting reasons. The ground truth choice is underlined.
# Sample 1
for (var port = #1; #2 < #3; #4++) {
    if (!activePorts.Contains(#5))
        return #6;
}
#1 startingFrom: 97%, endingAt: 3% #2 port: 100%, startingFrom: 0%, endingAt: 0% #3 endingAt: 100%, startingFrom: 0%, port: 0% #4 port: 100%, startingFrom: 0%, endingAt: 0% #5 port: 100%, startingFrom: 0%, endingAt: 0% #6 port: 100%, startingFrom: 0%, endingAt: 0%
> The model correctly predicts all variables in the loop.
5A 10% false positive rate is widely accepted in industry, with 30% as a maximum acceptable limit (Bessey et al., 2010).
Sample 2
var path = CreateFileName(#1);
bitmap.Save(#2, ImageFormat.Png);
return #3;
#1 name: 86%, DIR_PATH: 14% #2 path: 90%, name: 8%, DIR_PATH: 2% #3 path: 76%, name: 16%, DIR_PATH: 8%
> String variables are not confused; their semantic role is inferred correctly.
# Sample 3
[global::System.Diagnostics.DebuggerNonUserCodeAttribute] public void MergeFrom(pb::CodedInputStream input) { uint tag; while ((tag = input.ReadTag()) != 0) { switch(tag) { default: input.SkipLastField(); break; case 10: { #1 .AddEntriesFrom(input, _repeated_payload_codec); break; } } } }
#1 Payload: 66%, payload_: 44%
> The model is commonly confused by aliases, i.e. variables that point to the same location in memory. In this sample, either choice would have yielded identical behavior.
# Sample 4
public override bool IsDisposed {
    get {
        lock (#1) {
            return #2;
        }
    }
}

#1 _gate: 99%, _observers: 1%
#2 _isDisposed: 90%, _isStopped: 8%, HasObservers: 2%
> The ReturnsTo edge can help predict variables that otherwise would have been impossible.
Sample 5
/// <summary>
/// Notifies all subscribed observers about the exception.
/// </summary>
/// <param name="error">The exception to send to all observers.</param>
public override void OnError(Exception error) {
    if (#1 == null)
        throw new ArgumentNullException(nameof(#2));
    var os = default(IObserver<T>[]);
    lock (#3) {
        CheckDisposed();
        if (!#4) {
            os = _observers.Data;
            _observers = ImmutableList<IObserver<T>>.Empty;
            #5 = true;
            #6 = #7;
        }
    }
    if (os != null) {
        foreach (var o in os) {
            o.OnError(#8);
        }
    }
}
#1 error: 93%, _exception: 7% #2 error: 98%, _exception: 2% #3 _gate: 100%, _observers: 0% #4 _isStopped: 86%, _isDisposed: 13%, HasObservers: 1% #5 _isStopped: 91%, _isDisposed: 9%, HasObservers: 0% #6 _exception: 100%, error: 0% #7 error: 98%, _exception: 2% #8 _exception: 99%, error: 1%
> The model predicts the correct variables for all slots apart from the last. Reasoning about the last one requires interprocedural understanding of the code across the class file.
# Sample 6
private bool BecomingCommand(object message) {
    if (ReceiveCommand(#1)) return true;
    if (#2.ToString() == #3)
        #4.Tell(#5);
    else return false;
    return true;
}

#1 message: 100%, Response: 0%, Message: 0%
#2 message: 100%, Response: 0%, Message: 0%
#3 Response: 91%, Message: 9%
#4 Probe: 98%, AskedForDelete: 2%
#5 Response: 98%, Message: 2%
> The model predicts correctly all usages except for the one in slot #3. Reasoning about this snippet requires additional semantic information about the intent of the code.
# Sample 7
var response = ResultsFilter(typeof(TResponse), #1 , #2 , request);
#1 httpMethod: 99%, absoluteUrl: 1%, UserName: 0%, UserAgent: 0% #2 absoluteUrl: 99%, httpMethod: 1%, UserName: 0%, UserAgent: 0%
> The model knows about selecting the correct string parameters because it matches them to the formal parameter names.
# Sample 8
if ( #1 >= #2 ) throw new InvalidOperationException(Strings_Core.FAILED_CLOCK_MONITORING);
#1 n: 100%, MAXERROR: 0%, SYNC_MAXRETRIES: 0% #2 MAXERROR: 62%, SYNC_MAXRETRIES: 22%, n: 16%
> It is hard for the model to reason about conditionals, especially with rare constants as in slot #2.
C NEAREST NEIGHBOR OF GGNN USAGE REPRESENTATIONS
Here we show pairs of nearest neighbors based on the cosine similarity of the learned representations u(t, v). Each slot t is marked in dark blue and all usages of v are marked in yellow (i.e. variableName ). This is a set of hand-picked examples showing good and bad examples. A brief description follows after each pair.
# Sample 1
...
public void MoveShapeUp(BaseShape shape) {
    if (shape != null) {
        for (int i = 0; i < Shapes.Count - 1; i++) {
            if (Shapes[i] == shape) {
                Shapes.Move(i, ++i);
                return;
            }
        }
    }
}
...
... lock(lockObject) { if ( unobservableExceptionHanler != null) return false; unobservableExceptionHanler = handler; } ...
> Slots that are checked for null-ness have similar representations.
Sample 2
... public IActorRef ResolveActorRef(ActorPath actorPath ){ if(HasAddress( actorPath .Address)) return _local.ResolveActorRef(RootGuardian, actorPath .ElementsWithUid); ... ... ... ActorPath actorPath ; if (TryParseCachedPath(path, out actorPath)) { if (HasAddress( actorPath .Address)){ if ( actorPath .ToStringWithoutAddress().Equals("/")) return RootGuarding; ... } ... } ...
> Slots that follow similar API protocols have similar representations. Note that the function HasAddress is a local function, seen only in the testset.
Sample 3
... foreach(var filter in configuration.Filters){ GlobalJobFilter.Filters.Add( filter ); } ... ... public void Count_ReturnsNumberOfElements(){ _collection.Add( _filterInstance ); Assert.Equal(1, _collection.Count); } ...
> Adding elements to a collection-like object yields similar representations.
# D DATASET
The collected dataset and its characteristics are listed in Table 4. The full dataset as a set of projects and its parsed JSON will become available online.
Table 4: Projects in our dataset. Ordered alphabetically. kLOC measures the number of non-empty lines of C# code. Projects marked with Dev were used as a development set. Projects marked with † were in the test-only dataset. The rest of the projects were split into train-validation-test. The dataset contains in total about 2.9MLOC.
Name Git SHA kLOCs Slots Vars Description Akka.NET 719335a1 240 51.3k Framework AutoMapper BenchmarkDotNet BotBuilder choco commandlineâ CommonMark.NETDev Dapper EntityFramework Hangï¬re Humanizerâ Leanâ Nancy Newtonsoft.Json Ninject NLog Opserver OptiKey orleans Polly 2ca7c2b5 1670ca34 190117c3 93985688 09677b16 f3d54530 931c700d fa0b7ec8 ffc4912f cc11a77e f574bfd7 72e1f614 6057d9b8 7006297f 643e326a 51b032e7 7d35c718 e0d6a150 0afdbc32 46 28 44 36 11 14 18 263 33 27 190 70 123 13 75 24 34 300 32 3.7k 5.1k 6.4k 3.8k 1.1k 2.6k 3.3k 33.4k 3.6k 2.4k 26.4k 7.5k 14.9k 0.7k 8.3k 3.7k 6.1k 30.7k 3.8k 10.7k Object-to-Object Mapping Library quartznet ravendbDev RestSharp Rx.NET scriptcs ServiceStack ShareX SignalR Wox b33e6f86 55230922 70de357b 2d146fe5 f3cc8bcb 6d59da75 718dd711 fa88089e cdaf6272 49 647 20 180 18 231 125 53 13 9.6k 78.0k 4.0k 14.0k 2.7k 38.0k 22.3k 6.5k 2.0k Library 9.8k Scheduler 82.7k Document Database 4.5k REST and HTTP API Client Library 21.9k Reactive Language Extensions 4.3k C# Text Editor 46.2k Web Framework 18.1k 10.5k 2.1k Application Launcher Sharing Application Push Notiï¬cation Framework
For this work, we released a large portion of the data, with the exception of projects with a GPL license. The data can be found at https://aka.ms/iclr18-prog-graphs-dataset. Since we are excluding some projects from the data, below we report the results, averaged over three runs, on the published dataset:
                   Accuracy (%)   PR AUC
SEENPROJTEST       84.0           0.976
UNSEENPROJTEST     74.1           0.934
| {
"id": "1611.08307"
} |
1710.11573 | Deep Learning as a Mixed Convex-Combinatorial Optimization Problem | As neural networks grow deeper and wider, learning networks with
hard-threshold activations is becoming increasingly important, both for network
quantization, which can drastically reduce time and energy requirements, and
for creating large integrated systems of deep networks, which may have
non-differentiable components and must avoid vanishing and exploding gradients
for effective learning. However, since gradient descent is not applicable to
hard-threshold functions, it is not clear how to learn networks of them in a
principled way. We address this problem by observing that setting targets for
hard-threshold hidden units in order to minimize loss is a discrete
optimization problem, and can be solved as such. The discrete optimization goal
is to find a set of targets such that each unit, including the output, has a
linearly separable problem to solve. Given these targets, the network
decomposes into individual perceptrons, which can then be learned with standard
convex approaches. Based on this, we develop a recursive mini-batch algorithm
for learning deep hard-threshold networks that includes the popular but poorly
justified straight-through estimator as a special case. Empirically, we show
that our algorithm improves classification accuracy in a number of settings,
including for AlexNet and ResNet-18 on ImageNet, when compared to the
straight-through estimator. | http://arxiv.org/pdf/1710.11573 | Abram L. Friesen, Pedro Domingos | cs.LG, cs.CV, cs.NE | 14 pages (9 body, 5 pages of references and appendices) | In Proceedings of the International Conference on Learning
Representations (ICLR) 2018 | cs.LG | 20171031 | 20180416 | arXiv:1710.11573v3 [cs.LG] 16 Apr 2018
# DEEP LEARNING AS A MIXED CONVEX-COMBINATORIAL OPTIMIZATION PROBLEM
Abram L. Friesen and Pedro Domingos Paul G. Allen School of Computer Science and Engineering University of Washington Seattle, WA 98195, USA {afriesen,pedrod}@cs.washington.edu
# ABSTRACT
As neural networks grow deeper and wider, learning networks with hard-threshold activations is becoming increasingly important, both for network quantization, which can drastically reduce time and energy requirements, and for creating large integrated systems of deep networks, which may have non-differentiable components and must avoid vanishing and exploding gradients for effective learning. However, since gradient descent is not applicable to hard-threshold functions, it is not clear how to learn networks of them in a principled way. We address this problem by observing that setting targets for hard-threshold hidden units in order to minimize loss is a discrete optimization problem, and can be solved as such. The discrete optimization goal is to find a set of targets such that each unit, including the output, has a linearly separable problem to solve. Given these targets, the network decomposes into individual perceptrons, which can then be learned with standard convex approaches. Based on this, we develop a recursive mini-batch algorithm for learning deep hard-threshold networks that includes the popular but poorly justified straight-through estimator as a special case. Empirically, we show that our algorithm improves classification accuracy in a number of settings, including for AlexNet and ResNet-18 on ImageNet, when compared to the straight-through estimator.
# INTRODUCTION
The original approach to neural classification was to learn single-layer models with hard-threshold activations, like the perceptron (Rosenblatt, 1958). However, it proved difficult to extend these methods to multiple layers, because hard-threshold units, having zero derivative almost everywhere and being discontinuous at the origin, cannot be trained by gradient descent. Instead, the community turned to multilayer networks with soft activation functions, such as the sigmoid and, more recently, the ReLU, for which gradients can be computed efficiently by backpropagation (Rumelhart et al., 1986).
This approach has enjoyed remarkable success, enabling researchers to train networks with hundreds of layers and learn models that have significantly higher accuracy on a variety of tasks than any previous approach. However, as networks become deeper and wider, there has been a growing trend towards using hard-threshold activations for quantization purposes, where they enable binary or low-precision inference (e.g., Hubara et al. (2016); Rastegari et al. (2016); Zhou et al. (2016); Lin & Talathi (2016); Zhu et al. (2017)) and training (e.g., Lin et al. (2016); Li et al. (2017); Tang et al. (2017); Micikevicius et al. (2017)), which can greatly reduce the energy and computation time required by modern deep networks. Beyond quantization, the scale of the output of hard-threshold units is independent of (or insensitive to) the scale of their input, which can alleviate vanishing and exploding gradient issues and should help avoid some of the pathologies that occur during low-precision training with backpropagation (Li et al., 2017). Avoiding these issues is crucial for developing large systems of deep networks that can be used to perform even more complex tasks.
For these reasons, we are interested in developing well-motivated and efficient techniques for learning deep neural networks with hard-threshold units. In this work, we propose a framework for learning deep hard-threshold networks that stems from the observation that hard-threshold units output discrete values, indicating that combinatorial optimization may provide a principled method for training these networks. By specifying a set of discrete targets for each hidden-layer activation, the network
decomposes into many individual perceptrons, each of which can be trained easily given its inputs and targets. The difficulty in learning a deep hard-threshold network is thus in setting the targets so that each trained perceptron – including the output units – has a linearly separable problem to solve and thus can achieve its targets. We show that networks in which this is possible can be learned using our mixed convex-combinatorial optimization framework.
Building on this framework, we then develop a recursive algorithm, feasible target propagation (FTPROP), for learning deep hard-threshold networks. Since this is a discrete optimization problem, we develop heuristics for setting the targets based on per-layer loss functions. The mini-batch version of FTPROP can be used to explain and justify the oft-used straight-through estimator (Hinton, 2012; Bengio et al., 2013), which can now be seen as an instance of FTPROP with a specific choice of per-layer loss function and target heuristic. Finally, we develop a novel per-layer loss function that improves learning of deep hard-threshold networks. Empirically, we show improvements for our algorithm over the straight-through estimator on CIFAR-10 for two convolutional networks and on ImageNet for AlexNet and ResNet-18, with multiple types of hard-threshold activation.
# RELATED WORK
The most common method for learning deep hard-threshold networks is to use backpropagation with the straight-through estimator (STE) (Hinton, 2012; Bengio et al., 2013), which simply replaces the derivative of each hard-threshold unit with the identity function. The STE is used in the quantized network literature (see citations above) to propagate gradients through quantized activations, and is used in Shalev-Shwartz et al. (2017) for training with flat activations. Later work generalized the STE to replace the hard-threshold derivative with other functions, including saturated versions of the identity function (Hubara et al., 2016). However, while the STE tends to work quite well in practice, we know of no rigorous justification or analysis of why it works or how to choose replacement derivatives. Beyond being unsatisfying in this regard, the STE is not well understood and can lead to gradient mismatch errors, which compound as the number of layers increases (Lin & Talathi, 2016). We show here that the STE, saturated STE, and all types of STE that we have seen are special cases of our framework, thus providing a principled justification for it and a basis for exploring and understanding alternatives.
Another common approach to training with hard-threshold units is to use randomness, either via stochastic neurons (e.g., Bengio et al. (2013); Hubara et al. (2016)) or probabilistic training methods, such as those of Soudry et al. (2014) or Williams (1992), both of which are methods for softening hard-threshold units. In contrast, our goal is to learn networks with deterministic hard-threshold units.
Finally, target propagation (TP) (LeCun, 1986; 1987; Carreira-Perpiñán & Wang, 2014; Bengio, 2014; Lee et al., 2015; Taylor et al., 2016) is a method that explicitly associates a target with the output of each activation in the network, and then updates each layer's weights to make its activations more similar to the targets. Our framework can be viewed as an instance of TP that uses combinatorial optimization to set discrete targets, whereas previous approaches employed continuous optimization to set continuous targets. The MADALINE Rule II algorithm (Winter & Widrow, 1988) can also be seen as a special case of our framework and of TP, where only one target is set at a time.
# 2 LEARNING DEEP NETWORKS WITH HARD-THRESHOLD UNITS
Given a dataset D = {(x^(i), t^(i))}_{i=1}^m with vector-valued inputs x^(i) ∈ R^n and binary targets t^(i) ∈ {−1, +1}, we are interested in learning an ℓ-layer deep neural network with hard-threshold units
y = f(x; W) = g(W_ℓ g(W_{ℓ−1} . . . g(W_1 x) . . .)),    (1)
with weight matrices W = {W_d : W_d ∈ R^{n_d × n_{d−1}}}_{d=1}^ℓ and element-wise activation function g(x) = sign(x), where sign is the sign function such that sign(x) = 1 if x > 0 and −1 otherwise. Each layer d has n_d units, where we define n_0 = n for the input layer, and we let h_d = g(W_d . . . g(W_1 x) . . .) denote the output of each hidden layer, where h_d = (h_{d1}, . . . , h_{d n_d}) and h_{dj} ∈ {−1, +1} for each layer d and each unit j. Similarly, we let z_d = W_d g(. . . g(W_1 x) . . .) denote the pre-activation output of layer d. For compactness, we have incorporated the bias term into the weight matrices. We denote the jth row and kth column of a matrix W_d as W_{d,j:} and W_{d,:k}, respectively, and the entry in the jth row and kth column as W_{d,jk}. Using matrix notation, we can write this model as Y = f(X; W) = g(W_ℓ . . . g(W_1 X) . . .), where X is the n × m matrix of dataset instances and Y is the n_ℓ × m matrix of outputs. We let T_ℓ denote the matrix of final-layer targets, H_d denote the n_d × m matrix of hidden activations at layer d, and Z_d denote the n_d × m matrix of pre-activations
Figure 1: After setting the hidden-layer targets T1 of a deep hard-threshold network, the network decomposes into independent perceptrons, which can then be learned with standard methods.
at layer d. Our goal will be to learn f by finding the weights W that minimize an aggregate loss L(Y, T_ℓ) = Σ_{i=1}^m L(y^(i), t^(i)) for some convex per-instance loss L(y, t).

In the simplest case, a hard-threshold network with no hidden layers is a perceptron Y = g(W_1 X), as introduced by Rosenblatt (1958). The goal of learning a perceptron, or any hard-threshold network, is to classify unseen data. A useful first step is to be able to correctly classify the training data, which we focus on here for simplicity when developing our framework; however, standard generalization techniques such as regularization are easily incorporated into this framework and we do this for the experiments. Since a perceptron is a linear classifier, it is only able to separate a linearly-separable dataset.

Definition 1. A dataset {(x^(i), t^(i))}_{i=1}^m is linearly separable iff there exists a vector w ∈ R^n and a real number γ > 0 such that (w · x^(i)) t^(i) ≥ γ for all i = 1 . . . m.
When a dataset is linearly separable, the perceptron algorithm is guaranteed to find its separating hyperplane in a finite number of steps (Novikoff, 1962), where the number of steps required is dependent on the size of the margin γ. However, linear separability is a very strong condition, and even simple functions, such as XOR, are not linearly separable and thus cannot be learned by a perceptron (Minsky & Papert, 1969). We would thus like to be able to learn multilayer hard-threshold networks.
Consider a simple single-hidden-layer hard-threshold network Y = f(X; W) = g(W_2 g(W_1 X)) = g(W_2 H_1) for a dataset D = (X, T_2), where H_1 = g(W_1 X) are the hidden-layer activations. An example of such a network is shown on the left side of Figure 1. Clearly, Y and H_1 are both collections of (single-layer) perceptrons. Backpropagation cannot be used to train the input layer's weights W_1 because of the hard-threshold activations but, since each hidden activation h_1j is the output of a perceptron, if we knew the value t_1j ∈ {−1, +1} that each hidden unit should take for each input x, we could then use the perceptron algorithm to set the first-layer weights W_1 to produce these target values. We refer to t_1j as the target of h_1j. Given a matrix of hidden-layer targets T_1 ∈ {−1, +1}^{n_1 × m}, each layer (and in fact each perceptron in each layer) can be learned separately, as they no longer depend on each other, where the goal of perceptron learning is to update the weights of each layer d so that its activations H_d equal its targets T_d given inputs T_{d−1}. Figure 1 shows an example of this decomposition. We denote the targets of an ℓ-layer network as T = {T_1, . . . , T_ℓ}, where T_k for k = 1 . . . ℓ−1 are the hidden-layer targets and T_ℓ are the dataset targets. We often let T_0 = X for notational convenience.
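As a minimal illustration of this decomposition (with randomly generated data and hidden targets that are merely hypothetical, i.e. not necessarily feasible), each layer can be fit independently with the classic perceptron rule once its targets are fixed:

import numpy as np

def sign(z):
    return np.where(z > 0, 1.0, -1.0)

def train_perceptron(X, T, epochs=100, lr=0.1):
    # Learn one layer of independent perceptrons: find W such that sign(W X) matches T.
    # X: (n_in, m) inputs, T: (n_out, m) targets in {-1, +1}.
    W = np.zeros((T.shape[0], X.shape[0]))
    for _ in range(epochs):
        mistakes = (sign(W @ X) != T)          # classic perceptron updates only on mistakes
        W += lr * (T * mistakes) @ X.T
    return W

# Given hidden-layer targets T1, a two-layer hard-threshold network decomposes into two
# independent perceptron problems: layer 1 maps X -> T1, layer 2 maps T1 -> T2.
X  = sign(np.random.randn(4, 32))
T1 = sign(np.random.randn(8, 32))
T2 = sign(np.random.randn(2, 32))
W1 = train_perceptron(X, T1)
W2 = train_perceptron(T1, T2)
Y  = sign(W2 @ sign(W1 @ X))                   # forward pass of the learned network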
Auxiliary-variable-based approaches, such as ADMM (Taylor et al., 2016; Carreira-Perpiñán & Wang, 2014) and other target propagation methods (LeCun, 1986; Lee et al., 2015) use a similar process for decomposing the layers of a network; however, these focus on continuous variables and impose (soft) constraints to ensure that each activation equals its auxiliary variable. We take a different approach here, inspired by the combinatorial nature of the problem and the perceptron algorithm.
Since the final layer is a perceptron, the training instances can only be separated if the hidden-layer activations H_1 are linearly separable with respect to the dataset targets T_2. Thus, the hidden-layer targets T_1 must be set such that they are linearly separable with respect to the dataset targets T_2, since the hidden-layer targets T_1 are the intended values of their activations H_1. However, in order to ensure that the hidden-layer activations H_1 will equal their targets T_1 after training, the hidden-layer targets T_1 must be able to be produced (exactly) by the first layer, which is only possible if the hidden-layer targets T_1 are also linearly separable with respect to the inputs X. Thus, a sufficient condition for f(X; W) to separate the data is that the hidden-layer targets induce linear separability in all units in both layers of the network. We refer to this property as feasibility.

Definition 2. A setting of the targets T = {T_1, . . . , T_ℓ} of an ℓ-layer deep hard-threshold network f(X; W) is feasible for a dataset D = (X, T_ℓ) iff for each unit j = 1 . . . n_d in each layer d = 1 . . . ℓ the dataset formed by its inputs T_{d−1} and targets T_{d,j:} is linearly separable, where T_0 = X.
Feasibility is a much weaker condition than linear separability, since the output decision boundary of a multilayer hard-threshold network with feasible targets is in general highly nonlinear. It follows from the definition of feasibility and convergence of the perceptron algorithm that if a feasible setting of a network's targets on a dataset exists, the network can separate the training data.

Proposition 1. Let D = {(x^(i), t^(i))}_{i=1}^m be a dataset and let f(X; W) be an ℓ-layer hard-threshold network with feasible targets T = {T_1, . . . , T_ℓ} in which each layer d of f was trained separately with inputs T_{d−1} and targets T_d, where T_0 ≜ X; then f will correctly classify each instance x^(i), such that f(x^(i); W) t^(i) > 0 for all i = 1 . . . m.
Learning a deep hard-threshold network thus reduces to finding a feasible setting of its targets and then optimizing its weights given these targets, i.e., mixed convex-combinatorial optimization. The simplest method for this is to perform exhaustive search on the targets. Exhaustive search iterates through all possible settings of the hidden-layer targets, updating the weights of each perceptron whose inputs or targets changed, and returns the weights and feasible targets that result in the lowest loss. While impractical, exhaustive search is worth briefly examining to better understand the solution space. In particular, because of the decomposition afforded by setting the targets, exhaustive search over just the targets is sufficient to learn the globally optimal deep hard-threshold network, even though the weights are learned by gradient descent. Proposition 2. If a feasible setting of a deep hard-threshold network's targets on a dataset D exists, then exhaustive search returns the global minimum of the loss in time exponential in the number of hidden units.
Learning can be improved and feasibility relaxed if, instead of the perceptron algorithm, a more robust method is used for perceptron learning. For example, a perceptron can be learned for a non-linearly-separable dataset by minimizing the hinge loss L(z, t) = max(0, 1 − tz), a convex loss on the perceptron's pre-activation output z and target t that maximizes the margin when combined with L2 regularization. In general, however, any method for learning linear classifiers can be used. We denote the loss used to train the weights of a layer d as L_d, where the loss of the final layer L_ℓ is the output loss.
At the other end of the search spectrum is hill climbing. In each iteration, hill climbing evaluates all neighboring states of the current state (i.e., target settings that differ from the current one by only one target) and chooses the one with the lowest loss. The search halts when none of the new states improve the loss. Each state is evaluated by optimizing the weights of each perceptron given the state's targets, and then computing the output loss. Hill climbing is more practical than exhaustive search, since it need not explore an exponential number of states, and it also provides the same local optima guarantee as gradient descent on soft-threshold networks. Proposition 3. Hill climbing on the targets of a deep hard-threshold network returns a local minimum of the loss, where each iteration takes time linear in the size of the set of proposed targets.
Exhaustive search and hill climbing comprise two ends of the discrete optimization spectrum. Beam search, which maintains a beam of the most promising solutions and explores each, is another powerful approach that contains both hill climbing and exhaustive search as special cases. In general, however, any discrete optimization algorithm can be used for setting targets. For example, methods from satisfiability solving, integer linear programming, or constraint satisfaction might work well, as the linear separability requirements of feasibility can be viewed as constraints on the search space.
We believe that our mixed convex-combinatorial optimization framework opens many new avenues for developing learning algorithms for deep networks, including those with non-differentiable modules. In the following section, we use these ideas to develop a learning algorithm that hews much closer to standard methods, and in fact contains the straight-through estimator as a special case.
# 3 FEASIBLE TARGET PROPAGATION
The open question from the preceding section is how to set the hidden-layer targets. Generating good, feasible targets for the entire network at once is a difficult problem; instead, an easier approach is to propose targets for only one layer at a time. As in backpropagation, it makes sense to start from the output layer, since the final-layer targets are given, and successively set targets for each upstream layer. Further, since it is hard to know a priori if a setting of a layer's targets is feasible for a given network architecture, a simple alternative is to set the targets for a layer d and then optimize the upstream weights (i.e., weights in layers j ≤ d) to check if the targets are feasible. Since the goals
when optimizing a layer's weights and when setting its upstream targets (i.e., its inputs) are the same – namely, to induce feasibility – a natural method for setting target values is to choose targets that reduce the layer's loss L_d. However, because the targets are discrete, moves in target space are large and non-smooth and cannot be guaranteed to lower the loss without actually performing the move. Thus, heuristics are necessary. We discuss these in more detail below.
Determining feasibility of the targets at layer d can be done by recursively updating the weights of layer d and proposing targets for layer d − 1 given the targets for layer d. This recursion continues until the input layer is reached, where feasibility (i.e., linear separability) can be easily determined by optimizing that layer's weights given its targets and the dataset inputs. The targets at layer d can then be updated based on the information gained from the recursion and, if the upstream weights were altered, based on the new outputs of layer d − 1. We call this recursive algorithm feasible target propagation, or FTPROP. Pseudocode is shown in Algorithm 1.
Algorithm 1 Train an ℓ-layer hard-threshold network Y = f(X; W) on dataset D = (X, T_ℓ) with feasible target propagation (FTPROP) using loss functions L = {L_1, . . . , L_ℓ}.
1: initialize weights W = {W_1, . . . , W_ℓ} randomly
2: initialize targets T_1, . . . , T_{ℓ−1} as the outputs of their hidden units in f(X; W)
3: set T_0 ← X and set T ← {T_0, T_1, . . . , T_ℓ}
4: FTPROP(W, T, L, ℓ)    // train the network by searching for a feasible target setting

5: function FTPROP(weights W, targets T, losses L, and layer index d)
6:     optimize W_d with respect to layer loss L_d(Z_d, T_d)    // check feasibility; Z_d = W_d T_{d−1}
7:     if activations H_d = g(W_d T_{d−1}) equal the targets T_d then return True    // feasible
8:     else if this is the first layer (i.e., d = 1) then return False    // infeasible
9:     while computational budget of this layer not exceeded do    // e.g., determined by beam search
10:        T_{d−1} ← heuristically set targets for upstream layer to reduce layer loss L_d(Z_d, T_d)
11:        if FTPROP(W, T, L, d − 1) then    // check if targets T_{d−1} are feasible
12:            optimize W_d with respect to layer loss L_d(Z_d, T_d)
13:            if activations H_d = g(W_d T_{d−1}) equal the targets T_d then return True    // feasible
14:    return False
As the name implies, FTPROP is a form of target propagation (LeCun, 1986; 1987; Lee et al., 2015) that uses discrete optimization to set discrete targets, instead of using continuous optimization to set continuous targets. FTPROP is also highly related to RDIS (Friesen & Domingos, 2015), a powerful nonconvex optimization algorithm based on satisfiability (SAT) solvers that recursively chooses and sets subsets of variables in order to decompose the underlying problem into simpler subproblems. While RDIS is applied only to continuous problems, the ideas behind RDIS can be generalized to discrete variables via the sum-product theorem (Friesen & Domingos, 2016). This suggests an interesting connection between FTPROP and SAT that we leave for future work.
Of course, modern deep networks will not always have a feasible setting of their targets for a given dataset. For example, a convolutional layer imposes a large amount of structure on its weight matrix, making it less likely that the layer's input will be linearly separable with respect to its targets. Further, ensuring feasibility will in general cause learning to overfit the training data, which will worsen generalization performance. Thus, we would like to relax the feasibility requirements.
In addition, there are many benefits of using mini-batch instead of full-batch training, including improved generalization gap (e.g., see LeCun et al. (2012) or Keskar et al. (2016)), reduced memory usage, the ability to exploit data augmentation, and the prevalence of tools (e.g., GPUs) designed for it.
Fortunately, it is straightforward to convert FTPROP to a mini-batch algorithm and to relax the feasibility requirements. In particular, since it is important not to overcommit to any one mini-batch, the mini-batch version of FTPROP (i) only updates the weights and targets of each layer once per mini-batch; (ii) only takes a small gradient step on each layer's weights, instead of optimizing them fully; (iii) sets the targets of the downstream layer in parallel with updating the current layer's weights, since the weights will not change much; and (iv) removes all checks for feasibility. We call this algorithm FTPROP-MB and present pseudocode in Algorithm 2. FTPROP-MB closely resembles backpropagation-based methods, allowing us to easily implement it with standard libraries.
Algorithm 2 Train an ℓ-layer hard-threshold network Y = f(X; W) on dataset D = (X, T_ℓ) with mini-batch feasible target propagation (FTPROP-MB) using loss functions L = {L_1, . . . , L_ℓ}.
1: initialize weights W = {W_1, . . . , W_ℓ} randomly
2: for each minibatch (X_b, T_b) from D do
3:     initialize targets T_1, . . . , T_{ℓ−1} as the outputs of their hidden units in f(X_b; W)    // forward pass
4:     set T_0 ← X_b, set T_ℓ ← T_b, and set T ← {T_0, . . . , T_ℓ}
5:     FTPROP-MB(W, T, L, ℓ)

6: function FTPROP-MB(weights W, targets T, losses L, and layer index d)
7:     T_{d−1} ← set targets for upstream layer based on current weights W_d and loss L_d(Z_d, T_d)
8:     update W_d with respect to layer loss L_d(Z_d, T_d)    // where Z_d = W_d T_{d−1} = W_d H_{d−1}
9:     if d > 1 then FTPROP-MB(W, {T_0, . . . , T_{d−1}, . . . , T_ℓ}, L, d − 1)
3.1 TARGET HEURISTICS
When the activations of each layer are differentiable, backpropagation provides a method for telling each layer how to adjust its outputs to improve the loss. Conversely, in hard-threshold networks, target propagation provides a method for telling each layer how to adjust its outputs to improve the next layer's loss. While gradients cannot propagate through hard-threshold units, the derivatives within a layer can still be computed. An effective and efficient heuristic for setting the target tdj for an activation hdj of layer d is to use the (negative) sign of the partial derivative of the next layer's loss. Specifically, we set tdj = r(hdj), where
r(hdj) = sign( −∂Ld+1(Zd+1, Td+1) / ∂hdj )    (2)
and Zd+1 is either the pre-activation or post-activation output, depending on the choice of loss.
When used to update only a single target at a time, this heuristic will often set the target value that correctly results in the lowest loss. In particular, when Ld+1 is convex, its negative partial derivative with respect to hdj by definition points in the direction of the global minimum of Ld+1. Without loss of generality, let hdj = −1. Now, if r(hdj) = −1, then it follows from the convexity of the loss that flipping hdj and keeping all other variables the same would increase Ld+1. On the other hand, if r(hdj) = +1, then flipping hdj may or may not reduce the loss, since convexity cannot tell us which of hdj = +1 or hdj = −1 results in a smaller Ld+1. However, the discrepancy between hdj and r(hdj) indicates a lack of confidence in the current value of hdj. A natural choice is thus to set tdj to push the pre-activation value of hdj towards 0, making hdj more likely to flip. Setting tdj = r(hdj) = +1 accomplishes this. We note that, while this heuristic performs well, there is still room for improvement, for example by extending r(·) to better handle the hdj ≠ r(hdj) case or by combining information across the batch. We leave such investigations for future work.
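A small sketch of this heuristic, assuming the downstream layer loss Ld+1 is a plain hinge loss so that its partial derivatives are easy to write out (an illustrative choice, not the only one the framework allows):

import numpy as np

def sign(z):
    return np.where(z > 0, 1.0, -1.0)

def hinge_grad_wrt_h(W_next, Z_next, T_next):
    # Gradient of sum(max(0, 1 - T*Z)) with Z = W_next @ H:
    # dL/dZ = -T where the margin is violated, then chain back through W_next.
    dL_dZ = np.where(T_next * Z_next < 1.0, -T_next, 0.0)
    return W_next.T @ dL_dZ

def set_targets(H, W_next, T_next):
    # Heuristic (2): t_dj = sign(-dL_{d+1}/dh_dj), evaluated per mini-batch element.
    Z_next = W_next @ H
    return sign(-hinge_grad_wrt_h(W_next, Z_next, T_next))

H      = sign(np.random.randn(8, 16))   # current hidden activations (8 units, 16 examples)
W2     = np.random.randn(2, 8)
T2     = sign(np.random.randn(2, 16))   # targets of the downstream layer
T1_new = set_targets(H, W2, T2)         # proposed targets for the upstream layer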
3.2 LAYER LOSS FUNCTIONS
The hinge loss, shown in Figure 2a, is a robust version of the perceptron criterion and is thus a natural per-layer loss function to use for finding good settings of the targets and weights, even when there are no feasible target settings. However, in preliminary experiments we found that learning tended to stall and become erratic over time when using the hinge loss for each layer. We attribute this to two separate issues. First, the hinge loss is sensitive to noisy data and outliers (Wu & Liu, 2007), which can cause learning to focus on instances that are unlikely to ever be classified correctly, instead of on instances near the separator. Second, since with convolutional layers and large, noisy datasets it is unlikely that a layer's inputs are entirely linearly separable, it is important to prioritize some targets over others. Ideally, the highest priority targets would be those with the largest effect on the output loss.
Figure 2: Figures (a)-(c) show different per-layer loss functions (solid blue line) and their derivatives (dashed red line). Figure (d) shows the quantized ReLU activation (solid blue line), which is a sum of step functions, its corresponding sum of saturated-hinge-loss derivatives (dashed red line), and the soft-hinge-loss approximation to this sum that was found to work best (dotted yellow line).

The first issue can be solved by saturating (truncating) the hinge loss, thus making it less sensitive to outliers (Wu & Liu, 2007). The saturated hinge loss, shown in Figure 2b, is sat_hinge(z, t; b) = max(0, 1 − max(tz, b)) for some threshold b, where we set b = −1 to make its derivative symmetric. The second problem can be solved in a variety of ways, including randomly subsampling targets or weighting the loss associated with each target according to some heuristic. The simplest and most accurate method that we have found is to weight the loss for each target t_dj by the magnitude of the partial derivative of the next layer's loss L_{d+1} with respect to the target's hidden unit h_dj, such that

L_d(z_dj, t_dj) = sat_hinge(z_dj, t_dj) · |∂L_{d+1}/∂h_dj|.    (3)
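A minimal sketch of the saturated hinge and the scaled per-layer loss of Eq. (3); the per-target weights `g` are assumed to be the (detached) partial derivatives of the downstream loss with respect to the hidden units:

```python
import torch

def sat_hinge(z, t, b=-1.0):
    # sat_hinge(z, t; b) = max(0, 1 - max(t*z, b)), with b = -1 by default
    return torch.clamp(1.0 - torch.clamp(t * z, min=b), min=0.0)

def scaled_sat_hinge_loss(z, t, g):
    # Eq. (3): weight each target's hinge loss by |dL_{d+1}/dh_dj|
    return (sat_hinge(z, t) * g.abs()).sum()
```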
While the saturated hinge loss works well, if the input z_dj ever moves out of the range [−1, +1] then its derivative will become zero and the unit will no longer be trainable. To avoid this, we propose the soft hinge loss, shown in Figure 2c, where soft_hinge(z, t) = tanh(−tz) + 1. Like the saturated hinge, the soft hinge has slope 1 at the threshold and has a symmetric derivative; however, it also benefits from having a larger input region with non-zero derivative. Note that Bengio et al. (2013) report that using the derivative of a sigmoid as the STE performed worse than the identity function. Based on our experiments with other loss functions, including variations of the squared hinge loss and the log loss, this is most likely because the slope of the sigmoid is less than unity at the threshold, which causes vanishing gradients. Loss functions with asymmetric derivatives around the threshold also seemed to perform worse than those with symmetric derivatives (e.g., the saturating and soft hinge losses). In our experiments, we show that the soft hinge loss outperforms the saturated hinge loss for both sign and quantized ReLU activations, which we discuss below.
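The corresponding sketch of the soft hinge loss; like the saturated hinge it has unit slope at the threshold and a symmetric derivative, but it never saturates completely:

```python
import torch

def soft_hinge(z, t):
    # soft_hinge(z, t) = tanh(-t*z) + 1; derivative magnitude is 1 at z = 0
    return torch.tanh(-t * z) + 1.0
```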
3.3 RELATIONSHIP TO THE STRAIGHT-THROUGH ESTIMATOR
When each loss term in each hidden layer is scaled by the magnitude of the partial derivative of its downstream layer's loss and each target is set based on the sign of the same partial derivative, then target propagation transmits information about the output loss to every layer in the network, despite the hard-threshold units. Interestingly, this combination of loss function and target heuristic can exactly reproduce the weight updates of the straight-through estimator (STE). Specifically, the weight updates that result from using the scaled saturated hinge loss from (3) and the target heuristic in (2) are exactly those of the saturated straight-through estimator (SSTE) defined in Hubara et al. (2016), which replaces the derivative of sign(z) with 1_{|z|≤1}, where 1_(·) is the indicator function. Other STEs correspond to different choices of per-layer loss function. For example, the original STE corresponds to the linear loss L(z, t) = −tz with the above target heuristic. This connection provides a justification for existing STE approaches, which can now each be seen as an instance of FTPROP with a particular choice of per-layer loss function and target heuristic. We believe that this will enable more principled investigations and extensions of these methods in future work.
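To verify the SSTE case briefly using the definitions above: for |t| = 1 the saturated hinge with b = −1 has ∂/∂z sat_hinge(z, t; −1) = −t · 1_{|z|≤1}, so the scaled loss in (3) with t = sign(−∂L_{d+1}/∂h) gives ∂L_d/∂z = −t · |∂L_{d+1}/∂h| · 1_{|z|≤1} = (∂L_{d+1}/∂h) · 1_{|z|≤1}, which is exactly the gradient the SSTE produces by replacing ∂ sign(z)/∂z with 1_{|z|≤1}.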
3.4 QUANTIZED ACTIVATIONS
Straight-through estimation is also commonly used to backpropagate through quantized variants of standard activations, such as the ReLU. Figure 2d shows a quantized ReLU (qReLU) with 6 evenly-spaced quantization levels. The simplest and most popular straight-through estimator (STE) for qReLU is to use the derivative of the saturated (or clipped) ReLU, ∂ sat_ReLU(x) = 1_{0<x<1}, where sat_ReLU(x) = min(1, max(x, 0)). However, if we instead consider the qReLU activation from the viewpoint of FTPROP, then the qReLU becomes a (normalized) sum of step functions, qReLU(z) = (1/k) Σ_{i=0}^{k−1} step(z − i/(k−1)), where step(z) = 1 if z > 0 and 0 otherwise and is a linear transformation of sign(z). The resulting derivative of the sum of saturated hinge losses (one for each step function) is shown in red in Figure 2d, and is clearly quite different than the STE described above. In initial experiments, this performed as well as or better than the STE; however, we achieved additional performance improvements by using the softened approximation shown in yellow in Figure 2d, which is simply the derivative of a soft hinge that has been scaled and shifted to match the qReLU domain. This is a natural choice because the derivative of a sum of a small number of soft hinge losses has a shape similar to that of the derivative of a single soft hinge loss.

Table 1: The best top-1 test accuracy for each network over all epochs when trained with sign, qReLU, and full-precision baseline activations on CIFAR-10 and ImageNet. The hard-threshold activations are trained with both FTPROP-MB with per-layer soft hinge losses (FTP-SH) and the saturated straight-through estimator (SSTE).

                              Sign             qReLU            Baselines
                              SSTE    FTP-SH   SSTE    FTP-SH   ReLU    Sat. ReLU
4-layer convnet (CIFAR-10)    80.6    81.3     85.6    85.5     86.5    87.3
8-layer convnet (CIFAR-10)    84.6    84.9     88.4    89.8     91.2    91.2
AlexNet (ImageNet)            46.7    47.3     59.4    60.7     61.3    61.9
ResNet-18 (ImageNet)          49.1    47.8     60.6    64.3     69.1    66.9
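Before turning to the experiments, here is a concrete sketch of the quantized ReLU just described; the threshold placement i/(k − 1) follows our reading of the formula above and should be treated as illustrative:

```python
import torch

def qrelu(z, k=5):
    # (1/k) * sum_i step(z - i/(k-1)): k steps, hence k + 1 evenly spaced output levels
    thresholds = torch.arange(k, dtype=z.dtype, device=z.device) / (k - 1)
    steps = (z.unsqueeze(-1) > thresholds).to(z.dtype)
    return steps.sum(dim=-1) / k
```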
# 4 EXPERIMENTS
We evaluated FTPROP-MB with soft hinge per-layer losses (FTP-SH) for training deep networks with sign and 2- and 3-bit qReLU activations by comparing models trained with FTP-SH to those trained with the saturated straight-through estimators (SSTEs) described earlier (although, as discussed, these SSTEs can also be seen as instances of FTPROP-MB). We compared to these SSTEs because they are the standard approach in the literature and they significantly outperformed the STE in our initial experiments (Hubara et al. (2016) observed similar behavior). Computationally, FTPROP-MB has the same performance as straight-through estimation; however, the soft hinge loss involves computing a hyperbolic tangent, which requires more computation than a piecewise linear function. This is the same performance difference seen when using sigmoid activations instead of ReLUs in soft-threshold networks. We also trained each model with ReLU and saturated-ReLU activations as full-precision baselines.
We did not use weight quantization because our main interest is training with hard-threshold ac- tivations, and because recent work has shown that weights can be quantized with little effect on performance (Hubara et al., 2016; Rastegari et al., 2016; Zhou et al., 2016). We tested these training methods on the CIFAR-10 (Krizhevsky, 2009) and ImageNet (ILSVRC 2012) (Russakovsky et al., 2015) datasets. On CIFAR-10, we trained a simple 4-layer convolutional network and the 8-layer convolutional network of Zhou et al. (2016). On ImageNet, we trained AlexNet (Krizhevsky et al., 2012), the most common model in the quantization literature, and ResNet-18 (He et al., 2015a). Further experiment details are provided in Appendix A, along with learning curves for all experiments. Code is available at https://github.com/afriesen/ftprop.
4.1 CIFAR-10
Test accuracies for the 4-layer and 8-layer convolutional networks on CIFAR-10 are shown in Table 1. For the 4-layer model, FTP-SH shows a consistent 0.5-1% accuracy gain over SSTE for the entire training trajectory, resulting in the 0.7% improvement shown in Table 1. However, for the 2-bit qReLU activation, SSTE and FTP-SH perform nearly identically in the 4-layer model. Conversely, for the more complex 8-layer model, the FTP-SH accuracy is only 0.3% above SSTE for the sign activation, but for the qReLU activation FTP-SH achieves a consistent 1.4% improvement over SSTE.
We posit that the decrease in performance gap for the sign activation when moving from the 4- to 8- layer model is because both methods are able to effectively train the higher-capacity model to achieve close to its best possible performance on this dataset, whereas the opposite is true for the qReLU activation; i.e., the restricted capacity of the 4-layer model limits the ability of both methods to train the more expressive qReLU effectively. If this is true, then we expect that FTP-SH will outperform SSTE for both the sign and qReLU activations on a harder dataset. Unsurprisingly, none of the low- precision methods perform as well as the baseline high-precision methods; however, the narrowness of the performance gap between 2-bit qReLU with FTP-SH and full-precision ReLU is encouraging.
4.2 IMAGENET
The results from the ImageNet experiments are also shown in Table 1. As predicted from the CIFAR- 10 experiments, we see that FTP-SH improves test accuracy on AlexNet for both sign and 2-bit
Figure 3: The top-1 train (thin dashed lines) and test (thicker solid lines) accuracies for AlexNet with different activation functions on ImageNet. The inset ï¬gures show the test accuracy for the ï¬nal 25 epochs in detail. In both ï¬gures, FTPROP-MB with soft hinge (FTP-SH, red) outperforms the saturated straight-through estimator (SSTE, blue). The left ï¬gure shows the network with sign activations. The right ï¬gure shows that the 2-bit quantized ReLU (qReLU) trained with our method (FTP-SH) performs nearly as well as the full-precision ReLU. Interestingly, saturated ReLU outperforms standard ReLU. Best viewed in color.
qReLU activations on the more challenging ImageNet dataset. This is also shown in Figure 3, which plots the top-1 train and test accuracy curves for the six different activation functions for AlexNet on ImageNet. The left-hand plot shows that training sign activations with FTP-SH provides consistently better test accuracy than SSTE throughout the training trajectory, despite the hyperparameters being optimized for SSTE. This improvement is even larger for the 2-bit qReLU activation in the right- hand plot, where the FTP-SH qReLU even outperforms the full-precision ReLU for part of its trajectory, and outperforms the SSTE-trained qReLU by almost 2%. Interestingly, we ï¬nd that the saturated ReLU outperforms the standard ReLU by almost a full point of accuracy. We believe that this is due to the regularization effect caused by saturating the activation. This may also account for the surprisingly good performance of the FTP-SH qReLU relative to full-precision ReLU, as hard-threshold activations also provide a strong regularization effect.
Finally, we ran a single experiment with ResNet-18 on ImageNet, using hyperparameters from previ- ous works that used SSTE, to check (i) whether the soft hinge loss exhibits vanishing gradient behavior due to its diminishing slope away from the origin, and (ii) to evaluate the performance of FTP-SH for a less-quantized ReLU (we used k = 5 steps, which is less than the full range of a 3-bit ReLU). While FTP-SH does slightly worse than SSTE for the sign function, we believe that this is because the hyper- parameters were tuned for SSTE and not due to vanishing gradients, as we would expect much worse accuracy in that case. Results from the qReLU activation provide further evidence against vanishing gradients as FTP-SH for qReLU outperforms SSTE by almost 4% in top-1 accuracy (Table 1).
# 5 CONCLUSION
In this work, we presented a novel mixed convex-combinatorial optimization framework for learning deep neural networks with hard-threshold units. Combinatorial optimization is used to set discrete targets for the hard-threshold hidden units, such that each unit only has a linearly-separable problem to solve. The network then decomposes into individual perceptrons, which can be learned with standard convex approaches, given these targets. Based on this, we developed a recursive algorithm for learning deep hard-threshold networks, which we call feasible target propagation (FTPROP), and an efficient mini-batch variant (FTPROP-MB). We showed that the commonly-used but poorly-justified saturating straight-through estimator (STE) is the special case of FTPROP-MB that results from using a saturated hinge loss at each layer together with our target heuristic, and that other types of STE correspond to other heuristic and loss combinations in FTPROP-MB. Finally, we defined the soft hinge loss and showed that FTPROP-MB with a soft hinge loss at each layer improves classification accuracy for multiple models on CIFAR-10 and ImageNet when compared to the saturating STE.
In future work, we plan to develop novel target heuristics and layer loss functions by investigating connections between our framework and constraint satisfaction and satisï¬ability. We also intend to further explore the beneï¬ts of deep networks with hard-threshold units. In particular, while recent research clearly shows their ability to reduce computation and energy requirements, they should also be less susceptible to vanishing and exploding gradients and may be less susceptible to covariate shift and adversarial examples.
# ACKNOWLEDGMENTS
This research was partly funded by ONR grant N00014-16-1-2697. The GPU machine used for this research was donated by NVIDIA.
# REFERENCES
Yoshua Bengio. How Auto-Encoders Could Provide Credit Assignment in Deep Networks via Target Propagation. arXiv preprint arXiv:1407.7906 [cs.LG], 2014.
Yoshua Bengio, Nicholas Léonard, and Aaron Courville. Estimating or Propagating Gradients Through Stochastic Neurons for Conditional Computation. arXiv preprint arXiv:1308.3432 [cs.LG], 2013.

Miguel Á. Carreira-Perpiñán and Weiran Wang. Distributed optimization of deeply nested systems. In Proceedings of the International Conference on Artificial Intelligence and Statistics, 2014.
Abram L. Friesen and Pedro Domingos. Recursive Decomposition for Nonconvex Optimization. In Qiang Yang and Michael Woolridge (eds.), Proceedings of the 24th International Joint Conference on Artiï¬cial Intelligence, pp. 253â259. AAAI Press, 2015.
Abram L. Friesen and Pedro Domingos. The Sum-Product Theorem: A Foundation for Learning Tractable Models. In Proceedings of the 33rd International Conference on Machine Learning, 2016.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep Residual Learning for Image Recognition. arXiv preprint arXiv:1512.03385 [cs.CV], 2015a.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectiï¬ers: Surpassing human-level performance on ImageNet classiï¬cation. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1026â1034, 2015b.
Geoffrey E. Hinton. Coursera Lectures: Neural networks for machine learning, 2012.
Itay Hubara, Daniel Soudry, and Ran El-Yaniv. Binarized Neural Networks. In Advances in Neural Information Processing Systems, pp. 1â17, 2016.
Sergey Ioffe and Christian Szegedy. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. In Francis Bach and David Blei (eds.), Proceedings of the 32nd International Conference on Machine Learning, volume 37, pp. 448â456, Lille, France, 2015.
Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping Tak Peter Tang. On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima. In Proceedings of the 5th International Conference on Learning Representations, 2016.
Diederik P. Kingma and Jimmy Lei Ba. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations, 2015.
Alex Krizhevsky. Learning multiple layers of features from tiny images. Technical report, University of Toronto, 2009.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classiï¬cation with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097â1105, 2012.
Yann LeCun. Learning Process in an Asymmetric Threshold Network. In E. Bienenstock, F. Fogelman Soulié, and G. Weisbuch (eds.), Disordered Systems and Biological Organization, pp. 233–240. Springer, Berlin, Heidelberg, 1986.

Yann LeCun. Modeles connexionnistes de l'apprentissage (connectionist learning models). PhD thesis, Université P. et M. Curie (Paris 6), 1987.

Yann LeCun, Léon Bottou, Genevieve B. Orr, and Klaus-Robert Müller. Efficient BackProp. In Grégoire Montavon, Geneviève B. Orr, and Klaus-Robert Müller (eds.), Neural Networks: Tricks of the Trade: Second Edition, pp. 9–48. Springer Berlin Heidelberg, Berlin, Heidelberg, 2012.
Dong Hyun Lee, Saizheng Zhang, Asja Fischer, and Yoshua Bengio. Difference target propagation. In Proceedings of the Joint European Conference on Machine Learning and Knowledge Discovery in Databases, volume 9284, pp. 498–515, 2015.
Hao Li, Soham De, Zheng Xu, Christoph Studer, Hanan Samet, and Tom Goldstein. Training Quantized Nets: A Deeper Understanding. In Advances in Neural Information Processing Systems, 2017.
Darryl D. Lin and Sachin S. Talathi. Fixed Point Quantization of Deep Convolutional Networks. In Proceedings of the 33rd International Conference on Machine Learning, pp. 2849â2858, 2016.
Darryl D. Lin, Sachin S. Talathi, and V. Sreekanth Annapureddy. Overcoming Challenges in Fixed Point Training of Deep Convolutional Networks. In Workshop on On-Device Intelligence at ICML, 2016.
Paulius Micikevicius, Sharan Narang, Jonah Alben, Gregory Diamos, Erich Elsen, David Garcia, Boris Ginsburg, Michael Houston, Oleksii Kuchaev, Ganesh Venkatesh, and Hao Wu. Mixed Precision Training. arXiv preprint arXiv:1710.03740 [cs.AI], 2017.
Marvin L. Minsky and Seymour Papert. Perceptrons: an introduction to computational geometry. The MIT Press, Cambridge, MA, 1969.
A. B. J. Novikoff. On convergence proofs on perceptrons. In Proceedings of the Symposium on the Mathematical Theory of Automata, pp. 615â622. Polytechnic Institute of Brooklyn, 1962.
Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. XNOR-Net: ImageNet Classiï¬cation Using Binary Convolutional Neural Networks. In Proceedings of the 14th European Conference on Computer Vision, 2016.
Frank Rosenblatt. The perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review, 65(6):386â408, 1958.
David E. Rumelhart, Geoffrey E. Hinton, and R. J. Williams. Learining Internal Representations by Error Propagation. In Parallel Distributed Processing: Explorations in the Microstructure of Cognition, volume 1, pp. 318â362. The MIT Press, 1986.
Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Fei-Fei Li. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211â252, 2015.
Shai Shalev-Shwartz, Ohad Shamir, and Shaked Shammah. Failures of Gradient-Based Deep Learning. In Proceedings of the 34th International Conference on Machine Learning, 2017.
Daniel Soudry, Itay Hubara, and Ron Meir. Expectation Backpropagation: parameter-free training of multilayer neural networks with real and discrete weights. In Advances in Neural Information Processing Systems. MIT Press Cambridge, 2014.
Wei Tang, Gang Hua, and Liang Wang. How to Train a Compact Binary Neural Network with High Accuracy ? In Proceedings of the 31st Conference on Artiï¬cial Intelligence, pp. 2625â2631, 2017.
Gavin Taylor, Ryan Burmeister, Zheng Xu, Bharat Singh, Ankit Patel, and Tom Goldstein. Training Neural Networks Without Gradients: A Scalable ADMM Approach. In Proceedings of the 33rd International Conference on Machine Learning, 2016.
Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3):229â256, 1992.
Rodney Winter and Bernard Widrow. MADALINE RULE II: A training algorithm for neural networks. In Proceedings of the IEEE International Conference on Neural Networks, San Diego, CA, USA, 1988. IEEE.
Yichao Wu and Yufeng Liu. Robust Truncated Hinge Loss Support Vector Machines. Journal of the American Statistical Association, 102(479):974â983, 2007.
Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, and Yuheng Zou. DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients. arXiv preprint arXiv:1606.06160 [cs.NE], 2016.
Chenzhuo Zhu, Song Han, Huizi Mao, and William J. Dally. Trained Ternary Quantization. In Proceedings of the 5th International Conference on Learning Representations, 2017.
# A EXPERIMENT DETAILS
All experiments were performed using PyTorch (http://pytorch.org/). CIFAR-10 experiments with the 4-layer convolutional network were performed on an NVIDIA Titan X. All other experiments were performed on NVIDIA Tesla P100 devices in a DGX-1. Code for the experiments is available at https://github.com/afriesen/ftprop.
# A.1 CIFAR-10
On CIFAR-10, which has 50K training images and 10K test images divided into 10 classes, we trained both a simple 4-layer convolutional network and a deeper 8-layer convolutional network used in (Zhou et al., 2016) with the above methods and then compared their top-1 accuracies on the test set. We pre-processed the images with mean / std normalization, and augmented the dataset with random horizontal ï¬ips and random crops from images padded with 4 pixels. Hyperparameters were chosen based on a small amount of exploration on a validation set.
The first network we tested on CIFAR-10 was a simple 4-layer convolutional network (convnet) structured as: conv(32) → conv(64) → fc(1024) → fc(10), where conv(c) and fc(c) indicate a convolutional layer and fully-connected layer, respectively, with c channels. Both convolutional layers used 5 × 5 kernels. Max-pooling with stride 2 was used after each convolutional layer, and a non-linearity was placed before each of the above layers except the first. Adam (Kingma & Ba, 2015) with learning rate 2.5e-4 and weight decay 5e-4 was used to minimize the cross-entropy loss for 300 epochs. The learning rate was decayed by a factor of 0.1 after 200 and 250 epochs.
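A sketch of this 4-layer convnet in PyTorch; the layer sizes and kernel/pooling choices follow the description above, while details such as padding are our own assumptions, and `act` stands in for the sign, qReLU, or ReLU non-linearity:

```python
import torch.nn as nn

class ConvNet4(nn.Module):
    def __init__(self, act):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 32, kernel_size=5, padding=2)
        self.conv2 = nn.Conv2d(32, 64, kernel_size=5, padding=2)
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2)
        self.fc1 = nn.Linear(64 * 8 * 8, 1024)
        self.fc2 = nn.Linear(1024, 10)
        self.act = act

    def forward(self, x):
        x = self.pool(self.conv1(x))              # no non-linearity before the first layer
        x = self.pool(self.conv2(self.act(x)))
        x = self.fc1(self.act(x.flatten(1)))
        return self.fc2(self.act(x))
```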
In order to evaluate the performance of FTPROP-MB with the soft hinge loss on a deeper network, we adapted the 8-layer convnet from Zhou et al. (2016) to CIFAR-10. This network has 7 convolutional layers and one fully-connected layer for the output and uses batch normalization (Ioffe & Szegedy, 2015) before each non-linearity. We optimized the cross-entropy loss with Adam using a learning rate of 1e-3 and a weight decay of 1e-7 for the sign activation and 5e-4 for the qReLU and baseline activations. We trained for 300 epochs, decaying the learning rate by 0.1 after 200 and 250 epochs.
A.2 LEARNING CURVES FOR CIFAR-10
Figure 4: The top-1 test accuracies for the 4-layer convolutional network with different activation functions on CIFAR-10. The inset ï¬gures show the test accuracy for the ï¬nal 100 epochs in detail. The left ï¬gure shows the network with sign activations. The right ï¬gure shows the network with 2-bit quantized ReLU (qReLU) activations and with the full-precision baselines. Best viewed in color.
Figure 5: The top-1 test accuracies for the 8-layer convolutional network with different activation functions on CIFAR-10. The inset ï¬gures show the test accuracy for the ï¬nal 100 epochs in detail. The left ï¬gure shows the network with sign activations. The right ï¬gure shows the network with 2-bit quantized ReLU (qReLU) activations and with the full-precision baselines. Best viewed in color.
A.3 IMAGENET (ILSVRC 2012)
On ImageNet, a much more challenging dataset with roughly 1.2M training images and 50K validation images divided into 1000 classes, we trained AlexNet, the most commonly used model in the quantization literature, with different activations and compared top-1 and top-5 accuracies of the trained models on the validation set. As is standard practice, we treat the validation set as the test data. Images were resized to 256 × 256, mean / std normalized, and then randomly cropped to 224 × 224 and randomly horizontally flipped. Models are tested on centered 224 × 224 crops of the test images. Hyperparameters were set based on Zhou et al. (2016) and Zhu et al. (2017), which both used SSTE to train AlexNet on ImageNet.
We trained the Zhou et al. (2016) variant of AlexNet (Krizhevsky et al., 2012) on ImageNet with sign, 2-bit qReLU, ReLU, and saturated ReLU activations. This version of AlexNet removes the dropout and replaces the local contrast normalization layers with batch normalization. Our implementation does not split the convolutions into two separate blocks. We used the Adam optimizer with learning rate 1e-4 on the cross-entropy loss for 80 epochs, decaying the learning rate by 0.1 after 56 and 64 epochs. For the sign activation, we used a weight decay of 5e-6 as in Zhou et al. (2016). For the ReLU and saturated ReLU activations, which are much more likely to overï¬t, we used a weight decay of 5e-4, as used in Krizhevsky et al. (2012). For the 2-bit qReLU activation, we used a weight decay of 5e-5, since it is more expressive than sign but less so than ReLU.
As with AlexNet, we trained ResNet-18 (He et al., 2015b) on ImageNet with sign, qReLU, ReLU, and saturated ReLU activations; however, for ResNet-18 we used a qReLU with k = 5 steps (i.e., 6 quantization levels, requiring 3 bits). We used the ResNet code provided by PyTorch. We optimized the cross-entropy loss with SGD with learning rate 0.1 and momentum 0.9 for 90 epochs, decaying the learning rate by a factor of 0.1 after 30 and 60 epochs. For the sign activation, we used a weight decay of 5e-7. For the ReLU and saturated ReLU activations, we used a weight decay of 1e-4. For the qReLU activation, we used a weight decay of 1e-5.
A.4 LEARNING CURVES FOR IMAGENET
Figure 6: The top-1 train (thin dashed lines) and test (thicker solid lines) accuracies for AlexNet with different activation functions on ImageNet. The inset ï¬gures show the test accuracy for the ï¬nal 25 epochs in detail. The left ï¬gure shows the network with sign activations. The right ï¬gure shows the network with 2-bit quantized ReLU (qReLU) activations and with the full-precision baselines. Best viewed in color.
Figure 7: The top-1 train (thin dashed lines) and test (thicker solid lines) accuracies for ResNet-18 with different activation functions on ImageNet. The inset ï¬gures show the test accuracy for the ï¬nal 60 epochs in detail. The left ï¬gure shows the network with sign activations. The right ï¬gure shows the network with 3-bit quantized ReLU (qReLU) activations and with the full-precision baselines. Best viewed in color.
# Conditional Variance Penalties and Domain Shift Robustness
Christina Heinze-Deml & Nicolai Meinshausen Seminar for Statistics ETH Zurich Zurich, Switzerland {heinzedeml,meinshausen}@stat.math.ethz.ch
Abstract

When training a deep neural network for image classification, one can broadly distinguish between two types of latent features of images that will drive the classification. We can divide latent features into (i) "core" or "conditionally invariant" features X^core whose distribution X^core | Y, conditional on the class Y, does not change substantially across domains and (ii) "style" features X^style whose distribution X^style | Y can change substantially across domains. Examples for style features include position, rotation, image quality or brightness but also more complex ones like hair color, image quality or posture for images of persons. Our goal is to minimize a loss that is robust under changes in the distribution of these style features. In contrast to previous work, we assume that the domain itself is not observed and hence a latent variable.

We do assume that we can sometimes observe a typically discrete identifier or "ID variable". In some applications we know, for example, that two images show the same person, and ID then refers to the identity of the person. The proposed method requires only a small fraction of images to have ID information. We group observations if they share the same class and identifier (Y, ID) = (y, id) and penalize the conditional variance of the prediction or the loss if we condition on (Y, ID). Using a causal framework, this conditional variance regularization (CoRe) is shown to protect asymptotically against shifts in the distribution of the style variables. Empirically, we show that the CoRe penalty improves predictive accuracy substantially in settings where domain changes occur in terms of image quality, brightness and color while we also look at more complex changes such as changes in movement and posture.

Keywords: Domain shift; Dataset shift; Causal models; Distributional robustness; Anti-causal prediction; Image classification
# 1. Introduction
Deep neural networks (DNNs) have achieved outstanding performance on prediction tasks like visual object and speech recognition (Krizhevsky et al., 2012; Szegedy et al., 2015; He et al., 2015). Issues can arise when the learned representations rely on dependencies that vanish in test distributions (see for example Quionero-Candela et al. (2009); Torralba and Efros (2011); Csurka (2017) and references therein). Such domain shifts can be caused by changing conditions such as color, background or location changes. Predictive performance is then likely to degrade. For example, consider the analysis presented in Kuehlkamp et al. (2017) which is concerned with the problem of predicting a personâs gender based on images of their iris. The results indicate that this problem is more diï¬cult than previous studies
have suggested due to the remaining eï¬ect of cosmetics after segmenting the iris from the whole image.1 Previous analyses obtained good predictive performance on certain datasets but when testing on a dataset only including images without cosmetics accuracy dropped. In other words, the high predictive performance previously reported relied to a signiï¬cant extent on exploiting the confounding eï¬ect of mascara on the iris segmentation which is highly predictive for gender. Rather than the desired ability of discriminating based on the irisâ texture the systems would mostly learn to detect the presence of cosmetics.
More generally, existing biases in datasets used for training machine learning algorithms tend to be replicated in the estimated models (Bolukbasi et al., 2016). For an example involving Googleâs photo app, see Crawford (2016) and Emspak (2016). In §5 we show many examples where unwanted biases in the training data are picked up by the trained model. As any bias in the training data is in general used to discriminate between classes, these biases will persist in future classiï¬cations, raising also considerations of fairness and discrimination (Barocas and Selbst, 2016).
Addressing the issues outlined above, we propose Conditional variance Regularization (CoRe) to give diï¬erential weight to diï¬erent latent features. Conceptually, we take a causal view of the data generating process and categorize the latent data generating factors into âconditionally invariantâ (core) and âorthogonalâ (style) features, as in Gong et al. (2016). The core and style features are unobserved and can in general be highly nonlinear transformations of the observed input data. It is desirable that a classiï¬er uses only the core features as they pertain to the target of interest in a stable and coherent fashion. Basing a prediction on the core features alone yields stable predictive accuracy even if the style features are altered. CoRe yields an estimator which is approximately invariant under changes in the conditional distribution of the style features (conditional on the class labels) and it is asymptotically robust with respect to domain shifts, arising through interventions on the style features. CoRe relies on the fact that for certain datasets we can observe grouped observations in the sense that we observe the same object under diï¬erent conditions. Rather than pooling over all examples, CoRe exploits knowledge about this grouping, i.e., that a number of instances relate to the same object. By penalizing between-object variation of the prediction less than variation of the prediction for the same object, we can steer the prediction to be based more on the latent core features and less on the latent style features. While the proposed methodology can be motivated from the desire the achieve representational invariance with respect to the style features, the causal framework we use throughout this work allows to precisely formulate the distribution shifts we aim to protect against.
The remainder of this manuscript is structured as follows: §1.1 starts with a few mo- tivating examples, showing simple settings where the style features change in the test dis- tribution such that standard empirical risk minimization approaches would fail. In §1.2 we review related work, introduce notation in §2 and in §3 we formally introduce conditional variance regularization CoRe. In §4, CoRe is shown to be asymptotically equivalent to minimizing the risk under a suitable class of strong interventions in a partially linear classi- ï¬cation setting, provided one chooses suï¬ciently strong CoRe penalties. We also show that
1. Segmenting eyelashes from the iris is not entirely accurate which implies that the iris images can still contain parts of eyelashes, occluding the iris. As mascara causes the eyelashes to be thicker and darker, it is diï¬cult to entirely remove the presence of cosmetics from the iris images.
the population CoRe penalty induces domain shift robustness for general loss functions to ï¬rst order in the intervention strength. The size of the conditional variance penalty can be shown to determine the size of the distribution class over which we can expect distributional robustness. In §5 we evaluate the performance of CoRe in a variety of experiments.
(i) Causal framework and distributional robustness. We provide a causal frame- work to deï¬ne distributional shifts for style variables. Our framework allows that the domain variable itself is latent.
(ii) Conditional variance penalties. We introduce conditional variance penalties and show two robustness properties in Theorems 1 and 2.
(iii) Software. We illustrate our ideas using synthetic and real-data experiments. A TensorFlow implementation of CoRe as well as code to reproduce some of the exper- imental results are available at https://github.com/christinaheinze/core.
# 1.1 Motivating examples
To motivate the methodology we propose, consider the examples shown in Figures 1 and 2. Example 1 shows a setting where a linear decision boundary is suitable. Panel (a) in Figure 1 shows a subsample of the training data where class 1 is associated with red points, dark blue points correspond to class 0. If we were asked to draw a decision boundary based on the training data, we would probably choose one that is approximately horizontal. The style feature here corresponds to a linear direction (1, â0.75)t. Panel (b) shows a subsample of the test set where the style feature is intervened upon for class 1 observations: class 1 is associated with orange squares, cyan squares correspond to class 0. Clearly, a horizontal decision boundary would have misclassiï¬ed all test points of class 1.
Example 2 shows a setting where a nonlinear decision boundary is required. Here, the core feature corresponds to the distance from the origin while the style feature corresponds to the angle between the x1-axis and the vector from the origin to (x1, x2). Panel (c) shows a subsample of the training data and panel (d) additionally shows a subsample of the test data where the styleâi.e. the distribution of the angleâis intervened upon. Clearly, a circular decision boundary yields optimal performance on both training and test set but is unlikely to be found by a standard classiï¬cation algorithm when only using the training set for the estimation. We will return to these examples in §3.4.
Lastly, we introduce a strong dependence between the class label and the style feature âimage qualityâ in the third example by manipulating the face images from the CelebA dataset (Liu et al., 2015): in the training set images of class âwearing glassesâ are associated with a lower image quality than images of class ânot wearing glassesâ. Examples are shown in Figure 2(a). In the test set, this relation is reversed, i.e. images showing persons wearing glasses are of higher quality than images of persons without glasses, with examples in Figure 2(b). We will return to this example in §5.3 and show that training a convolutional neural network to distinguish between people wearing glasses or not works well on test data that are drawn from the same distribution (with error rates below 2%) but fails entirely on the shown test data, with error rates worse than 65%.
(a) Example 1, training set. (b) Example 1, test set. (c) Example 2, training set. (d) Example 2, test set.
Figure 1: Motivating examples 1 and 2: a linear example in (a) and (b) and a nonlinear example in (c) and (d). The distributions are shifted in test data by style interventions where style in example (a/b) is the linear direction (1, â0.75) and the polar angle in example (c/d). Standard estimators achieve error rates of 0% on the training data and test data drawn from the same distribution as the training data (panels (a) and (c), respectively). On the shown test set where the distribution of the style conditional on Y has changed the error rates are > 50% (panels (b) and (d), respectively).
(a) Example 3, training set. (b) Example 3, test set.
Figure 2: Motivating example 3: The goal is to predict whether a person is wearing glasses. The distributions are shifted in test data by style interventions where style is the image quality. A 5-layer CNN achieves 0% training error and 2% test error for images that are sampled from the same distribution as the training images (a), but a 65% error rate on images where the confounding between image quality and glasses is changed (b). See §5.3 for more details.
# 1.2 Related work
For general distributional robustness, the aim is to learn
argmin_θ sup_{F ∈ F} E_F[ℓ(Y, f_θ(X))]    (1)

for a given set F of distributions, twice differentiable and convex loss ℓ, and prediction f_θ(x). The set F is the set of distributions on which one would like the estimator to achieve a guaranteed performance bound.
Causal inference can be seen to be a speciï¬c instance of distributional robustness, where we take F to be the class of all distributions generated under do-interventions on X (Mein- shausen, 2018; Rothenh¨ausler et al., 2018). Causal models thus have the deï¬ning advantage that the predictions will be valid even under arbitrarily large interventions on all predictor variables (Haavelmo, 1944; Aldrich, 1989; Pearl, 2009; Sch¨olkopf et al., 2012; Peters et al., 2016; Zhang et al., 2013, 2015; Yu et al., 2017; Rojas-Carulla et al., 2018; Magliacane et al., 2018). There are two diï¬culties in transferring these results to the setting of domain shifts in image classiï¬cation. The ï¬rst hurdle is that the classiï¬cation task is typically anti-causal since the image we use as a predictor is a descendant of the true class of the object we are interested in rather than the other way around. The second challenge is that we do not want (or could) guard against arbitrary interventions on any or all variables but only would like to guard against a shift of the style features. It is hence not immediately obvious how standard causal inference can be used to guard against large domain shifts.
Another line of work uses a class of distributions of the form F = F_ε(F_0) with

F_ε(F_0) := {distributions F′ such that D(F′, F_0) ≤ ε},    (2)

with ε > 0 a small constant and D(F, F_0) being, for example, a φ-divergence (Namkoong and Duchi, 2017). F_0 can be the true (but generally unknown) population distribution P from which the data were drawn or its empirical counterpart P_n. The distributionally robust targets in Eq. (2) can often be expressed in penalized form (Gao et al., 2017; Sinha et al., 2018; Xu et al., 2009). A Wasserstein-ball is a suitable class of distributions for example in the context of adversarial examples (Sinha et al., 2018; Szegedy et al., 2014; Goodfellow et al., 2015).
In this work, we do not try to achieve robustness with respect to a set of distributions that are pre-deï¬ned by a Kullback-Leibler divergence or a Wasserstein metric as in Eq. (2). We try to achieve robustness against a set of distributions that are generated by interven- tions on latent style variables. We will formulate the class of distributions over which we try to achieve robustness as in Eq. (1) but with the class of distributions in Eq. (2) now replaced with
F_ξ = {F : D_style(F, F_0) ≤ ξ},    (3)
where F0 is again the distribution the training data are drawn from. The diï¬erence to standard distributional robustness approaches listed below Eq. (2) is now that the metric Dstyle measures the shift of the orthogonal style features. We do not know a priori which features are prone to distributional shifts and which features have a stable (conditional) distribution. The metric is hence not known a priori and needs to be inferred in a suitable sense from the data.
Similar to this work in terms of their goals are the work of Gong et al. (2016) and Domain-Adversarial Neural Networks (DANN) proposed in Ganin et al. (2016), an approach motivated by the work of Ben-David et al. (2007). The main idea of Ganin et al. (2016) is to learn a representation that contains no discriminative information about the origin of the input (source or target domain). This is achieved by an adversarial training procedure: the loss on domain classiï¬cation is maximized while the loss of the target prediction task is minimized simultaneously. The data generating process assumed in Gong et al. (2016) is similar to our model, introduced in §2.1, where we detail the similarities and diï¬erences between the models (cf. Figure 3). Gong et al. (2016) identify the conditionally independent features by adjusting a transformation of the variables to minimize the squared MMD distance between distributions in diï¬erent domains2. The fundamental diï¬erence between these very promising methods and our approach is that we use a diï¬erent data basis. The domain identiï¬er is explicitly observable in Gong et al. (2016) and Ganin et al. (2016), while it is latent in our approach. In contrast, we exploit the presence of an identiï¬er variable ID that relates to the identity of an object (for example identifying a person). In other words, we do not assume that we have data from diï¬erent domains but just diï¬erent realizations of the same object under diï¬erent interventions. This also diï¬erentiates this work from latent domain adaptation papers from the computer vision literature (Hoï¬man et al., 2012; Gong et al., 2013). Further related work is discussed in §6.
# 2. Setting
We introduce the assumed underlying causal graph and some notation before discussing notions of domain shift robustness.
2. The distinction between âconditionally independentâ features and âconditionally transferableâ (which is the former modulo location and scale transformations) is for our purposes not relevant as we do not make a linearity assumption in general.
Figure 3: Observed quantities are shown as shaded nodes; nodes of latent quantities are transparent. Left: data generating process for the considered model as in Gong et al. (2016), where the effect of the domain on the orthogonal features X^style is mediated via unobserved noise Δ. The style interventions and all its descendants are shown as nodes with dashed borders to highlight variables that are affected by style interventions. Right: our setting. The domain itself is unobserved but we can now observe the (typically discrete) ID variable we use for grouping. The arrow between ID and Y can be reversed, depending on the sampling scheme.
# 2.1 Causal graph
Let Y ∈ 𝒴 be a target of interest. Typically 𝒴 = R for regression or 𝒴 = {1, . . . , K} in classification with K classes. Let X ∈ R^p be predictor variables, for example the p pixels of an image. The causal structural model for all variables is shown in the panel (b) of Figure 3. The domain variable D is latent, in contrast to Gong et al. (2016) whose model is shown in panel (a) of Figure 3. We add the ID variable whose distribution can change conditional on Y. In Figure 3, Y → ID but in some settings it might be more plausible to consider ID → Y. For the proposed method both options are possible. Together with Y, the ID variable is used to group observations. It is typically discrete and relates to the identity of the underlying object (identity of a person, for example). The variable can be assumed to be latent in the setting of Gong et al. (2016).
The rest of the graph is in analogy to Gong et al. (2016). The prediction is anti-causal, that is the predictor variables X that we use for Ŷ are non-ancestral to Y. In other words, the class label is here seen to be causal for the image and not the other way around.³ The causal effect from the class label Y on the image X is mediated via two types of latent variables: the so-called core or "conditionally invariant" features X^core and the orthogonal or style features X^style. The distinguishing factor between the two is that external interventions Δ are possible on the style features but not on the core features. If the interventions Δ have different distributions in different domains, then the conditional distributions X^core | Y = y, ID = id are invariant for all (y, id) while X^style | Y = y, ID = id can change. The style variable can include point of view, image quality, resolution, rotations, color changes, body posture, movement etc. and will in general be context-dependent.⁴ The style intervention variable Δ influences both the latent style X^style, and hence also the image X. In potential outcome notation, we let X^style(Δ = δ) be the style under the intervention Δ = δ and X(Y, ID, Δ = δ) the image for class Y, identity ID and style intervention Δ = δ. The latter is sometimes abbreviated as X(Δ = δ) for notational simplicity. Finally, f_θ(X(Δ = δ)) is the prediction under the style intervention Δ = δ. For a formal justification of using a causal graph and potential outcome notation simultaneously see Richardson and Robins (2013).
To be specific, if not mentioned otherwise we will assume a causal graph as follows. For independent ε_Y, ε_ID, ε_style in R, R, R^q respectively with positive density on their support and continuously differentiable functions k_y, k_id, and k_style, k_core, k_x,

Y ← k_y(D, ε_Y)
identifier ID ← k_id(Y, ε_ID)
core or conditionally invariant features X^core ← k_core(Y, ID)
style or orthogonal features X^style ← k_style(Y, ID, ε_style) + Δ
image X ← k_x(X^core, X^style).    (4)
3. If an existing image is classified by a human, then the image is certainly ancestral for the attached label. If the label Y refers, however, to the underlying true object (say if you generate images by asking people to take pictures of objects), then the more fitting model is the one where Y is ancestral for X.
4. The type of features we regard as style and which ones we regard as core features can conceivably change depending on the circumstances: for instance, is the color "gray" an integral part of the object "elephant" or can it be changed so that a colored elephant is still considered to be an elephant?
Hence, the core features are assumed to be a deterministic function of Y and ID. The prediction ŷ for y, given X = x, is of the form f_θ(x) for a suitable function f_θ with parameters θ ∈ R^d, where the parameters θ correspond to the weights in a DNN, for example.
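As a toy illustration of model (4), the following sketch simulates data in the spirit of the nonlinear motivating example (core feature = radius, style feature = angle). All functional forms, parameter values, and the name `sample` are illustrative choices and not taken from the paper:

```python
import numpy as np

def sample(n, delta_scale, rng):
    y = rng.binomial(1, 0.5, size=n)                      # Y = k_y(D, eps_Y)
    ident = rng.integers(0, 50, size=n)                   # ID = k_id(Y, eps_ID)
    core = 1.0 + y + 0.01 * ident                         # X_core = k_core(Y, ID): radius
    style = 0.5 * y + 0.1 * rng.standard_normal(n)        # k_style(Y, ID, eps_style): angle
    style = style + delta_scale * rng.standard_normal(n)  # + Delta, the style intervention
    x = np.stack([core * np.cos(style), core * np.sin(style)], axis=1)  # X = k_x(...)
    return x, y

rng = np.random.default_rng(0)
x_train, y_train = sample(10000, delta_scale=0.0, rng=rng)  # training domain
x_test, y_test = sample(10000, delta_scale=2.0, rng=rng)    # shifted style at test time
```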
# 2.2 Data
We assume we have n data points (x_i, y_i, id_i) for i = 1, . . . , n, where the observations id_i with i = 1, . . . , n of variable ID can also contain unobserved values. Let m ≤ n be the number of unique realizations of (Y, ID) and let S_1, . . . , S_m be a partition of {1, . . . , n} such that, for each j ∈ {1, . . . , m}, the realizations (y_i, id_i) are identical⁵ for all i ∈ S_j. While our prime application is classification, regression settings with continuous Y can be approximated in this framework by slicing the range of the response variable into distinct bins in analogy to the approach in sliced inverse regression (Li, 1991). The cardinality of S_j is denoted by n_j := |S_j| ≥ 1. Then n = Σ_{j=1}^{m} n_j is again the total number of samples and c = n − m is the total number of grouped observations. Typically n_j = 1 for most samples and occasionally n_j ≥ 2, but one can also envisage scenarios with larger groups of the same identifier (y, id).
# 2.3 Domain shift robustness
In this section, we clarify against which classes of distributions we hope to achieve robustness. Let ℓ be a suitable loss that maps y and ŷ = f_θ(x) to R^+. The risk under distribution F and parameter θ is given by

E_F[ℓ(Y, f_θ(X))].

Let F_0 be the joint distribution of (ID, Y, X^style) in the training distribution. A new domain and explicit interventions on the style features can now shift the distribution of (ID, Y, X̃^style) to F. We can measure the distance between distributions F_0 and F in different ways. Below we will define the distance considered in this work and denote it by D_style(F, F_0). Once defined, we get a class of distributions

F_ξ = {F : D_style(F_0, F) ≤ ξ}    (5)

and the goal will be to optimize a worst-case loss over this distribution class in the sense of Eq. (1), where larger values of ξ afford protection against larger distributional changes. The relevant loss for distribution class F_ξ is then

L_ξ(θ) = sup_{F ∈ F_ξ} E_F[ℓ(Y, f_θ(X))].    (6)

In the limit of arbitrarily strong interventions on the style features X^style, the loss is given by

L_∞(θ) = lim_{ξ→∞} sup_{F ∈ F_ξ} E_F[ℓ(Y, f_θ(X))].    (7)

Minimizing the loss L_∞(θ) with respect to θ guarantees an accuracy in prediction which will work well across arbitrarily large shifts in the conditional distribution of the style features. A natural choice to define D_style is to use a Wasserstein-type distance (see e.g. Villani, 2003). We will first define a distance D_{y,id} for the conditional distributions
9
Minimizing the loss Lâ(θ) with respect to θ guarantees an accuracy in prediction which will work well across arbitrarily large shifts in the conditional distribution of the style features. A natural choice to deï¬ne Dstyle is to use a Wasserstein-type distance (see e.g. Villani, 2003). We will ï¬rst deï¬ne a distance Dy,id for the conditional distributions
X^style | Y = y, ID = id   and   X̃^style | Y = y, ID = id,

and then set D(F_0, F) = E(D_{Y,ID}), where the expectation is with respect to random ID and labels Y. The distance D_{y,id} between the two conditional distributions of X^style will be defined as a Wasserstein W_2^2(F_0, F)-distance for a suitable cost function c(x, x̃). Specifically, let Π_{y,id} be the couplings between the conditional distributions of X^style and X̃^style, meaning measures supported on R^q × R^q such that the marginal distribution over the first q components is equal to the distribution of X^style and the marginal distribution over the remaining q components equal to the distribution of X̃^style. Then the distance between the conditional distributions is defined as

D_{y,id} = min_{M ∈ Π_{y,id}} E_M[c(x, x̃)],

where c : R^q × R^q → R^+ is a nonnegative, lower semi-continuous cost function. Here, we focus on a Mahalanobis distance as cost

c²(x, x̃) = (x − x̃)^t Σ_{y,id}^{−1} (x − x̃).

The cost of a shift is hence measured against the variability under the distribution F_0, Σ_{y,id} = Cov(X^style | Y, ID).⁶
# 3. Conditional variance regularization
# 3.1 Pooled estimator
Let (x_i, y_i) for i = 1, . . . , n be the observations that constitute the training data and ŷ_i = f_θ(x_i) the prediction for y_i. The standard approach is to simply pool over all available observations, ignoring any grouping information that might be available. The pooled estimator thus treats all examples identically by summing over the empirical loss as

θ̂^pool = argmin_θ Ê[ℓ(Y, f_θ(X))] + γ · pen(θ),    (8)

where the first part is simply the empirical loss over the training data,

Ê[ℓ(Y, f_θ(X))] = (1/n) Σ_{i=1}^{n} ℓ(y_i, f_θ(x_i)).

In the second part, pen(θ) is a complexity penalty, for example a squared ℓ2-norm of the weights θ in a convolutional neural network as a ridge penalty. All examples that compare to the pooled estimator will include a ridge penalty as default.
6. As an example, if the change in distribution for X^style is caused by random shift-interventions Δ, then X̃^style = X^style + Δ, and the distance Dstyle induced in the distributions satisfies

Dstyle(F0, F) ≤ E[E(Δ^t Σy,id^{−1} Δ | Y = y, ID = id)],
ensuring that the strength of the shifts is measured against the natural variability Σy,id of the style features.
10
# 3.2 CoRe estimator

The CoRe estimator is defined in Lagrangian form for penalty λ ≥ 0 as
θ̂^core(λ) = argmin_θ Ê[ℓ(Y, fθ(X))] + λ · Ĉθ.     (9)
The penalty Ĉθ is a conditional variance penalty of the form
conditional-variance-of-prediction:   Ĉf,ν,θ := Ê[Var(fθ(X) | Y, ID)^ν],     (10)

conditional-variance-of-loss:   Ĉl,ν,θ := Ê[Var(ℓ(Y, fθ(X)) | Y, ID)^ν],     (11)
where typically ν ∈ {1/2, 1}. For ν = 1/2, we also refer to the respective penalties as "conditional-standard-deviation" penalties. In the equivalent constrained form, the estimator can be viewed as an instance of a restricted maximum likelihood estimator (Harville, 1974; Verbeke and Molenberghs, 2009). In practice, in the context of classification and DNNs, we apply the penalty (10) to the predicted logits. The conditional-variance-of-loss penalty (11) takes a similar form to Namkoong and Duchi (2017). The crucial difference of our approach to Namkoong and Duchi (2017) is that we penalize with the expected conditional variance or standard deviation. The fact that we take a conditional variance is here important as we try to achieve distributional robustness with respect to interventions on the style variables. Conditioning on ID allows to guard specifically against these interventions. An unconditional variance penalty, in contrast, can achieve robustness against a pre-defined class of distributions such as a ball of distributions defined in a Kullback-Leibler or Wasserstein metric. The population CoRe estimator is defined as in Eq. (9) where empirical estimates are replaced by their respective population quantities.
Before showing numerical examples, we discuss the estimation of the expected conditional variance in §3.3 and return to the simple examples of §1.1 in §3.4. Domain shift robustness in a classification setting for a partially linear version of the structural equation model (4) is shown in §4.1. Furthermore, we discuss the population limit of θ̂^core(λ) in §4.2, where we show that the regularization parameter λ ≥ 0 is proportional to the size of the future style interventions that we want to guard against for future test data.
# 3.3 Estimating the expected conditional variance
Recall that Sj ⊆ {1, . . . , n} contains samples with identical realizations of (Y, ID) for j ∈ {1, . . . , m}. For each j ∈ {1, . . . , m}, define μ̂θ,j as the arithmetic mean across all fθ(xi), i ∈ Sj. The canonical estimator of the conditional variance Ĉf,1,θ is then
Ĉf,1,θ = (1/m) Σ_{j=1}^m 1/(|Sj| − 1) Σ_{i ∈ Sj} (fθ(xi) − μ̂θ,j)²,   where   μ̂θ,j = (1/|Sj|) Σ_{i ∈ Sj} fθ(xi)

(groups with |Sj| = 1 contribute zero to the sum),
and analogously for the conditional-variance-of-loss, defined in Eq. (11)7. If there are no groups of samples that share the same identifier (y, id), we define Ĉf,1,θ to vanish. The CoRe estimator is then identical to pooled estimation in this special case.
7. The right hand side can also be interpreted as the graph Laplacian (Belkin et al., 2006) of an appropriately weighted graph that fully connects all observations i ∈ Sj for each j ∈ {1, . . . , m}.
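Once the groups are known, the estimator is straightforward to compute. The sketch below (assuming the predictions fθ(xi) are available as an array and `groups` is a partition as in §2.2; this is not the paper's code) mirrors the formula, with singleton groups contributing zero.

```python
# Canonical estimator of the conditional-variance-of-prediction penalty C_{f,1,theta}.
import numpy as np

def conditional_variance_penalty(predictions, groups):
    """Average within-group sample variance; groups of size one contribute zero."""
    total = 0.0
    for S_j in groups:
        if len(S_j) > 1:
            total += np.var(predictions[S_j], ddof=1)   # 1/(|S_j|-1) * sum (f - mean)^2
    return total / len(groups)

preds = np.array([0.2, 0.8, 0.5, 0.1, 0.4])
groups = [[0, 1], [2], [4], [3]]                        # partition S_1, ..., S_m
print(conditional_variance_penalty(preds, groups))      # var([0.2, 0.8]) / 4 = 0.045
```

The same function applied to per-sample losses instead of predictions gives the conditional-variance-of-loss analogue.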
11
# 3.4 Motivating examples (continued)
We revisit the first and the second example from §1.1. Figure 4 shows subsamples of the respective training and test sets with the estimated decision boundaries for different values of the penalty parameter λ; in both examples, n = 20000 and c = 500. Additionally, grouped examples that share the same (y, id) are visualized: two grouped observations are connected by a line or curve, respectively. In each example, there are ten such groups visualized (better visible in the nonlinear example).
Panel (a) shows the linear decision boundaries for λ = 0, equivalent to the pooled estimator, and for CoRe with λ ∈ {.1, 1}. The pooled estimator misclassifies all test points of class 1 as can be seen in panel (b), suffering from a test error of ≈ 51%. In contrast, the decision boundary of the CoRe estimator with λ = 1 aligns with the direction along which the grouped observations vary, classifying the test set with almost perfect accuracy (test error is ≈ 0%).
Panels (c) and (d) show the corresponding plots for the second example for penalty values λ ∈ {0, 0.05, 0.1, 1}. While all of them yield good performance on the training set, only a value of λ = 1, which is associated with a circular decision boundary, achieves almost perfect accuracy on the test set (test error is ≈ 0%). The pooled estimator suffers from a test error of ≈ 58%.
# 4. Domain shift robustness for the CoRe estimator
We show two properties of the CoRe estimator. First, consistency is shown under the risk definition (7) for an infinitely large conditional variance penalty and the logistic loss in a partially linear structural equation model. Second, the population CoRe estimator is shown to achieve distributional robustness against shift interventions in a first order expansion.
# 4.1 Asymptotic domain shift robustness under strong interventions
We analyze the loss under strong domain shifts, as given in Eq. (7), for the pooled and the CoRe estimator in a one-layer network for binary classiï¬cation (logistic regression) in an asymptotic setting of large sample size and strong interventions.
Assume the structural equation for the image X ∈ R^p is linear in the style features X^style ∈ R^q (with generally p ≫ q) and we use logistic regression to predict the class label Y ∈ {−1, 1}. Let the interventions Δ ∈ R^q act additively on the style features X^style (this is only for notational convenience) and let the style features X^style act in a linear way on the image X via a matrix W ∈ R^{p×q} (this is an important assumption without which results are more involved). The core or "conditionally invariant" features are X^core ∈ R^r, where in general r < p but this is not important for the following. For independent εY, εID, εstyle in R, R, R^q respectively with positive density on their support and continuously differentiable
12
[Figure 4 panels: (a) Example 1, training set; (b) Example 1, test set; (c) Example 2, training set; (d) Example 2, test set. Legend: Y = 0 / Y = 1, train and test samples.]
Figure 4: The decision boundary as function of the penalty parameter λ for the examples 1 and 2 from Figure 1. There are ten pairs of samples visualized that share the same identifier (y, id) and these are connected by a line resp. a curve in the figures (better visible in panels (c) and (d)). The decision boundary associated with a solid line corresponds to λ = 0, the standard pooled estimator that ignores the groupings. The broken lines are decision boundaries for increasingly strong penalties, taking into account the groupings in the data. Here, we only show a subsample of the data to avoid overplotting.
13
functions ky, kid, kstyle, kcore, kx,

class:  Y ← ky(D, εY)
identifier:  ID ← kid(Y, εID)
core or conditionally invariant features:  X^core ← kcore(Y, ID)
style or orthogonal features:  X^style ← kstyle(Y, ID, εstyle) + Δ
image:  X ← kx(X^core) + W X^style.     (12)
We assume a logistic regression as a prediction of Y from the image data X:
fθ(x) := exp(x^t θ) / (1 + exp(x^t θ)).
Given training data with n samples, we estimate θ with θ̂ and use here a logistic loss ℓθ(yi, xi) = log(1 + exp(−yi (xi^t θ))).
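For concreteness, a small sketch of the prediction function and the logistic loss used in this analysis is given below; the data-generating functions of model (12) are not specified here, so the inputs are placeholders.

```python
# Logistic prediction f_theta and logistic loss for y in {-1, +1} (illustrative).
import numpy as np

def f_theta(x, theta):
    """f_theta(x) = exp(x^t theta) / (1 + exp(x^t theta))."""
    return 1.0 / (1.0 + np.exp(-(x @ theta)))

def logistic_loss(y, x, theta):
    """l_theta(y, x) = log(1 + exp(-y * x^t theta))."""
    return np.log1p(np.exp(-y * (x @ theta)))

x = np.array([1.0, -0.5, 2.0])
theta = np.array([0.3, 0.1, -0.2])
print(f_theta(x, theta), logistic_loss(+1, x, theta))
```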
The formulation of Theorem 1 relies on the following assumptions.
Assumption 1 We require the following conditions:
(A1) Assume the conditional distribution X^style | Y = y, ID = id under the training distribution F0 has positive density (with respect to the Lebesgue measure) in an ε-ball in ℓ2-norm around the origin for some ε > 0 for all y ∈ Y and id ∈ I.
(A2) Assume the matrix W has full rank q.
(A3) Let M ≤ n be the number of unique realizations among n iid samples of (Y, ID) and let pn := P(M ≤ n − q). Assume that pn → 1 for n → ∞.
Assumption (A3) guarantees that the number c = n − m of grouped examples is at least as large as the dimension of the style variables. If we have too few or no grouped examples (small c), we cannot estimate the conditional variance accurately. Under these assumptions we can prove domain shift robustness.
Theorem 1 (Asymptotic domain shift robustness under strong interventions) Under model (12) and Assumption 1, with probability 1, the pooled estimator (8) has infinite loss (7) under arbitrarily large shifts in the distribution of the style features,
L∞(θ̂^pool) = ∞. The CoRe estimator (9) θ̂^core with λ → ∞ is domain shift robust under strong interventions in the sense that for n → ∞,
L∞(θ̂^core) →p inf_θ L∞(θ).
A proof is given in §A. The respective ridge penalties in both estimators (8) and (9) are assumed to be zero for the proof, but the proof can easily be generalized to include ridge penalties that vanish sufficiently fast for large sample sizes. The Lagrangian regularizer λ is assumed to be infinite for the CoRe estimator to achieve domain shift robustness under these strong interventions. The next section considers the population CoRe estimator in a setting with weak interventions and finite values of the penalty parameter.
14
# 4.2 Population domain shift robustness under weak interventions

The previous theorem states that the CoRe estimator can achieve domain shift robustness under strong interventions for an infinitely strong penalty in an asymptotic setting. An open question is how the loss (6),
Lξ(θ) = sup_{F ∈ Fξ} E_F[ℓ(Y, fθ(X))],
behaves under interventions of small to medium size and correspondingly smaller values of the penalty. Here, we aim to minimize this loss for a given value of ξ and show that domain shift robustness can be achieved to first order with the population CoRe estimator using the conditional-standard-deviation-of-loss penalty, i.e., Eq. (11) with ν = 1/2, by choosing an appropriate value of the penalty λ. Below we will show this appropriate choice of the penalty weight is λ = √ξ.
Assumption 2 (B1) Define the loss under a deterministic shift δ as
hθ(δ) = E_{Fδ}[ℓ(Y, fθ(X))],
where the expectation is with respect to random (ID, Y, X̃^style) ∼ Fδ, with Fδ defined by the deterministic shift intervention X̃^style = X^style + δ and (ID, Y, X^style) ∼ F0. Assume that for all θ ∈ Θ, hθ(δ) is twice continuously differentiable with bounded second derivative for a deterministic shift δ ∈ R^q.
(B2) The spectral norm of the conditional variance Σy,id of X^style | Y, ID under F0 is assumed to be smaller or equal to some ζ ∈ R for all y ∈ Y and id ∈ I.
The first assumption (B1) ensures that the loss is well behaved under interventions on the style variables. The second assumption (B2) allows to take the limit of small conditional variances in the style variables.
If setting λ = √ξ and using the conditional-standard-deviation-of-loss penalty, the CoRe estimator optimizes according to

θ^core(√ξ) = argmin_θ E_{F0}[ℓ(Y, fθ(X))] + √ξ · Cl,1/2,θ.
The next theorem shows that this is to first order equivalent to minimizing the worst-case loss over the distribution class Fξ. The following result holds for the population CoRe estimator, see below for a discussion about consistency.
Theorem 2 The supremum of the loss over the class of distributions Fξ is to first order given by the expected loss under distribution F0 with an additional conditional-standard-deviation-of-loss penalty Cl,1/2,θ:
sup_{F ∈ Fξ} E_F[ℓ(Y, fθ(X))] = E_{F0}[ℓ(Y, fθ(X))] + √ξ · Cl,1/2,θ + O(max{ξ, ζ}).     (13)
A proof is given in Appendix §B. The objective of the population CoRe estimator thus matches to first order the loss under domain shifts if we set the penalty weight λ = √ξ. Larger
anticipated domain shifts thus require naturally a larger penalty λ in the CoRe estimation. The result is possible as we have chosen the Mahalanobis distance to measure shifts in the style variable and define Fξ, ensuring that the strength of shifts on style variables are measured against the natural variance on the training distribution F0.
In practice, the choice of λ involves a somewhat subjective choice about the strength of the distributional robustness guarantee. A stronger distributional robustness property is traded off against a loss in predictive accuracy if the distribution is not changing in the future. One option for choosing λ is to choose the largest penalty weight before the validation loss increases considerably. This approach would provide the best distributional robustness guarantee that keeps the loss of predictive accuracy in the training distribution within a pre-specified bound.
As a caveat, the result takes the limit of small conditional variance of X^style in the training distribution and small additional interventions. Under larger interventions higher-order terms could start to dominate, depending on the geometry of the loss function and fθ. A further caveat is that the result looks at the population CoRe estimator. For finite sample sizes, we would optimize a noisy version of the rhs of (13). To show domain shift robustness in an asymptotic sense, we would need additional uniform convergence (in θ) of both the empirical loss and the conditional variance in that for n → ∞,
sup_θ |Ê[ℓ(Y, fθ(X))] − E_{F0}[ℓ(Y, fθ(X))]| →p 0,   and   sup_θ |Ĉl,1/2,θ − Cl,1/2,θ| →p 0.
While this is in general a reasonable assumption to make, the validity of the assumption will depend on the specific function class and on the chosen estimator of the conditional variance.
# 5. Experiments
We perform an array of different experiments, showing the applicability and advantage of the conditional variance penalty for two broad settings:
1. Settings where we do not know what the style variables correspond to but still want to protect against a change in their distribution in the future. In the examples we show cases where the style variable ranges from fashion (§5.2), image quality (§5.3), movement (§5.4) and brightness (§5.7), which are all not known explicitly to the method. We also include genuinely unknown style variables in §5.1 (in the sense that they are unknown not only to the methods but also to us as we did not explicitly create the style interventions).
2. Settings where we do know what type of style interventions we would like to protect against. This is usually dealt with by data augmentation (adding images which are, say, rotated or shifted compared to the training data if we want to protect against rotations or translations in the test data; see for example Schölkopf et al. (1996)). The conditional variance penalty is here exploiting that some augmented samples were generated from the same original sample and we use as ID variable the index
Figure 5: Eyeglass detection for CelebA dataset with small sample size. The goal is to predict whether a person wears glasses or not. Random samples from training and test data are shown. Groups of observations in the training data that have common (Y, ID) here correspond to pictures of the same person with either glasses on or off. These are labelled by red boxes in the training data and the conditional variance penalty is calculated across these groups of pictures.
of the original image. We show that this approach generalizes better than simply pooling the augmented data, in the sense that we need fewer augmented samples to achieve the same test error. This setting is shown in §5.5.
Details of the network architectures can be found in Appendix §C. All reported error rates are averaged over five runs of the respective method. A TensorFlow (Abadi et al., 2015) implementation of CoRe can be found at https://github.com/christinaheinze/core.
# 5.1 Eyeglasses detection with small sample size
In this example, we explore a setting where training and test data are drawn from the same distribution, so we might not expect a distributional shift between the two. However, we consider a small training sample size which gives rise to statistical fluctuations between training and test data. We assess to which extent the conditional variance penalty can help to improve test accuracies in this setting.
Specifically, we use a subsample of the CelebA dataset (Liu et al., 2015) and try to classify images according to whether or not the person in the image wears glasses. For construction of the ID variable, we exploit the fact that several photos of the same person are available and set ID to be the identifier of the person in the dataset. Figure 5 shows examples from both the training and the test data set. The conditional variance penalty is estimated across groups of observations that share a common (Y, ID). Here, this corresponds to pictures of the same person where all pictures show the person either with glasses (if Y = 1) or all pictures show the person without glasses (Y = 0). Statistical fluctuations between training and test set could for instance arise if by chance the background of eyeglass wearers is darker in the training sample than in test samples, the eyeglass wearers happen to be outdoors more often or might be more often female than male, etc.
Below, we present the following analyses. First, we look at five different datasets and analyze the effect of adding the CoRe penalty (using conditional-variance-of-prediction)
to the cross-entropy loss. Second, we focus on one dataset and compare the four different variants of the CoRe penalty in Eqs. (10) and (11) with ν ∈ {1/2, 1}.
5.1.1 CoRe penalty using the conditional variance of the predicted logits
We consider five different training sets which are created as follows. For each person in the standard CelebA training data we count the number of available images and select the 50 identities for which most images are available individually. We partition these 50 identities into 5 disjoint subsets of size 10 and consider the resulting 5 datasets, containing the images of 10 unique identities each. The resulting 5 datasets have sizes {289, 296, 292, 287, 287}. For the validation and the test set, we consider the usual CelebA validation and test split but balance these with respect to the target variable "Eyeglasses". The balanced validation set consists of 2766 observations; the balanced test set contains 2578 images. The identities in the validation and test sets are disjoint from the identities in the training sets.
Given a training dataset, the standard approach would be to pool all examples. The only additional information we exploit is that some observations can be grouped. If using a 5-layer convolutional neural network with a standard ridge penalty (details can be found in Table C.1) and pooling all data, the test error on unseen images ranges from 18.08% to 25.97%. Exploiting the group structure with the CoRe penalty (in addition to a ridge penalty) results in test errors ranging from 14.79% to 21.49%, see Table 1. The relative improvements when using the CoRe penalty range from 9% to 28.6%.
The test error is not very sensitive to the weight of the CoRe penalty as shown in Figure 6(a): for a large range of penalty weights, adding the CoRe penalty decreases the test error compared to the pooled estimator (identical to a CoRe penalty weight of 0). This holds true for various ridge penalty weights.
While the test error rates shown in Figure 6 suggest already that the CoRe penalty differentiates itself clearly from a standard ridge penalty, we examine next the differential effect of the CoRe penalty on the between- and within-group variances. Concretely, the variance of the predictions can be decomposed as
Var(fθ(X)) = E[Var(fθ(X) | Y, ID)] + Var[E(fθ(X) | Y, ID)],

where the first term on the rhs is the within-group variance that CoRe penalizes, while a ridge penalty would penalize both the within- and also the between-group variance (the second term on the rhs above). In Figure 6(b) we show the ratio between the CoRe penalty and the between-group variance, where groups are defined by conditioning on (Y, ID). Specifically, the ratio is computed as

E[Var(fθ(X) | Y, ID)] / Var[E(fθ(X) | Y, ID)].     (14)

The results shown in Figure 6(b) are computed on dataset 1 (DS 1). While increasing ridge penalty weights do lead to a smaller value of the CoRe penalty, the between-group variance is also reduced, such that the ratio between the two terms does not decrease with larger weights of the ridge penalty.8 With increasing weight of the CoRe penalty, the variance ratio decreases, showing that the CoRe penalty indeed penalizes the within-group variance more than the between-group variance.
8. In Figure D.1 in the Appendix, the numerator and the denominator are plotted separately as a function of the CoRe penalty weight.
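The decomposition and the ratio (14) can be computed directly from per-sample predictions and the (Y, ID) groups, as in the following sketch (illustrative, not the paper's code).

```python
# Within-group vs. between-group variance of the predictions, and their ratio (14).
import numpy as np

def variance_ratio(predictions, groups):
    group_means = np.array([predictions[S_j].mean() for S_j in groups])
    within = np.mean([predictions[S_j].var() for S_j in groups])   # E[Var(f|Y,ID)]
    between = group_means.var()                                    # Var[E(f|Y,ID)]
    return within / between

rng = np.random.default_rng(2)
groups = [np.arange(5 * j, 5 * (j + 1)) for j in range(10)]        # 10 groups of 5
preds = rng.normal(size=50)
print(variance_ratio(preds, groups))
```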
| Dataset | Method | Training error | Test error | Penalty value (training) | Penalty value (test) |
|---|---|---|---|---|---|
| DS 1 | 5-layer CNN | 0.0% (0.00%) | 18.08% (0.24%) | 19.14 (1.70) | 18.86 (1.87) |
| DS 1 | 5-layer CNN + CoRe | 0.0% (0.00%) | 15.08% (0.43%) | 0.01 (0.01) | 0.70 (0.05) |
| DS 2 | 5-layer CNN | 0.0% (0.00%) | 23.81% (0.51%) | 6.20 (0.35) | 6.97 (0.46) |
| DS 2 | 5-layer CNN + CoRe | 0.0% (0.00%) | 17.00% (0.75%) | 0.00 (0.00) | 0.41 (0.04) |
| DS 3 | 5-layer CNN | 0.0% (0.00%) | 18.61% (0.52%) | 7.33 (1.40) | 7.91 (1.13) |
| DS 3 | 5-layer CNN + CoRe | 0.0% (0.00%) | 14.79% (0.89%) | 0.00 (0.00) | 0.26 (0.03) |
| DS 4 | 5-layer CNN | 0.0% (0.00%) | 25.97% (0.24%) | 6.19 (0.43) | 7.13 (0.54) |
| DS 4 | 5-layer CNN + CoRe | 0.0% (0.00%) | 21.12% (0.40%) | 0.00 (0.00) | 0.63 (0.04) |
| DS 5 | 5-layer CNN | 0.0% (0.00%) | 23.64% (0.64%) | 20.20 (2.46) | 24.85 (3.56) |
| DS 5 | 5-layer CNN + CoRe | 0.0% (0.00%) | 21.49% (1.27%) | 0.00 (0.00) | 0.59 (0.10) |
Table 1: Eyeglass detection, trained on small subsets (DS1–DS5) of the CelebA dataset with disjoint identities. We report training and test error as well as the value of the CoRe penalty Ĉf,1,θ on the training and the test set after training, evaluated for both the pooled estimator and the CoRe estimator. The weights of the ridge and the CoRe penalty were chosen based on their performance on the validation set.
(a) (b)
Figure 6: Eyeglass detection, trained on a small subset (DS1) of the CelebA dataset with disjoint identities. (a) Average test error as a function of both the CoRe penalty on the x-axis and various levels of the ridge penalty. The results can be seen to be fairly insensitive to the ridge penalty. (b) The variance ratio (14) on test data as a function of both the CoRe and ridge penalty weights. The CoRe penalty can be seen to penalize the within-group variance selectively, whereas a strong ridge penalty decreases both the within- and between-group variance.
Table 1 also reports the value of the CoRe penalty after training when evaluated for the pooled and the CoRe estimator on the training and the test set. As a qualitative measure to assess the presence of sample bias in the data (provided the model assumptions hold), we can compare the value the CoRe penalty takes after training when evaluated for the pooled estimator and the CoRe estimator. The difference yields a measure for the extent to which the respective estimators are functions of Δ. If the respective hold-out values are both small, this would indicate that the style features are not very predictive for the target variable. If, on the other hand, the CoRe penalty evaluated for the pooled estimator takes a much larger value than for the CoRe estimator (as in this case), this would indicate the presence of sample bias.
# 5.1.2 Other CoRe penalty types
We now compare all CoRe penalty types, i.e., penalizing with (i) the conditional variance of the predicted logits Ĉf,1,θ, (ii) the conditional standard deviation of the predicted logits Ĉf,1/2,θ, (iii) the conditional variance of the loss Ĉl,1,θ and (iv) the conditional standard deviation of the loss Ĉl,1/2,θ. For this comparison, we use the training dataset 1 (DS 1) from above. Table 2 contains the test error (training error was 0% for all methods) as
| Method | Test error | Penalty value (training) | Penalty value (test) |
|---|---|---|---|
| 5-layer CNN | 18.08% (0.24%) | 19.14 (1.70) | 18.86 (1.87) |
| 5-layer CNN + CoRe w/ Ĉf,1,θ | 15.08% (0.43%) | 0.01 (0.01) | 0.70 (0.05) |
| 5-layer CNN + CoRe w/ Ĉf,1/2,θ | 15.34% (0.83%) | 0.03 (0.01) | 0.89 (0.03) |
| 5-layer CNN + CoRe w/ Ĉl,1,θ | 15.12% (0.27%) | 0.00 (0.00) | 0.38 (0.03) |
| 5-layer CNN + CoRe w/ Ĉl,1/2,θ | 15.59% (0.36%) | 0.00 (0.00) | 0.35 (0.02) |
Table 2: Eyeglass detection, trained on a small subset (DS1) of the CelebA dataset with disjoint identities. We report training and test error as well as the value of the CoRe penalties Ĉf,1,θ, Ĉf,1/2,θ, Ĉl,1,θ and Ĉl,1/2,θ on the training and the test set after training, evaluated for both the pooled estimator and the CoRe estimator. The weights of the ridge and the CoRe penalty were chosen based on their performance on the validation set. The four CoRe penalty variants' performance differences are not statistically significant.
well as the value the respective CoRe penalty took after training on the training set and the test set. The four CoRe penalty variants' performance differences are not statistically significant. Hence, we mostly focus on the conditional variance of the predicted logits Ĉf,1,θ in the other experiments.
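All four variants can be evaluated from the same grouping; the sketch below (with illustrative arrays, not the experiments' data) computes them from per-sample logits and per-sample losses.

```python
# The four CoRe penalty variants of Table 2, for nu in {1, 1/2}.
import numpy as np

def core_penalty(values, groups, nu=1.0):
    """Average over groups of (within-group sample variance)^nu; singletons contribute zero."""
    per_group = [np.var(values[S_j], ddof=1) if len(S_j) > 1 else 0.0 for S_j in groups]
    return float(np.mean(np.power(per_group, nu)))

logits = np.array([1.2, 0.9, -0.3, -0.2, 0.5])
losses = np.array([0.3, 0.4, 0.8, 0.7, 0.6])
groups = [[0, 1], [2, 3], [4]]

C_f_1  = core_penalty(logits, groups, nu=1.0)    # conditional-variance-of-prediction
C_f_12 = core_penalty(logits, groups, nu=0.5)    # conditional-std-of-prediction
C_l_1  = core_penalty(losses, groups, nu=1.0)    # conditional-variance-of-loss
C_l_12 = core_penalty(losses, groups, nu=0.5)    # conditional-std-of-loss
print(C_f_1, C_f_12, C_l_1, C_l_12)
```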
# 5.1.3 Discussion
While the distributional shift in this example arises due to statistical fluctuations which will diminish as the sample size grows, the following examples are more concerned with biases that will persist even if the number of training and test samples is very large. A second difference to the subsequent examples is the grouping structure: in this example, we consider only a few identities, namely m = 10, with a relatively large number ni of associated observations (about thirty observations per individual). In the following examples, m is much larger while ni is typically smaller than five.
# 5.2 Gender classification with unknown confounding
In the following set of experiments, we work again with the CelebA dataset and the 5-layer convolutional neural network architecture described in Table C.1. This time we consider the problem of classifying whether the person shown in the image is male or female. We create a confounding in training and test set 1 by including mostly images of men wearing glasses and women not wearing glasses. In test set 2 the association between gender and glasses is flipped: women always wear glasses while men never wear glasses. Examples from the training and test sets 1 and 2 are shown in Figure 7. The training set, test set 1 and 2 are subsampled such that they are balanced with respect to Y, resulting in 16982, 4224 and 1120 observations, respectively.
Training data (n = 16982): Test data 1 (n = 4224): Test data 2 (n = 1120):
Figure 7: Classification for Y ∈ {woman, man}. There is an unknown confounding here as men are very likely to wear glasses in training and test set 1 data, while it is women that are likely to wear glasses in test set 2. Estimators that pool all observations are making use of this confounding and hence fail for test set 2. The conditional variance penalty for the CoRe estimator is computed over groups of images of the same person (and consequently same class label), such as the images in the red box on the left. The number of grouped examples c is 500. We vary the proportion of males in the grouped examples between 50% and 100% (cf. §5.2.1).
To compute the conditional variance penalty, we use again images of the same person. The ID variable is, in other words, the identity of the person and gender Y is constant across all examples with the same ID. Conditioning on (Y, ID) is hence identical to conditioning on ID alone. Another difference to the other experiments is that we consider a binary style feature here.
# 5.2.1 Label shift in grouped observations
We compare six different datasets that vary with respect to the distribution of Y in the grouped observations. In all training datasets, the total number of observations is 16982 and the total number of grouped observations is 500. In the first dataset, 50% of the grouped observations correspond to males and 50% correspond to females. In the remaining 5 datasets, we increase the proportion of grouped observations with Y = "man", denoted by κ, to 75%, 90%, 95%, 99% and 100%, respectively. Table 3 shows the performance obtained for these datasets when using the pooled estimator compared to the CoRe estimator with Ĉf,1,θ. The results show that both the pooled estimator as well as the CoRe estimator perform better if the distribution of Y in the grouped observations is more balanced. The CoRe estimator improves the error rate of the pooled estimator by ≈ 28–39% on a relative scale. Figure 8 shows the performance for κ = 50% as a function of the CoRe penalty weight. Significant improvements can be obtained across a large range of values for the CoRe penalty and the ridge penalty. Test errors become more sensitive to the chosen value of the CoRe penalty for very large values of the ridge penalty weight as the overall amount of regularization is already large.
(a) (b) (c) (d)
Figure 8: Classification for Y ∈ {woman, man} with κ = 0.5. Panels (a) and (b) show the test error on test data sets 1 and 2 respectively as a function of the CoRe and ridge penalty. Panels (c) and (d) show the variance ratio (14) (comparing within- and between-group variances) for females and males separately.
| κ | Method | Train error | Test 1 error | Test 2 error | Penalty (train) | Penalty (test: females) | Penalty (test: males) |
|---|---|---|---|---|---|---|---|
| 0.5 | 5-layer CNN | 0.00% | 6.43% | 38.54% | 22.77 | 74.05 | 30.67 |
| 0.5 | 5-layer CNN + CoRe | 2.00% | 5.85% | 24.07% | 0.01 | 1.61 | 0.93 |
| 0.75 | 5-layer CNN | 0.00% | 7.61% | 43.41% | 8.23 | 32.98 | 11.76 |
| 0.75 | 5-layer CNN + CoRe | 1.98% | 6.99% | 27.05% | 0.00 | 1.44 | 0.62 |
| 0.9 | 5-layer CNN | 0.00% | 8.76% | 47.64% | 9.47 | 40.51 | 14.37 |
| 0.9 | 5-layer CNN + CoRe | 2.00% | 7.74% | 30.63% | 0.00 | 1.26 | 0.42 |
| 0.95 | 5-layer CNN | 0.00% | 10.45% | 48.96% | 13.62 | 61.01 | 21.26 |
| 0.95 | 5-layer CNN + CoRe | 1.89% | 9.35% | 29.57% | 0.00 | 0.42 | 0.16 |
| 0.99 | 5-layer CNN | 0.00% | 11.10% | 50.11% | 20.66 | 70.80 | 27.80 |
| 0.99 | 5-layer CNN + CoRe | 1.70% | 10.51% | 32.91% | 0.00 | 0.00 | 0.00 |
| 1 | 5-layer CNN | 0.00% | 11.12% | 49.41% | 821.32 | 2524.77 | 1253.21 |
| 1 | 5-layer CNN + CoRe | 1.93% | 10.11% | 35.68% | 0.00 | 0.02 | 0.01 |
Table 3: Classification for Y ∈ {woman, man}. We compare six different datasets that vary with respect to the distribution of Y in the grouped observations. Specifically, we vary the proportion of images showing men between κ = 0.5 and κ = 1. In all training datasets, the total number of observations is 16982 and the total number of grouped observations is 500. Both the pooled estimator as well as the CoRe estimator perform better if the distribution of Y in the grouped observations is more balanced. The CoRe estimator improves the error rate of the pooled estimator by ≈ 28–39% on a relative scale. Table D.2 in the Appendix additionally contains the standard error of all shown results.
| Method | Train error | Test 1 error | Test 2 error |
|---|---|---|---|
| Inception V3 | 5.74% | 5.53% | 30.29% |
| Inception V3 + CoRe | 6.15% | 5.85% | 21.70% |
Table 4: Classification for Y ∈ {woman, man} with κ = 0.5. Here, we compare ℓ2-regularized logistic regression based on Inception V3 features with and without the CoRe penalty. The CoRe estimator improves the performance of the pooled estimator by ≈ 28% on a relative scale.
# 5.2.2 Using pre-trained Inception V3 features
To verify that the above conclusions do not change when using more powerful features, we here compare ℓ2-regularized logistic regression using pre-trained Inception V3 features9 with and without the CoRe penalty. Table 4 shows the results for κ = 0.5. While the results show that both the pooled estimator as well as the CoRe estimator perform better using pre-trained Inception features, the relative improvement with the CoRe penalty is still 28% on test set 2.
# 5.2.3 Additional baselines: Unconditional variance regularization and grouping by class label
As additional baselines, we consider the following two schemes: (i) we group all examples sharing the same class label and penalize with the conditional variance of the predicted logits, computed over these two groups; (ii) we penalize the overall variance of the predicted logits, i.e., a form of unconditional variance regularization. Figure 9 shows the performance of these two approaches. In contrast to the CoRe penalty, regularizing with the variance of the predicted logits conditional on Y only does not yield performance improvements on test set 2, compared to the pooled estimator (corresponding to a penalty weight of 0). Interestingly, using baseline (i) without a ridge penalty does yield an improvement on test set 1, compared to the pooled estimator with various strengths of the ridge penalty.
# 5.3 Eyeglasses detection with known and unknown image quality intervention
We now revisit the third example from §1.1. We again use the CelebA dataset and consider the problem of classifying whether the person in the image is wearing eyeglasses. Here, we modify the images in the following way: in the training set and in test set 1, we sample the image quality10 for all samples {i : yi = 1} (all samples that show glasses) from a Gaussian distribution with mean µ = 30 and standard deviation σ = 10. Samples with yi = 0 (no glasses) are unmodified. In other words, if the image shows a person wearing glasses, the
9. Retrieved from https://tfhub.dev/google/imagenet/inception_v3/feature_vector/1. 10. We use ImageMagick (https://www.imagemagick.org) to change the quality of the compression through
convert -quality qi,j input.jpg output.jpg, where qi,j ∼ N(30, 100).
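One possible way to script this intervention is sketched below; it assumes ImageMagick's convert is available on the PATH and uses placeholder file names.

```python
# Sketch of the image-quality intervention: draw q_{i,j} ~ N(30, 10^2) and re-encode
# the JPEG at that quality with ImageMagick (applied to samples with y_i = 1).
import subprocess
import numpy as np

rng = np.random.default_rng(0)

def degrade_quality(input_path, output_path, mu=30.0, sigma=10.0):
    q = int(np.clip(rng.normal(mu, sigma), 1, 100))   # JPEG quality must lie in [1, 100]
    subprocess.run(["convert", "-quality", str(q), input_path, output_path], check=True)
    return q

# degrade_quality("input.jpg", "output.jpg")
```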
[Figure 9 panels: (a) Baseline: Grouping-by-Y; (b) Baseline: Grouping-by-Y; (c) Baseline: Unconditional variance penalty; (d) Baseline: Unconditional variance penalty. Axes: test error vs. penalty weight, for various ridge penalty weights.]
Figure 9: Classification for Y ∈ {woman, man} with κ = 0.5, using the baselines which (i) penalize the variance of the predicted logits conditional on the class label Y only; and (ii) penalize the overall variance of the predicted logits (cf. §5.2.3). For baseline (i), panels (a) and (b) show the test error on test data sets 1 and 2 respectively as a function of the "baseline penalty weight" for various ridge penalty strengths. For baseline (ii), the equivalent plots are shown in panels (c) and (d). In contrast to the CoRe penalty, regularizing with these two baselines does not yield performance improvements on test set 2, compared to the pooled estimator (corresponding to a penalty weight of 0).
Training data (n = 20000):
Test set 1 (n = 5344):
Test set 2 (n = 5344):
5-layer CNN training error: 0% with add. CoRe penalty: 10%
5-layer CNN test error: 2% with add. CoRe penalty: 13%
5-layer CNN test error: 65% with add. CoRe penalty: 29%
Figure 10: Eyeglass detection for CelebA dataset with image quality interventions (which are unknown to any procedure used). The JPEG compression level is lowered for Y = 1 (glasses) samples on training data and test set 1 and lowered for Y = 0 (no glasses) samples for test set 2. To the human eye, these interventions are barely visible but the CNN that uses pooled data without CoRe penalty has exploited the correlation between image quality and outcome Y to achieve an (arguably spurious) low test error of 2% on test set 1. However, if the correlation between image quality and Y breaks down, as in test set 2, the CNN that uses pooled data without a CoRe penalty has a 65% misclassification rate. The training data on the left show paired observations in two red boxes: these observations share the same label Y and show the same person ID. They are used to compute the conditional variance penalty for the CoRe estimator that does not suffer from the same degradation in performance for test set 2.
Training data (n = 20000):
Test set 1 (n = 5344):
Test set 2 (n = 5344):
5-layer CNN training error: 0% with added CoRe penalty: 3%
5-layer CNN test error: 2% with added CoRe penalty: 7%
5-layer CNN test error: 65% with add. CoRe penalty: 13%
Figure 11: Eyeglass detection for CelebA dataset with image quality interventions. The only difference to Figure 10 is in the training data where the paired images now use the same underlying image in two different JPEG compressions. The compression level is drawn from the same distribution. The CoRe penalty performs better than for the experiment in Figure 10 since we could explicitly control that only X^style ≡ image quality varies between grouped examples. On the other hand, the performance of the pooled estimator is not changed in a noticeable way if we add augmented images as the (spurious) correlation between image quality and outcome Y still persists in the presence of the extra augmented images. Thus, the pooled estimator continues to be susceptible to image quality interventions.
image quality tends to be lower. In test set 2, the quality is reduced in the same way for yi = 0 samples (no glasses), while images with yi = 1 are not changed. Figure 10 shows examples from the training set and test sets 1 and 2. For the CoRe penalty, we calculate the conditional variance across images that share the same ID if Y = 1, that is across images that show the same person wearing glasses on all images. Observations with Y = 0 (not wearing glasses) are not grouped. Two examples are shown in the red box of Figure 10. Here, we have c = 5000 grouped observations among a total sample size of n = 20000.
Figure 10 shows misclassification rates for CoRe and the pooled estimator on test sets 1 and 2. The pooled estimator (only penalized with an ℓ2 penalty) achieves low error rates of 2% on test set 1, but suffers from a 65% misclassification error on test set 2, as now the relation between Y and the implicit X^style variable (image quality) has been flipped. The CoRe estimator has a larger error of 13% on test set 1 as image quality as a feature is penalized by CoRe implicitly and the signal is less strong if image quality has been removed as a dimension. However, in test set 2 the performance of the CoRe estimator is 28% and improves substantially on the 65% error of the pooled estimator. The reason is again the same: the CoRe penalty ensures that image quality is not used as a feature to the same extent as for the pooled estimator. This increases the test error slightly if the samples are generated from the same distribution as training data (as here for test set 1) but substantially improves the test error if the distribution of image quality, conditional on the class label, is changed on test data (as here for test set 2).
Eyeglasses detection with known image quality intervention.
To compare to the above results, we repeat the experiment by changing the grouped observations as follows. Above, we grouped images that had the same person ID when Y = 1. We refer to this scheme of grouping observations with the same (Y, ID) as "Grouping setting 2". Here, we use an explicit augmentation scheme and augment c = 5000 images with Y = 1 in the following way: each image is paired with a copy of itself and the image quality is adjusted as described above. In other words, the only difference between the two images is that image quality differs slightly, depending on the value that was drawn from the Gaussian distribution with mean µ = 30 and standard deviation σ = 10, determining the strength of the image quality intervention. Both the original and the copy get the same value of the identifier variable ID. We call this grouping scheme "Grouping setting 1". Compare the left panels of Figures 10 and 11 for examples.
While we used explicit changes in image quality both above and here, we referred to grouping setting 2 as "unknown image quality interventions" as the training sample, as shown in the left panel of Figure 10, does not immediately reveal that image quality is the important style variable. In contrast, the augmented data samples (grouping setting 1) we use here differ only in their image quality for a constant (Y, ID).
Figure 11 shows examples and results. The pooled estimator performs more or less identical to the previous dataset. The explicit augmentation did not help as the association between image quality and whether eyeglasses are worn is not changed in the pooled data after including the augmented data samples. The misclassification error of the CoRe estimator is substantially better than the error rate of the pooled estimator. The error rate on test set 2 of 13% is also improving on the rate of 28% of the CoRe estimator in grouping setting 2. We see that using grouping setting 1 works best since we could explicitly control that only X^style ≡ image quality varies between grouped examples. In grouping setting 2, different images of the same person can vary in many factors, making it more challenging to isolate image quality as the factor to be invariant against.
# 5.4 Stickmen image-based age classification with unknown movement interventions
In this example we consider synthetically generated stickmen images; see Figure 12 for some examples. The target of interest is Y ∈ {adult, child}. The core feature X^core is here the height of each person. The class Y is causal for height and height cannot be easily intervened on or change in different domains. Height is thus a robust predictor for differentiating between children and adults. As style feature we have here the movement of a person (distribution of angles between body, arms and legs). For the training data we created a dependence between age and the style feature "movement", which can be thought to arise through a hidden common cause D, namely the place of observation. The data generating process is illustrated in Figure D.6. For instance, the images of children might mostly show children playing while the images of adults typically show them in more "static" postures. The left panel of Figure 12 shows examples from the training set where large movements are associated with children and small movements are associated with adults. Test set 1 follows the same distribution, as shown in the middle panel. A standard CNN will exploit this relationship between movement and the label Y of interest, whereas this is discouraged
Training data (n = 20000):
Test set 1 (n = 20000):
Test set 2 (n = 20000):
5-layer CNN training error: 4% with added CoRe penalty: 4% 5-layer CNN test error: 3% with added CoRe penalty: 4% 5-layer CNN test error: 41% with added CoRe penalty: 9%
Figure 12: Classification into {adult, child} based on stickmen images, where children tend to be smaller and adults taller. In training and test set 1 data, children tend to have stronger movement whereas adults tend to stand still. In test set 2 data, adults show stronger movement. The two red boxes in the panel with the training data show two out of the c = 50 pairs of examples over which the conditional variance is calculated. The CoRe penalty leads to a network that generalizes better for test set 2 data, where the spurious correlation between age and movement is reversed, if compared to the training data.
by the conditional variance penalty of CoRe. The latter is pairing images of the same person in slightly different movements as shown by the red boxes in the leftmost panel of Figure 12. If the learned model exploits this dependence between movement and age for predicting Y, it will fail when presented with images of, say, dancing adults. The right panel of Figure 12 shows such examples (test set 2). The standard CNN suffers in this case from a 41% misclassification rate, as opposed to the 3% on test set 1 data. For as few as c = 50 paired observations, the network with an added CoRe penalty, in contrast, achieves also 4% on test set 1 data and succeeds in achieving a 9% performance on test set 2, whereas the pooled estimator fails on this dataset with a test error of 41%.
These results suggest that the learned representation of the pooled estimator uses movement as a predictor for age while CoRe does not use this feature due to the conditional variance regularization. Importantly, including more grouped examples would not improve the performance of the pooled estimator as these would be subject to the same bias and hence also predominantly have examples of heavily moving children and "static" adults (also see Figure D.7 which shows results for c ∈ {20, 500, 2000}).
# 5.5 MNIST: more sample-efficient data augmentation

The goal of using CoRe in this example is to make data augmentation more efficient in terms of the required samples. In data augmentation, one creates additional samples by modifying the original inputs, e.g. by rotating, translating, or flipping the images (Schölkopf et al., 1996). In other words, additional samples are generated by interventions on style features. Using this augmented data set for training results in invariance of the estimator with respect to the transformations (style features) of interest. For CoRe we can use the grouping information that the original and the augmented samples belong to the same object. This enforces the invariance with respect to the style features more strongly compared to normal data augmentation which just pools all samples. We assess this for the style feature
Training data (n = 10200):
Test set (n = 10000):
3-layer CNN training error: 0% with added CoRe penalty: 1%
3-layer CNN test error: 22% with added CoRe penalty: 10%
Figure 13: Data augmentation for MNIST images. The left shows training data with a few rotated images. Evaluating on only rotated images from the test set, a standard network achieves a test error of 22%. We can add the CoRe penalty by computing the conditional variance over images that were generated from the same original image. The test error is then lowered to 10% on the test data of rotated images.
"rotation" on MNIST (LeCun et al., 1998) and only include c = 200 augmented training examples for m = 10000 original samples, resulting in a total sample size of n = 10200. The degree of the rotations is sampled uniformly at random from [35, 70]. Figure 13 shows examples from the training set. By using CoRe the average test error on rotated examples is reduced from 22% to 10%. Very few augmented samples are thus sufficient to lead to stronger rotational invariance. The standard approach of creating augmented data and pooling all images requires, in contrast, many more samples to achieve the same effect. Additional results for m ∈ {1000, 10000} and c ranging from 100 to 5000 can be found in Figure D.5 in Appendix §D.4.
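A minimal sketch of this augmentation step is given below (array names, shapes and the use of scipy.ndimage.rotate are illustrative choices, not the paper's implementation); the key point is that each rotated copy reuses the ID of its source image, which defines the groups for the conditional variance penalty.

```python
# Rotation augmentation where original and rotated copy share the same ID.
import numpy as np
from scipy.ndimage import rotate

def augment_with_rotations(images, labels, c, rng):
    idx = rng.choice(len(images), size=c, replace=False)
    angles = rng.uniform(35, 70, size=c)
    rotated = np.stack([rotate(images[i], a, reshape=False) for i, a in zip(idx, angles)])
    aug_images = np.concatenate([images, rotated])
    aug_labels = np.concatenate([labels, labels[idx]])
    ids = np.concatenate([np.arange(len(images)), idx])   # copies reuse the source ID
    return aug_images, aug_labels, ids

rng = np.random.default_rng(0)
imgs = rng.random(size=(100, 28, 28))
labs = rng.integers(0, 10, size=100)
X, y, ids = augment_with_rotations(imgs, labs, c=20, rng=rng)
print(X.shape, y.shape, ids.shape)   # (120, 28, 28) (120,) (120,)
```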
# 5.6 Elmer the Elephant
In this example, we want to assess whether invariance with respect to the style feature "color" can be achieved. In the children's book "Elmer the elephant"11 one instance of a colored elephant suffices to recognize it as being an elephant, making the color "gray" no longer an integral part of the object "elephant". Motivated by this process of concept formation, we would like to assess whether CoRe can exclude "color" from its learned representation by penalizing conditional variance appropriately.
We work with the "Animals with attributes 2" (AwA2) dataset (Xian et al., 2017) and consider classifying images of horses and elephants. We include additional examples by adding grayscale images for c = 250 images of elephants. These additional examples do not distinguish themselves strongly from the original training data as the elephant images are already close to grayscale images. The total training sample size is 1850.
11. https://en.wikipedia.org/wiki/Elmer_the_Patchwork_Elephant
Training data (n = 1850): Test data 1 (n = 414): Test data 2 (n = 414):
5-layer CNN training error: 0% with added CoRe penalty: 0% 5-layer CNN test error: 24% with add. CoRe penalty: 30% 5-layer CNN test error: 52% with add. CoRe penalty: 30%
Figure 14: Elmer-the-Elephant dataset. The left panel shows training data with a few additional grayscale elephants. The pooled estimator learns that color is predictive for the animal class and achieves a test error of 24% on test set 1 where this association is still true but suffers a misclassification error of 53% on test set 2 where this association breaks down. By adding the CoRe penalty, the test error is consistently around 30%, irrespective of the color distribution of horses and elephants.
Figure 14 shows examples and misclassification rates from the training set and test sets for CoRe and the pooled estimator on different test sets. Examples from these and more test sets can be found in Figure D.10. Test set 1 contains original, colored images only. In test set 2 images of horses are in grayscale and the colorspace of elephant images is modified, effectively changing the color gray to red-brown. We observe that the pooled estimator does not perform well on test set 2 as its learned representation seems to exploit the fact that "gray" is predictive for "elephant" in the training set. This association is no longer valid for test set 2. In contrast, the predictive performance of CoRe is hardly affected by the changing color distributions. More details can be found in Appendix §D.7.
It is noteworthy that a colored elephant can be recognized as an elephant by adding a few examples of a grayscale elephant to the very lightly colored pictures of natural elephants. If we just pool over these examples, there is still a strong bias that elephants are gray. The CoRe estimator, in contrast, demands invariance of the prediction for instances of the same elephant and we can learn color invariance with a few added grayscale images.
# 5.7 Eyeglasses detection: unknown brightness intervention
As in §5.3 we work with the CelebA dataset and try to classify whether the person in the image is wearing eyeglasses. Here we analyze a confounded setting that could arise as follows. Say the hidden common cause D of Y and X^style is a binary variable and indicates whether the image was taken outdoors or indoors. If it was taken outdoors, then the person tends to wear (sun-)glasses more often and the image tends to be brighter. If the image was taken indoors, then the person tends not to wear (sun-)glasses and the image tends to be darker. In other words, the style variable X^style is here equivalent to brightness and the structure of the data generating process is equivalent to the one shown in Figure D.6. Figure 15 shows examples from the training set and test sets. As previously, we compute the conditional variance over images of the same person, sharing the same class label (and
Training data (n = 20000):
Test set 1 (n = 5344):
Test set 2 (n = 5344):
5-layer CNN training error: 0% with added CoRe penalty: 6% 5-layer CNN test error: 4% with added CoRe penalty: 6% 5-layer CNN test error: 37% with add. CoRe penalty: 25%
Figure 15: Eyeglass detection for CelebA dataset with brightness interventions (which are unknown to any procedure used). On training data and test set 1 data, images where people wear glasses tend to be brighter whereas on test set 2 images where people do not wear glasses tend to be brighter.
the CoRe estimator is hence not using the knowledge that brightness is important). Two alternatives for constructing grouped observations in this setting are discussed in §D.2. We use c = 2000 and n = 20000. For the brightness intervention, we sample the value for the magnitude of the brightness increase resp. decrease from an exponential distribution with mean β = 20. In the training set and test set 1, we sample the brightness value as bi,j = [100 + yi ei,j]+ where ei,j ∼ Exp(β⁻¹) and yi ∈ {−1, 1}, where yi = 1 indicates presence of glasses and yi = −1 indicates absence.12 For test set 2, we use instead bi,j = [100 − yi ei,j]+, so that the relation between brightness and glasses is flipped.
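The brightness values can be sampled as sketched below (illustrative, not the paper's code); the flipped flag reproduces the reversed relation used for test set 2, and the resulting value would then be passed to the ImageMagick command of footnote 12.

```python
# Sample brightness values b_{i,j} = [100 + y_i * e_{i,j}]_+ with e_{i,j} ~ Exp(1/beta).
import numpy as np

def brightness_values(y, beta=20.0, flipped=False, rng=None):
    rng = rng or np.random.default_rng(0)
    e = rng.exponential(scale=beta, size=len(y))   # mean beta
    sign = -1.0 if flipped else 1.0
    return np.maximum(100.0 + sign * np.asarray(y) * e, 0.0)

y = np.array([1, -1, 1, -1])                       # 1: glasses, -1: no glasses
print(brightness_values(y))                        # brighter for glasses (train / test 1)
print(brightness_values(y, flipped=True))          # relation flipped (test set 2)
```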
Figure 15 shows misclassification rates for CoRe and the pooled estimator on different test sets. Examples from all test sets can be found in Figure D.3. First, we notice that the pooled estimator performs better than CoRe on test set 1. This can be explained by the fact that it can exploit the predictive information contained in the brightness of an image while CoRe is restricted not to do so. Second, we observe that the pooled estimator does not perform well on test set 2 as its learned representation seems to use the image's brightness as a predictor for the response, which fails when the brightness distribution in the test set differs significantly from the training set. In contrast, the predictive performance of CoRe is hardly affected by the changing brightness distributions. Results for β ∈ {5, 10, 20} and c ∈ {200, 5000} can be found in Figure D.4 in Appendix §D.2.
# 6. Further related work
Encoding certain invariances in estimators is a well-studied area in computer vision and machine learning with an extensive body of literature. While a large part of this work assumes the desired invariance to be known, fewer approaches aim to learn the required
12. Specifically, we use ImageMagick (https://www.imagemagick.org) and modify the brightness of each image by applying the command convert -modulate bi,j,100,100 input.jpg output.jpg to the image.
invariances from data, and the focus often lies on geometric transformations of the input data or explicitly creating augmented observations (Sohn and Lee, 2012; Khasanova and Frossard, 2017; Hashimoto et al., 2017; Devries and Taylor, 2017). The main difference between this line of work and CoRe is that we do not require to know the style feature explicitly, the set of possible style features is not restricted to a particular class of transformations and we do not aim to create augmented observations in a generative framework.
Recently, various approaches have been proposed that leverage causal motivations for deep learning or use deep learning for causal inference, related to e.g. the problems of cause-effect inference and generative adversarial networks (Chalupka et al., 2014; Lopez-Paz et al., 2017; Lopez-Paz and Oquab, 2017; Goudet et al., 2017; Bahadori et al., 2017; Besserve et al., 2018; Kocaoglu et al., 2018).
Kilbertus et al. (2017) exploit causal reasoning to characterize fairness considerations in machine learning. Distinguishing between the protected attribute and its proxies, they derive causal non-discrimination criteria. The resulting algorithms avoiding proxy discrimination require classifiers to be constant as a function of the proxy variables in the causal graph, thereby bearing some structural similarity to our style features.
Distinguishing between core and style features can be seen as some form of disentangling factors of variation. Estimating disentangled factors of variation has gathered a lot of interest in the context of generative modeling. As in CoRe, Bouchacourt et al. (2018) exploit grouped observations. In a variational autoencoder framework, they aim to separate style and content: they assume that samples within a group share a common but unknown value for one of the factors of variation while the style can differ. Denton and Birodkar (2017) propose an autoencoder framework to disentangle style and content in videos using an adversarial loss term where the grouping structure induced by clip identity is exploited. Here we try to solve a classification task directly without estimating the latent factors explicitly as in a generative framework.
In the computer vision literature, various works have used identity information to achieve pose invariance in the context of face recognition (2017). More generally, the idea of exploiting various observations of the same underlying object is related to multi-view learning (2013). In the context of adversarial examples, the recently proposed defense "Adversarial logit pairing" is methodologically equivalent to the CoRe penalty Ĉf,1,θ when using the squared error loss. Several empirical studies have shown mixed results regarding the performance on ℓ∞ perturbations (Engstrom et al., 2018; Mosbach et al., 2018); so far this setting has not been analyzed theoretically and hence it is an open question whether a CoRe-type penalty constitutes an effective defense against adversarial examples.
# 7. Conclusion
Distinguishing the latent features in an image into core and style features, we have proposed conditional variance regularization (CoRe) to achieve robustness with respect to arbitrarily large interventions on the style or "orthogonal" features. The main idea of the CoRe estimator is to exploit the fact that we often have instances of the same object in the training data. By demanding invariance of the classifier amongst a group of instances that relate to the same object, we can achieve invariance of the classification performance with
respect to interventions on style features such as image quality, fashion type, color, or body posture. The training also works despite sampling biases in the data.
There are two main application areas:
1. If the style features are known explicitly, we can achieve the same classiï¬cation perfor- mance as standard data augmentation approaches with substantially fewer augmented samples, as shown for example in §5.5.
2. Perhaps more interesting are settings in which it is unknown what the style features are, with examples in §5.1, §5.2, §5.3, §5.4 and §5.7. CoRe regularization forces predictions to be based on features that do not vary strongly between instances of the same object. We could show in the examples and in Theorems 1 and 2 that this regularization achieves distributional robustness with respect to changes in the distribution of the (unknown) style variables.
An interesting line of work would be to use larger models such as Inception or large ResNet architectures (Szegedy et al., 2015; He et al., 2016). These models have been trained to be invariant to an array of explicitly deï¬ned style features. In §5.2 we include results which show that using Inception V3 features does not guard against interventions on more implicit style features. We would thus like to assess what beneï¬ts CoRe can bring for training Inception-style models end-to-end, both in terms of sample eï¬ciency and in terms of generalization performance.
# Acknowledgments
We thank Brian McWilliams, Jonas Peters, and Martin Arjovsky for helpful comments and discussions and CSCS for provision of computational resources. A preliminary version of this work was presented at the NIPS 2017 Interpretable ML Symposium and we thank participants of the symposium for very helpful discussions. We would also like to thank three anonymous referees and the action editor Edo Airoli for detailed and very helpful feedback on an earlier version of the manuscript.
# References
M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Man´e, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Vi´egas, O. Vinyals, P. Warden, M. Wat- tenberg, M. Wicke, Y. Yu, and X. Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. URL https://www.tensorflow.org/. Software available from tensorï¬ow.org.
J. Aldrich. Autonomy. Oxford Economic Papers, 41:15â34, 1989.
J. Bagnell. Robust supervised learning. In Proceedings of the national conference on artiï¬cial intelligence, volume 20, page 714. Menlo Park, CA; Cambridge, MA; London; AAAI Press; MIT Press; 1999, 2005.
M. T. Bahadori, K. Chalupka, E. Choi, R. Chen, W. F. Stewart, and J. Sun. Causal regularization. arXiv preprint arXiv:1702.02604, 2017.
S. Barocas and A. D. Selbst. Big Dataâs Disparate Impact. 104 California Law Review 671, 2016.
M. S. Bartlett and T. J. Sejnowski. Viewpoint invariant face recognition using independent component analysis and attractor networks. In Proceedings of the 9th International Con- ference on Neural Information Processing Systems, NIPSâ96, pages 817â823, Cambridge, MA, USA, 1996. MIT Press.
M. Belkin, P. Niyogi, and V. Sindhwani. Manifold regularization: A geometric framework for learning from labeled and unlabeled examples. Journal of machine learning research, 7(Nov):2399â2434, 2006.
S. Ben-David, J. Blitzer, K. Crammer, and F. Pereira. Analysis of representations for domain adaptation. In Advances in Neural Information Processing Systems 19. 2007.
A. Ben-Tal, D. Den Hertog, A. De Waegenaere, B. Melenberg, and G. Rennen. Robust solu- tions of optimization problems aï¬ected by uncertain probabilities. Management Science, 59(2):341â357, 2013.
M. Besserve, N. Shajarisales, B. Sch¨olkopf, and D. Janzing. Group invariance principles for causal generative models. In Proceedings of the 21st International Conference on Artiï¬cial Intelligence and Statistics (AISTATS), volume 84 of Proceedings of Machine Learning Research, pages 557â565. PMLR, 2018.
T. Bolukbasi, K.-W. Chang, J. Y. Zou, V. Saligrama, and A. T. Kalai. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. In Advances in Neural Information Processing Systems 29. 2016.
D. Bouchacourt, R. Tomioka, and S. Nowozin. Multi-level variational autoencoder: Learning disentangled representations from grouped observations. In AAAI Conference on Artificial Intelligence, 2018.
K. Chalupka, P. Perona, and F. Eberhardt. Visual Causal Feature Learning. Uncertainty in Artiï¬cial Intelligence, 2014.
K. Crawford. Artificial intelligence's white guy problem. The New York Times, June 25 2016, 2016. URL https://www.nytimes.com/2016/06/26/opinion/sunday/artificial-intelligences-white-guy-problem.html.
G. Csurka. A comprehensive survey on domain adaptation for visual applications. In Domain Adaptation in Computer Vision Applications, pages 1–35. 2017.
E. L. Denton and V. Birodkar. Unsupervised learning of disentangled representations from video. In Advances in Neural Information Processing Systems 30. 2017.
T. Devries and G. W. Taylor. Dataset augmentation in feature space. ICLR Workshop Track, 2017.
J. Emspak. How a machine learns prejudice. Scientific American, December 29 2016, 2016. URL https://www.scientificamerican.com/article/how-a-machine-learns-prejudice/.
L. Engstrom, A. Ilyas, and A. Athalye. Evaluating and understanding the robustness of adversarial logit pairing. arXiv preprint arXiv:1807.10272, 2018.
Y. Ganin, E. Ustinova, H. Ajakan, P. Germain, H. Larochelle, F. Laviolette, M. Marchand, and V. Lempitsky. Domain-adversarial training of neural networks. Journal of Machine Learning Research, 17(1), 2016.
R. Gao, X. Chen, and A. Kleywegt. arXiv preprint arXiv:1712.06050, 2017.
B. Gong, K. Grauman, and F. Sha. Reshaping visual datasets for domain adaptation. In Advances in Neural Information Processing Systems 26, pages 1286â1294. Curran Associates, Inc., 2013.
M. Gong, K. Zhang, T. Liu, D. Tao, C. Glymour, and B. Sch¨olkopf. Domain adaptation with conditional transferable components. In International Conference on Machine Learning, 2016.
I. Goodfellow, J. Shlens, and C. Szegedy. Explaining and harnessing adversarial examples. In International Conference on Learning Representations, 2015.
O. Goudet, D. Kalainathan, P. Caillou, D. Lopez-Paz, I. Guyon, M. Sebag, A. Tritas, and P. Tubaro. Learning Functional Causal Models with Generative Neural Networks. arXiv preprint arXiv:1709.05321, 2017.
T. Haavelmo. The probability approach in econometrics. Econometrica, 12:S1âS115 (sup- plement), 1944.
David A Harville. Bayesian inference for variance components using only error contrasts. Biometrika, 61(2):383â385, 1974.
T. B. Hashimoto, P. S. Liang, and J. C. Duchi. Unsupervised transformation learning via convex relaxations. In Advances in Neural Information Processing Systems 30, pages 6875–6883. Curran Associates, Inc., 2017.
K. He, X. Zhang, S. Ren, and J. Sun. Delving deep into rectiï¬ers: Surpassing human- level performance on imagenet classiï¬cation. Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), pages 1026â1034, 2015.
K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770â778, 2016.
J. Hoï¬man, B. Kulis, T. Darrell, and K. Saenko. Discovering latent domains for multi- source domain adaptation. In Computer Vision â ECCV 2012, pages 702â715, Berlin, Heidelberg, 2012. Springer Berlin Heidelberg.
H. Kannan, A. Kurakin, and I. J. Goodfellow. Adversarial logit pairing. arXiv preprint arXiv:1803.06373, 2018.
R. Khasanova and P. Frossard. Graph-based isometry invariant representation learning. In Proceedings of the 34th International Conference on Machine Learning, volume 70, pages 1847â1856, 2017.
N. Kilbertus, M. Rojas Carulla, G. Parascandolo, M. Hardt, D. Janzing, and B. Sch¨olkopf. Avoiding discrimination through causal reasoning. Advances in Neural Information Pro- cessing Systems 30, pages 656â666, 2017.
D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. International Conference on Learning Representations (ICLR), 2015.
M. Kocaoglu, C. Snyder, A. Dimakis, and S. Vishwanath. CausalGAN: Learning causal im- plicit generative models with adversarial training. International Conference on Learning Representations, 2018.
A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems 25. 2012.
A. Kuehlkamp, B. Becker, and K. Bowyer. Gender-from-iris or gender-from-mascara? In 2017 IEEE Winter Conference on Applications of Computer Vision (WACV), 2017.
Y. LeCun, L. Bottou, Y. Bengio, and P. Haï¬ner. Gradient-based learning applied to docu- ment recognition. Proceedings of the IEEE, 1998.
Ker-Chau Li. Sliced inverse regression for dimension reduction. Journal of the American Statistical Association, 86(414):316â327, 1991.
Z. Liu, P. Luo, X. Wang, and X. Tang. Deep learning face attributes in the wild. In Proceedings of International Conference on Computer Vision (ICCV), 2015.
D. Lopez-Paz and M. Oquab. Revisiting Classiï¬er Two-Sample Tests. International Con- ference on Learning Representations (ICLR), 2017.
D. Lopez-Paz, R. Nishihara, S. Chintala, B. Sch¨olkopf, and L. Bottou. Discovering causal signals in images. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017), 2017.
S. Magliacane, T. van Ommen, T. Claassen, S. Bongers, P. Versteeg, and J. Mooij. Do- main adaptation by using causal inference to predict invariant conditional distributions. Advances in Neural Information Processing Systems, 2018.
N. Meinshausen. Causality from a distributional robustness point of view. In 2018 IEEE Data Science Workshop (DSW), pages 6â10, 2018.
M. Mosbach, M. Andriushchenko, T. Trost, M. Hein, and D. Klakow. Logit pairing methods can fool gradient-based attacks. arXiv preprint arXiv:1810.12042, 2018.
H. Namkoong and J.C. Duchi. Variance-based regularization with convex objectives. In Advances in Neural Information Processing Systems, pages 2975â2984, 2017.
J. Pearl. Causality: Models, Reasoning, and Inference. Cambridge University Press, New York, USA, 2nd edition, 2009.
J. Peters, P. B¨uhlmann, and N. Meinshausen. Causal inference using invariant prediction: identiï¬cation and conï¬dence intervals. Journal of the Royal Statistical Society, Series B, 78:947â1012, 2016.
J. Quionero-Candela, M. Sugiyama, A. Schwaighofer, and N. D. Lawrence. Dataset Shift in Machine Learning. The MIT Press, 2009.
T. Richardson and J. M. Robins. Single world intervention graphs (SWIGs): A uniï¬cation of the counterfactual and graphical approaches to causality. Center for the Statistics and the Social Sciences, University of Washington Series. Working Paper 128, 30 April 2013, 2013.
M. Rojas-Carulla, B. Sch¨olkopf, R. Turner, and J. Peters. Causal transfer in machine learning. To appear in Journal of Machine Learning Research., 2018.
D. Rothenh¨ausler, P. B¨uhlmann, N. Meinshausen, and J. Peters. Anchor regression: het- erogeneous data meets causality. arXiv preprint arXiv:1801.06229, 2018.
B. Sch¨olkopf, C. Burges, and V. Vapnik. Incorporating invariances in support vector learning machines. In Artiï¬cial Neural Networks â ICANN 96, pages 47â52, Berlin, Heidelberg, 1996. Springer Berlin Heidelberg.
B. Sch¨olkopf, D. Janzing, J. Peters, E. Sgouritsa, K. Zhang, and J. Mooij. On causal and anticausal learning. In Proceedings of the 29th International Conference on Machine Learning (ICML), pages 1255â1262, 2012.
S. Shaï¬eezadeh-Abadeh, D. Kuhn, and P. Esfahani. Regularization via mass transportation. arXiv preprint arXiv:1710.10016, 2017.
A. Sinha, H. Namkoong, and J. Duchi. Certiï¬able distributional robustness with principled adversarial training. In International Conference on Learning Representations, 2018.
K. Sohn and H. Lee. Learning invariant representations with local transformations. In Proceedings of the 29th International Conference on Machine Learning, ICML'12, pages 1339–1346, USA, 2012. Omnipress.
C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus. Intriguing properties of neural networks. In International Conference on Learning Representations, 2014.
C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In Computer Vision and Pattern Recognition (CVPR), 2015.
A. Torralba and A. A. Efros. Unbiased look at dataset bias. In Computer Vision and Pattern Recognition (CVPR), 2011.
L. Tran, X. Yin, and X. Liu. Disentangled representation learning GAN for pose-invariant face recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, July 2017.
Geert Verbeke and Geert Molenberghs. Linear mixed models for longitudinal data. Springer Science & Business Media, 2009.
C. Villani. Topics in optimal transportation. Number 58. American Mathematical Soc., 2003.
Riccardo Volpi, Hongseok Namkoong, Ozan Sener, John Duchi, Vittorio Murino, and Silvio Savarese. Generalizing to unseen domains via adversarial data augmentation. arXiv preprint arXiv:1805.12018, 2018.
Y. Xian, C. H. Lampert, B. Schiele, and Z. Akata. Zero-shot learning - A comprehensive evaluation of the good, the bad and the ugly. arXiv preprint arXiv:1707.00600, 2017.
C. Xu, D. Tao, and C. Xu. A survey on multi-view learning. arXiv preprint arXiv:1304.5634, 2013.
H. Xu, C. Caramanis, and S. Mannor. Robust regression and lasso. In Advances in Neural Information Processing Systems, pages 1801â1808, 2009.
X. Yu, T. Liu, M. Gong, K. Zhang, and D. Tao. Transfer learning with label noise. arXiv preprint arXiv:1707.09724, 2017.
K. Zhang, B. Sch¨olkopf, K. Muandet, and Z. Wang. Domain adaptation under target and conditional shift. In International Conference on Machine Learning, 2013.
K. Zhang, M. Gong, and B. Sch¨olkopf. Multi-source domain adaptation: A causal view. In Proceedings of the Twenty-Ninth AAAI Conference on Artiï¬cial Intelligence, 2015.
# Appendix
# Appendix A. Proof of Theorem 1
First part. To show the ï¬rst part, namely that with probability 1,
L∞(ˆθpool) = ∞, we need to show that W^t ˆθpool ≠ 0 with probability 1. The reason this is sufficient is as follows: if W^t θ ≠ 0, then L∞(θ) = ∞ as we can then find a v ∈ R^q such that γ := θ^t W v ≠ 0. Assume without limitation of generality that v is normed such that E(E(v^t Σ^{-1}_{Y,ID} v | Y = y, ID = id)) = 1. Setting ∆_ξ = ξv for ξ ∈ R, we have that (ID, Y, X^style + ∆_ξ) is in the class F_ξ if the distribution of (ID, Y, X^style) is equal to F_0. Furthermore, x(∆_ξ)^t θ = x(∆ = 0)^t θ + ξγ. Hence log(1 + exp(−y · x(∆_ξ)^t θ)) → ∞ for either ξ → ∞ or ξ → −∞.
To show that W^t ˆθpool ≠ 0 with probability 1, let ˆθ∗ be the oracle estimator that is constrained to be orthogonal to the column space of W:
ˆθ∗ = argmin_{θ : W^t θ = 0} L_n(θ) with L_n(θ) = (1/n) Σ_{i=1}^n ℓ(y_i, f_θ(x_i)). (15)
We show W^t ˆθpool ≠ 0 by contradiction. Assume hence that W^t ˆθpool = 0. If this is indeed the case, then the constraint W^t θ = 0 in (15) becomes non-active and we have ˆθpool = ˆθ∗. This would imply that the directional derivative of the training loss with respect to any δ ∈ R^p in the column space of W vanishes at the solution ˆθ∗. In other words, define the gradient as g(θ) = ∇_θ L_n(θ) ∈ R^p. The implication is then that for all δ in the column-space of W,
δtg(Ëθâ) = 0 (16)
and we will show the latter condition is violated almost surely.
As we work with the logistic loss and Y ∈ {−1, 1}, the loss is given by ℓ(y_i, f_θ(x_i)) = log(1 + exp(−y_i x_i^t θ)). Define r_i(θ) := −y_i/(1 + exp(y_i x_i^t θ)). For all i = 1, . . . , n we have r_i ≠ 0. Then
g(ˆθ∗) = (1/n) Σ_{i=1}^n r_i(ˆθ∗) x_i. (17)
The training images can be written according to the model as x_i = x_i^0 + W x_i^style, where X^0 := k_x(X^core, ε_X) are the images in absence of any style variation. Since the style features only have an effect on the column space of W in X, the oracle estimator ˆθ∗ is identical under the true training data and the (hypothetical) training data x_i^0, i = 1, . . . , n in absence of style variation. As X − X^0 = W X^style, equation (17) can also be written as
δ^t g(ˆθ∗) = (1/n) Σ_{i=1}^n r_i(ˆθ∗) (x_i^0)^t δ + (1/n) Σ_{i=1}^n r_i(ˆθ∗) (x_i^style)^t W^t δ. (18)
Since δ is in the column-space of W , there exists u â Rq such that δ = W u and we can write (18) as
δ^t g(ˆθ∗) = (1/n) Σ_{i=1}^n r_i(ˆθ∗) (x_i^0)^t W u + (1/n) Σ_{i=1}^n r_i(ˆθ∗) (x_i^style)^t W^t W u. (19)
From (A2) we have that the eigenvalues of W tW are all positive. Also ri(Ëθâ) is not a function of the interventions xstyle , i = 1, . . . , n since, as above, the estimator Ëθâ is identical whether trained on the original data xi or on the intervention-free data x0 i , i = 1, . . . , n. If we condition on everything except for the random interventions by conditioning on (x0 i , yi) for i = 1, . . . , n, then the rhs of (19) can be written as
atu + Btu,
where a ∈ R^q is fixed (conditionally) and B = (1/n) Σ_{i=1}^n r_i(ˆθ∗) W^t W x_i^style ∈ R^q is a random vector with B ≠ −a ∈ R^q with probability 1 by (A1) and (A2). Hence the left hand side of (19) is not identically 0 with probability 1 for any given δ in the column-space of W. This shows that the implication (16) is incorrect with probability 1 and hence completes the proof of the first part by contradiction.
Invariant parameter space. Before continuing with the second part of the proof, some deï¬nitions. Let I be the invariant parameter space
I := {θ : fθ(x(â)) is constant as function of â â Rq for all x â Rp}.
For all θ â I, the loss (7) for any F â Fξ is identical to the loss under F0. That is for all ξ ⥠0,
if θ ∈ I, then sup_{F ∈ F_ξ} E_F[ℓ(Y, f_θ(X))] = E_{F_0}[ℓ(Y, f_θ(X))].
The optimal predictor in the invariant space I is
θ∗ = argmin_θ E_{F_0}[ℓ(Y, f_θ(X))] such that θ ∈ I. (20)
If fθ is only a function of the core features X core, then θ â I. The challenge is that the core features are not directly observable and we have to infer the invariant space I from data.
Second part. For the second part, we ï¬rst show that with probability at least pn, as deï¬ned in (A3), Ëθcore = Ëθâ with Ëθâ deï¬ned as in (15). The invariant space for this model is the linear subspace I = {θ : W tθ = 0} and by their respective deï¬nitions,
ˆθ∗ = argmin_θ (1/n) Σ_{i=1}^n ℓ(y_i, f_θ(x_i)) such that θ ∈ I,
ˆθcore = argmin_θ (1/n) Σ_{i=1}^n ℓ(y_i, f_θ(x_i)) such that θ ∈ I_n.
Since we use In = In(Ï ) with Ï = 0,
I_n = {θ : E(Var(f_θ(X) | Y, ID)) = 0}. This implies that for θ ∈ I_n, f_θ(x_i) = f_θ(x_i′) if i, i′ ∈ S_j for some j ∈ {1, . . . , m}.13 Since f_θ(x) = f_θ(x′) implies (x − x′)^t θ = 0, it follows that (x_i − x_i′)^t θ = 0 if i, i′ ∈ S_j for some j ∈ {1, . . . , m} and hence

I_n ⊆ {θ : (x_i − x_i′)^t θ = 0 if i, i′ ∈ S_j for some j ∈ {1, . . . , m}}.

13. Recall that (y_i, id_i) = (y_i′, id_i′) if i, i′ ∈ S_j as the subsets S_j, j = 1, . . . , m, collect all observations that have a unique realization of (Y, ID).
Since X^style has a linear influence on X in (P), x_i − x_i′ = W(∆_i − ∆_i′) if i, i′ are in the same group S_j of observations for some j ∈ {1, . . . , m}. Note that the number of grouped examples n − m is equal to or exceeds the rank q of W with probability p_n, using (A3), and p_n → 1 for n → ∞. By (A2), it follows then with probability at least p_n that I_n ⊆ {θ : W^t θ = 0} = I. As, by definition, I ⊆ I_n is always true, we have with probability p_n that I = I_n. Hence, with probability p_n (and p_n → 1 for n → ∞), ˆθcore = ˆθ∗. It thus remains to be shown that
Lâ(Ëθâ) âp inf θ Lâ(θ). (21)
Since ˆθ∗ is in I, we have ℓ(y, x(∆)) = ℓ(y, x^0), where x^0 are the previously defined data in absence of any style variance. Hence

ˆθ∗ = argmin_θ (1/n) Σ_{i=1}^n ℓ(y_i, f_θ(x_i^0)) such that θ ∈ I, (22)

that is, the estimator is unchanged if we use the (hypothetical) data x_i^0, i = 1, . . . , n as training data. The population optimal parameter vector defined in (20) as

θ∗ = argmin_θ E_{F_0}[ℓ(Y, f_θ(X))] such that θ ∈ I (23)
is for all ξ ⥠0 identical to
argmin_θ sup_{F ∈ F_ξ} E_F[ℓ(Y, f_θ(X))] such that θ ∈ I.
Hence (22) and (23) can be written as
ˆθ∗ = argmin_{θ ∈ I} L_n^{(0)}(θ) with L_n^{(0)}(θ) := (1/n) Σ_{i=1}^n ℓ(y_i, f_θ(x_i^0)),
θ∗ = argmin_{θ ∈ I} L^{(0)}(θ) with L^{(0)}(θ) := E[ℓ(Y, f_θ(X^0))].
By uniform convergence of L_n^{(0)} to the population loss L^{(0)}, we have L^{(0)}(ˆθ∗) →_p L^{(0)}(θ∗). By definition of I and θ∗, we have inf_θ L∞(θ) = L∞(θ∗) = L^{(0)}(θ∗). As ˆθ∗ is in I, we also have L∞(ˆθ∗) = L^{(0)}(ˆθ∗). Since, from above, L^{(0)}(ˆθ∗) →_p L^{(0)}(θ∗), this also implies L∞(ˆθ∗) →_p L∞(θ∗) = inf_θ L∞(θ). Using the previously established result that ˆθcore = ˆθ∗ with probability at least p_n and p_n → 1 for n → ∞, this completes the proof.
# Appendix B. Proof of Theorem 2

Let F_0 be the training distribution of (ID, Y, X^style) and F a distribution for (ID, Y, X̃^style) in F_ξ. By definition of F_ξ, we can write X̃^style = X^style + ∆ for a suitable random variable ∆ ∈ R^q with
â â Uξ, where Uξ = {â : E(E(âtΣâ1 Y,IDâ|Y, ID)) ⤠ξ}.
Vice versa: if we can write X̃^style = X^style + ∆ with ∆ ∈ U_ξ, then the distribution is in F_ξ. While X under F_0 can be written as X(∆ = 0), the distribution of X under F is of the form X(∆) or, alternatively, X(√ξ U) with U ∈ U_1. Adopting from now on the latter constraint that U ∈ U_1, and using (B2),
E_F[ℓ(Y, f_θ(X))] = E_{F_0}[h_θ(0)] + √ξ E_{F_0}[(∇h_θ)^t U] + O(ξ),
where âhθ is the gradient of hθ(δ) with respect to δ, evaluated at δ â¡ 0. Hence
sup_{F ∈ F_ξ} E_F[h_θ(∆)] = E_{F_0}[h_θ(0)] + √ξ sup_{U ∈ U_1} E_{F_0}[(∇h_θ)^t U] + O(ξ).
The proof is complete if we can show that
C^{1/2}_{ℓ,θ} = sup_{U ∈ U_1} E_{F_0}[(∇h_θ)^t U] + O(ξ).
On the one hand,
sup_{U ∈ U_1} E_{F_0}[(∇h_θ)^t U] = E_{F_0}[√((∇h_θ)^t Σ_{Y,ID} (∇h_θ))].

This follows since, for a matrix Σ with Cholesky decomposition Σ = C^t C,

max_{u : u^t Σ^{-1} u ≤ 1} (∇h_θ)^t u = max_{w : ∥w∥_2^2 ≤ 1} (∇h_θ)^t C^t w = ∥C(∇h_θ)∥_2 = √((∇h_θ)^t Σ (∇h_θ)).
On the other hand, the conditional-variance-of-loss can be expanded as
C^{1/2}_{ℓ,θ} = E_{F_0}[√(Var(ℓ(Y, f_θ(X)) | Y, ID))] = E_{F_0}[√((∇h_θ)^t Σ_{Y,ID} (∇h_θ))] + O(ξ),
which completes the proof.
# Appendix C. Network architectures
We implemented the considered models in TensorFlow (Abadi et al., 2015). The model architectures used are detailed in Table C.1. CoRe and the pooled estimator use the same network architecture and training procedure; merely the loss function diï¬ers by the CoRe regularization term. In all experiments we use the Adam optimizer (Kingma and Ba, 2015). All experimental results are based on training the respective model ï¬ve times (using the same data) to assess the variance due to the randomness in the training procedure. In each epoch of the training, the training data xi, i = 1, . . . , n are randomly shuï¬ed, keeping the grouped observations (xi)iâIj for j â {1, . . . , m} together to ensure that mini batches will contain grouped observations. In all experiments the mini batch size is set to 120. For small c this implies that not all mini batches contain grouped observations, making the optimization more challenging.
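For concreteness, the following is a minimal sketch (not the code used for the experiments) of how a CoRe-style conditional-variance penalty on the predictions can be added to the pooled cross-entropy objective in TensorFlow. The function and argument names are our own, groups are assumed to be encoded as one integer id per training example, and for simplicity the variance is averaged over all groups, with singleton (ungrouped) examples contributing zero.

```python
import tensorflow as tf

def core_loss(logits, labels, group_ids, num_groups, penalty_weight):
    """Pooled cross-entropy plus a conditional-variance (CoRe-style) penalty.

    logits:     [n, num_classes] network outputs f_theta(x_i)
    labels:     [n, num_classes] one-hot targets
    group_ids:  [n] integer id of the (Y, ID) group of each example;
                ungrouped examples each get their own id (zero variance)
    """
    # Standard pooled loss over all observations.
    cross_entropy = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits))

    # Within-group variance of the predictions, averaged over groups.
    group_mean = tf.math.unsorted_segment_mean(logits, group_ids, num_groups)
    centered = logits - tf.gather(group_mean, group_ids)
    group_var = tf.math.unsorted_segment_mean(
        tf.reduce_sum(tf.square(centered), axis=-1), group_ids, num_groups)
    penalty = tf.reduce_mean(group_var)

    return cross_entropy + penalty_weight * penalty
```

A loss-based variant can be sketched analogously by computing the within-group variance of the per-example loss values instead of the logits.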
Dataset | Optimizer | Architecture
MNIST | Adam | Input 28 × 28 × 1; CNN: Conv 5 × 5 × 16, 5 × 5 × 32 (same padding, strides = 2, ReLU activation), fully connected, softmax layer
Stickmen | Adam | Input 64 × 64 × 1; CNN: Conv 5 × 5 × 16, 5 × 5 × 32, 5 × 5 × 64, 5 × 5 × 128 (same padding, strides = 2, leaky ReLU activation), fully connected, softmax layer
CelebA (all experiments using CelebA) | Adam | Input 64 × 48 × 3; CNN: Conv 5 × 5 × 16, 5 × 5 × 32, 5 × 5 × 64, 5 × 5 × 128 (same padding, strides = 2, leaky ReLU activation), fully connected, softmax layer
AwA2 | Adam | Input 32 × 32 × 3; CNN: Conv 5 × 5 × 16, 5 × 5 × 32, 5 × 5 × 64, 5 × 5 × 128 (same padding, strides = 2, leaky ReLU activation), fully connected, softmax layer
Table C.1: Details of the model architectures used.
# Appendix D. Additional experiments
# D.1 Eyeglasses detection with small sample size
Figure D.1 shows the numerator and the denominator of the variance ratio deï¬ned in Eq. (14) separately as a function of the CoRe penalty weight. In conjunction with Fig- ure 6(b), we observe that a ridge penalty decreases both the within- and between-group variance while the CoRe penalty penalizes the within-group variance selectively.
# D.2 Eyeglasses detection: known and unknown brightness interventions
Here, we show additional results for the experiment discussed in §5.7. Recall that we work with the CelebA dataset and consider the problem of classifying whether the person in the image is wearing eyeglasses. We discuss two alternatives for constructing diï¬erent test sets and we vary the number of grouped observations in c â {200, 2000, 5000} as well as the strength of the brightness interventions in β â {5, 10, 20}, all with sample size n = 20000. Generation of training and test sets 1 and 2 were already described in §5.7. Here, we consider additionally test set 3 where all images are left unchanged (no brightness interventions at all) and in test set 4 the brightness of all images is increased.
In §5.7 we used images of the same person to create a grouped observation by sampling a diï¬erent value for the brightness intervention. We refer to this as âGrouping setting 2â here. An alternative is to use the same image of the same person in diï¬erent brightnesses (drawn from the same distribution) as a group over which the conditional variance is calculated. We call this âGrouping setting 1â and it can be useful if we know that we want to protect against brightness interventions in the future. For comparison, we also evaluate grouping with an image of a diï¬erent person (but sharing the same class label) as a baseline (âGrouping
Figure D.1: Eyeglass detection, trained on a small subset (DS1) of the CelebA dataset with disjoint identities. Panel (a) shows the numerator of the variance ratio deï¬ned in Eq. (14) on test data as a function of both the CoRe and ridge penalty weights. Panel (b) shows the equivalent plot for the denominator. A ridge penalty decreases both the within- and between-group variance while the CoRe penalty penalizes the within-group variance selectively (the latter can be seen more clearly in Figure 6(b)).
(a) Examples of misclassiï¬ed observations. (b) Misclassiï¬cation rates. y â¡ glasses y â¡ no glasses y â¡ glasses ËP core(gl.) = 1.00 ËP core(no gl.) = 0.84 ËP core(gl.) = 0.90 ËP pool(gl.) = 0.21 ËP pool(no gl.) = 0.13 ËP pool(gl.) = 0.14
Figure D.2: (a) Misclassified examples from the test sets. (b) Misclassification rates for β = 20 and c = 2000. Results for different test sets, grouping settings, β ∈ {5, 10, 20} and c ∈ {200, 5000} can be found in Figure D.4.
(a) Grouping setting 1, β = 5
(d) Grouping setting 2, β = 5
(b) Grouping setting 1, β = 10
(e) Grouping setting 2, β = 10
(c) Grouping setting 1, β = 20
(f) Grouping setting 2, β = 20
(g) Grouping setting 3, β = 5
(h) Grouping setting 3, β = 10
(i) Grouping setting 3, β = 20
Figure D.3: Examples from the CelebA eyeglasses detection with brightness interventions, grouping settings 1â3 with β â {5, 10, 20}. In all rows, the ï¬rst three images from the left have y â¡ no glasses; the remaining three images have y â¡ glasses. Connected images are grouped examples. In panels (a)â(c), row 1 shows examples from the training set, rows 2â4 contain examples from test sets 2â4, respectively. Panels (d)â(i) show examples from the respective training sets.
setting 3â). Examples from the training sets using grouping settings 1, 2 and 3 can be found in Figure D.3.
Results for all grouping settings, β â {5, 10, 20} and c â {200, 5000} can be found in Figure D.4. We see that using grouping setting 1 works best since we could explicitly control that only X style â¡ brightness varies between grouping examples. In grouping setting 2, diï¬erent images of the same person can vary in many factors, making it more challenging to isolate brightness as the factor to be invariant against. Lastly, we see that if we group images of diï¬erent persons (âGrouping setting 3â), the diï¬erence between CoRe estimator and the pooled estimator becomes much smaller than in the previous settings.
Regarding the results for grouping setting 1 in Figure D.2, we notice that the pooled estimator performs better than CoRe on test set 1. This can be explained by the fact that it can exploit the predictive information contained in the brightness of an image while CoRe is restricted not to do so. Second, we observe that the pooled estimator does not perform well on test sets 2 and 4 as its learned representation seems to use the imageâs brightness as a predictor for the response which fails when the brightness distribution in the test set diï¬ers signiï¬cantly from the training set. In contrast, the predictive performance of CoRe is hardly aï¬ected by the changing brightness distributions.
(a) Grouping setting 1, c = 200
(b) Grouping setting 1, c = 2000
(c) Grouping setting 2, c = 2000
(d) Grouping setting 2, c = 5000
(e) Grouping setting 3, c = 2000
(f) Grouping setting 3, c = 5000
Figure D.4: Misclassiï¬cation rates for the CelebA eyeglasses detection with brightness interven- tions, grouping settings 1â3 with c â {200, 2000, 5000} and the mean of the exponential distribution β â {5, 10, 20}.
# D.3 Gender classiï¬cation
Table D.2 additionally reports the standard errors for the results discussed in §5.2.
Table D.2: Classification for Y ∈ {woman, man}. We compare six different datasets that vary with respect to the distribution of Y in the grouped observations. Specifically, we vary the proportion of images showing men between κ = 0.5 and κ = 1. In all training datasets, the total number of observations is 16982 and the total number of grouped observations is 500. Both the pooled estimator as well as the CoRe estimator perform better if the distribution of Y in the grouped observations is more balanced. The CoRe estimator improves the error rate of the pooled estimator by ≈ 28–39% on a relative scale. For each κ ∈ {0.5, 0.75, 0.9, 0.95, 0.99, 1}, the table reports training and test errors (test set 1: females, test set 2: males) and penalty values for the 5-layer CNN and for the 5-layer CNN with the CoRe penalty.
(a) m = 1000 (b) m = 10000
Figure D.5: Data augmentation setting: Misclassiï¬cation rates for MNIST and X style â¡ rotation. In test set 1 all digits are rotated by a degree randomly sampled from [35, 70]. Test set 2 is the usual MNIST test set.
place of observation D person ID adult/child Y â height X core movement X style(â) image X(â) fθ ËY (X(â))
Figure D.6: Data generating process for the stickmen example.
# D.4 MNIST: more sample eï¬cient data augmentation
Here, we show further results for the experiment introduced in §5.5. We vary the number of augmented training examples c from 100 to 5000 for m = 10000 and c â {100, 200, 500, 1000} for m = 1000. The degree of the rotations is sampled uniformly at random from [35, 70]. Figure D.5 shows the misclassiï¬cation rates. Test set 1 contains rotated digits only, test set 2 is the usual MNIST test set. We see that the misclassiï¬cation rates of CoRe are always lower on test set 1, showing that it makes data augmentation more eï¬cient. For m = 1000, it even turns out to be beneï¬cial for performance on test set 2.
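As an illustration of this augmentation scheme (not the pipeline used for the reported results), the grouped rotated copies could be generated as follows; the helper name and the use of scipy for the rotation are our own choices.

```python
import numpy as np
from scipy.ndimage import rotate

def augment_with_rotations(images, labels, c, seed=0):
    """Create c grouped (original, rotated) pairs of MNIST digits.

    images: [m, 28, 28] array, labels: [m] class labels. Each selected image
    and its rotated copy share one group id, mirroring the grouping above.
    """
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(images), size=c, replace=False)
    angles = rng.uniform(35, 70, size=c)              # degrees, as in the text
    rotated = np.stack([rotate(images[i], a, reshape=False, order=1)
                        for i, a in zip(idx, angles)])
    aug_images = np.concatenate([images, rotated])
    aug_labels = np.concatenate([labels, labels[idx]])
    # originals keep their own index as group id; rotated copies reuse it
    group_ids = np.concatenate([np.arange(len(images)), idx])
    return aug_images, aug_labels, group_ids
```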
(a) Examples from test sets 1â3. (b) Misclassiï¬cation rates.
Figure D.7: a) Examples from the stickmen test set 1 (row 1), test set 2 (row 2) and test sets 3 (row 3). In each row, the ï¬rst three images from the left have y â¡ child; the remaining three images have y â¡ adult. Connected images are grouped examples. b) Misclassiï¬cation rates for diï¬erent numbers of grouped examples.
# D.5 Stickmen image-based age classiï¬cation
Here, we show further results for the experiment introduced in §5.4. Figure D.6 illustrates the data generating process. Recall that test set 1 follows the same distribution as the training set. In test sets 2 and 3 large movements are associated with both children and adults, while the movements are heavier in test set 3 than in test set 2. Figure D.7b shows results for diï¬erent numbers of grouping examples. For c = 20 the misclassiï¬cation rate of CoRe estimator has a large variance. For c â {50, 500, 2000}, the CoRe estimator shows similar results. Its performance is thus not sensitive to the number of grouped examples, once there are suï¬ciently many grouped observations in the training set. The pooled estimator fails to achieve good predictive performance on test sets 2 and 3 as it seems to use âmovementâ as a predictor for âageâ.
# D.6 Eyeglasses detection: image quality intervention
Here, we show further results for the experiments introduced in §5.3. Speciï¬cally, we con- sider interventions of diï¬erent strengths by varying the mean of the quality intervention in µ â {30, 40, 50}. Recall that we use ImageMagick to modify the image quality. In the training set and in test set 1, we sample the image quality value as qi,j â¼ N (µ, Ï = 10) and apply the command convert -quality q ij input.jpg output.jpg if yi â¡ glasses. If yi â¡ no glasses, the image is not modiï¬ed. In test set 2, the above command is applied if yi â¡ no glasses while images with yi â¡ glasses are not changed. In test set 3 all images are left unchanged and in test set 4 the command is applied to all images, i.e. the quality of all images is reduced.
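Purely for illustration (this is not the original data-generation code), the sampling and the ImageMagick call described above can be wrapped as follows; the helper name and the clamping of the sampled quality value to ImageMagick's 1–100 JPEG quality range are our own additions, and the `convert` binary is assumed to be on the PATH.

```python
import subprocess
import numpy as np

def degrade_quality(input_path, output_path, mu, sigma=10.0, rng=None):
    """Sample q ~ N(mu, sigma) and run: convert -quality q input.jpg output.jpg"""
    rng = rng or np.random.default_rng()
    q = int(round(rng.normal(mu, sigma)))
    q = min(max(q, 1), 100)   # keep the sampled JPEG quality in a valid range
    subprocess.run(["convert", "-quality", str(q), input_path, output_path],
                   check=True)
```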
We run experiments for grouping settings 1â3 and for c = 5000, where the deï¬nition of the grouping settings 1â3 is identical to §D.2. Figure D.8 shows examples from the respective training and test sets and Figure D.9 shows the corresponding misclassiï¬cation rates. Again, we observe that grouping setting 1 works best, followed by grouping setting 2.
(a) Grouping setting 1, µ = 50
(b) Grouping setting 1, µ = 40
(c) Grouping setting 1, µ = 30
(d) Grouping setting 2, µ = 50
(g) Grouping setting 3, µ = 50
(e) Grouping setting 2, µ = 40
(h) Grouping setting 3, µ = 40
(f) Grouping setting 2, µ = 30
(i) Grouping setting 3, µ = 30
Figure D.8: Examples from the CelebA image quality datasets, grouping settings 1â3 with µ â {30, 40, 50}. In all rows, the ï¬rst three images from the left have y â¡ no glasses; the remaining three images have y â¡ glasses. Connected images are grouped observations over which we calculate the conditional variance. In panels (a)â(c), row 1 shows exam- ples from the training set, rows 2â4 contain examples from test sets 2â4, respectively. Panels (d)â(i) show examples from the respective training sets.
Interestingly, there is a large performance diï¬erence between µ = 40 and µ = 50 for the pooled estimator. Possibly, with µ = 50 the image quality is not suï¬ciently predictive for the target.
(a) Grouping setting 1
(b) Grouping setting 2
(c) Grouping setting 3
Figure D.9: Misclassiï¬cation rates for the CelebA eyeglasses detection with image quality interven- tions, grouping settings 1â3 with c = 5000 and the mean of the Gaussian distribution µ â {30, 40, 50}.
Figure D.10: Examples from the subsampled and augmented AwA2 dataset (Elmer-the-Elephant dataset). Row 1 shows examples from the training set, rows 2â5 show examples from test sets 1â4, respectively.
# D.7 Elmer the Elephant
The color interventions for the experiment introduced in §5.6 were created as follows. In the training set, if yi â¡ elephant we apply the following ImageMagick command for the grouped examples convert -modulate 100,0,100 input.jpg output.jpg. Test sets 1 and 2 were already discussed in §5.6: in test set 1, all images are left unchanged. In test set 2, the above command is applied if yi â¡ horse. If yi â¡ elephant, we sample ci,j â¼ N (µ = 20, Ï = 1) and apply convert -modulate 100,100,100-c ij input.jpg output.jpg to the image. Here, we consider again some more test sets than in §5.6. In test set 4, the latter command is applied to all images. It rotates the colors of the image, in a cyclic manner14. In test set 3, all images are changed to grayscale. The causal graph for the data generating process is shown in Figure D.12. Examples from all four test sets are shown in Figure D.10 and classiï¬cation results are shown in Figure D.11.
14. For more details, see http://www.imagemagick.org/Usage/color_mods/#color_mods.
(a) Examples of misclassiï¬ed observations. (b) Misclassiï¬cation rates. y â¡ horse y â¡ horse y â¡ elephant ËP core(horse) = 0.72 ËP core(horse) = 1.00 ËP core(ele.) = 0.95 ËP pool(horse) = 0.01 ËP pool(horse) = 0.01 ËP pool(ele.) = 0.00
Figure D.11: Elmer-the-Elephant dataset. (a) Misclassiï¬ed examples from the test sets. (b) Mis- classiï¬cation rates on test sets 1 to 4.
place of observation D animal ID animal class Y â X core color X style(â) image X(â) fθ ËY (X(â))
Figure D.12: Data generating process for the Elmer-the-Elephant example.
| {
"id": "1801.06229"
} |
1710.10903 | Graph Attention Networks | We present graph attention networks (GATs), novel neural network
architectures that operate on graph-structured data, leveraging masked
self-attentional layers to address the shortcomings of prior methods based on
graph convolutions or their approximations. By stacking layers in which nodes
are able to attend over their neighborhoods' features, we enable (implicitly)
specifying different weights to different nodes in a neighborhood, without
requiring any kind of costly matrix operation (such as inversion) or depending
on knowing the graph structure upfront. In this way, we address several key
challenges of spectral-based graph neural networks simultaneously, and make our
model readily applicable to inductive as well as transductive problems. Our GAT
models have achieved or matched state-of-the-art results across four
established transductive and inductive graph benchmarks: the Cora, Citeseer and
Pubmed citation network datasets, as well as a protein-protein interaction
dataset (wherein test graphs remain unseen during training). | http://arxiv.org/pdf/1710.10903 | Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, Yoshua Bengio | stat.ML, cs.AI, cs.LG, cs.SI | To appear at ICLR 2018. 12 pages, 2 figures | null | stat.ML | 20171030 | 20180204
arXiv:1710.10903v3 [stat.ML] 4 Feb 2018
Published as a conference paper at ICLR 2018
# GRAPH ATTENTION NETWORKS
# Petar Veličković∗ Department of Computer Science and Technology University of Cambridge petar.velickovic@cst.cam.ac.uk

Guillem Cucurull∗ Centre de Visió per Computador, UAB gcucurull@gmail.com

# Arantxa Casanova∗ Centre de Visió per Computador, UAB ar.casanova.8@gmail.com

Adriana Romero Montréal Institute for Learning Algorithms adriana.romero.soriano@umontreal.ca

# Pietro Liò Department of Computer Science and Technology University of Cambridge pietro.lio@cst.cam.ac.uk

Yoshua Bengio Montréal Institute for Learning Algorithms yoshua.umontreal@gmail.com
# ABSTRACT
We present graph attention networks (GATs), novel neural network architectures that operate on graph-structured data, leveraging masked self-attentional layers to address the shortcomings of prior methods based on graph convolutions or their approximations. By stacking layers in which nodes are able to attend over their neighborhoodsâ features, we enable (implicitly) specifying different weights to different nodes in a neighborhood, without requiring any kind of costly matrix op- eration (such as inversion) or depending on knowing the graph structure upfront. In this way, we address several key challenges of spectral-based graph neural net- works simultaneously, and make our model readily applicable to inductive as well as transductive problems. Our GAT models have achieved or matched state-of-the- art results across four established transductive and inductive graph benchmarks: the Cora, Citeseer and Pubmed citation network datasets, as well as a protein- protein interaction dataset (wherein test graphs remain unseen during training).
# 1 INTRODUCTION
Convolutional Neural Networks (CNNs) have been successfully applied to tackle problems such as image classiï¬cation (He et al., 2016), semantic segmentation (J´egou et al., 2017) or machine translation (Gehring et al., 2016), where the underlying data representation has a grid-like structure. These architectures efï¬ciently reuse their local ï¬lters, with learnable parameters, by applying them to all the input positions.
However, many interesting tasks involve data that can not be represented in a grid-like structure and that instead lies in an irregular domain. This is the case of 3D meshes, social networks, telecommu- nication networks, biological networks or brain connectomes. Such data can usually be represented in the form of graphs.
There have been several attempts in the literature to extend neural networks to deal with arbitrarily structured graphs. Early work used recursive neural networks to process data represented in graph domains as directed acyclic graphs (Frasconi et al., 1998; Sperduti & Starita, 1997). Graph Neural Networks (GNNs) were introduced in Gori et al. (2005) and Scarselli et al. (2009) as a generalization of recursive neural networks that can directly deal with a more general class of graphs, e.g. cyclic, directed and undirected graphs. GNNs consist of an iterative process, which propagates the node states until equilibrium; followed by a neural network, which produces an output for each node
âWork performed while the author was at the Montr´eal Institute of Learning Algorithms.
based on its state. This idea was adopted and improved by Li et al. (2016), which propose to use gated recurrent units (Cho et al., 2014) in the propagation step.
Nevertheless, there is an increasing interest in generalizing convolutions to the graph domain. Ad- vances in this direction are often categorized as spectral approaches and non-spectral approaches.
On one hand, spectral approaches work with a spectral representation of the graphs and have been successfully applied in the context of node classiï¬cation. In Bruna et al. (2014), the convolution operation is deï¬ned in the Fourier domain by computing the eigendecomposition of the graph Lapla- cian, resulting in potentially intense computations and non-spatially localized ï¬lters. These issues were addressed by subsequent works. Henaff et al. (2015) introduced a parameterization of the spectral ï¬lters with smooth coefï¬cients in order to make them spatially localized. Later, Defferrard et al. (2016) proposed to approximate the ï¬lters by means of a Chebyshev expansion of the graph Laplacian, removing the need to compute the eigenvectors of the Laplacian and yielding spatially localized ï¬lters. Finally, Kipf & Welling (2017) simpliï¬ed the previous method by restricting the ï¬lters to operate in a 1-step neighborhood around each node. However, in all of the aforementioned spectral approaches, the learned ï¬lters depend on the Laplacian eigenbasis, which depends on the graph structure. Thus, a model trained on a speciï¬c structure can not be directly applied to a graph with a different structure.
On the other hand, we have non-spectral approaches (Duvenaud et al., 2015; Atwood & Towsley, 2016; Hamilton et al., 2017), which deï¬ne convolutions directly on the graph, operating on groups of spatially close neighbors. One of the challenges of these approaches is to deï¬ne an operator which works with different sized neighborhoods and maintains the weight sharing property of CNNs. In some cases, this requires learning a speciï¬c weight matrix for each node degree (Duvenaud et al., 2015), using the powers of a transition matrix to deï¬ne the neighborhood while learning weights for each input channel and neighborhood degree (Atwood & Towsley, 2016), or extracting and normal- izing neighborhoods containing a ï¬xed number of nodes (Niepert et al., 2016). Monti et al. (2016) presented mixture model CNNs (MoNet), a spatial approach which provides a uniï¬ed generaliza- tion of CNN architectures to graphs. More recently, Hamilton et al. (2017) introduced GraphSAGE, a method for computing node representations in an inductive manner. This technique operates by sampling a ï¬xed-size neighborhood of each node, and then performing a speciï¬c aggregator over it (such as the mean over all the sampled neighborsâ feature vectors, or the result of feeding them through a recurrent neural network). This approach has yielded impressive performance across sev- eral large-scale inductive benchmarks.
Attention mechanisms have become almost a de facto standard in many sequence-based tasks (Bah- danau et al., 2015; Gehring et al., 2016). One of the beneï¬ts of attention mechanisms is that they allow for dealing with variable sized inputs, focusing on the most relevant parts of the input to make decisions. When an attention mechanism is used to compute a representation of a single sequence, it is commonly referred to as self-attention or intra-attention. Together with Recurrent Neural Net- works (RNNs) or convolutions, self-attention has proven to be useful for tasks such as machine reading (Cheng et al., 2016) and learning sentence representations (Lin et al., 2017). However, Vaswani et al. (2017) showed that not only self-attention can improve a method based on RNNs or convolutions, but also that it is sufï¬cient for constructing a powerful model obtaining state-of-the-art performance on the machine translation task.
Inspired by this recent work, we introduce an attention-based architecture to perform node classiï¬ca- tion of graph-structured data. The idea is to compute the hidden representations of each node in the graph, by attending over its neighbors, following a self-attention strategy. The attention architecture has several interesting properties: (1) the operation is efï¬cient, since it is parallelizable across node- neighbor pairs; (2) it can be applied to graph nodes having different degrees by specifying arbitrary weights to the neighbors; and (3) the model is directly applicable to inductive learning problems, including tasks where the model has to generalize to completely unseen graphs. We validate the proposed approach on four challenging benchmarks: Cora, Citeseer and Pubmed citation networks as well as an inductive protein-protein interaction dataset, achieving or matching state-of-the-art re- sults that highlight the potential of attention-based models when dealing with arbitrarily structured graphs.
It is worth noting that, as Kipf & Welling (2017) and Atwood & Towsley (2016), our work can also be reformulated as a particular instance of MoNet (Monti et al., 2016). Moreover, our approach of
sharing a neural network computation across edges is reminiscent of the formulation of relational networks (Santoro et al., 2017) and VAIN (Hoshen, 2017), wherein relations between objects or agents are aggregated pair-wise, by employing a shared mechanism. Similarly, our proposed at- tention model can be connected to the works by Duan et al. (2017) and Denil et al. (2017), which use a neighborhood attention operation to compute attention coefï¬cients between different objects in an environment. Other related approaches include locally linear embedding (LLE) (Roweis & Saul, 2000) and memory networks (Weston et al., 2014). LLE selects a ï¬xed number of neighbors around each data point, and learns a weight coefï¬cient for each neighbor to reconstruct each point as a weighted sum of its neighbors. A second optimization step extracts the pointâs feature embed- ding. Memory networks also share some connections with our work, in particular, if we interpret the neighborhood of a node as the memory, which is used to compute the node features by attending over its values, and then is updated by storing the new features in the same position.
# 2 GAT ARCHITECTURE
In this section, we will present the building block layer used to construct arbitrary graph attention networks (through stacking this layer), and directly outline its theoretical and practical beneï¬ts and limitations compared to prior work in the domain of neural graph processing.
2.1 GRAPH ATTENTIONAL LAYER
We will start by describing a single graph attentional layer, as the sole layer utilized throughout all of the GAT architectures used in our experiments. The particular attentional setup utilized by us closely follows the work of Bahdanau et al. (2015), but the framework is agnostic to the particular choice of attention mechanism. The input to our layer is a set of node features, h = {h_1, h_2, . . . , h_N}, h_i ∈ R^F, where N is the number of nodes and F is the number of features in each node. The layer produces a new set of node features (of potentially different cardinality F′), h′ = {h′_1, h′_2, . . . , h′_N}, h′_i ∈ R^{F′}, as its output.
In order to obtain sufficient expressive power to transform the input features into higher-level features, at least one learnable linear transformation is required. To that end, as an initial step, a shared linear transformation, parametrized by a weight matrix W ∈ R^{F′×F}, is applied to every node. We then perform self-attention on the nodes: a shared attentional mechanism a : R^{F′} × R^{F′} → R computes attention coefficients
e_{ij} = a(W h_i, W h_j) (1)
that indicate the importance of node jâs features to node i. In its most general formulation, the model allows every node to attend on every other node, dropping all structural information. We inject the graph structure into the mechanism by performing masked attentionâwe only compute eij for nodes j â Ni, where Ni is some neighborhood of node i in the graph. In all our experiments, these will be exactly the ï¬rst-order neighbors of i (including i). To make coefï¬cients easily comparable across different nodes, we normalize them across all choices of j using the softmax function:
α_{ij} = softmax_j(e_{ij}) = exp(e_{ij}) / Σ_{k ∈ N_i} exp(e_{ik}). (2)
In our experiments, the attention mechanism a is a single-layer feedforward neural network, parametrized by a weight vector a ∈ R^{2F′}, and applying the LeakyReLU nonlinearity (with negative input slope α = 0.2). Fully expanded out, the coefficients computed by the attention mechanism (illustrated by Figure 1 (left)) may then be expressed as:
α_{ij} = exp(LeakyReLU(a^T [W h_i ∥ W h_j])) / Σ_{k ∈ N_i} exp(LeakyReLU(a^T [W h_i ∥ W h_k])) (3)
where ·^T represents transposition and ∥ is the concatenation operation.
Once obtained, the normalized attention coefï¬cients are used to compute a linear combination of the features corresponding to them, to serve as the ï¬nal output features for every node (after potentially
Figure 1: Left: The attention mechanism a(W h_i, W h_j) employed by our model, parametrized by a weight vector a ∈ R^{2F′}, applying a LeakyReLU activation. Right: An illustration of multi-head attention (with K = 3 heads) by node 1 on its neighborhood. Different arrow styles and colors denote independent attention computations. The aggregated features from each head are concatenated or averaged to obtain h′_1.
applying a nonlinearity, Ï):
h′_i = σ( Σ_{j ∈ N_i} α_{ij} W h_j ). (4)
To stabilize the learning process of self-attention, we have found extending our mechanism to em- ploy multi-head attention to be beneï¬cial, similarly to Vaswani et al. (2017). Speciï¬cally, K inde- pendent attention mechanisms execute the transformation of Equation 4, and then their features are concatenated, resulting in the following output feature representation:
h′_i = ∥_{k=1}^{K} σ( Σ_{j ∈ N_i} α^k_{ij} W^k h_j ) (5)
where ∥ represents concatenation, α^k_{ij} are normalized attention coefficients computed by the k-th attention mechanism (a^k), and W^k is the corresponding input linear transformation's weight matrix. Note that, in this setting, the final returned output, h′, will consist of KF′ features (rather than F′) for each node.
In particular, if we perform multi-head attention on the final (prediction) layer of the network, concatenation is no longer sensible; instead, we employ averaging, and delay applying the final nonlinearity (usually a softmax or logistic sigmoid for classification problems) until then:
h′_i = σ( (1/K) Σ_{k=1}^{K} Σ_{j ∈ N_i} α^k_{ij} W^k h_j ) (6)
The aggregation process of a multi-head graph attentional layer is illustrated by Figure 1 (right).
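To make Equations (1)–(6) concrete, the following is a minimal NumPy sketch of a graph attentional layer operating on a dense adjacency matrix; it is an illustration rather than the authors' implementation, and the masked-softmax stabilization and the choice of output nonlinearity are simplifications.

```python
import numpy as np

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

def gat_head(h, adj, W, a):
    """One attention head (Equations (1)-(4)) with a dense adjacency mask.

    h:   [N, F]  input node features    adj: [N, N] binary, adj[i, j] = 1 iff j in N_i
    W:   [F, F'] shared weight matrix   a:   [2F'] attention vector
    """
    Wh = h @ W                                        # [N, F']
    f_dst = Wh @ a[:W.shape[1]]                       # a^T [Wh_i || .] part
    f_src = Wh @ a[W.shape[1]:]                       # a^T [. || Wh_j] part
    e = leaky_relu(f_dst[:, None] + f_src[None, :])   # e_ij, Equation (1)
    e = np.where(adj > 0, e, -1e9)                    # masked attention
    e = e - e.max(axis=1, keepdims=True)              # numerical stability
    alpha = np.exp(e) * (adj > 0)
    alpha = alpha / alpha.sum(axis=1, keepdims=True)  # Equations (2)-(3)
    return alpha @ Wh                                 # Equation (4), before sigma

def gat_layer(h, adj, Ws, As, final_layer=False):
    """Multi-head aggregation: concatenation (Eq. (5)) or averaging (Eq. (6))."""
    heads = [gat_head(h, adj, W, a) for W, a in zip(Ws, As)]
    z = np.mean(heads, axis=0) if final_layer else np.concatenate(heads, axis=1)
    return np.maximum(z, 0)   # ReLU stand-in for the nonlinearity sigma
```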
# 2.2 COMPARISONS TO RELATED WORK
The graph attentional layer described in subsection 2.1 directly addresses several issues that were present in prior approaches to modelling graph-structured data with neural networks:
⢠Computationally, it is highly efï¬cient: the operation of the self-attentional layer can be par- allelized across all edges, and the computation of output features can be parallelized across
all nodes. No eigendecompositions or similar costly matrix operations are required. The time complexity of a single GAT attention head computing F′ features may be expressed as O(|V|FF′ + |E|F′), where F is the number of input features, and |V| and |E| are the numbers of nodes and edges in the graph, respectively. This complexity is on par with baseline methods such as Graph Convolutional Networks (GCNs) (Kipf & Welling, 2017). Applying multi-head attention multiplies the storage and parameter requirements by a factor of K, while the individual heads' computations are fully independent and can be parallelized.
As opposed to GCNs, our model allows for (implicitly) assigning different importances to nodes of a same neighborhood, enabling a leap in model capacity. Furthermore, analyzing the learned attentional weights may lead to beneï¬ts in interpretability, as was the case in the machine translation domain (e.g. the qualitative analysis of Bahdanau et al. (2015)). ⢠The attention mechanism is applied in a shared manner to all edges in the graph, and there- fore it does not depend on upfront access to the global graph structure or (features of) all of its nodes (a limitation of many prior techniques). This has several desirable implications:
â The graph is not required to be undirected (we may simply leave out computing αij if edge j â i is not present).
â It makes our technique directly applicable to inductive learningâincluding tasks where the model is evaluated on graphs that are completely unseen during training.
⢠The recently published inductive method of Hamilton et al. (2017) samples a ï¬xed-size neighborhood of each node, in order to keep its computational footprint consistent; this does not allow it access to the entirety of the neighborhood while performing inference. Moreover, this technique achieved some of its strongest results when an LSTM (Hochreiter & Schmidhuber, 1997)-based neighborhood aggregator is used. This assumes the existence of a consistent sequential node ordering across neighborhoods, and the authors have rec- tiï¬ed it by consistently feeding randomly-ordered sequences to the LSTM. Our technique does not suffer from either of these issuesâit works with the entirety of the neighborhood (at the expense of a variable computational footprint, which is still on-par with methods like the GCN), and does not assume any ordering within it.
• As mentioned in Section 1, GAT can be reformulated as a particular instance of MoNet (Monti et al., 2016). More specifically, setting the pseudo-coordinate function to be u(x, y) = f(x)‖f(y), where f(x) represents (potentially MLP-transformed) features of node x and ‖ is concatenation; and the weight function to be w_j(u) = softmax(MLP(u)) (with the softmax performed over the entire neighborhood of a node) would make MoNet's patch operator similar to ours. Nevertheless, one should note that, in comparison to previously considered MoNet instances, our model uses node features for similarity computations, rather than the node's structural properties (which would assume knowing the graph structure upfront).
We were able to produce a version of the GAT layer that leverages sparse matrix operations, reducing the storage complexity to linear in the number of nodes and edges and enabling the execution of GAT models on larger graph datasets. However, the tensor manipulation framework we used only supports sparse matrix multiplication for rank-2 tensors, which limits the batching capabilities of the layer as it is currently implemented (especially for datasets with multiple graphs). Appropriately addressing this constraint is an important direction for future work. Depending on the regularity of the graph structure in place, GPUs may not be able to offer major performance beneï¬ts compared to CPUs in these sparse scenarios. It should also be noted that the size of the âreceptive ï¬eldâ of our model is upper-bounded by the depth of the network (similarly as for GCN and similar models). Techniques such as skip connections (He et al., 2016) could be readily applied for appropriately extending the depth, however. Lastly, parallelization across all the graph edges, especially in a distributed manner, may involve a lot of redundant computation, as the neighborhoods will often highly overlap in graphs of interest.
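As a rough illustration of such a sparse formulation, the attention coefficients can be computed over an explicit edge list with a per-destination ("segment") softmax, keeping memory linear in the number of edges; this is only a sketch of the general idea, not the implementation referenced above.

```python
import numpy as np

def edge_attention_aggregate(Wh, a, src, dst, num_nodes, slope=0.2):
    """Attention over an explicit edge list: memory is O(|E|) rather than O(N^2).

    Wh  : (N, Fp) transformed node features
    a   : (2*Fp,) attention vector
    src : (E,) source node of each edge j -> i (self-loop edges must be included)
    dst : (E,) destination node, i.e. the node doing the attending
    """
    Fp = Wh.shape[1]
    scores = Wh[dst] @ a[:Fp] + Wh[src] @ a[Fp:]            # (E,)
    scores = np.where(scores > 0, scores, slope * scores)   # LeakyReLU
    # Segment softmax: normalize scores per destination node
    node_max = np.full(num_nodes, -np.inf)
    np.maximum.at(node_max, dst, scores)
    exp_s = np.exp(scores - node_max[dst])
    denom = np.zeros(num_nodes)
    np.add.at(denom, dst, exp_s)
    alpha = exp_s / denom[dst]                               # (E,)
    out = np.zeros_like(Wh)
    np.add.at(out, dst, alpha[:, None] * Wh[src])            # weighted neighborhood sum
    return out

rng = np.random.default_rng(2)
Wh = rng.normal(size=(4, 3))
a = rng.normal(size=(6,))
src = np.array([0, 1, 1, 2, 0, 1, 2, 3])                     # last four entries are self-loops
dst = np.array([1, 0, 2, 1, 0, 1, 2, 3])
print(edge_attention_aggregate(Wh, a, src, dst, num_nodes=4).shape)   # (4, 3)
```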
# 3 EVALUATION
We have performed comparative evaluation of GAT models against a wide variety of strong base- lines and previous approaches, on four established graph-based benchmark tasks (transductive as
well as inductive), achieving or matching state-of-the-art performance across all of them. This section summarizes our experimental setup, results, and a brief qualitative analysis of a GAT model's extracted feature representations.

Table 1: Summary of the datasets used in our experiments.

                     Cora             Citeseer         Pubmed            PPI
Task                 Transductive     Transductive     Transductive      Inductive
# Nodes              2708 (1 graph)   3327 (1 graph)   19717 (1 graph)   56944 (24 graphs)
# Edges              5429             4732             44338             818716
# Features/Node      1433             3703             500               50
# Classes            7                6                3                 121 (multilabel)
# Training Nodes     140              120              60                44906 (20 graphs)
# Validation Nodes   500              500              500               6514 (2 graphs)
# Test Nodes         1000             1000             1000              5524 (2 graphs)
# 3.1 DATASETS
Transductive learning We utilize three standard citation network benchmark datasetsâCora, Citeseer and Pubmed (Sen et al., 2008)âand closely follow the transductive experimental setup of Yang et al. (2016). In all of these datasets, nodes correspond to documents and edges to (undirected) citations. Node features correspond to elements of a bag-of-words representation of a document. Each node has a class label. We allow for only 20 nodes per class to be used for trainingâhowever, honoring the transductive setup, the training algorithm has access to all of the nodesâ feature vec- tors. The predictive power of the trained models is evaluated on 1000 test nodes, and we use 500 additional nodes for validation purposes (the same ones as used by Kipf & Welling (2017)). The Cora dataset contains 2708 nodes, 5429 edges, 7 classes and 1433 features per node. The Citeseer dataset contains 3327 nodes, 4732 edges, 6 classes and 3703 features per node. The Pubmed dataset contains 19717 nodes, 44338 edges, 3 classes and 500 features per node.
Inductive learning We make use of a protein-protein interaction (PPI) dataset that consists of graphs corresponding to different human tissues (Zitnik & Leskovec, 2017). The dataset contains 20 graphs for training, 2 for validation and 2 for testing. Critically, testing graphs remain com- pletely unobserved during training. To construct the graphs, we used the preprocessed data provided by Hamilton et al. (2017). The average number of nodes per graph is 2372. Each node has 50 features that are composed of positional gene sets, motif gene sets and immunological signatures. There are 121 labels for each node set from gene ontology, collected from the Molecular Signatures Database (Subramanian et al., 2005), and a node can possess several labels simultaneously.
An overview of the interesting characteristics of the datasets is given in Table 1.
3.2 STATE-OF-THE-ART METHODS
Transductive learning For transductive learning tasks, we compare against the same strong base- lines and state-of-the-art approaches as speciï¬ed in Kipf & Welling (2017). This includes label propagation (LP) (Zhu et al., 2003), semi-supervised embedding (SemiEmb) (Weston et al., 2012), manifold regularization (ManiReg) (Belkin et al., 2006), skip-gram based graph embeddings (Deep- Walk) (Perozzi et al., 2014), the iterative classiï¬cation algorithm (ICA) (Lu & Getoor, 2003) and Planetoid (Yang et al., 2016). We also directly compare our model against GCNs (Kipf & Welling, 2017), as well as graph convolutional models utilising higher-order Chebyshev ï¬lters (Defferrard et al., 2016), and the MoNet model presented in Monti et al. (2016).
Inductive learning For the inductive learning task, we compare against the four different super- vised GraphSAGE inductive methods presented in Hamilton et al. (2017). These provide a variety of approaches to aggregating features within a sampled neighborhood: GraphSAGE-GCN (which extends a graph convolution-style operation to the inductive setting), GraphSAGE-mean (taking
the elementwise mean value of feature vectors), GraphSAGE-LSTM (aggregating by feeding the neighborhood features into an LSTM) and GraphSAGE-pool (taking the elementwise maximization operation of feature vectors transformed by a shared nonlinear multilayer perceptron). The other transductive approaches are either completely inappropriate in an inductive setting or assume that nodes are incrementally added to a single graph, making them unusable for the setup where test graphs are completely unseen during training (such as the PPI dataset).
Additionally, for both tasks we provide the performance of a per-node shared multilayer perceptron (MLP) classiï¬er (that does not incorporate graph structure at all).
3.3 EXPERIMENTAL SETUP
Transductive learning For the transductive learning tasks, we apply a two-layer GAT model. Its architectural hyperparameters have been optimized on the Cora dataset and are then reused for Citeseer. The first layer consists of K = 8 attention heads computing F' = 8 features each (for a total of 64 features), followed by an exponential linear unit (ELU) (Clevert et al., 2016) nonlinearity. The second layer is used for classification: a single attention head that computes C features (where C is the number of classes), followed by a softmax activation. For coping with the small training set sizes, regularization is liberally applied within the model. During training, we apply L2 regularization with λ = 0.0005. Furthermore, dropout with p = 0.6 is applied to both layers' inputs, as well as to the normalized attention coefficients (critically, this means that at each training iteration, each node is exposed to a stochastically sampled neighborhood). Similarly as observed by Monti et al. (2016), we found that Pubmed's training set size (60 examples) required slight changes to the GAT architecture: we have applied K = 8 output attention heads (instead of one), and strengthened the L2 regularization to λ = 0.001. Otherwise, the architecture matches the one used for Cora and Citeseer.
Inductive learning For the inductive learning task, we apply a three-layer GAT model. Both of the first two layers consist of K = 4 attention heads computing F' = 256 features (for a total of 1024 features), followed by an ELU nonlinearity. The final layer is used for (multi-label) classification: K = 6 attention heads computing 121 features each, that are averaged and followed by a logistic sigmoid activation. The training sets for this task are sufficiently large and we found no need to apply L2 regularization or dropout; we have, however, successfully employed skip connections (He et al., 2016) across the intermediate attentional layer. We utilize a batch size of 2 graphs during training. To strictly evaluate the benefits of applying an attention mechanism in this setting (i.e. comparing with a near GCN-equivalent model), we also provide the results when a constant attention mechanism, a(x, y) = 1, is used, with the same architecture; this will assign the same weight to every neighbor.
Both models are initialized using Glorot initialization (Glorot & Bengio, 2010) and trained to mini- mize cross-entropy on the training nodes using the Adam SGD optimizer (Kingma & Ba, 2014) with an initial learning rate of 0.01 for Pubmed, and 0.005 for all other datasets. In both cases we use an early stopping strategy on both the cross-entropy loss and accuracy (transductive) or micro-F1 (inductive) score on the validation nodes, with a patience of 100 epochs1.
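For reference, the hyperparameters quoted in this subsection can be collected into a small configuration sketch, together with a simplified single-metric version of the early stopping rule; all field names are our own.

```python
# Hyperparameters quoted in this section; field names are illustrative only.
# "C" stands for the (dataset-dependent) number of classes.
GAT_CONFIGS = {
    "cora_citeseer": {"layers": 2, "heads": [8, 1], "features_per_head": [8, "C"],
                      "activations": ["elu", "softmax"], "l2": 5e-4,
                      "dropout": 0.6, "learning_rate": 0.005},
    "pubmed":        {"layers": 2, "heads": [8, 8], "features_per_head": [8, "C"],
                      "activations": ["elu", "softmax"], "l2": 1e-3,
                      "dropout": 0.6, "learning_rate": 0.01},
    "ppi":           {"layers": 3, "heads": [4, 4, 6], "features_per_head": [256, 256, 121],
                      "activations": ["elu", "elu", "sigmoid"], "l2": 0.0,
                      "dropout": 0.0, "learning_rate": 0.005,
                      "skip_connections": True, "graphs_per_batch": 2},
}

def should_stop(val_metric_history, patience=100):
    """Simplified early stopping: stop once the best validation value
    has not improved for `patience` consecutive epochs."""
    if len(val_metric_history) <= patience:
        return False
    best = max(val_metric_history)
    return best not in val_metric_history[-patience:]
```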
# 3.4 RESULTS
The results of our comparative evaluation experiments are summarized in Tables 2 and 3.
For the transductive tasks, we report the mean classification accuracy (with standard deviation) on the test nodes of our method after 100 runs, and reuse the metrics already reported in Kipf & Welling (2017) and Monti et al. (2016) for state-of-the-art techniques. Specifically, for the Chebyshev filter-based approach (Defferrard et al., 2016), we provide the maximum reported performance for filters of orders K = 2 and K = 3. In order to fairly assess the benefits of the attention mechanism, we further evaluate a GCN model that computes 64 hidden features, attempting both the ReLU and ELU activation, and reporting (as GCN-64*) the better result after 100 runs (which was the ReLU in all three cases).
For the inductive task, we report the micro-averaged F1 score on the nodes of the two unseen test graphs, averaged after 10 runs, and reuse the metrics already reported in Hamilton et al. (2017) for the other techniques. Specifically, as our setup is supervised, we compare against the supervised GraphSAGE approaches. To evaluate the benefits of aggregating across the entire neighborhood, we further provide (as GraphSAGE*) the best result we were able to achieve with GraphSAGE by just modifying its architecture (this was with a three-layer GraphSAGE-LSTM with [512, 512, 726] features computed in each layer and 128 features used for aggregating neighborhoods). Finally, we report the 10-run result of our constant attention GAT model (as Const-GAT), to fairly evaluate the benefits of the attention mechanism against a GCN-like aggregation scheme (with the same architecture).

1 Our implementation of the GAT layer may be found at: https://github.com/PetarV-/GAT.

Table 2: Summary of results in terms of classification accuracies, for Cora, Citeseer and Pubmed. GCN-64* corresponds to the best GCN result computing 64 hidden features (using ReLU or ELU).

Transductive

Method                                 Cora           Citeseer       Pubmed
MLP                                    55.1%          46.5%          71.4%
ManiReg (Belkin et al., 2006)          59.5%          60.1%          70.7%
SemiEmb (Weston et al., 2012)          59.0%          59.6%          71.7%
LP (Zhu et al., 2003)                  68.0%          45.3%          63.0%
DeepWalk (Perozzi et al., 2014)        67.2%          43.2%          65.3%
ICA (Lu & Getoor, 2003)                75.1%          69.1%          73.9%
Planetoid (Yang et al., 2016)          75.7%          64.7%          77.2%
Chebyshev (Defferrard et al., 2016)    81.2%          69.8%          74.4%
GCN (Kipf & Welling, 2017)             81.5%          70.3%          79.0%
MoNet (Monti et al., 2016)             81.7 ± 0.5%    –              78.8 ± 0.3%
GCN-64*                                81.4 ± 0.5%    70.9 ± 0.5%    79.0 ± 0.3%
GAT (ours)                             83.0 ± 0.7%    72.5 ± 0.7%    79.0 ± 0.3%

Table 3: Summary of results in terms of micro-averaged F1 scores, for the PPI dataset. GraphSAGE* corresponds to the best GraphSAGE result we were able to obtain by just modifying its architecture. Const-GAT corresponds to a model with the same architecture as GAT, but with a constant attention mechanism (assigning same importance to each neighbor; GCN-like inductive operator).

Inductive

Method                                     PPI
Random                                     0.396
MLP                                        0.422
GraphSAGE-GCN (Hamilton et al., 2017)      0.500
GraphSAGE-mean (Hamilton et al., 2017)     0.598
GraphSAGE-LSTM (Hamilton et al., 2017)     0.612
GraphSAGE-pool (Hamilton et al., 2017)     0.600
GraphSAGE*                                 0.768
Const-GAT (ours)                           0.934 ± 0.006
GAT (ours)                                 0.973 ± 0.002
Our results successfully demonstrate state-of-the-art performance being achieved or matched across all four datasetsâin concordance with our expectations, as per the discussion in Section 2.2. More speciï¬cally, we are able to improve upon GCNs by a margin of 1.5% and 1.6% on Cora and Cite- seer, respectively, suggesting that assigning different weights to nodes of a same neighborhood may be beneï¬cial. It is worth noting the improvements achieved on the PPI dataset: Our GAT model improves by 20.5% w.r.t. the best GraphSAGE result we were able to obtain, demonstrating that our model has the potential to be applied in inductive settings, and that larger predictive power can be leveraged by observing the entire neighborhood. Furthermore, it improves by 3.9% w.r.t. Const-GAT (the identical architecture with constant attention mechanism), once again directly demonstrating the signiï¬cance of being able to assign different weights to different neighbors.
The effectiveness of the learned feature representations may also be investigated qualitativelyâand for this purpose we provide a visualization of the t-SNE (Maaten & Hinton, 2008)-transformed feature representations extracted by the ï¬rst layer of a GAT model pre-trained on the Cora dataset (Figure 2). The representation exhibits discernible clustering in the projected 2D space. Note that these clusters correspond to the seven labels of the dataset, verifying the modelâs discriminative power across the seven topic classes of Cora. Additionally, we visualize the relative strengths of the normalized attention coefï¬cients (averaged across all eight attention heads). Properly interpret- ing these coefï¬cients (as performed by e.g. Bahdanau et al. (2015)) will require further domain knowledge about the dataset under study, and is left for future work.
# 4 CONCLUSIONS
We have presented graph attention networks (GATs), novel convolution-style neural networks that operate on graph-structured data, leveraging masked self-attentional layers. The graph attentional layer utilized throughout these networks is computationally efï¬cient (does not require costly ma- trix operations, and is parallelizable across all nodes in the graph), allows for (implicitly) assign- ing different importances to different nodes within a neighborhood while dealing with different sized neighborhoods, and does not depend on knowing the entire graph structure upfrontâthus addressing many of the theoretical issues with previous spectral-based approaches. Our models leveraging attention have successfully achieved or matched state-of-the-art performance across four well-established node classiï¬cation benchmarks, both transductive and inductive (especially, with completely unseen graphs used for testing).
There are several potential improvements and extensions to graph attention networks that could be addressed as future work, such as overcoming the practical problems described in subsection 2.2 to be able to handle larger batch sizes. A particularly interesting research direction would be taking advantage of the attention mechanism to perform a thorough analysis on the model interpretability. Moreover, extending the method to perform graph classiï¬cation instead of node classiï¬cation would also be relevant from the application perspective. Finally, extending the model to incorporate edge features (possibly indicating relationship among nodes) would allow us to tackle a larger variety of problems.
Figure 2: A t-SNE plot of the computed feature representations of a pre-trained GAT model's first hidden layer on the Cora dataset. Node colors denote classes. Edge thickness indicates aggregated normalized attention coefficients between nodes i and j, across all eight attention heads (Σ_{k=1}^{8} (α_{ij}^{k} + α_{ji}^{k})).
# ACKNOWLEDGEMENTS
The authors would like to thank the developers of TensorFlow (Abadi et al., 2015). PV and PL have received funding from the European Unionâs Horizon 2020 research and innovation programme PROPAG-AGEING under grant agreement No 634821. We further acknowledge the support of the following agencies for research funding and computing support: CIFAR, Canada Research Chairs, Compute Canada and Calcul Qu´ebec, as well as NVIDIA for the generous GPU support. Special thanks to: Benjamin Day and Fabian Jansen for kindly pointing out issues in a previous iteration of the paper; MichaÅ DroËzdËzal for useful discussions, feedback and support; and Ga´etan Marceau for reviewing the paper prior to submission.
# REFERENCES
Mart´ın Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Man´e, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vin- cent Vanhoucke, Vijay Vasudevan, Fernanda Vi´egas, Oriol Vinyals, Pete Warden, Martin Watten- berg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. URL https://www.tensorflow.org/. Software avail- able from tensorï¬ow.org.
James Atwood and Don Towsley. Diffusion-convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1993â2001, 2016.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. International Conference on Learning Representations (ICLR), 2015.
Mikhail Belkin, Partha Niyogi, and Vikas Sindhwani. Manifold regularization: A geometric frame- work for learning from labeled and unlabeled examples. Journal of machine learning research, 7 (Nov):2399â2434, 2006.
Joan Bruna, Wojciech Zaremba, Arthur Szlam, and Yann LeCun. Spectral networks and locally connected networks on graphs. International Conference on Learning Representations (ICLR), 2014.
Jianpeng Cheng, Li Dong, and Mirella Lapata. Long short-term memory-networks for machine reading. arXiv preprint arXiv:1601.06733, 2016.
Kyunghyun Cho, Bart Van Merri¨enboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Hol- ger Schwenk, and Yoshua Bengio. Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014.
Djork-Arn´e Clevert, Thomas Unterthiner, and Sepp Hochreiter. Fast and accurate deep network learning by exponential linear units (elus). International Conference on Learning Representations (ICLR), 2016.
Micha¨el Defferrard, Xavier Bresson, and Pierre Vandergheynst. Convolutional neural networks on graphs with fast localized spectral ï¬ltering. In Advances in Neural Information Processing Systems, pp. 3844â3852, 2016.
Misha Denil, Sergio G´omez Colmenarejo, Serkan Cabi, David Saxton, and Nando de Freitas. Pro- grammable agents. arXiv preprint arXiv:1706.06383, 2017.
Yan Duan, Marcin Andrychowicz, Bradly Stadie, Jonathan Ho, Jonas Schneider, Ilya Sutskever, Pieter Abbeel, and Wojciech Zaremba. One-shot imitation learning. arXiv preprint arXiv:1703.07326, 2017.
David K Duvenaud, Dougal Maclaurin, Jorge Iparraguirre, Rafael Bombarell, Timothy Hirzel, Al´an Aspuru-Guzik, and Ryan P Adams. Convolutional networks on graphs for learning molecular ï¬ngerprints. In Advances in neural information processing systems, pp. 2224â2232, 2015.
Paolo Frasconi, Marco Gori, and Alessandro Sperduti. A general framework for adaptive processing of data structures. IEEE transactions on Neural Networks, 9(5):768â786, 1998.
Jonas Gehring, Michael Auli, David Grangier, and Yann N. Dauphin. A convolutional encoder model for neural machine translation. CoRR, abs/1611.02344, 2016. URL http://arxiv. org/abs/1611.02344.
Xavier Glorot and Yoshua Bengio. Understanding the difï¬culty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artiï¬cial Intelligence and Statistics, pp. 249â256, 2010.
Marco Gori, Gabriele Monfardini, and Franco Scarselli. A new model for learning in graph domains. In IEEE International Joint Conference on Neural Networks, pp. 729734, 2005.
William L Hamilton, Rex Ying, and Jure Leskovec. Inductive representation learning on large graphs. Neural Information Processing Systems (NIPS), 2017.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recog- nition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770â778, 2016.
Mikael Henaff, Joan Bruna, and Yann LeCun. Deep convolutional networks on graph-structured data. arXiv preprint arXiv:1506.05163, 2015.
Sepp Hochreiter and J¨urgen Schmidhuber. Long short-term memory. Neural computation, 9(8): 1735â1780, 1997.
Yedid Hoshen. Vain: Attentional multi-agent predictive modeling. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (eds.), Advances in Neural Information Processing Systems, pp. 2698–2708. Curran Associates, 2017. URL http://papers.nips.cc/paper/6863-vain-attentional-multi-agent-predictive-modeling.pdf.
Simon J´egou, Michal Drozdzal, David V´azquez, Adriana Romero, and Yoshua Bengio. The one hundred layers tiramisu: Fully convolutional densenets for semantic segmentation. In Workshop on Computer Vision in Vehicle Technology CVPRW, 2017.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Thomas N Kipf and Max Welling. Semi-supervised classiï¬cation with graph convolutional net- works. International Conference on Learning Representations (ICLR), 2017.
Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard Zemel. Gated graph sequence neural networks. International Conference on Learning Representations (ICLR), 2016.
Zhouhan Lin, Minwei Feng, Cicero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen Zhou, and Yoshua Bengio. A structured self-attentive sentence embedding. arXiv preprint arXiv:1703.03130, 2017.
Qing Lu and Lise Getoor. Link-based classiï¬cation. In Proceedings of the 20th International Conference on Machine Learning (ICML-03), pp. 496â503, 2003.
Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. Journal of Machine Learning Research, 9(Nov):2579â2605, 2008.
Federico Monti, Davide Boscaini, Jonathan Masci, Emanuele Rodol`a, Jan Svoboda, and Michael M Bronstein. Geometric deep learning on graphs and manifolds using mixture model cnns. arXiv preprint arXiv:1611.08402, 2016.
Mathias Niepert, Mohamed Ahmed, and Konstantin Kutzkov. Learning convolutional neural net- works for graphs. In Proceedings of The 33rd International Conference on Machine Learning, volume 48, pp. 2014â2023, 2016.
Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. Deepwalk: Online learning of social repre- In Proceedings of the 20th ACM SIGKDD international conference on Knowledge sentations. discovery and data mining, pp. 701â710. ACM, 2014.
Sam T. Roweis and Lawrence K. Saul. Nonlinear dimensionality reduction by locally linear embed- ding. Science, 290:2323â2326, 2000.
Adam Santoro, David Raposo, David GT Barrett, Mateusz Malinowski, Razvan Pascanu, Peter Battaglia, and Timothy Lillicrap. A simple neural network module for relational reasoning. arXiv preprint arXiv:1706.01427, 2017.
Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. The graph neural network model. IEEE Transactions on Neural Networks, 20(1):61â80, 2009.
Prithviraj Sen, Galileo Namata, Mustafa Bilgic, Lise Getoor, Brian Galligher, and Tina Eliassi-Rad. Collective classiï¬cation in network data. AI magazine, 29(3):93, 2008.
A. Sperduti and A. Starita. Supervised neural networks for the classiï¬cation of structures. Trans. Neur. Netw., 8(3):714â735, May 1997. ISSN 1045-9227. doi: 10.1109/72.572108.
Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overï¬tting. Journal of machine learning research, 15(1):1929â1958, 2014.
Aravind Subramanian, Pablo Tamayo, Vamsi K Mootha, Sayan Mukherjee, Benjamin L Ebert, Michael A Gillette, Amanda Paulovich, Scott L Pomeroy, Todd R Golub, Eric S Lander, et al. Gene set enrichment analysis: a knowledge-based approach for interpreting genome-wide expres- sion proï¬les. Proceedings of the National Academy of Sciences, 102(43):15545â15550, 2005.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. arXiv preprint arXiv:1706.03762, 2017.
Jason Weston, Fr´ed´eric Ratle, Hossein Mobahi, and Ronan Collobert. Deep learning via semi- supervised embedding. In Neural Networks: Tricks of the Trade, pp. 639â655. Springer, 2012.
Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. CoRR, abs/1410.3916, 2014. URL http://arxiv.org/abs/1410.3916.
Zhilin Yang, William Cohen, and Ruslan Salakhudinov. Revisiting semi-supervised learning with graph embeddings. In International Conference on Machine Learning, pp. 40â48, 2016.
Xiaojin Zhu, Zoubin Ghahramani, and John D Lafferty. Semi-supervised learning using gaussian ï¬elds and harmonic functions. In Proceedings of the 20th International conference on Machine learning (ICML-03), pp. 912â919, 2003.
Marinka Zitnik and Jure Leskovec. Predicting multicellular function through multi-layer tissue networks. Bioinformatics, 33(14):i190âi198, 2017.
| {
"id": "1706.06383"
} |
1710.10723 | Simple and Effective Multi-Paragraph Reading Comprehension | We consider the problem of adapting neural paragraph-level question answering
models to the case where entire documents are given as input. Our proposed
solution trains models to produce well calibrated confidence scores for their
results on individual paragraphs. We sample multiple paragraphs from the
documents during training, and use a shared-normalization training objective
that encourages the model to produce globally correct output. We combine this
method with a state-of-the-art pipeline for training models on document QA
data. Experiments demonstrate strong performance on several document QA
datasets. Overall, we are able to achieve a score of 71.3 F1 on the web portion
of TriviaQA, a large improvement from the 56.7 F1 of the previous best system. | http://arxiv.org/pdf/1710.10723 | Christopher Clark, Matt Gardner | cs.CL | 11 pages, updated a reference | null | cs.CL | 20171029 | 20171107 |
# Simple and Effective Multi-Paragraph Reading Comprehension
# Christopher Clarkâ University of Washington csquared@cs.washington.edu
# Matt Gardner Allen Institute for Artiï¬cial Intelligence mattg@allenai.org
# Abstract
We consider the problem of adapting neural paragraph-level question answering models to the case where entire documents are given as input. Our proposed solution trains models to produce well calibrated conï¬dence scores for their results on individual paragraphs. We sample multiple paragraphs from the doc- uments during training, and use a shared- normalization training objective that encour- ages the model to produce globally correct out- put. We combine this method with a state- of-the-art pipeline for training models on doc- ument QA data. Experiments demonstrate strong performance on several document QA datasets. Overall, we are able to achieve a score of 71.3 F1 on the web portion of Triv- iaQA, a large improvement from the 56.7 F1 of the previous best system.
# 1 Introduction

Teaching machines to answer arbitrary user-generated questions is a long-term goal of natural language processing. For a wide range of questions, existing information retrieval methods are capable of locating documents that are likely to contain the answer. However, automatically extracting the answer from those texts remains an open challenge. The recent success of neural models at answering questions given a related paragraph (Wang et al., 2017b; Tan et al., 2017) suggests neural models have the potential to be a key part of a solution to this problem. Training and testing neural models that take entire documents as input is extremely computationally expensive, so typically this requires adapting a paragraph-level model to process document-level input.

There are two basic approaches to this task. Pipelined approaches select a single paragraph from the input documents, which is then passed to the paragraph model to extract an answer (Joshi et al., 2017; Wang et al., 2017a). Confidence based methods apply the model to multiple paragraphs and returns the answer with the highest confidence (Chen et al., 2017). Confidence methods have the advantage of being robust to errors in the (usually less sophisticated) paragraph selection step, however they require a model that can produce accurate confidence scores for each paragraph. As we shall show, naively trained models often struggle to meet this requirement.

In this paper we start by proposing an improved pipelined method which achieves state-of-the-art results. Then we introduce a method for training models to produce accurate per-paragraph confidence scores, and we show how combining this method with multiple paragraph selection further increases performance.

*Work completed while interning at the Allen Institute for Artificial Intelligence

Our pipelined method focuses on addressing the challenges that come with training on document-level data. We propose a TF-IDF heuristic to select which paragraphs to train and test on. Since annotating entire documents is very expensive, data of this sort is typically distantly supervised, meaning only the answer text, not the answer spans, are known. To handle the noise this creates, we use a summed objective function that marginalizes the model's output over all locations the answer text occurs. We apply this approach with a model design that integrates some recent ideas in reading comprehension models, including self-attention (Cheng et al., 2016) and bi-directional attention (Seo et al., 2016).

Our confidence method extends this approach to better handle the multi-paragraph setting. Previous approaches trained the model on questions paired with paragraphs that are known a priori to contain the answer. This has several downsides: the model is not trained to produce low confidence scores for paragraphs that do not contain an answer, and the training objective does not require confidence scores to be comparable between paragraphs. We resolve these problems by sampling paragraphs from the context documents, including paragraphs that do not contain an answer, to train on. We then use a shared-normalization objective where paragraphs are processed independently, but the probability of an answer candidate is marginalized over all paragraphs sampled from the same document. This requires the model to produce globally correct output even though each paragraph is processed independently.

We evaluate our work on TriviaQA web (Joshi et al., 2017), a dataset of questions paired with web documents that contain the answer. We achieve 71.3 F1 on the test set, a 15 point absolute gain over prior work. We additionally perform an ablation study on our pipelined method, and we show the effectiveness of our multi-paragraph methods on TriviaQA unfiltered and a modified version of SQuAD (Rajpurkar et al., 2016) where only the correct document, not the correct paragraph, is known. We also build a demonstration of our method by combining our model with a re-implementation of the retrieval mechanism used in TriviaQA to build a prototype end-to-end general question answering system1. We release our code2 to facilitate future work in this field.
# 2 Pipelined Method
In this section we propose an approach to train- ing pipelined question answering systems, where a single paragraph is heuristically extracted from the context document(s) and passed to a paragraph- level QA model. We suggest using a TF-IDF based paragraph selection method and argue that a summed objective function should be used to handle noisy supervision. We also propose a re- ï¬ned model that incorporates some recent model- ing ideas for reading comprehension systems.
# 2.1 Paragraph Selection
Our paragraph selection method chooses the paragraph that has the smallest TF-IDF cosine distance with the question. Document frequencies are computed using just the paragraphs within the relevant documents, not the entire corpus. The advantage of this approach is that if a question word is prevalent in the context, for example if the word "tiger" is prevalent in the document(s) for the question "What is the largest living sub-species of the tiger?", greater weight will be given to question words that are less common, such as "largest" or "sub-species". Relative to selecting the first paragraph in the document, this improves the chance of the selected paragraph containing the correct answer from 83.1% to 85.1% on TriviaQA web. We also expect this approach to do a better job of selecting paragraphs that relate to the question since it is explicitly selecting paragraphs that contain question words.

1 documentqa.allenai.org
2 github.com/allenai/document-qa
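A minimal sketch of this selection step using scikit-learn, with document frequencies computed only over the candidate paragraphs as described above; the function and variable names are ours, not taken from the released code.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def select_paragraph(question, paragraphs):
    """Return the paragraph with the smallest TF-IDF cosine distance to the question."""
    # Document frequencies are computed over the candidate paragraphs only.
    vectorizer = TfidfVectorizer(stop_words="english")
    para_vecs = vectorizer.fit_transform(paragraphs)
    q_vec = vectorizer.transform([question])
    sims = cosine_similarity(q_vec, para_vecs).ravel()
    return paragraphs[int(np.argmax(sims))]      # max similarity == min distance

paras = ["Tigers are large cats found in Asia.",
         "The Siberian tiger is the largest living subspecies of the tiger.",
         "Many zoos keep tigers in captivity."]
print(select_paragraph("What is the largest living sub-species of the tiger?", paras))
```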
# 2.2 Handling Noisy Labels
Question: Which British general was killed at Khartoum in 1885? Answer: Gordon Context: In February 1885 Gordon returned to the Sudan to evacuate Egyptian forces. Khartoum came under siege the next month and rebels broke into the city, killing Gor- don and the other defenders. The British public reacted to his death by acclaiming âGordon of Khartoumâ, a saint. However, historians have suggested that Gordon deï¬ed orders and refused to evacuate...
Figure 1: Noisy supervision causes many spans of text that contain the answer, but are not situated in a con- text that relates to the question, to be labelled as correct answer spans (highlighted in red). This risks distract- ing the model from learning from more relevant spans (highlighted in green).
In a distantly supervised setup we label all text spans that match the answer text as being correct. This can lead to training the model to select un- wanted answer spans. Figure 1 contains an exam- ple. To handle this difï¬culty, we use a summed objective function similar to the one from Kadlec et al. (2016), that optimizes the sum of the proba- bilities of all answer spans. The models we con- sider here work by independently predicting the start and end token of the answer span, so we take this approach for both predictions. Thus the ob- jective for the span start boundaries becomes:
−log( Σ_{a∈A} e^{s_a} / Σ_{i=1}^{n} e^{s_i} )
where A is the set of tokens that start an answer span, n is the number of context tokens, and si is a scalar score computed by the model for span i. This optimizes the negative log-likelihood of se- lecting any correct start token. This objective is agnostic to how the model distributes probability
mass across the possible answer spans, thus the model can âchooseâ to focus on only the more rel- evant spans.
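A numerically stable sketch of this summed start-boundary objective, using the log-sum-exp identity; the scores and answer positions below are synthetic.

```python
import numpy as np

def summed_span_start_loss(scores, answer_starts):
    """-log( sum_{a in A} e^{s_a} / sum_i e^{s_i} ), computed stably."""
    def logsumexp(x):
        m = np.max(x)
        return m + np.log(np.sum(np.exp(x - m)))
    return logsumexp(scores) - logsumexp(scores[answer_starts])

scores = np.array([0.2, 3.1, -0.5, 2.8, 0.0])   # model scores for 5 context tokens
answer_starts = np.array([1, 3])                # both positions start an answer span
print(summed_span_start_loss(scores, answer_starts))
```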
# 2.3 Model
Figure 2: High level outline of our model.
We use a model with the following layers (shown in Figure 2):
Embedding: We embed words using pre- trained word vectors. We also embed the char- acters in each word into size 20 vectors which are learned, and run a convolution neural network followed by max-pooling to get character-derived embeddings for each word. The character-level and word-level embeddings are then concatenated and passed to the next layer. We do not update the word embeddings during training. A shared
bi-directional GRU (Cho et al., 2014) is used to map the question and passage embeddings to context- aware embeddings.
Attention: The bi-directional attention mech- anism from the Bi-Directional Attention Flow (BiDAF) model (Seo et al., 2016) is used to build a query-aware context representation. Let hi be
the vector for context word i, qj be the vector for question word j, and nq and nc be the lengths of the question and context respectively. We com- pute attention between context word i and ques- tion word j as:
a_{ij} = w_1 · h_i + w_2 · q_j + w_3 · (h_i ⊙ q_j)

where w_1, w_2, and w_3 are learned vectors and ⊙ is element-wise multiplication. We then compute an attended vector c_i for each context token as:

p_{ij} = e^{a_{ij}} / Σ_{j=1}^{n_q} e^{a_{ij}},    c_i = Σ_{j=1}^{n_q} q_j p_{ij}
We also compute a query-to-context vector q_c:

m_i = max_{1≤j≤n_q} a_{ij},    p_i = e^{m_i} / Σ_{i=1}^{n_c} e^{m_i},    q_c = Σ_{i=1}^{n_c} h_i p_i
The final vector for each token is built by concatenating h_i, c_i, h_i ⊙ c_i, and q_c ⊙ c_i. In our model we subsequently pass the result through a linear layer with ReLU activations.
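The attention layer just described can be sketched in a few lines of NumPy; the weights and feature matrices below are random stand-ins and the function name is ours.

```python
import numpy as np

def bidaf_attention(h, q, w1, w2, w3):
    """Bi-directional attention between context h (n_c, d) and question q (n_q, d)."""
    # a_ij = w1.h_i + w2.q_j + w3.(h_i * q_j)
    a = h @ w1[:, None] + (q @ w2)[None, :] + (h * w3) @ q.T   # (n_c, n_q)
    p = np.exp(a - a.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)                           # softmax over question words
    c = p @ q                                                   # attended (context-to-query) vectors
    m = a.max(axis=1)                                           # query-to-context scores
    pm = np.exp(m - m.max()); pm /= pm.sum()
    qc = pm @ h                                                 # single query-to-context vector
    return np.concatenate([h, c, h * c, qc[None, :] * c], axis=1)

rng = np.random.default_rng(0)
h, q = rng.normal(size=(6, 4)), rng.normal(size=(3, 4))
w1, w2, w3 = rng.normal(size=(3, 4))
print(bidaf_attention(h, q, w1, w2, w3).shape)   # (6, 16): [h; c; h*c; qc*c]
```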
Self-Attention: Next we use a layer of residual self-attention. The input is passed through another bi-directional GRU. Then we apply the same attention mechanism, only now between the passage and itself. In this case we do not use query-to-context attention and we set a_{ij} = −∞ if i = j. As before, we pass the concatenated output through a linear layer with ReLU activations. This layer is applied residually, so this output is additionally summed with the input.
Prediction: In the last layer of our model a bi- directional GRU is applied, followed by a linear layer that computes answer start scores for each token. The hidden states of that layer are con- catenated with the input and fed into a second bi- directional GRU and linear layer to predict answer end scores. The softmax operation is applied to the start and end scores to produce start and end probabilities, and we optimize the negative log- likelihood of selecting correct start and end tokens. Dropout: We also employ variational dropout, where a randomly selected set of hidden units
are set to zero across all time steps during train- ing (Gal and Ghahramani, 2016). We dropout the input to all the GRUs, including the word embed- dings, as well as the input to the attention mecha- nisms, at a rate of 0.2.
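A sketch of the variational dropout used here: a single mask is sampled per sequence and reused at every time step, rather than being resampled per step.

```python
import numpy as np

def variational_dropout(x, rate=0.2, rng=None):
    """Apply one dropout mask across all time steps of a (time, features) input."""
    rng = rng or np.random.default_rng()
    keep = 1.0 - rate
    mask = rng.binomial(1, keep, size=(1, x.shape[-1])) / keep   # shared over time
    return x * mask

x = np.ones((5, 4))                    # 5 time steps, 4 hidden units
print(variational_dropout(x, rate=0.5, rng=np.random.default_rng(0)))
```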
# 3 Conï¬dence Method
We adapt this model to the multi-paragraph setting by using the un-normalized and un-exponentiated (i.e., before the softmax operator is applied) score given to each span as a measure of the modelâs conï¬dence. For the boundary-based models we use here, a spanâs score is the sum of the start and end score given to its start and end token. At test time we run the model on each paragraph and se- lect the answer span with the highest conï¬dence. This is the approach taken by Chen et al. (2017).
Applying this approach without altering how the model is trained is, however, a gamble; the training objective does not require these conï¬- dence scores to be comparable between para- graphs. Our experiments in Section 5 show that in practice these models can be very poor at provid- ing good conï¬dence scores. Table 1 shows some qualitative examples of this phenomenon.
We hypothesize that there are two key reasons a modelâs conï¬dence scores might not be well cal- ibrated. First, for models trained with the soft- max objective, the pre-softmax scores for all spans can be arbitrarily increased or decreased by a con- stant value without changing the resulting softmax probability distribution. As a result, nothing pre- vents models from producing scores that are arbi- trarily all larger or all smaller for one paragraph than another. Second, if the model only sees para- graphs that contain answers, it might become too conï¬dent in heuristics or patterns that are only ef- fective when it is known a priori that an answer exists. For example, in Table 1 we observe that the model will assign high conï¬dence values to spans that strongly match the category of the answer, even if the question words do not match the con- text. This might work passably well if an answer is present, but can lead to highly over-conï¬dent extractions in other cases. Similar kinds of errors have been observed when distractor sentences are added to the context (Jia and Liang, 2017).
We experiment with four approaches to training models to produce comparable conï¬dence scores, shown in the follow subsections. In all cases we will sample paragraphs that do not contain an an- swer as additional training points.
# 3.1 Shared-Normalization
In this approach all paragraphs are processed in- dependently as usual. However, a modiï¬ed objec- tive function is used where the normalization fac- tor in the softmax operation is shared between all paragraphs from the same context. Therefore, the probability that token a from paragraph p starts an answer span is computed as:
e^{s_{ap}} / Σ_{j∈P} Σ_i e^{s_{ij}}
where P is the set of paragraphs that are from the same context as p, and sij is the score given to to- ken i from paragraph j. We train on this objective by including multiple paragraphs from the same context in each mini-batch.
This is similar to simply feeding the model mul- tiple paragraphs from each context concatenated together, except that each paragraph is processed independently until the normalization step. The key idea is that this will force the model to produce scores that are comparable between paragraphs, even though it does not have access to information about the other paragraphs being considered.
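A sketch of how the shared normalization can be computed: each paragraph's scores are produced independently, but the softmax denominator pools all paragraphs from the same context. The names below are illustrative.

```python
import numpy as np

def shared_norm_start_prob(paragraph_scores, target_paragraph, target_token):
    """P(token starts the answer), normalized over every paragraph of the context.

    paragraph_scores : list of 1-D arrays, one score vector per paragraph
                       (each produced by running the model independently)
    """
    all_scores = np.concatenate(paragraph_scores)
    m = all_scores.max()
    denom = np.sum(np.exp(all_scores - m))          # shared across paragraphs
    s = paragraph_scores[target_paragraph][target_token]
    return np.exp(s - m) / denom

scores_p1 = np.array([1.0, 4.0, 0.5])   # paragraph containing the answer
scores_p2 = np.array([2.0, 0.1])        # sampled paragraph without an answer
print(shared_norm_start_prob([scores_p1, scores_p2], target_paragraph=0, target_token=1))
```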
# 3.2 Merge
As an alternative to the previous method, we ex- periment with concatenating all paragraphs sam- pled from the same context together during train- ing. A paragraph separator token with a learned embedding is added before each paragraph. Our motive is to test whether simply exposing the model to more text will teach the model to be more adept at ignoring irrelevant text.
# 3.3 No-Answer Option
We also experiment with allowing the model to se- lect a special âno-answerâ option for each para- graph. First, note that the independent-bounds ob- jective can be re-written as:
−log( e^{s_a} / Σ_{j=1}^{n} e^{s_j} ) − log( e^{g_b} / Σ_{j=1}^{n} e^{g_j} ) = −log( (e^{s_a} e^{g_b}) / (Σ_{i=1}^{n} Σ_{j=1}^{n} e^{s_i} e^{g_j}) )
where sj and gj are the scores for the start and end bounds produced by the model for token j, and a and b are the correct start and end tokens. We have the model compute another score, z, to represent
Question: When is the Members Debate held?
Low Confidence Correct Extraction: Immediately after Decision Time a "Members Debate" is held, which lasts for 45 minutes...
High Confidence Incorrect Extraction: ...majority of the Scottish electorate voted for it in a referendum to be held on 1 March 1979 that represented at least...

Question: How many tree species are in the rainforest?
Low Confidence Correct Extraction: ...plant species is the highest on Earth with one 2001 study finding a quarter square kilometer (62 acres) of Ecuadorian rainforest supports more than 1,100 tree species...
High Confidence Incorrect Extraction: The affected region was approximately 1,160,000 square miles (3,000,000 km2) of rainforest, compared to 734,000 square miles...

Question: Who was Warsz?
Low Confidence Correct Extraction: ...In actuality, Warsz was a 12th/13th century nobleman who owned a village located at the modern...
High Confidence Incorrect Extraction: One of the most famous people born in Warsaw was Maria Sklodowska-Curie, who achieved international...

Question: How much did the initial LM weight in kg?
Low Confidence Correct Extraction: The initial LM model weighed approximately 33,300 pounds (15,000 kg), and...
High Confidence Incorrect Extraction: The module was 11.42 feet (3.48 m) tall, and weighed approximately 12,250 pounds (5,560 kg)...

Question: What do the auricles do?
Low Confidence Correct Extraction: ...many species of lobates have four auricles, gelatinous projections edged with cilia that produce water currents that help direct microscopic prey toward the mouth...
High Confidence Incorrect Extraction: The Cestida are ribbon-shaped planktonic animals, with the mouth and aboral organ aligned in the middle of opposite edges of the ribbon...
Table 1: Examples from SQuAD where a paragraph-level model was less conï¬dent in a correct extraction from one paragraph (left) than in an incorrect extraction from another (right). Even if the passage has no correct answer, the model still assigns high conï¬dence to phrases that match the category the question is asking about. Because the conï¬dence scores are not well-calibrated, this conï¬dence is often higher than the conï¬dence assigned to the correct answer span.
the weight given to a "no-answer" possibility. Our revised objective function becomes:

−log( ((1 − δ)e^z + δ e^{s_a} e^{g_b}) / (e^z + Σ_{i=1}^{n} Σ_{j=1}^{n} e^{s_i} e^{g_j}) )

where δ is 1 if an answer exists and 0 otherwise. If there are multiple answer spans we use the same objective, except the numerator includes the summation over all answer start and end tokens.

We compute z by adding an extra layer at the end of our model. We compute a soft attention over the span start scores, p_i = e^{s_i} / Σ_{j=1}^{n} e^{s_j}, and then take the weighted sum of the hidden states from the GRU used to generate those scores, h_i, giving v_1 = Σ_{i=1}^{n} h_i p_i. We compute a second vector, v_2, in the same way using the end scores. Finally, a step of learned attention is performed on the output of the Self-Attention layer that computes:

a_i = w · h_i,    p_i = e^{a_i} / Σ_{j=1}^{n} e^{a_j},    v_3 = Σ_{i=1}^{n} h_i p_i

where w is a learned weight vector and h_i is the vector for token i.

We concatenate these three vectors and use them as input to a two layer network with an 80 dimensional hidden layer and ReLU activations that produces z as its only output.

# 3.4 Sigmoid

As a final baseline, we consider training models with the sigmoid loss objective function. That is, we compute a start/end probability for each token in the context by applying the sigmoid function to the start/end scores of each token. A cross entropy loss is used on each individual probability. The intuition is that, since the scores are being evaluated independently of one another, they will be comparable between different paragraphs.

# 4 Experimental Setup

# 4.1 Datasets

We evaluate our approach on three datasets: TriviaQA unfiltered (Joshi et al., 2017), a dataset of questions from trivia databases paired with documents found by completing a web search of the questions; TriviaQA web, a dataset derived from TriviaQA unfiltered by treating each question-document pair where the document contains the question answer as an individual training point; and SQuAD (Rajpurkar et al., 2016), a collection of Wikipedia articles and crowdsourced questions.

# 4.2 Preprocessing

We note that for TriviaQA web we do not sub-sample as was done by Joshi et al. (2017), instead training on the full 530k question-document training pairs. We also observed that the metrics for TriviaQA are computed after applying a small amount of text normalization (stripping punctuation, removing articles, etc.) to both the ground truth text and the predicted text. As a result, some spans of text that would have been considered an exact match after normalization were not marked as answer spans during preprocessing, which only detected exact string matches. We fix this issue by labeling all spans of text that would have been considered an exact match by the official evaluation script as an answer span.
In TriviaQA, documents often contain many small paragraphs, so we merge paragraphs to- gether as needed to get paragraphs of up to a tar- get size. We use a maximum size of 400 unless stated otherwise. Paragraph separator tokens with learned embeddings are added between merged paragraphs to preserve formatting information.
# 4.3 Sampling
Our conï¬dence-based approaches are all trained by sampling paragraphs, including paragraphs that do not contain an answer, during training. For SQuAD and TriviaQA web we take the top four paragraphs ranked by TF-IDF score for each question-document pair. We then sample two dif- ferent paragraphs from this set each epoch. Since we observe that the higher-ranked paragraphs are much more likely to contain the context needed to answer the question, we sample the highest ranked paragraph that contains an answer twice as often as the others. For the merge and shared-norm ap- proaches, we additionally require that at least one of the paragraphs contains an answer span.
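A sketch of this per-epoch sampling scheme; the exact weighting is paraphrased from the text (the highest-ranked answer-containing paragraph is given twice the weight of the others), and sampling without replacement only approximates the stated two-to-one frequency.

```python
import numpy as np

def sample_training_paragraphs(ranked_paragraphs, has_answer, rng, k=2):
    """ranked_paragraphs: top-4 paragraphs sorted by TF-IDF score (best first).
    has_answer: parallel list of booleans marking answer-containing paragraphs."""
    weights = np.ones(len(ranked_paragraphs))
    answer_idxs = [i for i, flag in enumerate(has_answer) if flag]
    if answer_idxs:
        weights[answer_idxs[0]] = 2.0   # highest-ranked answer paragraph, sampled more often
    probs = weights / weights.sum()
    return rng.choice(len(ranked_paragraphs), size=k, replace=False, p=probs)

rng = np.random.default_rng(0)
paras = ["p0", "p1", "p2", "p3"]
print(sample_training_paragraphs(paras, [False, True, False, True], rng))
```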
For TriviaQA unï¬ltered, where we have multi- ple documents for each question, we ï¬nd it bene- ï¬cial to use a more sophisticated paragraph rank- ing function. In particular, we use a linear func- tion with ï¬ve features: the TF-IDF cosine dis- tance, whether the paragraph was the ï¬rst in its document, how many tokens occur before it, and the number of case insensitive and case sensitive matches with question words. The function is trained on the distantly supervised objective of se- lecting paragraphs that contain at least one answer span. We select the top 16 paragraphs for each question and sample pairs of paragraphs as before.
# 4.4 Implementation
We train the model with the Adadelta optimizer (Zeiler, 2012) with a batch size 60 for TriviaQA and 45 for SQuAD. At test time we select the most probable answer span of length less than or equal to 8 for TriviaQA and 17 for SQuAD. The GloVe 300 dimensional word vectors released by Pennington et al. (2014) are used for word embeddings. On SQuAD, we use a dimensionality of size 100 for the GRUs and of size 200 for the linear layers employed after each attention mechanism. We find for TriviaQA, likely because there is more data, using a larger dimensionality of 140 for each GRU and 280 for the linear layers is beneficial. During training, we maintain an exponential moving average of the weights with a decay rate of 0.999. We use the weight averages at test time.

Model                           EM      F1
baseline (Joshi et al., 2017)   41.08   47.40
BiDAF                           50.21   56.86
BiDAF + TF-IDF                  53.41   59.18
BiDAF + sum                     56.22   61.48
BiDAF + TF-IDF + sum            57.20   62.44
our model + TF-IDF + sum        61.10   66.04

Table 2: Results on TriviaQA web using our pipelined method. We significantly improve upon the baseline by combining the preprocessing procedures, TF-IDF paragraph selection, the sum objective, and our model design.
# 5 Results
# 5.1 TriviaQA Web
First, we do an ablation study on TriviaQA web to show the effects of our proposed methods for our pipeline model. We start with an implementa- tion of the baseline from (Joshi et al., 2017). Their system selects paragraphs by taking the ï¬rst 400 tokens of each document, uses BiDAF (Seo et al., 2016) as the paragraph model, and selects a ran- dom answer span from each paragraph each epoch to be used in BiDAFâs cross entropy loss function during training. Paragraphs of size 800 are used at test time. As shown in Table 2, our implemen- tation of this approach outperforms the results re- ported by Joshi et al. (2017) signiï¬cantly, likely because we are not subsampling the data. We ï¬nd both TF-IDF ranking and the sum objective to be effective; even without changing the model we achieve state-of-the-art results. Using our reï¬ned model increases the gain by another 4 points.
Next we show the results of our conï¬dence- based approaches. In this setting we group each documentâs text into paragraphs of at most 400 to- kens and rank them using our TF-IDF heuristic. Then we measure the performance of our proposed
approaches as the model is used to independently process an increasing number of these paragraphs and the model's most confident answer is returned. We additionally measure performance on the verified portion of TriviaQA, a small subset of the question-document pairs in TriviaQA web where humans have manually verified that the document contains sufficient context to answer the question. The results are shown in Figure 3.

Figure 3: Results on TriviaQA web (left) and verified TriviaQA web (right) when applying our models to multiple paragraphs from each document. The shared-norm, merge, and no-answer training methods improve the model's ability to utilize more text, with the shared-norm method being significantly ahead of the others on the verified set and tied with the merge approach on the general set.

                                                     All              Verified
Model                                                EM      F1       EM      F1
baseline (Joshi et al., 2017)                        40.74   47.06    49.54   55.80
MEMEN* (Pan et al., 2017)                            43.16   46.90    49.28   55.83
Mnemonic Reader (Hu et al., 2017)                    46.94   52.85    54.45   59.46
Reading Twice for NLU (Weissenborn et al., 2017a)    50.56   56.73    63.20   67.97
S-Norm (ours)                                        66.37   71.32    79.97   83.70

*Results on the dev set

Table 3: Published TriviaQA results. We advance the state of the art by about 15 points on both test sets.
On these datasets even the model trained with- out any of the proposed training methods (ânoneâ) improves as it is allowed to use more text, show- ing it does a passable job at focusing on the cor- rect paragraph. The no-answer option training ap- proach lead to a signiï¬cant improvement, and the shared-norm and merge approach are even better. On the veriï¬ed set, the shared-norm approach is solidly ahead of the other options. This suggests the shared-norm model is better at extracting an- swers when it is clearly stated in the text, but worse at guessing the answer in other cases.
We use the shared-norm approach for evalua- tion on the TriviaQA test set. We found that in- creasing the paragraph size to 800 at test time, and re-training the model on paragraphs of size 600, was slightly beneï¬cial, allowing our model to
reach 66.04 EM and 70.98 F1 on the dev set. We submitted this model to be evaluated on the Triv- iaQA test set and achieved 66.37 EM and 71.32 F1, ï¬rmly ahead of prior work, as shown in Ta- ble 3. Note that human annotators have estimated that only 75.4% of the question-document pairs contain sufï¬cient evidence to answer the ques- tion (Joshi et al., 2017), which suggests we are ap- proaching the upper bound for this task. However, the score of 83.7 F1 on the veriï¬ed set suggests that there is still room for improvement.
# 5.2 TriviaQA Unï¬ltered
Next we apply our conï¬dence methods to Trivi- aQA unï¬ltered. This dataset is of particular inter- est because the system is not told which document contains the answer, so it provides a plausible sim- ulation of attempting to answer a question using a document retrieval system. We show the same graph as before for this dataset in Figure 4. On this dataset it is more important to train the model to produce well calibrated conï¬dence scores. Note the base model starts to lose performance as more paragraphs are used, showing that errors are be- ing caused by the model being overly conï¬dent in incorrect extractions.
[Figure 4 plot: Unfiltered TriviaQA F1 vs. number of paragraphs, with curves for the none, sigmoid, merge, no-answer, and shared-norm models.]
Figure 4: Results for our confidence methods on TriviaQA unfiltered. Here we see a more dramatic difference between these models. The shared-norm approach is the strongest, while the base model starts to lose performance as more paragraphs are used.
| Model | Dev EM | Dev F1 | Test EM | Test F1 |
|---|---|---|---|---|
| none | 71.60 | 80.78 | 72.14 | 81.05 |
| sigmoid | 70.28 | 79.05 | - | - |
| merge | 71.20 | 80.26 | - | - |
| no-answer | 71.51 | 80.71 | - | - |
| shared-norm | 71.16 | 80.23 | - | - |
Table 4: Results on the standard SQuAD dataset. The test scores place our model as 8th on the SQuAD leader board among non-ensemble models3. Training with the proposed multi-paragraph approaches only leads to a marginal drop in performance in this setting.
# 5.3 SQuAD
We additionally evaluate our model on SQuAD. SQuAD questions were not built to be answered independently of their context paragraph, which makes it unclear how effective an evaluation tool they can be for document-level question answering. To assess this we manually label 500 random questions from the training set. We categorize questions as:
1. Context-independent, meaning it can be understood independently of the paragraph.

2. Document-dependent, meaning it can be understood given the article's title. For example, "What individual is the school named after?" for the document "Harvard University".

3. Paragraph-dependent, meaning it can only be understood given its paragraph. For example, "What was the first step in the reforms?".
3as of 10/23/2017
[Figure 5 plot: SQuAD F1 vs. number of paragraphs, with curves for the none, sigmoid, merge, no-answer, and shared-norm models.]
Figure 5: Results for our confidence methods on document-level SQuAD. The base model does poorly in this case, rapidly losing performance once more than two paragraphs are used. While all our approaches had some benefit, the shared-norm model is the strongest, and is the only one to not lose performance as large numbers of paragraphs are used.
We find 67.4% of the questions to be context-independent, 22.6% to be document-dependent, and the remaining 10% to be paragraph-dependent. The many document-dependent questions stem from the fact that questions are frequently about the subject of the document, so the article's title is often sufficient to resolve co-references or ambiguities that appear in the question. Since a reasonably high fraction of the questions can be understood given the document they are from, and to isolate our analysis from the retrieval mechanism used, we choose to evaluate on the document level. We build documents by concatenating all the paragraphs in SQuAD from the same article together into a single document.
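A small sketch of this document construction, assuming SQuAD-style records that carry `title` and `context` fields (the field names are assumptions based on the SQuAD JSON layout, not part of our released code):

```python
from collections import defaultdict
from typing import Dict, List

def build_documents(paragraphs: List[Dict]) -> Dict[str, str]:
    """Group paragraphs by article title and join them into one
    document per article."""
    grouped = defaultdict(list)
    for record in paragraphs:
        grouped[record["title"]].append(record["context"])
    return {title: "\n".join(contexts) for title, contexts in grouped.items()}
```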
The performance of our models given the correct paragraph (i.e., in the standard SQuAD setting), is shown in Table 4. Our paragraph-level model is competitive on this task, and our variations to handle the multi-paragraph setting only cause a minor loss of performance.

We graph the document-level performance in Figure 5. For SQuAD, we find it crucial to employ one of the suggested confidence training techniques. The base model starts to drop in performance once more than two paragraphs are used. However, the shared-norm approach is able to reach a peak performance of 72.37 F1 and 64.08 EM given 15 paragraphs. Given our estimate that 10% of the questions are ambiguous if the paragraph is unknown, our approach appears to have adapted to the document-level task very well.
Finally, we compare the shared-norm model with the document-level result reported by Chen et al. (2017). We re-evaluate our model using the documents used by Chen et al. (2017), which consist of the same Wikipedia articles SQuAD was built from, but downloaded at different dates. The advantage of this dataset is that it does not allow the model to know a priori which paragraphs were filtered out during the construction of SQuAD. The disadvantage is that some of the articles have been edited since the questions were written, so some questions may no longer be answerable. Our model achieves 59.14 EM and 67.34 F1 on this dataset, which significantly outperforms the 49.7 EM reported by Chen et al. (2017).
# 5.4 Discussion
We found that models that have only been trained on answer-containing paragraphs can perform very poorly in the multi-paragraph setting. The results were particularly bad for SQuAD; we think this is partly because the paragraphs are shorter, so the model had less exposure to irrelevant text. In general, we found the shared-norm approach to be the most effective way to resolve this problem. The no-answer and merge approaches were moderately effective, but we note that they do not resolve the scaling problem inherent to the softmax objective we discussed in Section 3, which might be why they lagged behind. The sigmoid objective function reduces the paragraph-level performance considerably, especially on the TriviaQA datasets. We suspect this is because it is vulnerable to label noise, as discussed in Section 2.2.
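The core difference between the per-paragraph softmax and the shared-norm objective is where normalization happens. The numpy sketch below illustrates that distinction on start scores only (the full objective also covers end scores); `start_scores` is a list of per-paragraph score arrays and `answer_flags` marks answer-start tokens. This is an illustrative sketch of the idea, not our training code.

```python
import numpy as np

def independent_softmax_loss(start_scores, answer_flags):
    """Per-paragraph objective: each paragraph's scores are normalized on
    their own, so confidences from different paragraphs are never compared."""
    losses = []
    for scores, flags in zip(start_scores, answer_flags):
        probs = np.exp(scores - scores.max())
        probs /= probs.sum()
        if flags.any():
            losses.append(-np.log(probs[flags].sum()))
    return float(np.mean(losses))

def shared_norm_loss(start_scores, answer_flags):
    """Shared-norm objective: one softmax over all sampled paragraphs of a
    question, forcing confidence scores to be comparable across paragraphs."""
    all_scores = np.concatenate(start_scores)
    all_flags = np.concatenate(answer_flags)
    probs = np.exp(all_scores - all_scores.max())
    probs /= probs.sum()
    return float(-np.log(probs[all_flags].sum()))
```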
# 6 Related Work
Reading Comprehension Datasets. The state of the art in reading comprehension has been rapidly advanced by neural models, in no small part due to the introduction of many large datasets. The first large scale datasets for training neural reading comprehension models used a Cloze-style task, where systems must predict a held out word from a piece of text (Hermann et al., 2015; Hill et al., 2015). Additional datasets including SQuAD (Rajpurkar et al., 2016), WikiReading (Hewlett et al., 2016), MS Marco (Nguyen et al., 2016) and TriviaQA (Joshi et al., 2017) provided more realistic questions. Another dataset of trivia questions, Quasar-T (Dhingra et al., 2017), was introduced recently that uses ClueWeb09 (Callan et al., 2009) as its source for documents. In this work we choose to focus on SQuAD and TriviaQA.

Neural Reading Comprehension. Reading comprehension systems typically use some form of attention (Wang and Jiang, 2016), although alternative architectures exist (Chen et al., 2017; Weissenborn et al., 2017b). Our model follows this approach, but includes some recent advances such as variational dropout (Gal and Ghahramani, 2016) and bi-directional attention (Seo et al., 2016). Self-attention has been used in several prior works (Cheng et al., 2016; Wang et al., 2017b; Pan et al., 2017). Our approach to allowing a reading comprehension model to produce a per-paragraph no-answer score is related to the approach used in the BiDAF-T (Min et al., 2017) model to produce per-sentence classification scores, although we use an attention-based method instead of max-pooling.
Open QA. Open question answering has been the subject of much research, especially spurred by the TREC question answering track (Voorhees et al., 1999). Knowledge bases can be used, such as in (Berant et al., 2013), although the resulting systems are limited by the quality of the knowledge base. Systems that try to answer questions using natural language resources such as YodaQA (Baudiš, 2015) typically use pipelined methods to retrieve related text, build answer candidates, and pick a final output.

Neural Open QA. Open question answering with neural models was considered by Chen et al. (2017), where researchers trained a model on SQuAD and combined it with a retrieval engine for Wikipedia articles. Our work differs because we focus on explicitly addressing the problem of applying the model to multiple paragraphs. A pipelined approach to QA was recently proposed by Wang et al. (2017a), where a ranker model is used to select a paragraph for the reading comprehension model to process.
# 7 Conclusion
We have shown that, when using a paragraph-level QA model across multiple paragraphs, our training method of sampling non-answer containing paragraphs while using a shared-norm objective function can be very beneficial. Combining this with our suggestions for paragraph selection, using the summed training objective, and our model design allows us to advance the state of the art on TriviaQA by a large stride. As shown by our demo, this work can be directly applied to building deep learning powered open question answering systems.
# References
Petr Baudiš. 2015. YodaQA: A Modular Question Answering System Pipeline. In POSTER 2015 - 19th International Student Conference on Electrical Engineering, pages 1156–1165.
Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic Parsing on Freebase from Question-Answer Pairs. In EMNLP.
Jamie Callan, Mark Hoy, Changkuk Yoo, and Le Zhao. 2009. Clueweb09 Data Set.
Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to Answer Open-Domain Questions. arXiv preprint arXiv:1704.00051.
Jianpeng Cheng, Li Dong, and Mirella Lapata. 2016. Long Short-Term Memory-Networks for Machine Reading. arXiv preprint arXiv:1601.06733 .
Kyunghyun Cho, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078.

Bhuwan Dhingra, Kathryn Mazaitis, and William W Cohen. 2017. Quasar: Datasets for Question Answering by Search and Reading. arXiv preprint arXiv:1707.03904.

Yarin Gal and Zoubin Ghahramani. 2016. A Theoretically Grounded Application of Dropout in Recurrent Neural Networks. In Advances in Neural Information Processing Systems.

Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching Machines to Read and Comprehend. In Advances in Neural Information Processing Systems.
Daniel Hewlett, Alexandre Lacoste, Llion Jones, Illia Polosukhin, Andrew Fandrianto, Jay Han, Matthew Kelcey, and David Berthelot. 2016. Wikireading: A Novel Large-scale Language Understanding Task over Wikipedia. arXiv preprint arXiv:1608.03542 .
Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. 2015. The Goldilocks Principle: Reading Children's Books with Explicit Memory Representations. arXiv preprint arXiv:1511.02301.
Minghao Hu, Yuxing Peng, and Xipeng Qiu. 2017. Mnemonic Reader: Machine Comprehension with Iterative Aligning and Multi-hop Answer Pointing .
Robin Jia and Percy Liang. 2017. Adversarial Examples for Evaluating Reading Comprehension Systems. arXiv preprint arXiv:1707.07328.

Mandar Joshi, Eunsol Choi, Daniel S Weld, and Luke Zettlemoyer. 2017. TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension. arXiv preprint arXiv:1705.03551.
Rudolf Kadlec, Martin Schmid, Ondrej Bajgar, and Jan Kleindienst. 2016. Text understanding with the attention sum reader network. arXiv preprint arXiv:1603.01547 .
Sewon Min, Minjoon Seo, and Hannaneh Hajishirzi. 2017. Question Answering through Transfer Learning from Large Fine-grained Supervision Data. arXiv preprint arXiv:1702.02171.
Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. MS MARCO: A Human Generated MAchine Reading COmprehension Dataset. arXiv preprint arXiv:1611.09268 .
Boyuan Pan, Hao Li, Zhou Zhao, Bin Cao, Deng Cai, and Xiaofei He. 2017. MEMEN: Multi-layer Embedding with Memory Networks for Machine Comprehension. arXiv preprint arXiv:1707.09098.

Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global Vectors for Word Representation. In Empirical Methods in Natural Language Processing (EMNLP).
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ Questions for Machine Comprehension of Text. arXiv preprint arXiv:1606.05250 .
Min Joon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2016. Bidirectional Attention Flow for Machine Comprehension. CoRR abs/1611.01603.

Chuanqi Tan, Furu Wei, Nan Yang, Weifeng Lv, and Ming Zhou. 2017. S-Net: From answer extraction to answer generation for machine reading comprehension. arXiv preprint arXiv:1706.04815.
Ellen M Voorhees et al. 1999. The TREC-8 Question Answering Track Report. In Trec.
Shuohang Wang and Jing Jiang. 2016. Machine Comprehension Using Match-LSTM and Answer Pointer. arXiv preprint arXiv:1608.07905 .
Shuohang Wang, Mo Yu, Xiaoxiao Guo, Zhiguo Wang, Tim Klinger, Wei Zhang, Shiyu Chang, Gerald Tesauro, Bowen Zhou, and Jing Jiang. 2017a. R: Reinforced Reader-Ranker for Open-Domain Question Answering. arXiv preprint arXiv:1709.00023.

Wenhui Wang, Nan Yang, Furu Wei, Baobao Chang, and Ming Zhou. 2017b. Gated self-matching networks for reading comprehension and question answering. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 189–198.

Dirk Weissenborn, Tomáš Kočiský, and Chris Dyer. 2017a. Dynamic Integration of Background Knowledge in Neural NLU Systems. arXiv preprint arXiv:1706.02596.

Dirk Weissenborn, Georg Wiese, and Laura Seiffe. 2017b. FastQA: A Simple and Efficient Neural Architecture for Question Answering. arXiv preprint arXiv:1703.04816.

Matthew D Zeiler. 2012. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701. | {
"id": "1608.07905"
} |
1710.10368 | Deep Generative Dual Memory Network for Continual Learning | Despite advances in deep learning, neural networks can only learn multiple
tasks when trained on them jointly. When tasks arrive sequentially, they lose
performance on previously learnt tasks. This phenomenon called catastrophic
forgetting is a fundamental challenge to overcome before neural networks can
learn continually from incoming data. In this work, we derive inspiration from
human memory to develop an architecture capable of learning continuously from
sequentially incoming tasks, while averting catastrophic forgetting.
Specifically, our contributions are: (i) a dual memory architecture emulating
the complementary learning systems (hippocampus and the neocortex) in the human
brain, (ii) memory consolidation via generative replay of past experiences,
(iii) demonstrating advantages of generative replay and dual memories via
experiments, and (iv) improved performance retention on challenging tasks even
for low capacity models. Our architecture displays many characteristics of the
mammalian memory and provides insights on the connection between sleep and
learning. | http://arxiv.org/pdf/1710.10368 | Nitin Kamra, Umang Gupta, Yan Liu | cs.LG | null | null | cs.LG | 20171028 | 20180525

arXiv:1710.10368v2 [cs.LG] 25 May 2018
# Deep Generative Dual Memory Network for Continual Learning
# Nitin Kamra 1 Umang Gupta 1 Yan Liu 1
# Abstract
Despite advances in deep learning, neural networks can only learn multiple tasks when trained on them jointly. When tasks arrive sequentially, they lose performance on previously learnt tasks. This phenomenon called catastrophic forgetting is a fundamental challenge to overcome before neural networks can learn continually from incoming data. In this work, we derive inspiration from human memory to develop an architecture capable of learning continuously from sequentially incoming tasks, while averting catastrophic forgetting. Specifically, our contributions are: (i) a dual memory architecture emulating the complementary learning systems (hippocampus and the neocortex) in the human brain, (ii) memory consolidation via generative replay of past experiences, (iii) demonstrating advantages of generative replay and dual memories via experiments, and (iv) improved performance retention on challenging tasks even for low capacity models. Our architecture displays many characteristics of the mammalian memory and provides insights on the connection between sleep and learning.
# 1. Introduction
Many machine learning models, when trained sequentially on tasks, forget how to perform previously learnt tasks. This phenomenon, called catastrophic forgetting, is an important challenge to overcome in order to enable systems to learn continuously. In the early stages of investigation, McCloskey & Cohen (1989) suggested the underlying cause of forgetting to be the distributed shared representation of tasks via network weights. Subsequent works attempted to reduce representational overlap between input representations via activation sharpening algorithms (Kortge, 1990), orthogonal recoding of inputs (Lewandowsky, 1991) and orthogonal activations at all hidden layers (McRae & Hetherington, 1993; French, 1994). Recently, activations like maxout and dropout (Goodfellow et al., 2013) and local winner-takes-all (Srivastava et al., 2013) have been explored to create sparsified feature representations. But natural cognitive systems, e.g. mammalian brains, are also connectionist in nature and yet they only undergo gradual systematic forgetting. Frequently and recently encountered tasks tend to survive much longer in memory, while those rarely encountered are slowly forgotten. Hence shared representations may not be the root cause of the problem. More recent approaches have targeted slowing down learning on network weights which are important for previously learnt tasks. Kirkpatrick et al. (2017) have used a Fisher information matrix based regularizer to slow down learning on network weights which correlate with previously acquired knowledge. Zenke et al. (2017) have employed path integrals of loss-derivatives to slow down learning on weights important for the previous tasks. Progressive neural networks (Rusu et al., 2016) and Pathnets (Fernando et al., 2017) directly freeze important pathways in neural networks, which eliminates forgetting altogether but requires growing the network after each task and can cause the architecture complexity to grow with the number of tasks. Li & Hoiem (2017) have evaluated freezing weights in earlier layers of a network and fine tuning the rest for multiple tasks. These methods outperform sparse representations but may not be explicitly targeting the cause of catastrophic forgetting.
1Department of Computer Science, University of Southern California, Los Angeles, CA, USA. Correspondence to: Nitin Kamra <nkamra@usc.edu>.
An important assumption for successful gradient-based learning is to observe iid samples from the joint distribution of all tasks to be learnt. Since sequential learning systems violate this assumption, catastrophic forgetting is inevitable. So a direct approach would be to store previously seen samples and replay them along with new samples in appropriate proportions to restore the iid sampling assumption (Lopez-Paz et al., 2017). This experience replay approach has been adopted by maintaining a fixed-size episodic memory of exemplars which are either directly replayed while learning e.g. in iCaRL (Rebuffi et al., 2017) or indirectly used to modify future gradient updates to the system e.g. in GEM (Lopez-Paz et al., 2017) to mitigate forgetting on previously seen tasks. However, choosing to store samples from previous tasks is challenging since it requires determining how many samples need to be stored, which samples are most representative of a task, and which
samples to discard as new tasks arrive (Lucic et al., 2017). We propose that this problem can be solved by maintaining a generative model over samples which would automatically provide the most frequently encountered samples from the distribution learnt so far. This is also feasible with limited total memory and avoids explicitly determining which and how many samples should be stored and/or discarded per task. Previous non-generative approaches to experience replay e.g. pseudo-pattern rehearsal (Robins, 2004) have proposed to preserve neural networks' learnt mappings by uniformly sampling random inputs and their corresponding outputs from networks and replaying them along with new task samples. These approaches have only been tested in small binary input spaces and our experiments show that sampling random inputs in high-dimensional spaces (e.g. images) does not preserve the learnt mappings.
Neuroscientific evidence suggests that experience replay of patterns has also been observed in the human brain during sleep and waking rest (McClelland et al., 1995; O'Neill et al., 2010). Further, humans have evolved mechanisms to separately learn new incoming tasks and consolidate them with previous knowledge to avert catastrophic forgetting (McClelland et al., 1995; French, 1999). The widely acknowledged complementary learning systems theory (McClelland et al., 1995; Kumaran et al., 2016) suggests that this separation has been achieved in the human brain via evolution of two separate areas: (a) the neocortex, which is a long term memory specializing in consolidating new information with previous knowledge to gradually learn the joint structure of all tasks, and (b) the hippocampus, which acts as a temporary memory to rapidly learn new tasks and then slowly transfers the knowledge to the neocortex after acquisition.

In this paper, we propose a dual-memory architecture for learning tasks sequentially while averting catastrophic forgetting. Our model comprises two generative models: a short-term memory (STM) to emulate the human hippocampal system and a long term memory (LTM) to emulate the neocortical learning system. The STM learns new tasks without interfering with previously learnt tasks in the LTM. The LTM stores all previously learnt tasks and aids the STM in learning tasks similar to previously seen tasks. During sleep/down-time, the STM generates and transfers samples of learnt tasks to the LTM. These are gradually consolidated with the LTM's knowledge base of previous tasks via generative replay. Our model exploits the strengths of deep generative models, experience replay and complementary learning systems literature. We demonstrate its performance experimentally in averting catastrophic forgetting by sequentially learning multiple tasks. Moreover, our experiments shed light on some characteristics of human memory as observed in the psychology and neuroscience literature.
# 2. Problem Description
Formally, our problem setting is characterized by a set of tasks T, to be learnt by a parameterized model. Note that we use the phrase model and neural network architecture interchangeably. In this work, we mainly consider supervised learning tasks, i.e. task t ∈ T has training samples {X_t, Y_t} = {x^t_i, y^t_i}_{i=1}^{N_t} with x^t_i ∈ X and y^t_i ∈ Y, but our model easily generalizes to unsupervised learning settings. Samples for each task are drawn iid from an (unknown) data generating distribution P_t associated with the task, i.e. {x^t_i, y^t_i} ∼ P_t ∀i ∈ [N_t], but the distributions {P_t}_{t∈T} can be completely different from each other. The tasks arrive sequentially and the total number of tasks T = |T| is not known a priori. Note that the full sequence of samples seen by the architecture is not sampled iid from the joint distribution of all samples. The architecture observes the task descriptor and the data {t, X_t, Y_t} for each task while training sequentially. It can be evaluated at any time on a test sample {t, x^t} to predict its label y^t where {x^t, y^t} ∼ P_t after task t has been observed. Our goal is to learn these tasks sequentially while avoiding catastrophic forgetting and achieve a test accuracy close to that of a model which was jointly trained on all tasks.
Finite memory: We allow a limited storage for algorithms to store or generate samples while learning. The storage size is limited to N_max and is usually smaller than the total number of samples Σ_t N_t. Hence, just storing all training samples and reusing them is infeasible.
Evaluation metrics: After training on each task, we evaluate models on separate test sets for each task. This gives us a matrix A ∈ R^{T×T} with A_{i,j} being the test accuracy on task j after training on task i. Following (Lopez-Paz et al., 2017), we evaluate algorithms on the following metrics – Average accuracy (ACC) achieved across all tasks and Backward Transfer (BWT):
$$\mathrm{ACC} = \frac{1}{T}\sum_{i=1}^{T} A_{T,i}, \qquad \mathrm{BWT} = \frac{1}{T-1}\sum_{i=1}^{T-1}\left(A_{T,i} - A_{i,i}\right)$$
Backward transfer (BWT) measures the influence of task t on a previously learnt task τ. This is generally negative since learning new tasks sequentially causes the model to lose performance on previous tasks. A large negative BWT represents catastrophic forgetting. An ideal continual learning algorithm should achieve maximum ACC while having least negative (or positive) BWT.
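Both metrics follow directly from the accuracy matrix; a small numpy sketch (assuming a 0-indexed T×T array) is:

```python
import numpy as np

def continual_learning_metrics(A: np.ndarray):
    """Compute ACC and BWT from a T x T accuracy matrix A, where A[i, j]
    is the test accuracy on task j after training on task i (0-indexed)."""
    T = A.shape[0]
    acc = A[T - 1, :].mean()
    bwt = np.mean([A[T - 1, i] - A[i, i] for i in range(T - 1)])
    return acc, bwt

# Illustrative example with three tasks, where accuracy on earlier tasks
# drops after training on later ones (hypothetical numbers).
A = np.array([[0.95, 0.10, 0.10],
              [0.60, 0.94, 0.10],
              [0.40, 0.55, 0.93]])
print(continual_learning_metrics(A))  # ACC ~= 0.627, BWT ~= -0.47
```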
# 3. Deep Generative Dual Memory Network
# 3.1. Deep Generative Replay
We present a generative experience replay algorithm to learn from sequentially arriving samples. We first introduce a
[Figure 1 schematic: DGM training (the generator learns from samples and generative replay; the learner learns from reconstructed samples and labels), consolidation with generated samples, and DGM testing.]
Figure 1: Deep Generative Replay to train a Deep Generative Memory
sub-model called the Deep Generative Memory (DGM)1 with three elements: (i) a generative model (the generator G), (ii) a feedforward network (the learner L), and (iii) a dictionary (Ddgm) with task descriptors of learnt tasks and the number of times they were encountered. Though most previous works (Kirkpatrick et al., 2017; Lopez-Paz et al., 2017; Zenke et al., 2017) and our algorithm involve usage of task descriptors t in some form, our architecture also works when they are either unavailable, non-integral or just an inseparable part of the input xt (see Appendix A). We choose variational autoencoder (VAE) (Kingma & Welling, 2014) for the generator, since our generative model requires reconstruction capabilities (see section 3.2) but can also work with other kinds of generative models (see section 5).
We update a DGM with samples from (potentially multiple) new tasks using our algorithm Deep Generative Replay (DGR). The pseudocode is shown in algorithm 1 and visualized in figure 1. DGR essentially combines the new incoming samples (X, Y) with its own generated samples from previous tasks and relearns jointly on these samples. Given new incoming samples (X, Y), DGR computes the fraction of samples to use from incoming samples (η_tasks) and the fraction to preserve from previous tasks (η_gen) according to the number of samples seen so far (i.e. age of DGM). If needed, the incoming samples are downsampled while still allocating at least a minimum fraction κ of the memory to them (lines 3–16). This ensures that as the DGM saturates with tasks over time, new tasks are still learnt at the cost of gradually losing performance on the least recent previous tasks. This is synonymous to how learning slows down in humans as they age but they still continue to learn while forgetting old things gradually (French, 1999). Next, DGR generates samples of previously learnt tasks (X_gen, Y_gen) using the generator and learner, transfers the task descriptors of samples in (X, Y) to its own dictionary D_dgm and updates its age (lines 17–21). It then trains the
Algorithm 1 Deep Generative Replay
1: Input: Current params and age of DGM, new samples: (X, Y), dictionary for new samples: D_tasks, minimum fraction: κ, memory capacity: N_max
2: Output: New parameters of DGM
{Compute number of samples}
3: N_tasks = |X|
4: N_gen = age
5: if |X| + age > N_max then
6:   η_tasks = max(κ, |X| / (|X| + age))
7:   N_tasks = η_tasks × N_max
8:   N_gen = N_max − N_tasks
9: end if
10: N_total = N_tasks + N_gen
{Subsample X, Y if needed}
11: if N_tasks < |X| then
12:   X_tasks, Y_tasks = Draw N_tasks samples from X, Y
13: else
14:   N_tasks, N_gen = |X|, N_total − |X|
15:   X_tasks, Y_tasks = X, Y
16: end if
{Generate samples from previous tasks}
17: X_gen = Draw N_gen samples from G
18: Y_gen = L(X_gen)
19: X_tr, Y_tr = concat(X_tasks, X_gen), concat(Y_tasks, Y_gen)
20: Add task descriptors from D_tasks to D_dgm
21: age = age + N_total
{Train DGM}
22: Train generator G on X_tr
23: X_recon = Reconstruct X_tasks from generator G
24: X_tr = concat(X_recon, X_gen)
25: Train learner L on (X_tr, Y_tr)

¹ We call this a memory because of its weights and learning capacity, not due to any recurrent connections.
generator on the total training samples Xtr, reconstructs the new samples via the trained generator as Xrecon (hence we use a VAE) and then trains the learner on resulting samples Xtr = concat(Xrecon, Xgen) and their labels Ytr (lines 22â 25). Doing this ï¬nal reconstruction provides robustness to noise and occlusion (section 5).
Ideas similar to DGR have recently been proposed by Mocanu et al. (2016) and Shin et al. (2017) independently, but they do not describe balancing new and generated samples and cannot recognize repeated tasks (section 7.1 in appendix A). Also generative replay without a dual memory architecture is costly to train (section 4.2) and a lack of reconstruction for new samples makes their representations less robust to noise and occlusions (section 5).
# 3.2. Dual memory networks
Though DGR is a continual learning algorithm on its own, our preliminary experiments showed that it is slow and inaccurate. To balance the conflicting requirements of quick acquisition of new tasks and performance retention on previously learnt tasks, we propose a dual memory network to combat forgetting. Our architecture (DGDMN), shown in figure 2, comprises a large DGM called the long-term memory (LTM), which stores information of all previously learnt tasks like the neocortex, and a short-term memory (STM), which behaves similar to the hippocampus and learns new incoming tasks quickly without interference from previous tasks. The STM is a collection of n_STM small, dedicated deep generative memories (called short-term task memories – STTMs), which can each learn one unique task.
While training on an incoming task, if it is already in an STTM, the same STTM is retrained on it, otherwise a fresh STTM is allocated to the task. Additionally, if the task has been previously seen and consolidated into the LTM, then the LTM reconstructs the incoming samples for that task using the generator (hence we use a VAE), predicts labels for the reconstructions using its learner and sends these newly generated samples to the STTM allocated to this task. This provides extra samples on tasks which have been learnt previously and helps to learn them better, while also preserving the previous performance on that task to some extent. Once all (n_STM) STTMs are exhausted, the architecture sleeps (like humans) to consolidate all tasks into the LTM and free up the STTMs for new tasks. While asleep, the STM generates and sends samples of learnt tasks to the LTM, where these are consolidated via deep generative replay (see figure 2).

While testing on task t (even intermittently between tasks), if any STTM currently contains task t, it is used to predict the labels, else the prediction is deferred to the LTM. This allows predicting on all tasks seen up till now (including the most recent ones) without sleeping. Finally note that DGR keeps track of task descriptors in dictionaries but does not use them for learning. DGDMN only uses task descriptors to recognize whether a task has been previously observed and/or the memory in which a task currently resides. This can be relaxed by using the reconstruction error from generators as a proxy for recognition (see appendix A). Hence DGDMN still works in the absence of task descriptors.
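The overall control flow can be sketched as below; the `make_dgm` factory and the DGM methods (train, generate, reconstruct, predict, knows, deep_generative_replay) are hypothetical stand-ins for the components of section 3.1, not the actual implementation.

```python
import numpy as np

class DGDMN:
    """Schematic controller for the dual-memory architecture."""

    def __init__(self, make_dgm, n_stm):
        self.make_dgm = make_dgm
        self.ltm = make_dgm()
        self.sttms = {}              # task descriptor -> dedicated STTM
        self.n_stm = n_stm

    def learn_task(self, task_id, X, Y):
        if task_id not in self.sttms:
            if len(self.sttms) == self.n_stm:   # all STTMs busy: consolidate first
                self.sleep()
            self.sttms[task_id] = self.make_dgm()
        sttm = self.sttms[task_id]
        if self.ltm.knows(task_id):             # task seen before: LTM lends samples
            X_extra = self.ltm.reconstruct(X)
            Y_extra = self.ltm.predict(X_extra)
            X, Y = np.concatenate([X, X_extra]), np.concatenate([Y, Y_extra])
        sttm.train(X, Y)

    def sleep(self):
        # Each STTM generates samples of its task; the LTM consolidates them
        # together with replay of its own previous tasks (deep generative replay).
        for task_id, sttm in self.sttms.items():
            X_gen = sttm.generate()
            self.ltm.deep_generative_replay(X_gen, sttm.predict(X_gen), task_id)
        self.sttms.clear()

    def predict(self, task_id, X):
        memory = self.sttms.get(task_id, self.ltm)
        return memory.predict(X)
```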
# 4. Experiments
We perform experiments to demonstrate forgetting on sequential image classification tasks. We briefly describe our datasets here (details in appendix B): (a) Permnist is a catastrophic forgetting benchmark (Kirkpatrick et al., 2017) and each task contains a fixed permutation of pixels on MNIST images, (b) Digits dataset involves classifying a single MNIST digit per task, (c) TDigits is a transformed variant of MNIST similar to Digits but with 40 tasks for long task sequences, (d) Shapes contains several geometric shape classification tasks, and (e) Hindi contains a sequence of 8 tasks with Hindi language consonant recognition.

We compare DGDMN with several baselines for catastrophic forgetting, while choosing at least one from each category: representational overlap, learning slowdown and experience replay. These are briefly described here (implementation and hyperparameter details in appendix B): (a) Feedforward neural networks (NN): To characterize forgetting in the absence of any prevention mechanism and as a reference for other approaches, (b) Neural nets with dropout (DropNN): Goodfellow et al. (2013) suggested using dropout as a means to prevent representational overlaps and pacify catastrophic forgetting, (c) Pseudopattern Rehearsal (PPR): A non-generative approach to experience replay (Robins, 2004), (d) Elastic Weight Consolidation (EWC): Kirkpatrick et al. (2017) proposed using the Fisher Information Matrix for task-specific learning slowdown of weights in a neural network, and (e) Deep Generative Replay (DGR): We train only the LTM from DGDMN to separate the effects of deep generative replay and dual memory architecture. This is partly similar to Shin et al. (2017).

In our preliminary experiments, we observed that large over-parameterized networks can more easily adapt to sequentially incoming tasks, thereby partly mitigating catastrophic forgetting. So we have chosen network architectures which have to share all their parameters appropriately amongst the various tasks in a dataset to achieve reasonable joint accuracy. This allows us to evaluate algorithms carefully while ignoring the benefits provided by overparameterization.
# 4.1. Accuracy and Forgetting curves
We trained DGDMN and all baselines sequentially on the image classification tasks of Permnist, Digits, Shapes and
[Figure 2 schematic: DGDMN training, in which the LTM provides reconstructed samples to aid the STM on previously seen tasks and consolidates via Deep Generative Replay, and DGDMN testing.]
Figure 2: Deep Generative Dual Memory Network (DGDMN)
Hindi datasets (separately). Due to space constraints, we show results on the Shapes and Hindi datasets in appendix A. The classification accuracy on a held out test set for each task, after training on the tth task, has been shown in figures 3 and 4. We used the same network architecture for NN, PPR, EWC, learner in DGR, and learner in the LTM of DGDMN for a given dataset. DropNN had intermediate dropouts after hidden layers (details in appendix B).

We observe from figures 3 and 4 that NN and DropNN forget catastrophically while learning and perform similarly. We verified the same on other datasets in Appendix A. EWC performs better than NN and DropNN, but rapidly slows down learning on many weights and effectively stagnates after Task 3 (e.g. see Tasks 5 and 6 in figure 3d). The learning slowdown on weights hinders EWC from reusing those weights later to jointly discover common structures between tasks. Note that the networks do have the capacity to learn all tasks and our generative replay based algorithms DGR and DGDMN indeed learn all tasks sequentially with the same learner networks.

Further, we observed heavy forgetting on Digits (figure 4) for most baselines, which is expected because all samples in the tth task have a single label (t) and the tth task can be learnt on its own by setting the tth bias of the final softmax layer to be high and the other biases to be low. Such sequential tasks cause networks to forget catastrophically. We observed that NN, DropNN, PPR and EWC learnt only the task being trained on and forgot all previous knowledge immediately. Sometimes, we also observed saturation due to the softmax bias being set very high and then being unable to recover from it. PPR showed severe saturation since its replay prevented it from coming out of the saturation.
DGR and DGDMN still retain performance on all tasks of Digits, since they replay generated samples from previous tasks. The average forgetting on all tasks ∈ {1, . . . , t}, after training on the tth task (for both Digits and Permnist) is shown in figure 5. For absolute reference, the accuracy of NN by training it jointly on all tasks up till the tth task has also been shown for each t. This also shows that DGR and DGDMN consistently outperform baselines in terms of retained average accuracy. In figure 5b, NN, DropNN, PPR and EWC follow nearly overlapping curves (acc ≈ 1/t) since they are only able to learn one task at a time. Though PPR also involves experience replay, it is not able to preserve its learnt mapping by randomly sampling points from its domain and hence forgets catastrophically. These observations substantiate our claim that a replay mechanism must be generative and model the input distribution accurately. We observed similar results on other datasets (appendix A).
Table 1: Average accuracies for all algorithms.
| ALGORITHM | DIGITS | PERMNIST | SHAPES | HINDI |
|---|---|---|---|---|
| NN | 0.1 | 0.588 | 0.167 | 0.125 |
| DROPNN | 0.1 | 0.59 | 0.167 | 0.125 |
| PPR | 0.1 | 0.574 | 0.167 | 0.134 |
| EWC | 0.1 | 0.758 | 0.167 | 0.125 |
| DGR | 0.596 | 0.861 | 0.661 | 0.731 |
| DGDMN | 0.818 | 0.831 | 0.722 | 0.658 |
Table 2: Backward transfer for all algorithms.
| ALGORITHM | DIGITS | PERMNIST | SHAPES | HINDI |
|---|---|---|---|---|
| NN | -0.778 | -0.434 | -0.4 | -1.0 |
| DROPNN | -1.0 | -0.43 | -0.8 | -1.0 |
| PPR | -0.444 | -0.452 | -0.2 | -0.989 |
| EWC | -1.0 | -0.05 | -1.0 | -1.0 |
| DGR | -0.425 | -0.068 | -0.288 | -0.270 |
| DGDMN | -0.15 | -0.075 | -0.261 | -0.335 |
We show the ï¬nal average accuracies (ACC) and backward transfer (BWT) between tasks in tables 1 and 2 respectively.
(a) NN (b) DropNN (c) PPR (d) EWC (e) DGR (f) DGDMN
Figure 3: Accuracy curves for Permnist (x: tasks seen, y: classification accuracy on task).
NN, DropNN, PPR and EWC get near random accuracies on all datasets except Permnist due to catastrophic forgetting. DGDMN and DGR perform similarly and outperform other baselines on ACC while having the least negative BWT. Since backward transfer is a direct measure of forgetting, this also shows that we effectively mitigate catastrophic forgetting and avoid inter-task interference. We point out that datasets like Digits should be considered important benchmarks for continual learning since they have low correlation between samples of different tasks and promote overfitting to the new incoming task thereby causing catastrophic forgetting. Being able to retain performance on such task sequences is a strong indicator of the effectiveness of a continual learning algorithm.
# 4.2. Connections to complementary learning systems and sleep
To differentiate between DGDMN and DGR, we trained both of them on a long sequence of 40 tasks from TDigits dataset. We limited N_max to 120,000 samples for this task to explore the case where the LTM in DGDMN (DGM in DGR) cannot regenerate many samples and has to forget some tasks. At least κ = 0.05 fraction of memory was ensured for new task samples and consolidation in DGDMN happened after n_STM = 5 tasks.
The average forgetting curves are plotted in figure 6a and show that forgetting is gradual and not catastrophic. DGDMN retains more accuracy on all tasks as compared to DGR and is faster to train as shown by figure 6c. This is because DGR consolidates its DGM after every task. Since LTM is a large memory and requires more samples to consolidate, it trains slower. Further, the DGM's self-generated slightly erroneous samples compound errors quite fast. On the other hand, DGDMN uses small STTMs to learn single tasks faster and with low error. Consequently, the LTM consolidates less often and sees more accurate samples, hence its error accumulates much slower. Lastly, DGDMN stays around 90% average accuracy on the most recently observed 10 tasks (figure 6b), whereas DGR propagates errors too fast and also fails on this metric eventually.
A dual memory architecture and periodic sleep have emerged naturally in humans as a scalable design choice. Though sleeping is a dangerous behavior for any organism due to the risk of being attacked by a predator, it has still survived eons of evolution (Joiner, 2016) and most organisms with even a slightly developed nervous system (centralized or diffuse) still exhibit either sleep or light-resting behavior (Nath et al., 2017). This experiment partly sheds light on the importance of a dual memory architecture intertwined with periodic sleep, without which learning would be highly time consuming and short lived (as in DGR).
# 5. Analysis and discussion
We next show that DGDMN shares some remarkable characteristics with the human memory and present a discussion of some relevant ideas. Due to space constraints, we have deferred some visualizations of the learnt latent structures to appendix A. The hyperparameters of DGDMN (κ and n_STM) admit intuitive interpretations and can be tuned with simple heuristics (see appendix B).
(a) NN (b) DropNN (c) PPR (d) EWC (e) DGR (f) DGDMN
Figure 4: Accuracy curves for Digits (x: tasks seen, y: classification accuracy on task).
(a) Permnist (b) Digits
Figure 5: Forgetting curves (x: tasks seen, y: avg classification accuracy on tasks seen).
Resilience to noise and occlusion: We have used a VAE to be able to reconstruct all samples, which helps to recognize task examples (appendix A) and also makes our model resilient to noise, distortion and occlusion. We tested our LTM model and a NN model by jointly training on uncorrupted Digits data and testing on noisy and occluded images. Figure 7 shows that the LTM is more robust to noise and occlusion due to its denoising reconstructive properties.
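A small sketch of this robustness test follows; `generator.reconstruct` and `learner.predict` are hypothetical methods, and the corruption routines are only similar in spirit to the ones used for Figure 7.

```python
import numpy as np

def add_gaussian_noise(images, std, rng=np.random):
    """Corrupt images (values in [0, 1]) with additive Gaussian noise."""
    return np.clip(images + rng.normal(0.0, std, images.shape), 0.0, 1.0)

def occlude(images, factor, rng=np.random):
    """Zero out a square patch covering roughly `factor` of each HxW image."""
    images = images.copy()
    h, w = images.shape[-2:]
    side = int(round(np.sqrt(factor) * min(h, w)))
    for img in images:
        r = rng.randint(0, h - side + 1)
        c = rng.randint(0, w - side + 1)
        img[r:r + side, c:c + side] = 0.0
    return images

def robust_predict(generator, learner, corrupted):
    """Denoise through the generator's reconstruction before classifying."""
    return learner.predict(generator.reconstruct(corrupted))
```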
The choice of underlying generative model: Our architecture is agnostic to the choice of the underlying generative model as long as the generator can generate reliable samples and reconstruct incoming samples accurately. Hence, apart from VAEs, variants of Generative Adversarial Networks like BiGANs (Donahue et al., 2017), ALI (Dumoulin et al., 2017) and AVB (Mescheder et al., 2017) can be used depending on the modeled domain.

Connections to knowledge distillation: Previous works on (joint) multitask learning have also proposed approaches to learn individual tasks with small networks and then "distilling" them jointly into a larger network (Rusu et al., 2015). Such distillation can sometimes improve performance on individual tasks if they share structure and at other times mitigate inter-task interference due to refinement of learnt functions while distilling (Parisotto et al., 2016). Similarly, due to refinement and compression during consolidation phase, DGDMN is also able to learn joint task structure effectively while mitigating interference between tasks.
Figure 6: Accuracy and training time for DGDMN and DGR on TDigits: (a) Accuracy on tasks seen so far, (b) Accuracy on last 10 tasks seen, (c) Training time
Figure 7: LTM is robust to noisy and occluded images and exhibits smoother degradation in classification accuracy because of its denoising reconstructive properties: (a) LTM reconstruction from noisy and occluded digits, (b) Classification accuracy with increasing Gaussian noise, and (c) Classification accuracy with increasing occlusion factor.
Learning from streaming data: We have presently formulated our setup with task descriptors to compare it with existing approaches in the continual learning literature, but we emphasize that having no dependence on task descriptors is an essential step to learn continually from streaming data. Our approach allows online recognition of task samples via a reconstructive generative model and is applicable in domains with directly streaming data without any task descriptors unlike most previous approaches which make explicit use of task descriptors (Zenke et al., 2017; Kirkpatrick et al., 2017; Rebuffi et al., 2017; Lopez-Paz et al., 2017) (see appendix A). This would allow DGDMN to be used for learning policies over many tasks via reinforcement learning without explicit replay memories, and we plan to explore this in future work.

Approaches based on synaptic consolidation: Though our architecture draws inspiration from complementary learning systems and experience replay in the human brain, there is also neuroscientific evidence for synaptic consolidation in the human brain like in (Kirkpatrick et al., 2017) and (Zenke et al., 2017). It might be interesting to explore how synaptic consolidation can be incorporated in our dual memory architecture without causing stagnation and we leave this to future work.
# 6. Conclusion
In this work, we have developed a continual learning architecture to avert catastrophic forgetting. Our dual memory architecture emulates the complementary learning systems in the human brain and maintains a consolidated long-term memory via generative replay of past experiences. We have shown that generative replay performs the best for long-term performance retention and scales well along with a dual memory architecture via our experiments. Moreover, our architecture displays significant parallels with the human memory system and provides useful insights about the connection between sleep and learning in humans.
# References
Cepeda, Nicholas J, Pashler, Harold, Vul, Edward, Wixted, John T, and Rohrer, Doug. Distributed practice in verbal recall tasks: A review and quantitative synthesis. Psychological Bulletin, 132(3):354, 2006.

Kirkpatrick, James, Pascanu, Razvan, Rabinowitz, Neil, Veness, Joel, Desjardins, Guillaume, Rusu, Andrei A, Milan, Kieran, Quan, John, Ramalho, Tiago, Grabska-Barwinska, Agnieszka, et al. Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences, 114(13):3521–3526, 2017.

Donahue, Jeff, Krähenbühl, Philipp, and Darrell, Trevor. Adversarial feature learning. In International Conference on Learning Representations, 2017.
Dumoulin, Vincent, Belghazi, Ishmael, Poole, Ben, Lamb, Alex, Arjovsky, Martin, Mastropietro, Olivier, and Courville, Aaron. Adversarially learned inference. In International Conference on Learning Representations, 2017.
Kortge, Chris A. Episodic memory in connectionist networks. In Proceedings of the 12th Annual Conference of the Cognitive Science Society, volume 764, pp. 771. Erlbaum, 1990.

Kumaran, Dharshan, Hassabis, Demis, and McClelland, James L. What learning systems do intelligent agents need? Complementary learning systems theory updated. Trends in Cognitive Sciences, 20(7):512–534, 2016.

Fernando, Chrisantha, Banarse, Dylan, Blundell, Charles, Zwols, Yori, Ha, David, Rusu, Andrei A, Pritzel, Alexander, and Wierstra, Daan. Pathnet: Evolution channels gradient descent in super neural networks. arXiv preprint arXiv:1701.08734, 2017.

Lewandowsky, Stephan. Gradual unlearning and catastrophic interference: A comparison of distributed architectures. Relating theory and data: Essays on human memory in honor of Bennet B. Murdock, pp. 445–476, 1991.

French, Robert M. Dynamically constraining connectionist networks to produce distributed, orthogonal representations to reduce catastrophic interference. Network, 1994.

Li, Zhizhong and Hoiem, Derek. Learning without forgetting. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017.

French, Robert M. Catastrophic forgetting in connectionist networks. Trends in Cognitive Sciences, 3(4):128–135, 1999.

Lopez-Paz, David et al. Gradient episodic memory for continual learning. In Advances in Neural Information Processing Systems, pp. 6470–6479, 2017.

Goodfellow, Ian J, Mirza, Mehdi, Xiao, Da, Courville, Aaron, and Bengio, Yoshua. An empirical investigation of catastrophic forgetting in gradient-based neural networks. arXiv preprint arXiv:1312.6211, 2013.
URL: https://github.com/googlecreativelab/quickdraw-dataset, 2017.
Hinton, Geoffrey. Neural networks for machine learning - lecture 6a - overview of mini-batch gradient descent, 2012.
Lucic, Mario, Faulkner, Matthew, Krause, Andreas, and Feldman, Dan. Training mixture models at scale via coresets. arXiv preprint arXiv:1703.08110, 2017.
Maaten, Laurens van der and Hinton, Geoffrey. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(Nov):2579–2605, 2008.

McClelland, James L, McNaughton, Bruce L, and O'Reilly, Randall C. Why there are complementary learning systems in the hippocampus and neocortex: Insights from the successes and failures of connectionist models of learning and memory. Psychological Review, 102(3):419, 1995.

Joiner, William J. Unraveling the evolutionary determinants of sleep. Current Biology, 26(20):R1073–R1087, 2016.

Kaggle. Devanagari character set. URL: https://www.kaggle.com/rishianand/devanagari-character-set, 2017.

Kahana, Michael J and Howard, Marc W. Spacing and lag effects in free recall of pure lists. Psychonomic Bulletin & Review, 12(1):159–164, 2005.

McCloskey, Michael and Cohen, Neal J. Catastrophic interference in connectionist networks: The sequential learning problem. Psychology of Learning and Motivation, 24:109–165, 1989.

McRae, Ken and Hetherington, Phil A. Catastrophic interference is eliminated in pretrained networks. In Proceedings of the 15th Annual Conference of the Cognitive Science Society, pp. 723–728, 1993.

Kingma, D. P. and Welling, M. Auto-encoding variational Bayes. In International Conference on Learning Representations, 2014.

Mescheder, Lars, Nowozin, Sebastian, and Geiger, Andreas. Adversarial variational Bayes: Unifying variational autoencoders and generative adversarial networks. arXiv preprint arXiv:1701.04722, 2017.
Mocanu, Decebal Constantin, Vega, Maria Torres, Eaton, Eric, Stone, Peter, and Liotta, Antonio. Online contrastive divergence with generative replay: Experience replay without storing data. CoRR, abs/1610.05555, 2016.
Nath, Ravi D, Bedbrook, Claire N, Abrams, Michael J, Basinger, Ty, Bois, Justin S, Prober, David A, Sternberg, Paul W, Gradinaru, Viviana, and Goentoro, Lea. The jellyfish cassiopea exhibits a sleep-like state. Current Biology, 27(19):2984–2990, 2017.

O'Neill, Joseph, Pleydell-Bouverie, Barty, Dupret, David, and Csicsvari, Jozsef. Play it again: reactivation of waking experience and memory. Trends in Neurosciences, 33(5):220–229, 2010.

Parisotto, Emilio, Ba, Jimmy Lei, and Salakhutdinov, Ruslan. Actor-mimic: Deep multitask and transfer reinforcement learning. In International Conference on Learning Representations, 2016.

Rebuffi, Sylvestre-Alvise, Kolesnikov, Alexander, and Lampert, Christoph H. iCaRL: Incremental classifier and representation learning. In IEEE Conference on Computer Vision and Pattern Recognition, 2017.

Robins, Anthony. Sequential learning in neural networks: A review and a discussion of pseudorehearsal based methods. Intelligent Data Analysis, 8(3):301–322, 2004.

Rusu, Andrei A, Colmenarejo, Sergio Gomez, Gulcehre, Caglar, Desjardins, Guillaume, Kirkpatrick, James, Pascanu, Razvan, Mnih, Volodymyr, Kavukcuoglu, Koray, and Hadsell, Raia. Policy distillation. arXiv preprint arXiv:1511.06295, 2015.

Rusu, Andrei A, Rabinowitz, Neil C, Desjardins, Guillaume, Soyer, Hubert, Kirkpatrick, James, Kavukcuoglu, Koray, Pascanu, Razvan, and Hadsell, Raia. Progressive neural networks. arXiv preprint arXiv:1606.04671, 2016.

Shin, Hanul, Lee, Jung Kwon, Kim, Jaehong, and Kim, Jiwon. Continual learning with deep generative replay. In Advances in Neural Information Processing Systems, pp. 2994–3003, 2017.

Srivastava, Rupesh K, Masci, Jonathan, Kazerounian, Sohrob, Gomez, Faustino, and Schmidhuber, Jürgen. Compete to compute. In Advances in Neural Information Processing Systems, pp. 2310–2318, 2013.

Zenke, Friedemann, Poole, Ben, and Ganguli, Surya. Continual learning through synaptic intelligence. In International Conference on Machine Learning, pp. 3987–3995, 2017.
Deep Generative Dual Memory Network for Continual Learning
# 7. Appendix A
# 7.1. Repeated tasks and revision
It is well known in the psychology literature that human learning improves via revision (Kahana & Howard, 2005; Cepeda et al., 2006). We show performance of EWC and DGDMN on Permnist, when some tasks are repeated (figure 8). DGR performs very similar to DGDMN, hence we omit it. EWC stagnates and once learning has slowed down on the weights important for Task 1, the weights cannot be changed again, not even for improving Task 1. Further, it did not learn Task 6 the first time and revision does not help either. However, DGDMN learns all tasks up till Task 6 and then improves by revising Task 1 and 6 again. We point out that methods involving freezing (or slowdown) of learning often do not learn well via revision since they do not have any means of identifying tasks and unfreezing the previously frozen weights when the task is re-encountered. While many previous works do not investigate revision, it is crucial for learning continuously and should improve performance on tasks. The ability to learn from correlated task samples and revision makes our architecture functionally similar to that of humans.
# 7.2. Experiments on other datasets
In this section, we present more experiments on the Shapes and the Hindi dataset, which contain sequences of tasks with geometric shapes and Hindi consonants recognition respectively. We observed similar forgetting patterns as on the Digits dataset in section 4. All baselines exhibited catastrophic forgetting on these sequences of tasks, but DGR and DGDMN were able to learn the task structure sequentially (figures 9, 10). The same is reflected in the average forgetting curves in figure 11.
# 7.3. Jointly vs. sequentially learnt structure
To explore whether learning tasks sequentially results in a similar structure as learning them jointly, we visualized t-SNE (Maaten & Hinton, 2008) embeddings of the latent vectors of the LTM generator (VAE) in DGDMN after training it: (a) jointly over all tasks (Figure 12a), and (b) sequentially over tasks seen one at a time (Figure 12b) on the Digits dataset. To maintain consistency, we used the same random seed in t-SNE for both joint and sequential embeddings.

We observe that the LTM's latent space effectively segregates the 10 digits in both cases (joint and sequential). Though the absolute locations of the digit clusters differ in the two plots, the relative locations of digits share some similarity between both plots, i.e. the neighboring digit clusters for each cluster are roughly similar. This may not be sufficient to conclude that the LTM discovers the same latent representation for the underlying shared structure of tasks in these cases and we leave a more thorough investigation to future work.
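A minimal sketch of how such embeddings can be produced, assuming scikit-learn's t-SNE is available and `encoder` is a hypothetical function returning the VAE latent mean for an image:

```python
import numpy as np
from sklearn.manifold import TSNE

def embed_latents(encoder, X, seed=0):
    """Project the LTM generator's latent means into 2-D with t-SNE, using a
    fixed random seed so that the jointly and sequentially trained models are
    embedded consistently."""
    Z = np.asarray([encoder(x) for x in X])
    return TSNE(n_components=2, random_state=seed).fit_transform(Z)
```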
# 7.4. Visualizations for the jointly and sequentially learnt LTM
We also show visualizations of digits from the LTM when trained jointly on Digits tasks (Figure 13a) and when trained sequentially (Figure 13b). Though the digits generated from the jointly trained LTM are quite sharp, the same is not true for the sequentially trained LTM. We observe that the sequentially trained LTM produces sharp samples of the re- cently learnt tasks (digits 6, 7, 8 and 9), but blurred samples of previously learnt tasks, which is due to partial forgetting on these previous tasks.
# 7.5. DGDMN with no task descriptors
As described in section 3.2, DGDMN only uses task descriptors to recognize whether a task already exists in an STTM or the LTM, so that it can be allocated to the correct memory. Note that in our architecture this can also be done by using the reconstruction error of the generator on the task samples as a proxy for recognition. Specifically, in this variant, DGDMN-recog, tasks arrive sequentially but only (Xt, Yt) is observed while training and only Xt while testing. A DGM, when tested to recognize task t from samples Xt, reconstructs all samples Xt using the generator G and checks whether the recognition loss is below a threshold:
$$\text{recog\_loss}(X_t) = \sum_{i=1}^{N_t} \frac{\text{recons\_loss}(x_t^i)}{\text{intensity}(x_t^i)} \;\le\; \gamma_{dgm}$$
where recons_loss(·) is the reconstruction loss on a sample, intensity(·) describes the strength of the input sample (for images, the sum of pixel intensities), and γ_dgm is a scalar threshold and a hyperparameter which can be tuned separately for the LTM and the STM (the same value is shared by all STTMs). We kept γ_dgm = 1.55 for both the LTM and all STTMs. In this case the training of the generators also employs a new termination criterion, i.e. the generator of a DGM is trained until recog_loss(·) falls below γ_dgm. The rest of the algorithm remains unchanged. We show the accuracy curves and the average forgetting curves for this variant on the Digits dataset in figures 14a and 14b respectively. We observe very little degradation from the original DGDMN, which uses task descriptors for recognition. DGDMN-recog achieved ACC = 0.766 and BWT = −0.197 across all tasks, which is similar to DGDMN.
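The recognition test above is straightforward to express directly. The sketch below is a minimal NumPy version, assuming a callable `reconstruct` standing in for the DGM's generator and an L1 reconstruction loss as a stand-in for recons_loss (the exact reconstruction loss is not pinned down here); names are illustrative.

```python
import numpy as np

def recog_loss(reconstruct, X, eps=1e-8):
    # `reconstruct` is a hypothetical callable: it maps a batch of images
    # of shape (N, H, W) to the generator's reconstructions of those images.
    recon = reconstruct(X)
    per_sample = np.abs(X - recon).reshape(len(X), -1).sum(axis=1)   # recons_loss(x_i)
    intensity = X.reshape(len(X), -1).sum(axis=1) + eps              # sum of pixel intensities
    return float(np.sum(per_sample / intensity))

def dgm_recognizes_task(reconstruct, X, gamma_dgm=1.55):
    # The DGM "recognizes" the task if the intensity-normalized
    # reconstruction loss falls below the threshold gamma_dgm.
    return recog_loss(reconstruct, X) < gamma_dgm
```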
# 8. Appendix B
# 8.1. Dataset preprocessing
All our datasets have images with intensities normalized to the range [0.0, 1.0] and size 28 × 28, except Hindi, which has 32 × 32 images.
Figure 8: Accuracy curves when tasks are revised: (a) EWC, (b) GEM, and (c) DGDMN.
(a) CNN (b) DropCNN (c) PPR (d) EWC (e) DGR (f) DGDMN
Figure 9: Accuracy curves for Shapes (x: tasks seen, y: classiï¬cation accuracy on task).
Permnist: Our version involved six tasks, each containing a ï¬xed permutation on images sampled from the original MNIST dataset. We sampled 30, 000 images from the train- ing set and all the 10, 000 test set images for each task. The tasks were as follows: (i) Original MNIST, (ii) 8x8 central patch of each image blackened, (iii) 8x8 central patch of each image whitened, (iv) 8x8 central patch of each im- age permuted with a ï¬xed random permutation, (v) 12x12 central patch of each image permuted with a ï¬xed random permutation, and (vi) mirror images of MNIST. This way each task is as hard as MNIST and the tasks share some common underlying structure. Digits: We introduce this smaller dataset which contains 10 tasks with the tth task being classiï¬cation of digit t from the MNIST dataset. TDigits: We introduced a transformed variant of MNIST
containing all ten digits, their mirror images, their upside down images, and their images when reï¬ected about the main diagonal making a total of 40 tasks. This dataset poses similar difï¬culty as the Digits dataset and we use it for ex- periments involving longer sequence of tasks. Shapes: This dataset was extracted from the Quick, Draw! dataset recently released by Google (2017), which contains 50 million drawings across 345 categories of hand-drawn images. We subsampled 4, 500 training images and 500 test images from all geometric shapes in Quick, Draw! (namely circle, hexagon, octagon, square, triangle and zigzag). Hindi: Extracted from the Devanagri dataset (Kaggle, 2017) and contains a sequence of 8 tasks, each involving image classiï¬cation of a hindi language consonant.
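To make the Permnist task construction above concrete, here is a small NumPy sketch of the per-task transformations; the central-patch coordinates and the fixed random seed are illustrative assumptions rather than the exact settings used.

```python
import numpy as np

def permnist_task_transform(images, task):
    """Construct one Permnist task variant from (N, 28, 28) MNIST images in [0, 1]."""
    x = images.copy()
    c = slice(10, 18)                       # an 8x8 central patch (assumed coordinates)
    if task == "blacken":
        x[:, c, c] = 0.0
    elif task == "whiten":
        x[:, c, c] = 1.0
    elif task == "permute8":
        rng = np.random.RandomState(0)      # one fixed permutation shared by the task
        perm = rng.permutation(8 * 8)
        patch = x[:, c, c].reshape(len(x), -1)[:, perm]
        x[:, c, c] = patch.reshape(len(x), 8, 8)
    elif task == "mirror":
        x = x[:, :, ::-1]                   # horizontal mirror images
    return x
```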
(a) CNN (b) DropCNN (c) PPR (d) EWC (e) DGR (f) DGDMN
Figure 10: Accuracy curves for Hindi (x: tasks seen, y: classiï¬cation accuracy on task).
# 8.2. Training algorithm and its parameters
All models were trained with RMSProp (Hinton, 2012) using learning rate = 0.001, ρ = 0.9, ε = 10^-8 and no decay. We used a batch size of 128, and all classifiers were given 20 epochs of training when trained jointly and 6 epochs when trained sequentially over tasks. For generative models (VAEs), we used gradient clipping in RMSProp with clipnorm = 1.0 and clipvalue = 0.5, and they were trained for 25 epochs regardless of the task or dataset.
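As a rough illustration, the optimizer settings above map onto a standard framework configuration along the following lines. This is a sketch using PyTorch's RMSprop (parameter names differ slightly from the notation above, and gradient clipping is applied manually in the training step); it is not the authors' code.

```python
import torch

def make_optimizers(classifier, vae):
    # lr = 0.001, rho -> alpha = 0.9, eps = 1e-8, no decay.
    clf_opt = torch.optim.RMSprop(classifier.parameters(),
                                  lr=0.001, alpha=0.9, eps=1e-8, weight_decay=0.0)
    vae_opt = torch.optim.RMSprop(vae.parameters(),
                                  lr=0.001, alpha=0.9, eps=1e-8, weight_decay=0.0)
    return clf_opt, vae_opt

def vae_step(vae, vae_opt, loss):
    vae_opt.zero_grad()
    loss.backward()
    # Clipping analogous to clipnorm = 1.0 and clipvalue = 0.5 for the VAEs.
    torch.nn.utils.clip_grad_norm_(vae.parameters(), max_norm=1.0)
    torch.nn.utils.clip_grad_value_(vae.parameters(), clip_value=0.5)
    vae_opt.step()
```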
# 8.3. Neural network architectures
We chose all models by ï¬rst training them jointly on all tasks in a dataset to ensure that our models had enough capacity to perform reasonably well. But we gave preference to simpler models over very high capacity models.
Classifier Models: Our implementation of NN, DropNN, PPR, EWC, the learner for DGR, and the learner for the LTM in DGDMN used a neural network with three fully-connected layers, with the number of units tuned per dataset (24, 24 units for Digits, 48, 48 for Permnist and 36, 36 for TDigits). DropNN also added two dropout layers, one after each hidden layer, with dropout rate = 0.2 each. The classifiers (learners) for the Shapes and Hindi datasets had two convolutional layers (12, 20 : 3 × 3 kernels for Shapes and 24, 32 : 3 × 3 kernels for Hindi), each followed by a 2 × 2 max-pooling layer. The last two layers were fully-connected (16, 6 for Shapes and 144, 36 for Hindi). The hidden layers used ReLU activations, the last layer had a softmax activation, and the model was trained to minimize the cross-entropy objective function. The STTM learners employed in DGDMN were smaller for speed and efficiency.
Generative models: The generators for DGR and LTM of DGDMN employed encoders and decoders with two fully connected hidden layers each with ReLU activation for Permnist, Digits and TDigits, and convolutional variants for Shapes and Hindi. The sizes and number of units/kernels in the layers were tuned independently for each dataset with an approximate coarse grid-search. The size of the latent variable z was set to 32 for Digits, 64 for Permnist, 96 for TDigits, 32 for Shapes and 48 for Hindi. The STTM generators in DGDMN were kept smaller for speed and efï¬ciency concerns.
# 8.4. Hyperparameters of DGDMN
DGDMN has two new hyperparameters: (i) κ: the minimum fraction of Nmax reserved for incoming tasks, and (ii) n_STM: the number of STTMs (also the sleep/consolidation frequency). Both have straightforward interpretations and can be set directly without complex hyperparameter searches.
κ ensures continual incorporation of new tasks by guaranteeing them a minimum fraction of LTM samples during consolidation. Given that the LTM should perform well on the last K tasks seen in a long sequence of T tasks, we observed that it is safe to assume that about 50% of the LTM would be crowded by the earlier T − K tasks. The remaining 0.5 fraction should be distributed to the last K tasks. So choosing κ = 0.5/K works well in practice (or as a good starting point for tuning).
(a) Shapes
# (b) Hindi
Figure 11: Forgetting curves on Shapes and Hindi dataset (x: tasks seen, y: avg classiï¬cation accuracy on tasks seen).
We made this choice in section 4.2 with K = 10 and κ = 0.05, and hence plotted the average accuracy over the last 10 tasks as a metric.
n_STM controls the consolidation cycle frequency. Increasing n_STM gives more STTMs, less frequent consolidations and hence a learning speed advantage. But it also means that fewer samples of previous tasks participate in consolidation (due to the maximum capacity Nmax of the LTM), and hence more forgetting might occur. This parameter does not affect learning much while the LTM remains unsaturated (i.e. the Nmax capacity is not yet filled by generated + new samples) and becomes active after that. For long sequences of tasks, we found it best to keep at least 75% of the total samples from previously learnt tasks to ensure appropriate retention. Hence, n_STM can be set as approximately 0.25/κ in practice (as we did in section 4.2), or used as a starting point for tuning.
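The two heuristics above can be folded into a small helper; the snippet below is just the stated rule of thumb, not a tuned prescription.

```python
def dgdmn_hyperparameters(K, new_sample_fraction=0.25):
    """Heuristic kappa and n_STM, assuming the LTM should retain the last K
    tasks of a long sequence and keep roughly 75% old samples at consolidation."""
    kappa = 0.5 / K                                      # e.g. K = 10 -> kappa = 0.05
    n_stm = max(1, round(new_sample_fraction / kappa))   # ~0.25 / kappa -> 5
    return kappa, n_stm
```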
# 8.5. Algorithm speciï¬c hyperparameters
PPR: We used a maximum memory capacity of about 3–6 times the number of samples in a task for the dataset being learnt (i.e. 18,000 for Digits, 60,000 for Permnist, 15,000 for Shapes and 5,400 for Hindi). While replaying, apart from the task samples, the remaining memory was filled with random samples and corresponding labels.
Figure 12: t-SNE embedding for latent vectors of the VAE generator on Digits dataset when: (a) tasks are learnt jointly, and (b) tasks are learnt sequentially.
DGR and DGDMN: Nmax for the DGM in DGR and for the LTM in DGDMN for Digits, Permnist, Shapes and Hindi was set as the total number of samples in the datasets (summed over all tasks) to ensure that there was enough capacity to regenerate the datasets well. For TDigits, we deliberately restricted memory capacity to see the effects of learning tasks over a long time and kept Nmax at half the total number of samples. n_STM was kept at 2 for Digits, Permnist and Shapes, 5 for TDigits and 2 for Hindi. κ was set to be small, so that it does not come into play for Digits, Permnist, Shapes and Hindi, since we already provided memories with full capacity for all samples. For TDigits, we used κ = 0.05, which lets us incorporate roughly 10 out of the 40 tasks well.
EWC: Most values of the coefficient of the Fisher Information Matrix based regularizer between 1 and 500 worked reasonably well for our datasets. We chose 100 for our experiments.
Figure 14: Curves for DGDMN-recog on the Digits dataset: (a) accuracy curves, (b) average forgetting curves (x: tasks seen, y: classification accuracy).
Figure 13: Visualization of digits from LTM when trained: (a) jointly, (b) sequentially.
| {
"id": "1703.08110"
} |
1710.10304 | Few-shot Autoregressive Density Estimation: Towards Learning to Learn Distributions | Deep autoregressive models have shown state-of-the-art performance in density
estimation for natural images on large-scale datasets such as ImageNet.
However, such models require many thousands of gradient-based weight updates
and unique image examples for training. Ideally, the models would rapidly learn
visual concepts from only a handful of examples, similar to the manner in which
humans learns across many vision tasks. In this paper, we show how 1) neural
attention and 2) meta learning techniques can be used in combination with
autoregressive models to enable effective few-shot density estimation. Our
proposed modifications to PixelCNN result in state-of-the art few-shot density
estimation on the Omniglot dataset. Furthermore, we visualize the learned
attention policy and find that it learns intuitive algorithms for simple tasks
such as image mirroring on ImageNet and handwriting on Omniglot without
supervision. Finally, we extend the model to natural images and demonstrate
few-shot image generation on the Stanford Online Products dataset. | http://arxiv.org/pdf/1710.10304 | Scott Reed, Yutian Chen, Thomas Paine, Aäron van den Oord, S. M. Ali Eslami, Danilo Rezende, Oriol Vinyals, Nando de Freitas | cs.NE, cs.CV | null | null | cs.NE | 20171027 | 20180228 | 8 1 0 2
b e F 8 2 ] E N . s c [
4 v 4 0 3 0 1 . 0 1 7 1 : v i X r a
Published as a conference paper at ICLR 2018
# FEW-SHOT AUTOREGRESSIVE DENSITY ESTIMATION: TOWARDS LEARNING TO LEARN DISTRIBUTIONS
S. Reed, Y. Chen, T. Paine, A. van den Oord, S. M. A. Eslami, D. Rezende, O. Vinyals, N. de Freitas {reedscot,yutianc,tpaine}@google.com
# ABSTRACT
Deep autoregressive models have shown state-of-the-art performance in density estimation for natural images on large-scale datasets such as ImageNet. However, such models require many thousands of gradient-based weight updates and unique image examples for training. Ideally, the models would rapidly learn visual concepts from only a handful of examples, similar to the manner in which humans learn across many vision tasks. In this paper, we show how 1) neural attention and 2) meta learning techniques can be used in combination with autoregressive models to enable effective few-shot density estimation. Our proposed modifications to PixelCNN result in state-of-the-art few-shot density estimation on the Omniglot dataset. Furthermore, we visualize the learned attention policy and find that it learns intuitive algorithms for simple tasks such as image mirroring on ImageNet and handwriting on Omniglot without supervision. Finally, we extend the model to natural images and demonstrate few-shot image generation on the Stanford Online Products dataset.
# 1 INTRODUCTION
Contemporary machine learning systems are still far behind humans in their ability to rapidly learn new visual concepts from only a few examples (Lake et al., 2013). This setting, called few-shot learning, has been studied using deep neural networks and many other approaches in the context of discriminative models, for example Vinyals et al. (2016); Santoro et al. (2016). However, compara- tively little attention has been devoted to the task of few-shot image density estimation; that is, the problem of learning a model of a probability distribution from a small number of examples. Below we motivate our study of few-shot autoregressive models, their connection to meta-learning, and provide a comparison of multiple approaches to conditioning in neural density models.
WHY AUTOREGRESSIVE MODELS?
Autoregressive neural networks are useful for studying few-shot density estimation for several rea- sons. They are fast and stable to train, easy to implement, and have tractable likelihoods, allowing us to quantitatively compare a large number of model variants in an objective manner. Therefore we can easily add complexity in orthogonal directions to the generative model itself.
Autoregressive image models factorize the joint distribution into per-pixel factors:
$$P(\mathbf{x} \mid \mathbf{s}; \theta) = \prod_{t=1}^{N} P(x_t \mid \mathbf{x}_{<t}, f(\mathbf{s}); \theta) \qquad (1)$$
where θ are the model parameters, x ∈ R^N are the image pixels, s is a conditioning variable, and f is a function encoding this conditioning variable. For example, in text-to-image synthesis, s would be an image caption and f could be a convolutional or recurrent encoder network, as in Reed et al. (2016). In label-conditional image generation, s would be the discrete class label and f could simply convert s to a one-hot encoding, possibly followed by an MLP.
A straightforward approach to few-shot density estimation would be to simply treat samples from the target distribution as conditioning variables for the model. That is, let s correspond to a few data examples illustrating a concept. For example, s may consist of four images depicting bears, and the task is then to generate an image x of a bear, or to compute its probability P (x|s; θ).
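For concreteness, the factorization in equation 1 corresponds to a log-likelihood computation along the following lines; `pixel_model` here is a hypothetical stand-in for the conditional autoregressive network, not an actual PixelCNN interface.

```python
import numpy as np

def conditional_log_likelihood(x, s, pixel_model):
    """log P(x | s) under the per-pixel factorization of equation 1.

    pixel_model(prefix, s) returns a probability vector over the next pixel's
    values given the pixels generated so far and the support/conditioning s.
    """
    x = np.asarray(x).ravel()
    total = 0.0
    for t in range(len(x)):
        probs = pixel_model(x[:t], s)          # distribution over pixel values
        total += np.log(probs[int(x[t])] + 1e-12)
    return total
```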
A learned conditional density model that conditions on samples from its target distribution is in fact learning a learning algorithm, embedded into the weights of the network. This learning algorithm is executed by a feed-forward pass through the network encoding the target distribution samples.
WHY LEARN TO LEARN DISTRIBUTIONS?
If the number of training samples from a target distribution is tiny, then using standard gradient descent to train a deep network from scratch or even ï¬ne-tuning is likely to result in memorization of the samples; there is little reason to expect generalization. Therefore what is needed is a learning algorithm that can be expected to work on tiny training sets. Since designing such an algorithm has thus far proven to be challenging, one could try to learn the algorithm itself. In general this may be impossible, but if there is shared underlying structure among the set of target distributions, this learning algorithm can be learned from experience as we show in this paper.
For our purposes, it is instructive to think of learning to learn as two nested learning problems, where the inner learning problem is less constrained than the outer one. For example, the inner learning problem may be unsupervised while the outer one may be supervised. Similarly, the inner learning problem may involve only a few data points. In this latter case, the aim is to meta-learn a model that when deployed is able to infer, generate or learn rapidly using few data s.
A rough analogy can be made to evolution: a slow and expensive meta-learning process, which has resulted in life-forms that at birth already have priors that facilitate rapid learning and inductive leaps. Understanding the exact form of the priors is an active, very challenging, area of research (Spelke & Kinzler, 2007; Smith & Gasser, 2005). From this research perspective, we can think of meta-learning as a potential data-driven alternative to hand engineering priors.
The meta-learning process can be undertaken using large amounts of computation and data. The output is however a model that can learn from few data. This facilitates the deployment of models in resource-constrained computing devices, e.g. mobile phones, to learn from few data. This may prove to be very important for protection of private data s and for personalisation.
FEW-SHOT LEARNING AS INFERENCE OR AS A WEIGHT UPDATE?
A sample-conditional density model Pθ(x|s) treats meta-learning as inference; the conditioning samples s vary but the model parameters θ are ï¬xed. A standard MLP or convolutional network can parameterize the sample encoding (i.e. meta-learning) component, or an attention mechanism can be used, which we will refer to as PixelCNN and Attention PixelCNN, respectively.
A very different approach to meta-learning is taken by Ravi & Larochelle (2016) and Finn et al. (2017a), who instead learn unconditional models that adapt their weights based on a gradient step computed on the few-shot samples. This same approach can also be taken with PixelCNN: train an unconditional network P_θ(x) that is implicitly conditioned by a previous gradient ascent step on log P_θ(s); that is, θ′ = θ + α∇_θ log P_θ(s). We will refer to this as Meta PixelCNN.
In Section 2 we connect our work to previous attentive autoregressive models, as well as to work on gradient-based meta-learning. In Section 3 we describe Attention PixelCNN and Meta PixelCNN in greater detail. We show how attention can improve performance on the few-shot density estimation problem by enabling the model to easily transmit texture information from the support set onto the target image canvas. In Section 4 we compare several few-shot PixelCNN variants on simple image mirroring, Omniglot and Stanford Online Products. We show that both gradient-based and attention-based few-shot PixelCNN can learn to learn simple distributions, and both achieve state-of-the-art likelihoods on Omniglot.
# 2 RELATED WORK
Learning to learn or meta-learning has been studied in cognitive science and machine learning for decades (Harlow, 1949; Thrun & Pratt, 1998; Hochreiter et al., 2001). In the context of modern deep networks, Andrychowicz et al. (2016) learned a gradient descent optimizer by gradient descent, itself parameterized as a recurrent network. Chen et al. (2017) showed how to learn to learn by gradient descent in the black-box optimization setting.
Ravi & Larochelle (2017) showed the effectiveness of learning an optimizer in the few-shot learning setting. Finn et al. (2017a) advanced a simplified yet effective variation in which the optimizer is not learned but rather fixed as one or a few steps of gradient descent, and the meta-learning problem reduces to learning an initial set of base parameters θ that can be adapted to minimize any task loss L_t by a single step of gradient descent, i.e. θ′ = θ − α∇L_t(θ). This approach was further shown to be effective in imitation learning, including on real robotic manipulation tasks (Finn et al., 2017b). Shyam et al. (2017) train a neural attentive recurrent comparator function to perform one-shot classification on Omniglot.
Few-shot density estimation has been studied previously using matching networks (Bartunov & Vetrov, 2016) and variational autoencoders (VAEs). Bornschein et al. (2017) apply variational in- ference to memory addressing, treating the memory address as a latent variable. Rezende et al. (2016) develop a sequential generative model for few-shot learning, generalizing the Deep Recur- rent Attention Writer (DRAW) model (Gregor et al., 2015). In this work, our focus is on extending autoregressive models to the few-shot setting, in particular PixelCNN (van den Oord et al., 2016).
Autoregressive (over time) models with attention are well-established in language tasks. Bahdanau et al. (2014) developed an attention-based network for machine translation. This work inspired a wave of recurrent attention models for other applications. Xu et al. (2015) used visual attention to produce higher-quality and more interpretable image captioning systems. This type of model has also been applied in motor control, for the purpose of imitation learning. Duan et al. (2017) learn a policy for robotic block stacking conditioned on a small number of demonstration trajectories.
Gehring et al. (2017) developed convolutional machine translation models augmented with attention over the input sentence. A nice property of this model is that all attention operations can be batched over time, because one does not need to unroll a recurrent net during training. Our attentive Pixel- CNN is similar in high-level design, but our data is pixels rather than words, and 2D instead of 1D, and we consider image generation rather than text generation as our task.
3 MODEL
3.1 FEW-SHOT LEARNING WITH ATTENTION PIXELCNN
In this section we describe the model, which we refer to as Attention PixelCNN. At a high level, it works as follows: at the point of generating every pixel, the network queries a memory. This memory can consist of anything, but in this work it will be a support set of images of a visual concept. In addition to global features derived from these support images, the network has access to textures via support image patches. Figure 2 illustrates the attention mechanism.
In previous conditional PixelCNN works, the encoding f (s) was shared across all pixels. However, this can be sub-optimal for several reasons. First, at different points of generating the target image x, different aspects of the support images may become relevant. Second, it can make learning difï¬cult, because the network will need to encode the entire support set of images into a single global conditioning vector, fed to every output pixel. This single vector would need to transmit information across all pairs of salient regions in the supporting images and the target image.
Figure 1: Sampling from Attention PixelCNN. Support images are overlaid in red to indicate the attention weights. The support sets can be viewed as small training sets, illustrating the connection between sample-conditional density estimation and learning to learn distributions.
To overcome this difï¬culty, we propose to replace the simple encoder function f (s) with a context- sensitive attention mechanism ft(s, x<t). It produces an encoding of the context that depends on the image generated up until the current step t. The weights are shared over t.
We will use the following notation. Let the target image be x ∈ R^{H×W×3} and the support set images be s ∈ R^{S×H×W×3}, where S is the number of supports.
To capture texture information, we encode all supporting images with a shallow convolutional network, typi- cally only two layers. Each hidden unit of the resulting feature map will have a small receptive ï¬eld, e.g. cor- responding to a 10 à 10 patch in a support set image. We encode these support images into a set of spatially- indexed key and value vectors.
Figure 2: The PixelCNN attention mechanism.
After encoding the support images in parallel, we reshape the resulting S × K × K × 2P feature maps to squeeze out the spatial dimensions, resulting in an SK² × 2P matrix:

$$p = f_{\text{patch}}(\mathbf{s}) = \text{reshape}(\text{CNN}(\mathbf{s}), [SK^2 \times 2P]) \qquad (2)$$

$$p^{\text{key}} = p[:, 0\!:\!P], \qquad p^{\text{value}} = p[:, P\!:\!2P] \qquad (3)$$

where CNN is a shallow convolutional network. We take the first P channels as the patch key vectors p^key ∈ R^{SK²×P} and the second P channels as the patch value vectors p^value ∈ R^{SK²×P}. Together these form a queryable memory for image generation.
To query this memory, we need to encode both the global context from the support set s as well as the pixels x<t generated so far. We can obtain these features simply by taking any layer of a PixelCNN conditioned on the support set:
qt = PixelCNNL(f (s), x<t), (4)
where L is the desired layer of hidden unit activations within the PixelCNN network. In practice we use the middle layer.
To incorporate the patch attention features into the pixel predictions, we build a scoring function using q and p^key. Following the design proposed by Bahdanau et al. (2014), we compute a normalized matching score α_tj between query pixel q_t and supporting patch p_j^key:

$$e_{tj} = v^\top \tanh(q_t + p_j^{\text{key}}) \qquad (5)$$

$$\alpha_{tj} = \frac{\exp(e_{tj})}{\sum_{k=1}^{SK^2} \exp(e_{tk})} \qquad (6)$$
The resulting attention-gated context function can be written as:
$$f_t(\mathbf{s}, \mathbf{x}_{<t}) = \sum_{j=1}^{SK^2} \alpha_{tj}\, p_j^{\text{value}} \qquad (7)$$
which can be substituted into the objective in equation 1. In practice, we combine the attention context features f_t(s, x_<t) with the global context features f(s) by channel-wise concatenation.
This attention mechanism can also be straightforwardly applied to the multiscale PixelCNN archi- tecture of Reed et al. (2017). In that model, pixel factors P (xt|x<t, ft(s, x<t)) are simply replaced by pixel group factors P (xg|x<g, fg(s, x<g)), where g indexes a set of pixels and < g indicates all pixels in previous pixel groups, including previously-generated lower resolutions.
We find that a few simple modifications to the above design can significantly improve performance. First, we can augment the supporting images with a channel encoding relative position within the image, normalized to [−1, 1]. One channel is added for x-position, another for y-position. When
patch features are extracted, position information is thus encoded, which may help the network assemble the output image. Second, we add a 1-of-K channel for the supporting image label, where K is the number of supporting images. This provides patch encodings information about which global context they are extracted from, which may be useful e.g. when assembling patches from multiple views of an object.
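A minimal NumPy sketch of one attention read over the patch memory (equations 5–7) follows; the query features, key/value maps, and the scoring vector v are assumed to be precomputed, and the softmax uses the usual max-subtraction for numerical stability.

```python
import numpy as np

def attention_read(q_t, p_key, p_value, v):
    """One attention read of the support-set patch memory (equations 5-7).

    q_t:     (P,)        query features for the current pixel
    p_key:   (S*K*K, P)  patch key vectors
    p_value: (S*K*K, P)  patch value vectors
    v:       (P,)        learned scoring vector
    """
    scores = np.tanh(q_t[None, :] + p_key) @ v     # e_tj, equation 5
    alpha = np.exp(scores - scores.max())
    alpha = alpha / alpha.sum()                    # alpha_tj, equation 6
    return alpha @ p_value                         # f_t(s, x_<t), equation 7
```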
3.2 FEW-SHOT LEARNING WITH META PIXELCNN
As an alternative to explicit conditioning with attention, in this section we propose an implicitly- conditioned version using gradient descent. This is an instance of what Finn et al. (2017a) called model-agnostic meta learning, because it works in the same way regardless of the network archi- tecture. The conditioning pathway (i.e. ï¬ow of information from supports s to the next pixel xt) introduces no additional parameters. The objective to minimize is as follows:
$$\mathcal{L}(\mathbf{x}, \mathbf{s}; \theta) = -\log P(\mathbf{x}; \theta'), \quad \text{where } \theta' = \theta - \alpha \nabla_\theta \mathcal{L}_{\text{inner}}(\mathbf{s}; \theta) \qquad (8)$$
A natural choice for the inner objective would be Linner(s; θ) = â log P (s; θ). However, as shown in Finn et al. (2017b) and similar to the setup in Neu & Szepesv´ari (2012), we actually have consid- erable ï¬exibility here to make the inner and outer objectives different.
Any learnable function of s and θ could potentially learn to produce gradients that increase log P(x; θ′). In particular, this function does not need to compute log likelihood, and does not even need to respect the causal ordering of pixels implied by the chain rule factorization in equation 1. Effectively, the model can learn to learn by maximum likelihood without likelihoods.
As input features for computing L_inner(s, θ), we use the L-th layer of spatial features q = PixelCNN_L(s, θ) ∈ R^{S×H×W×Z}, where S is the number of support images (acting as the batch dimension) and Z is the number of feature channels used in the PixelCNN. Note that this is the same network used to model P(x; θ).
The features q are fed through a convolutional network g (whose parameters are also included in θ) producing a scalar, which is treated as the learned inner loss Linner. In practice, we used α = 0.1, and the encoder had three layers of stride-2 convolutions with 3 à 3 kernels, followed by L2 norm of the ï¬nal layer features. Since these convolutional weights are part of θ, they are learned jointly with the generative model weights by minimizing equation 8.
Algorithm 1 Meta PixelCNN training
1: θ: Randomly initialized model parameters
2: p(s, x): Distribution over support sets and target outputs.
3: while not done do                                  ▷ Training loop
4:     {s_i, x_i}_{i=1}^{M} ~ p(s, x)                 ▷ Sample a batch of M support sets and target outputs
5:     for all s_i, x_i do
6:         q_i = PixelCNN_L(s_i, θ)                   ▷ Compute support set embedding as L-th layer features
7:         θ'_i = θ − α ∇_θ g(q_i, θ)                 ▷ Adapt θ using L_inner(s_i, θ) = g(q_i, θ)
8:     θ = θ − β ∇_θ Σ_i −log P(x_i; θ'_i)            ▷ Update parameters using maximum likelihood
Algorithm 1 describes the training procedure for Meta PixelCNN. Note that in the outer loop step (line 8), the distribution parameterized by θ′_i is not explicitly conditioned on the support set images, but is implicitly conditioned through the weight adaptation from θ in line 7.
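The inner/outer structure of Algorithm 1 can be sketched with standard automatic differentiation. The snippet below is a schematic PyTorch version for a single (support, target) pair, with `inner_loss_fn` playing the role of g(q, θ) and `nll_fn` the negative log-likelihood; both are hypothetical callables standing in for the real PixelCNN.

```python
import torch

def meta_step(theta, support, target, inner_loss_fn, nll_fn, alpha=0.1, beta=1e-3):
    """One simplified Meta PixelCNN update for a single (support, target) pair.

    theta:         list of parameter tensors with requires_grad=True
    inner_loss_fn: maps (support, params) -> scalar learned inner loss g(q, theta)
    nll_fn:        maps (target, params)  -> -log P(target; params)
    """
    # Inner adaptation: theta' = theta - alpha * grad_theta L_inner(s; theta)
    inner = inner_loss_fn(support, theta)
    grads = torch.autograd.grad(inner, theta, create_graph=True)
    theta_prime = [w - alpha * g for w, g in zip(theta, grads)]

    # Outer step: maximum likelihood of the target under the adapted parameters.
    outer = nll_fn(target, theta_prime)
    outer_grads = torch.autograd.grad(outer, theta)
    with torch.no_grad():
        for w, g in zip(theta, outer_grads):
            w -= beta * g
    return outer.item()
```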
# 4 EXPERIMENTS
In this section we describe experiments on image ï¬ipping, Omniglot, and Stanford Online Products. In all experiments, the support set encoder f (s) has the following structure: in parallel over support images, a 5 à 5 conv layer, followed by a sequence of 3 à 3 convolutions and max-pooling until the spatial dimension is 1. Finally, the support image encodings are concatenated and fed through two fully-connected layers to get the support set embedding.
4.1 IMAGENET FLIPPING
As a diagnostic task, we consider the problem of image ï¬ipping as few-shot learning. The âsupport setâ contains only one image and is simply the horizontally-ï¬ipped target image. A trivial algorithm exists for this problem, which of course is to simply copy pixel values directly from the support to the corresponding target location. We ï¬nd that the Attention PixelCNN did indeed learn to solve the task, however, interestingly, the baseline conditional PixelCNN and Meta PixelCNN did not.
We trained the model on ImageNet (Deng et al., 2009) images resized to 48 × 48 for 30K steps using RMSProp with learning rate 1e−4. The network was a 16-layer PixelCNN with 128-dimensional feature maps at each layer, with skip connections to a 256-dimensional penultimate layer before pixel prediction. The baseline PixelCNN is conditioned on the 128-dimensional encoding of the flipped image at each layer; f(s) = f(x′), where x′ is the mirror image of x. The Attention PixelCNN network is exactly the same for the first 8 layers, and the latter 8 layers are also conditioned on attention features f_t(s, x_<t) = f_t(x′, x_<t) as described in section 3.1.
Figure 3: Horizontally ï¬ipping ImageNet images. The network using attention learns to mirror, while the network without attention does not.
Figure 3 shows the qualitative results for several validation set images. We observe that the baseline model without attention completely fails to ï¬ip the image or even produce a similar image. With attention, the model learns to consistently apply the horizontal ï¬ip operation. However, it is not entirely perfect - one can observe slight mistakes on the upper and left borders. This makes sense because in those regions, the model has the least context to predict pixel values. We also ran the experiment on 24 à 24 images; see ï¬gure 6 in the appendix. Even in this simpliï¬ed setting, neither the baseline conditional PixelCNN or Meta PixelCNN learned to ï¬ip the image.
Quantitatively, we also observe a clear difference between the baseline and the attention model. The baseline achieves 2.64 nats/dim on the training set and 2.65 on the validation set. The attention model achieves 0.89 and 0.90 nats/dim, respectively. During sampling, Attention PixelCNN learns a simple copy operation in which the attention head proceeds in right-to-left raster order over the input, while the output is written in left-to-right raster order.
# 4.2 OMNIGLOT
In this section we benchmark our model on Omniglot (Lake et al., 2013), and analyze the learned behavior of the attention module. We trained the model on 26 × 26 binarized images and a 45–5 split into training and testing character alphabets as in Bornschein et al. (2017).
To avoid over-ï¬tting, we used a very small network architecture. It had a total of 12 layers with 24 planes each, with skip connections to a penultimate layer with 32 planes. As before, the baseline model conditioned each pixel prediction on a single global vector computed from the support set. The attention model is the same for the ï¬rst half (6 layers), and for the second half it also conditions on attention features.
The task is set up as follows: the network sees several images of a character from the same alphabet, and then tries to induce a density model of that character. We evaluate the likelihood on a held-out example image of that same character from the same alphabet.
All PixelCNN variants achieve state-of-the-art likelihood results (see table 1). Attention PixelCNN signiï¬cantly outperforms the other methods, including PixelCNN without attention, across 1, 2, 4
| Model | 1 | 2 | 4 | 8 |
|---|---|---|---|---|
| Bornschein et al. (2017) | 0.128(–) | 0.123(–) | 0.117(–) | –(–) |
| Gregor et al. (2016) | 0.079(0.063) | 0.076(0.060) | 0.076(0.060) | 0.076(0.057) |
| Conditional PixelCNN | 0.077(0.070) | 0.077(0.068) | 0.077(0.067) | 0.076(0.065) |
| Attention PixelCNN | 0.071(0.066) | 0.068(0.064) | 0.066(0.062) | 0.064(0.060) |

(Columns give the number of support set examples.)
Table 1: Omniglot test(train) few-shot density estimation NLL in nats/dim. Bornschein et al. (2017) refers to Variational Memory Addressing and Gregor et al. (2016) to ConvDRAW.
and 8-shot learning. PixelCNN and Attention PixelCNN models are also fast to train: 10K iterations with batch size 32 took under an hour using NVidia Tesla K80 GPUs.
We also report new results of training a ConvDRAW Gregor et al. (2016) on this task. While the likelihoods are signiï¬cantly worse than those of Attention PixelCNN, they are otherwise state-of- the-art, and qualitatively the samples look as good. We include ConvDRAW samples on Omniglot for comparison in the appendix section 6.2.
| Model | NLL test(train) |
|---|---|
| Conditional PixelCNN | 0.077(0.067) |
| Attention PixelCNN | 0.066(0.062) |
| Meta PixelCNN | 0.068(0.065) |
| Attention Meta PixelCNN | 0.069(0.065) |
Table 2: Omniglot NLL in nats/pixel with four support examples. Attention Meta PixelCNN is a model combining attention with gradient-based weight updates for few-shot learning.
Meta PixelCNN also achieves state-of-the-art likelihoods, only outperformed by Attention Pixel- CNN (see Table 2). Naively combining attention and meta learning does not seem to help. How- ever, there are likely more effective ways to combine attention and meta learning, such as varying the inner loss function or using multiple meta-gradient steps, which could be future work.
Figure 4: Typical Omniglot samples from PixelCNN, Attention PixelCNN, and Meta PixelCNN.
Figure 1 shows several key frames of the attention model sampling Omniglot. Within each column, the left part shows the 4 support set images. The red overlay indicates the attention head read weights. The red attention pixel is shown over the center of the corresponding patch to which it attends. The right part shows the progress of sampling the image, which proceeds in raster order. We observe that as expected, the network learns to attend to corresponding regions of the support
set when drawing each portion of the output image. Figure 4 compares results with and without attention. Here, the difference in likelihood clearly correlates with improvement in sample quality.
4.3 STANFORD ONLINE PRODUCTS
In this section we demonstrate results on natural images from online product listings in the Stanford Online Products Dataset (Song et al., 2016). The data consists of sets of images showing the same product gathered from eBay product listings. There are 12 broad product categories. The training set has 11, 318 distinct objects and the testing set has 11, 316 objects.
The task is, given a set of 3 images of a single object, induce a density model over images of that object. This is a very challenging problem because the target image camera is arbitrary and unknown, and the background may also change dramatically. Some products are shown cleanly with a white background, and others are shown in a usage context. Some views show the entire product, and others zoom in on a small region.
For this dataset, we found it important to use a multiscale architecture as in Reed et al. (2017). We used three scales: 8 Ã 8, 16 Ã 16 and 32 Ã 32. The base scale uses the standard PixelCNN architecture with 12 layers and 128 planes per layer, with 512 planes in the penultimate layer. The upscaling networks use 18 layers with 128 planes each. In Attention PixelCNN, the second half of the layers condition on attention features in both the base and upscaling networks.
Figure 5: Stanford online products. Samples from Attention PixelCNN tend to match textures and colors from the support set, which is less apparent in samples from the non-attentive model.
Figure 5 shows the result of sampling with the baseline PixelCNN and the attention model. Note that in cases where fewer than 3 images are available, we simply duplicate other support images.
We observe that the baseline model can sometimes generate images of the right broad category, such as bicycles. However, it usually fails to learn the style and texture of the support images. The attention model is able to more accurately capture the objects, in some cases starting to copy textures such as the red character depicted on a white mug.
Interestingly, unlike the other datasets we do not observe a quantitative beneï¬t in terms of test like- lihood from the attention model. The baseline model and the attention model achieve 2.15 and 2.14 nats/dim on the validation set, respectively. While likelihood appears to be a useful objective and when combined with attention can generate compelling samples, this suggests that other quantitative criterion besides likelihood may be needed for evaluating few-shot visual concept learning.
# 5 CONCLUSIONS
In this paper we adapted PixelCNN to the task of few-shot density estimation. Comparing to several strong baselines, we showed that Attention PixelCNN achieves state-of-the-art results on Omniglot and also promising results on natural images. The model is very simple and fast to train. By looking at the attention weights, we see that it learns sensible algorithms for generation tasks such as image mirroring and handwritten character drawing. In the Meta PixelCNN model, we also showed that recently proposed methods for gradient-based meta learning can also be used for few-shot density estimation, and also achieve state-of-the-art results in terms of likelihood on Omniglot.
# REFERENCES
Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, and Nando de Freitas. Learning to learn by gradient descent by gradient descent. 2016.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.
S Bartunov and DP Vetrov. Fast adaptation in generative models with generative matching networks. arxiv preprint 1612.02192, 2016.
J¨org Bornschein, Andriy Mnih, Daniel Zoran, and Danilo J. Rezende. Variational memory address- ing in generative models. 2017.
Yutian Chen, Matthew W. Hoffman, Sergio Gomez Colmenarejo, Misha Denil, Timothy P. Lillicrap, and Nando de Freitas. Learning to learn for global optimization of black box functions. In ICML, 2017.
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In CVPR, pp. 248â255, 2009.
Yan Duan, Marcin Andrychowicz, Bradly Stadie, Jonathan Ho, Jonas Schneider, Ilya Sutskever, Pieter Abbeel, and Wojciech Zaremba. One-shot imitation learning. arXiv preprint arXiv:1703.07326, 2017.
Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. 2017a.
Chelsea Finn, Tianhe Yu, Tianhao Zhang, Pieter Abbeel, and Sergey Levine. One-shot visual imita- tion learning via meta-learning. arXiv preprint arXiv:1709.04905, 2017b.
Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N Dauphin. Convolutional sequence to sequence learning. arXiv preprint arXiv:1705.03122, 2017.
Karol Gregor, Ivo Danihelka, Alex Graves, Danilo J. Rezende, and Daan Wierstra. Draw: A recur- rent neural network for image generation. In Proceedings of The 32nd International Conference on Machine Learning, pp. 1462â1471, 2015.
Karol Gregor, Frederic Besse, Danilo J. Rezende, Ivo Danihelka, and Daan Wierstra. Towards conceptual compression. In Advances In Neural Information Processing Systems, pp. 3549â3557, 2016.
Harry F Harlow. The formation of learning sets. Psychological review, 56(1):51, 1949.
Sepp Hochreiter, A Steven Younger, and Peter R Conwell. Learning to learn using gradient descent. In ICANN, pp. 87â94. Springer, 2001.
Brenden M Lake, Ruslan R Salakhutdinov, and Josh Tenenbaum. One-shot learning by inverting a compositional causal process. In NIPS, pp. 2526â2534, 2013.
Gergely Neu and Csaba Szepesv´ari. Apprenticeship learning using inverse reinforcement learning and gradient methods. arXiv preprint arXiv:1206.5264, 2012.
Sachin Ravi and Hugo Larochelle. Optimization as a model for few-shot learning. 2016.
Sachin Ravi and Hugo Larochelle. Optimization as a model for few-shot learning. In ICLR, 2017.
Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, and Honglak Lee. Generative adversarial text-to-image synthesis. In ICML, pp. 1060â1069, 2016.
Scott E. Reed, A¨aron van den Oord, Nal Kalchbrenner, Sergio G´omez, Ziyu Wang, Dan Belov, and Nando de Freitas. Parallel multiscale autoregressive density estimation. In ICML, 2017.
Danilo J. Rezende, Ivo Danihelka, Karol Gregor, Daan Wierstra, et al. One-shot generalization In Proceedings of The 33rd International Conference on Machine in deep generative models. Learning, pp. 1521â1529, 2016.
Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and Timothy Lillicrap. Meta- learning with memory-augmented neural networks. In ICML, 2016.
Pranav Shyam, Shubham Gupta, and Ambedkar Dukkipati. Attentive recurrent comparators. In ICML, 2017.
Linda Smith and Michael Gasser. The development of embodied cognition: Six lessons from babies. Artiï¬cial life, 11(1-2):13â29, 2005.
Hyun Oh Song, Yu Xiang, Stefanie Jegelka, and Silvio Savarese. Deep metric learning via lifted structured feature embedding. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
Elizabeth S Spelke and Katherine D Kinzler. Core knowledge. Developmental science, 10(1):89â96, 2007.
Sebastian Thrun and Lorien Pratt. Learning to learn. Springer Science & Business Media, 1998.
A¨aron van den Oord, Nal Kalchbrenner, Oriol Vinyals, Lasse Espeholt, Alex Graves, and Koray Kavukcuoglu. Conditional image generation with PixelCNN decoders. In NIPS, 2016.
Oriol Vinyals, Charles Blundell, Tim Lillicrap, Daan Wierstra, et al. Matching networks for one shot learning. In NIPS, 2016.
Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. Show, attend and tell: Neural image caption generation with visual attention. In International Conference on Machine Learning, pp. 2048â2057, 2015.
6 APPENDIX
6.1 ADDITIONAL SAMPLES
PixelCNN Attention PixelCNN Meta PixelCNN
Figure 6: Flipping 24Ã24 images, comparing global-conditional, attention-conditional and gradient- conditional (i.e. MAML) PixelCNN.
6.2 QUALITATIVE COMPARISON TO CONVDRAW
Although all PixelCNN variants outperform the previous state-of-the-art in terms of likelihood, prior methods can still produce high quality samples, in some cases clearly better than the PixelCNN sam- ples. Of course, there are other important factors in choosing a model that may favor autoregressive models, such as training time and scalability to few-shot density modeling on natural images. Also, the Attention PixelCNN has only 286K parameters, compared to 53M for the ConvDRAW. Still, it is notable that likelihood and sample quality lead to conï¬icting rankings of several models.
The conditional ConvDraw model used for these experiments is a modiï¬cation of the models intro- duced in (Gregor et al., 2015; Rezende et al., 2016), where the support set images are ï¬rst encoded with 4 convolution layers without any attention mechanism and then are concatenated to the ConvL- STM state at every Draw step (we used 12 Draw-steps for this paper). The model was trained using the same protocol used for the PixelCNN experiments.
[Figure 7 panel headers: Support set examples; Attention PixelCNN samples, Test NLL = 0.065 nats/dim; ConvDRAW samples, Test NLL = 0.076 nats/dim]
Figure 7: Comparison to ConvDRAW in 4-shot learning.
| {
"id": "1705.03122"
} |
1710.05941 | Searching for Activation Functions | The choice of activation functions in deep networks has a significant effect
on the training dynamics and task performance. Currently, the most successful
and widely-used activation function is the Rectified Linear Unit (ReLU).
Although various hand-designed alternatives to ReLU have been proposed, none
have managed to replace it due to inconsistent gains. In this work, we propose
to leverage automatic search techniques to discover new activation functions.
Using a combination of exhaustive and reinforcement learning-based search, we
discover multiple novel activation functions. We verify the effectiveness of
the searches by conducting an empirical evaluation with the best discovered
activation function. Our experiments show that the best discovered activation
function, $f(x) = x \cdot \text{sigmoid}(\beta x)$, which we name Swish, tends
to work better than ReLU on deeper models across a number of challenging
datasets. For example, simply replacing ReLUs with Swish units improves top-1
classification accuracy on ImageNet by 0.9\% for Mobile NASNet-A and 0.6\% for
Inception-ResNet-v2. The simplicity of Swish and its similarity to ReLU make it
easy for practitioners to replace ReLUs with Swish units in any neural network. | http://arxiv.org/pdf/1710.05941 | Prajit Ramachandran, Barret Zoph, Quoc V. Le | cs.NE, cs.CV, cs.LG | Updated version of "Swish: a Self-Gated Activation Function" | null | cs.NE | 20171016 | 20171027 | 7 1 0 2
t c O 7 2 ] E N . s c [
2 v 1 4 9 5 0 . 0 1 7 1 : v i X r a
# SEARCHING FOR ACTIVATION FUNCTIONS
# Prajit Ramachandranâ, Barret Zoph, Quoc V. Le Google Brain {prajit,barretzoph,qvl}@google.com
# ABSTRACT
The choice of activation functions in deep networks has a signiï¬cant effect on the training dynamics and task performance. Currently, the most successful and widely-used activation function is the Rectiï¬ed Linear Unit (ReLU). Although various hand-designed alternatives to ReLU have been proposed, none have man- aged to replace it due to inconsistent gains. In this work, we propose to lever- age automatic search techniques to discover new activation functions. Using a combination of exhaustive and reinforcement learning-based search, we dis- cover multiple novel activation functions. We verify the effectiveness of the searches by conducting an empirical evaluation with the best discovered activa- tion function. Our experiments show that the best discovered activation function, f (x) = x · sigmoid(βx), which we name Swish, tends to work better than ReLU on deeper models across a number of challenging datasets. For example, simply replacing ReLUs with Swish units improves top-1 classiï¬cation accuracy on Im- ageNet by 0.9% for Mobile NASNet-A and 0.6% for Inception-ResNet-v2. The simplicity of Swish and its similarity to ReLU make it easy for practitioners to replace ReLUs with Swish units in any neural network.
# 1 INTRODUCTION
At the heart of every deep network lies a linear transformation followed by an activation func- tion f (·). The activation function plays a major role in the success of training deep neural net- works. Currently, the most successful and widely-used activation function is the Rectiï¬ed Lin- ear Unit (ReLU) (Hahnloser et al., 2000; Jarrett et al., 2009; Nair & Hinton, 2010), deï¬ned as f (x) = max(x, 0). The use of ReLUs was a breakthrough that enabled the fully supervised training of state-of-the-art deep networks (Krizhevsky et al., 2012). Deep networks with ReLUs are more easily optimized than networks with sigmoid or tanh units, because gradients are able to ï¬ow when the input to the ReLU function is positive. Thanks to its simplicity and effectiveness, ReLU has become the default activation function used across the deep learning community.
While numerous activation functions have been proposed to replace ReLU (Maas et al., 2013; He et al., 2015; Clevert et al., 2015; Klambauer et al., 2017), none have managed to gain the widespread adoption that ReLU enjoys. Many practitioners have favored the simplicity and reliability of ReLU because the performance improvements of the other activation functions tend to be inconsistent across different models and datasets.
The activation functions proposed to replace ReLU were hand-designed to ï¬t properties deemed to be important. However, the use of search techniques to automate the discovery of traditionally human-designed components has recently shown to be extremely effective (Zoph & Le, 2016; Bello et al., 2017; Zoph et al., 2017). For example, Zoph et al. (2017) used reinforcement learning- based search to ï¬nd a replicable convolutional cell that outperforms human-designed architectures on ImageNet.
In this work, we use automated search techniques to discover novel activation functions. We focus on ï¬nding new scalar activation functions, which take in as input a scalar and output a scalar, because scalar activation functions can be used to replace the ReLU function without changing the network architecture. Using a combination of exhaustive and reinforcement learning-based search, we ï¬nd a number of novel activation functions that show promising performance. To further validate the
âWork done as a member of the Google Brain Residency program (g.co/brainresidency).
effectiveness of using searches to discover scalar activation functions, we empirically evaluate the best discovered activation function. The best discovered activation function, which we call Swish, is f (x) = x · sigmoid(βx), where β is a constant or trainable parameter. Our extensive experiments show that Swish consistently matches or outperforms ReLU on deep networks applied to a variety of challenging domains such as image classiï¬cation and machine translation. On ImageNet, replac- ing ReLUs with Swish units improves top-1 classiï¬cation accuracy by 0.9% on Mobile NASNet-A (Zoph et al., 2017) and 0.6% on Inception-ResNet-v2 (Szegedy et al., 2017). These accuracy gains are signiï¬cant given that one year of architectural tuning and enlarging yielded 1.3% accuracy im- provement going from Inception V3 (Szegedy et al., 2016) to Inception-ResNet-v2 (Szegedy et al., 2017).
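For reference, Swish is a one-liner. The sketch below uses NumPy; β can either be a fixed constant (β = 1 here) or, in a network implementation, a trainable per-channel parameter.

```python
import numpy as np

def swish(x, beta=1.0):
    """Swish activation: f(x) = x * sigmoid(beta * x)."""
    return x * (1.0 / (1.0 + np.exp(-beta * x)))
```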
# 2 METHODS
In order to utilize search techniques, a search space that contains promising candidate activation functions must be designed. An important challenge in designing search spaces is balancing the size and expressivity of the search space. An overly constrained search space will not contain novel activation functions, whereas a search space that is too large will be difï¬cult to effectively search. To balance the two criteria, we design a simple search space inspired by the optimizer search space of Bello et al. (2017) that composes unary and binary functions to construct the activation function.
Unary Core unit Unary oe: Binary ââ* Unai oH Binary
Figure 1: An example activation function structure. The activation function is composed of multiple repetitions of the âcore unitâ, which consists of two inputs, two unary functions, and one binary function. Unary functions take in a single scalar input and return a single scalar output, such u(x) = x2 or u(x) = Ï(x). Binary functions take in two scalar inputs and return a single scalar output, such as b(x1, x2) = x1 · x2 or b(x1, x2) = exp(â(x1 â x2)2).
As shown in Figure 1, the activation function is constructed by repeatedly composing the the âcore unitâ, which is deï¬ned as b(u1(x1), u2(x2)). The core unit takes in two scalar inputs, passes each input independently through an unary function, and combines the two unary outputs with a binary function that outputs a scalar. Since our aim is to ï¬nd scalar activation functions which transform a single scalar input into a single scalar output, the inputs of the unary functions are restricted to the layer preactivation x and the binary function outputs.
Given the search space, the goal of the search algorithm is to find effective choices for the unary and binary functions. The choice of the search algorithm depends on the size of the search space. If the search space is small, such as when using a single core unit, it is possible to exhaustively enumerate the entire search space. If the core unit is repeated multiple times, the search space will be extremely large (i.e., on the order of 10¹² possibilities), making exhaustive search infeasible.
For large search spaces, we use an RNN controller (Zoph & Le, 2016), which is visualized in Figure 2. At each timestep, the controller predicts a single component of the activation function. The prediction is fed back to the controller in the next timestep, and this process is repeated until every component of the activation function is predicted. The predicted string is then used to construct the activation function.
Once a candidate activation function has been generated by the search algorithm, a âchild net- workâ with the candidate activation function is trained on some task, such as image classiï¬cation on CIFAR-10. After training, the validation accuracy of the child network is recorded and used
Figure 2: The RNN controller used to search over large spaces. At each step, it predicts a single component of the activation function. The prediction is fed back as input to the next timestep in an autoregressive fashion. The controller keeps predicting until every component of the activation function has been chosen. The controller is trained with reinforcement learning.
to update the search algorithm. In the case of exhaustive search, a list of the top performing acti- vation functions ordered by validation accuracy is maintained. In the case of the RNN controller, the controller is trained with reinforcement learning to maximize the validation accuracy, where the validation accuracy serves as the reward. This training pushes the controller to generate activation functions that have high validation accuracies.
Since evaluating a single activation function requires training a child network, the search is compu- tationally expensive. To decrease the wall clock time required to conduct the search, a distributed training scheme is used to parallelize the training of each child network. In this scheme, the search algorithm proposes a batch of candidate activation functions which are added to a queue. Worker machines pull activation functions off the queue, train a child network, and report back the ï¬nal val- idation accuracy of the corresponding activation function. The validation accuracies are aggregated and used to update the search algorithm.
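A minimal, single-machine sketch of this queue-based scheme is given below; it is our illustration only, with Python threads standing in for the worker machines and a random score standing in for actual child-network training.

```python
import queue
import random
import threading

def train_child_network(candidate):
    # Placeholder: in the real setup this would train ResNet-20 on CIFAR-10 with
    # `candidate` as the activation function and return its validation accuracy.
    return random.random()

def worker(tasks, results):
    while True:
        candidate = tasks.get()
        if candidate is None:            # sentinel: shut this worker down
            tasks.task_done()
            return
        results.append((candidate, train_child_network(candidate)))
        tasks.task_done()

tasks, results = queue.Queue(), []
workers = [threading.Thread(target=worker, args=(tasks, results)) for _ in range(4)]
for w in workers:
    w.start()

# The search algorithm proposes a batch of candidate activation functions ...
for candidate in ["x*sigmoid(x)", "max(x, sigmoid(x))", "cos(x) - x"]:
    tasks.put(candidate)
for _ in workers:                        # one sentinel per worker
    tasks.put(None)
tasks.join()

# ... and the aggregated validation accuracies are used to update the search.
print(sorted(results, key=lambda r: -r[1]))
```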
# 3 SEARCH FINDINGS
We conduct all our searches with the ResNet-20 (He et al., 2016a) as the child network architecture, and train on CIFAR-10 (Krizhevsky & Hinton, 2009) for 10K steps. This constrained environment could potentially skew the results because the top performing activation functions might only perform well for small networks. However, we show in the experiments section that many of the discovered functions generalize to larger models. Exhaustive search is used for small search spaces, while an RNN controller is used for larger search spaces. The RNN controller is trained with Proximal Policy Optimization (Schulman et al., 2017), using the exponential moving average of rewards as a baseline to reduce variance. The full list of unary and binary functions considered is as follows:
• Unary functions: x, −x, |x|, x², x³, √x, βx, x + β, log(|x| + ε), exp(x), sin(x), cos(x), sinh(x), cosh(x), tanh(x), sinh⁻¹(x), tan⁻¹(x), sinc(x), max(x, 0), min(x, 0), σ(x), log(1 + exp(x)), exp(−x²), erf(x), β

• Binary functions: x₁ + x₂, x₁ · x₂, x₁ − x₂, x₁ / (x₂ + ε), max(x₁, x₂), min(x₁, x₂), σ(x₁) · x₂, exp(−β(x₁ − x₂)²), exp(−β|x₁ − x₂|), βx₁ + (1 − β)x₂
where β indicates a per-channel trainable parameter and σ(x) = (1 + exp(−x))⁻¹ is the sigmoid function. Different search spaces are created by varying the number of core units used to construct the activation function and varying the unary and binary functions available to the search algorithm.
Figure 3 plots the top performing novel activation functions found by the searches. We highlight several noteworthy trends uncovered by the searches:
Figure 3: The top novel activation functions found by the searches. Separated into two diagrams for visual clarity. Best viewed in color.
⢠Complicated activation functions consistently underperform simpler activation functions, potentially due to an increased difï¬culty in optimization. The best performing activation functions can be represented by 1 or 2 core units.
⢠A common structure shared by the top activation functions is the use of the raw preactiva- tion x as input to the ï¬nal binary function: b(x, g(x)). The ReLU function also follows this structure, where b(x1, x2) = max(x1, x2) and g(x) = 0.
⢠The searches discovered activation functions that utilize periodic functions, such as sin and cos. The most common use of periodic functions is through addition or subtraction with the raw preactivation x (or a linearly scaled x). The use of periodic functions in activation functions has only been brieï¬y explored in prior work (Parascandolo et al., 2016), so these discovered functions suggest a fruitful route for further research.
⢠Functions that use division tend to perform poorly because the output explodes when the denominator is near 0. Division is successful only when functions in the denominator are either bounded away from 0, such as cosh(x), or approach 0 only when the numerator also approaches 0, producing an output of 1.
Since the activation functions were found using a relatively small child network, their performance may not generalize when applied to bigger models. To test the robustness of the top performing novel activation functions to different architectures, we run additional experiments using the preactivation ResNet-164 (RN) (He et al., 2016b), Wide ResNet 28-10 (WRN) (Zagoruyko & Komodakis, 2016), and DenseNet 100-12 (DN) (Huang et al., 2017) models. We implement the 3 models in TensorFlow and replace the ReLU function with each of the top novel activation functions discovered by the searches. We use the same hyperparameters described in each work, such as optimizing using SGD with momentum, and follow previous works by reporting the median of 5 different runs.
Table 1: CIFAR-10 accuracy.

Function             RN     WRN    DN
ReLU [max(x, 0)]     93.8   95.3   94.8
x · σ(βx)            94.5   95.5   94.9
max(x, σ(x))         94.3   95.3   94.8
cos(x) − x           94.1   94.8   94.6
min(x, sin(x))       94.0   95.1   94.4
(tan⁻¹(x))² − x      93.9   94.7   94.9
max(x, tanh(x))      93.9   94.2   94.5
sinc(x) + x          91.5   92.1   92.0
x · (sinh⁻¹(x))²     85.1   92.1   91.1

Table 2: CIFAR-100 accuracy.

Function             RN     WRN    DN
ReLU [max(x, 0)]     74.2   77.8   83.7
x · σ(βx)            75.1   78.0   83.9
max(x, σ(x))         74.8   78.6   84.2
cos(x) − x           75.2   76.6   81.8
min(x, sin(x))       73.4   77.1   74.3
(tan⁻¹(x))² − x      75.2   76.7   83.1
max(x, tanh(x))      74.8   76.0   78.6
sinc(x) + x          66.1   68.3   67.9
x · (sinh⁻¹(x))²     52.8   70.6   68.1
The results are shown in Tables 1 and 2. Despite the changes in model architecture, six of the eight activation functions successfully generalize. Of these six activation functions, all match or outperform ReLU on ResNet-164. Furthermore, two of the discovered activation functions, x·Ï(βx) and max(x, Ï(x)), consistently match or outperform ReLU on all three models.
While these results are promising, it is still unclear whether the discovered activation functions can successfully replace ReLU on challenging real world datasets. In order to validate the effectiveness of the searches, in the rest of this work we focus on empirically evaluating the activation function f(x) = x · σ(βx), which we call Swish. We choose to extensively evaluate Swish instead of max(x, σ(x)) because early experimentation showed better generalization for Swish. In the following sections, we analyze the properties of Swish and then conduct a thorough empirical evaluation comparing Swish, ReLU, and other candidate baseline activation functions on a number of large models across a variety of tasks.
# 4 SWISH
To recap, Swish is defined as x · σ(βx), where σ(z) = (1 + exp(−z))⁻¹ is the sigmoid function and β is either a constant or a trainable parameter. Figure 4 plots the graph of Swish for different values of β. If β = 1, Swish is equivalent to the Sigmoid-weighted Linear Unit (SiL) of Elfwing et al. (2017) that was proposed for reinforcement learning. If β = 0, Swish becomes the scaled linear function f(x) = x/2. As β → ∞, the sigmoid component approaches a 0-1 function, so Swish becomes like the ReLU function. This suggests that Swish can be loosely viewed as a smooth function which nonlinearly interpolates between the linear function and the ReLU function. The degree of interpolation can be controlled by the model if β is set as a trainable parameter.
Figure 4: The Swish activation function.
Figure 5: First derivatives of Swish.
Like ReLU, Swish is unbounded above and bounded below. Unlike ReLU, Swish is smooth and non- monotonic. In fact, the non-monotonicity property of Swish distinguishes itself from most common activation functions. The derivative of Swish is
f′(x) = σ(βx) + βx · σ(βx)(1 − σ(βx))
      = σ(βx) + βx · σ(βx) − βx · σ(βx)²
      = βx · σ(βx) + σ(βx)(1 − βx · σ(βx))
      = βf(x) + σ(βx)(1 − βf(x))
The ï¬rst derivative of Swish is shown in Figure 5 for different values of β. The scale of β controls how fast the ï¬rst derivative asymptotes to 0 and 1. When β = 1, the derivative has magnitude less than 1 for inputs that are less than around 1.25. Thus, the success of Swish with β = 1 implies that the gradient preserving property of ReLU (i.e., having a derivative of 1 when x > 0) may no longer be a distinct advantage in modern architectures.
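As a sanity check, the short script below (ours) verifies numerically that the closed-form derivative f′(x) = βf(x) + σ(βx)(1 − βf(x)) derived above matches a finite-difference estimate of d/dx [x · σ(βx)].

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def swish(x, beta=1.0):
    return x * sigmoid(beta * x)

def swish_grad(x, beta=1.0):
    # Closed form from above: f'(x) = beta*f(x) + sigmoid(beta*x)*(1 - beta*f(x))
    f = swish(x, beta)
    return beta * f + sigmoid(beta * x) * (1.0 - beta * f)

x, eps = np.linspace(-5.0, 5.0, 101), 1e-5
finite_diff = (swish(x + eps) - swish(x - eps)) / (2 * eps)
print(np.max(np.abs(finite_diff - swish_grad(x))))   # should be tiny (~1e-9 or less)
```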
The most striking difference between Swish and ReLU is the non-monotonic "bump" of Swish when x < 0. As shown in Figure 6, a large percentage of preactivations fall inside the domain of the bump (−5 ≤ x ≤ 0), which indicates that the non-monotonic bump is an important aspect of Swish. The shape of the bump can be controlled by changing the β parameter. While fixing β = 1 is effective in practice, the experiments section shows that training β can further improve performance on some models. Figure 7 plots the distribution of trained β values from a Mobile NASNet-A model (Zoph et al., 2017). The trained β values are spread out between 0 and 1.5 and have a peak at β ≈ 1, suggesting that the model takes advantage of the additional flexibility of trainable β parameters.
Figure 6: Preactivation distribution after training of Swish with β = 1 on ResNet-32.
Figure 7: Distribution of trained β values of Swish on Mobile NASNet-A.
Practically, Swish can be implemented with a single line code change in most deep learning libraries, such as TensorFlow (Abadi et al., 2016) (e.g., x * tf.sigmoid(beta * x) or tf.nn.swish(x) if using a version of TensorFlow released after the submission of this work). As a cautionary note, if BatchNorm (Ioffe & Szegedy, 2015) is used, the scale parameter should be set. Some high level libraries turn off the scale parameter by default due to the ReLU function being piecewise linear, but this setting is incorrect for Swish. For training Swish networks, we found that slightly lowering the learning rate used to train ReLU networks works well.
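For concreteness, a minimal Keras-style implementation with a per-channel trainable β is sketched below; this is our illustration assuming TensorFlow 2.x, not the exact code used in the experiments.

```python
import tensorflow as tf

class Swish(tf.keras.layers.Layer):
    """Swish activation x * sigmoid(beta * x) with an optionally trainable beta."""

    def __init__(self, trainable_beta=True, **kwargs):
        super().__init__(**kwargs)
        self.trainable_beta = trainable_beta

    def build(self, input_shape):
        # One beta per channel, initialized to 1.0 (i.e. Swish-1 at the start).
        self.beta = self.add_weight(
            name="beta", shape=(input_shape[-1],),
            initializer="ones", trainable=self.trainable_beta)

    def call(self, x):
        return x * tf.sigmoid(self.beta * x)

# Drop-in usage in place of a ReLU activation layer.
y = Swish()(tf.random.normal([2, 8]))
```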
# 5 EXPERIMENTS WITH SWISH
We benchmark Swish against ReLU and a number of recently proposed activation functions on challenging datasets, and ï¬nd that Swish matches or exceeds the baselines on nearly all tasks. The following sections will describe our experimental settings and results in greater detail. As a sum- mary, Table 3 shows Swish in comparison to each baseline activation function we considered (which are deï¬ned in the next section). The results in Table 3 are aggregated by comparing the performance of Swish to the performance of different activation functions applied to a variety of models, such as Inception ResNet-v2 (Szegedy et al., 2017) and Transformer (Vaswani et al., 2017), across multiple datasets, such as CIFAR, ImageNet, and EnglishâGerman translation.1 The improvement of Swish over other activation functions is statistically signiï¬cant under a one-sided paired sign test.
Baselines          ReLU   LReLU   PReLU   Softplus   ELU   SELU   GELU
Swish > Baseline      9      7       6        6        8      8      8
Swish = Baseline      0      1       3        2        0      1      1
Swish < Baseline      0      1       0        1        1      0      0
Table 3: The number of models on which Swish outperforms, is equivalent to, or underperforms each baseline activation function we compared against in our experiments.
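The counts in Table 3 can be turned into one-sided sign-test tail probabilities directly; the snippet below is our rough illustration of that computation (ties are discarded, and the null hypothesis is a fair coin over the decided comparisons).

```python
from math import comb

def sign_test_p(wins, losses):
    """P(#wins >= observed | fair coin) over the decided comparisons; ties dropped."""
    n = wins + losses
    return sum(comb(n, k) for k in range(wins, n + 1)) / 2 ** n

# (wins, losses) of Swish versus each baseline, read off Table 3.
for name, wins, losses in [("ReLU", 9, 0), ("LReLU", 7, 1), ("PReLU", 6, 0),
                           ("Softplus", 6, 1), ("ELU", 8, 1), ("SELU", 8, 0),
                           ("GELU", 8, 0)]:
    print(f"{name:9s} p = {sign_test_p(wins, losses):.4f}")
```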
5.1 EXPERIMENTAL SET UP
We compare Swish against several additional baseline activation functions on a variety of models and datasets. Since many activation functions have been proposed, we choose the most common activation functions to compare against, and follow the guidelines laid out in each work:
1To avoid skewing the comparison, each model type is compared just once. A model with multiple results is represented by the median of its results. Speciï¬cally, the models with aggregated results are (a) ResNet-164, Wide ResNet 28-10, and DenseNet 100-12 across the CIFAR-10 and CIFAR-100 results, (b) Mobile NASNet-A and Inception-ResNet-v2 across the 3 runs, and (c) WMT Transformer model across the 4 newstest results.
⢠Leaky ReLU (LReLU) (Maas et al., 2013):
# x
f (x) = if x ⥠0 αx if x < 0
where α = 0.01. LReLU enables a small amount of information to ï¬ow when x < 0.
⢠Parametric ReLU (PReLU) (He et al., 2015): The same form as LReLU but α is a learnable parameter. Each channel has a shared α which is initialized to 0.25.
⢠Softplus (Nair & Hinton, 2010): f (x) = log(1 + exp(x)). Softplus is a smooth function with properties similar to Swish, but is strictly positive and monotonic. It can be viewed as a smooth version of ReLU.
⢠Exponential Linear Unit (ELU) (Clevert et al., 2015):
# x
f (x) = α(exp(x) â 1) if x ⥠0 if x < 0
where α = 1.0
⢠Scaled Exponential Linear Unit (SELU) (Klambauer et al., 2017):
# x
f (x) = λ α(exp(x) â 1) if x ⥠0 if x < 0
with α â 1.6733 and λ â 1.0507.
⢠Gaussian Error Linear Unit (GELU) (Hendrycks & Gimpel, 2016): f (x) = x · Φ(x), where Φ(x) is the cumulative distribution function of the standard normal distribution. GELU is a nonmonotonic function that has a shape similar to Swish with β = 1.4.
We evaluate both Swish with a trainable β and Swish with a ï¬xed β = 1 (which for simplicity we call Swish-1, but it is equivalent to the Sigmoid-weighted Linear Unit of Elfwing et al. (2017)). Note that our results may not be directly comparable to the results in the corresponding works due to differences in our training setup.
# 5.2 CIFAR
We ï¬rst compare Swish to all the baseline activation functions on the CIFAR-10 and CIFAR-100 datasets (Krizhevsky & Hinton, 2009). We follow the same set up used when comparing the acti- vation functions discovered by the search techniques, and compare the median of 5 runs with the preactivation ResNet-164 (He et al., 2016b), Wide ResNet 28-10 (WRN) (Zagoruyko & Komodakis, 2016), and DenseNet 100-12 (Huang et al., 2017) models.
Table 4: CIFAR-10 accuracy.

Model      ResNet   WRN    DenseNet
LReLU       94.2    95.6    94.7
PReLU       94.1    95.1    94.5
Softplus    94.6    94.9    94.7
ELU         94.1    94.1    94.4
SELU        93.0    93.2    93.9
GELU        94.3    95.5    94.8
ReLU        93.8    95.3    94.8
Swish-1     94.7    95.5    94.8
Swish       94.5    95.5    94.8

Table 5: CIFAR-100 accuracy.

Model      ResNet   WRN    DenseNet
LReLU       74.2    78.0    83.3
PReLU       74.5    77.3    81.5
Softplus    76.0    78.4    83.7
ELU         75.0    76.0    80.6
SELU        73.2    74.3    80.8
GELU        74.7    78.0    83.8
ReLU        74.2    77.8    83.7
Swish-1     75.1    78.5    83.8
Swish       75.1    78.0    83.9
The results in Tables 4 and 5 show that Swish and Swish-1 consistently match or outperform ReLU on every model for both CIFAR-10 and CIFAR-100. Swish also matches or exceeds the best baseline performance on almost every model. Importantly, the "best baseline" changes between different models, which demonstrates the stability of Swish to match these varying baselines. Softplus, which is smooth and approaches zero on one side, similar to Swish, also has strong performance.
5.3 IMAGENET
Next, we benchmark Swish against the baseline activation functions on the ImageNet 2012 classification dataset (Russakovsky et al., 2015). ImageNet is widely considered one of the most important image classification datasets, consisting of 1,000 classes and 1.28 million training images. We evaluate on the validation dataset, which has 50,000 images.
We compare all the activation functions on a variety of architectures designed for ImageNet: Inception-ResNet-v2, Inception-v4, Inception-v3 (Szegedy et al., 2017), MobileNet (Howard et al., 2017), and Mobile NASNet-A (Zoph et al., 2017). All these architectures were designed with Re- LUs. We again replace the ReLU activation function with different activation functions and train for a ï¬xed number of steps, determined by the convergence of the ReLU baseline. For each activa- tion function, we try 3 different learning rates with RMSProp (Tieleman & Hinton, 2012) and pick the best.2 All networks are initialized with He initialization (He et al., 2015).3 To verify that the performance differences are reproducible, we run the Inception-ResNet-v2 and Mobile NASNet-A experiments 3 times with the best learning rate from the ï¬rst experiment. We plot the learning curves for Mobile NASNet-A in Figure 8.
Figure 8: Training curves of Mobile NASNet-A on ImageNet (train and validation accuracy over training steps). Best viewed in color.

Table 6: Mobile NASNet-A on ImageNet, with 3 different runs ordered by top-1 accuracy. The additional 2 GELU experiments are still training at the time of submission.

Model      Top-1 Acc. (%)       Top-5 Acc. (%)
LReLU      73.8   73.9   74.2   91.6   91.9   91.9
PReLU      74.6   74.7   74.7   92.4   92.3   92.3
Softplus   74.0   74.2   74.2   91.6   91.8   91.9
ELU        74.1   74.2   74.2   91.8   91.8   91.8
SELU       73.6   73.7   73.7   91.6   91.7   91.7
GELU       74.6   -      -      92.0   -      -
ReLU       73.5   73.6   73.8   91.4   91.5   91.6
Swish-1    74.6   74.7   74.7   92.1   92.0   92.0
Swish      74.9   74.9   75.2   92.3   92.4   92.4
Table 7: Inception-ResNet-v2 on ImageNet with 3 different runs. Note that the ELU sometimes has instabilities at the start of training, which accounts for the first result.

Model      Top-1 Acc. (%)       Top-5 Acc. (%)
LReLU      79.5   79.5   79.6   94.7   94.7   94.7
PReLU      79.7   79.8   80.1   94.8   94.9   94.9
Softplus   80.1   80.2   80.4   95.2   95.2   95.3
ELU        75.8   79.9   80.0   92.6   95.0   95.1
SELU       79.0   79.2   79.2   94.5   94.4   94.5
GELU       79.6   79.6   79.9   94.8   94.8   94.9
ReLU       79.5   79.6   79.8   94.8   94.8   94.8
Swish-1    80.2   80.3   80.4   95.1   95.2   95.2
Swish      80.2   80.2   80.3   95.0   95.2   95.0

Table 8: MobileNet on ImageNet.

Model      Top-1 Acc. (%)   Top-5 Acc. (%)
LReLU      72.5             91.0
PReLU      74.2             91.9
Softplus   73.6             91.6
ELU        73.9             91.3
SELU       73.2             91.0
GELU       73.5             91.4
ReLU       72.0             90.8
Swish-1    74.2             91.6
Swish      74.2             91.7
The results in Tables 6-10 show strong performance for Swish. On Inception-ResNet-v2, Swish outperforms ReLU by a nontrivial 0.5%. Swish performs especially well on mobile sized models,
2For some of the models with ELU, SELU, and PReLU, we train with an additional 3 learning rates (so a total of 6 learning rates) because the original 3 learning rates did not converge.
3For SELU, we tried both He initialization and the initialization recommended in Klambauer et al. (2017), and choose the best result for each model separately.
Table 9: Inception-v3 on ImageNet.

Model      Top-1 Acc. (%)   Top-5 Acc. (%)
LReLU      78.4             94.1
PReLU      77.7             93.5
Softplus   78.7             94.4
ELU        77.9             93.7
SELU       76.7             92.8
GELU       77.7             93.9
ReLU       78.4             94.2
Swish-1    78.7             94.2
Swish      78.7             94.0

Table 10: Inception-v4 on ImageNet.

Model      Top-1 Acc. (%)   Top-5 Acc. (%)
LReLU      79.3             94.7
PReLU      79.3             94.4
Softplus   79.6             94.8
ELU        79.5             94.5
SELU       78.3             94.5
GELU       79.0             94.6
ReLU       79.2             94.6
Swish-1    79.3             94.7
Swish      79.3             94.6
with a 1.4% boost on Mobile NASNet-A and a 2.2% boost on MobileNet over ReLU. Swish also matches or exceeds the best performing baseline on most models, where again, the best performing baseline differs depending on the model. Softplus achieves accuracies comparable to Swish on the larger models, but performs worse on both mobile sized models. For Inception-v4, the gains from switching between activation functions are more limited, and Swish slightly underperforms Softplus and ELU. In general, the results suggest that switching to Swish improves performance with little additional tuning.
5.4 MACHINE TRANSLATION
We additionally benchmark Swish on the domain of machine translation. We train machine translation models on the standard WMT 2014 English→German dataset, which has 4.5 million training sentences, and evaluate on 4 different newstest sets using the standard BLEU metric. We use the attention based Transformer (Vaswani et al., 2017) model, which utilizes ReLUs in a 2-layered feed-forward network between each attention layer. We train a 12 layer "Base Transformer" model with 2 different learning rates4 for 300K steps, but otherwise use the same hyperparameters as in the original work, such as using Adam (Kingma & Ba, 2015) to optimize.
Table 11: BLEU score of a 12 layer Transformer on WMT English→German.

Model      newstest2013   newstest2014   newstest2015   newstest2016
LReLU          26.2           27.9           29.8           33.4
PReLU          26.3           27.7           29.7           33.1
Softplus       23.4           23.6           25.8           29.2
ELU            24.6           25.1           27.7           32.5
SELU           23.7           23.5           25.9           30.5
GELU           25.9           27.3           29.5           33.1
ReLU           26.1           27.8           29.8           33.3
Swish-1        26.2           28.0           30.1           34.0
Swish          26.5           27.6           30.0           33.1
Table 11 shows that Swish outperforms or matches the other baselines on machine translation. Swish-1 does especially well on newstest2016, exceeding the next best performing baseline by 0.6 BLEU points. The worst performing baseline function is Softplus, demonstrating inconsistency in performance across differing domains. In contrast, Swish consistently performs well across multiple domains.
# 6 RELATED WORK
Swish was found using a variety of automated search techniques. Search techniques have been utilized in other works to discover convolutional and recurrent architectures (Zoph & Le, 2016;
4We tried an additional learning rate for Softplus, but found it did not work well across all learning rates.
Zoph et al., 2017; Real et al., 2017; Cai et al., 2017; Zhong et al., 2017) and optimizers (Bello et al., 2017). The use of search techniques to discover traditionally hand-designed components is an instance of the recently revived subï¬eld of meta-learning (Schmidhuber, 1987; Naik & Mammone, 1992; Thrun & Pratt, 2012). Meta-learning has been used to ï¬nd initializations for one-shot learning (Finn et al., 2017; Ravi & Larochelle, 2016), adaptable reinforcement learning (Wang et al., 2016; Duan et al., 2016), and generating model parameters (Ha et al., 2016). Meta-learning is powerful because the ï¬exibility derived from the minimal assumptions encoded leads to empirically effective solutions. We take advantage of this property in order to ï¬nd scalar activation functions, such as Swish, that have strong empirical performance.
While this work focuses on scalar activation functions, which transform one scalar to another scalar, there are many types of activation functions used in deep networks. Many-to-one functions, like max pooling, maxout (Goodfellow et al., 2013), and gating (Hochreiter & Schmidhuber, 1997; Srivastava et al., 2015; van den Oord et al., 2016; Dauphin et al., 2016; Wu et al., 2016; Miech et al., 2017), derive their power from combining multiple sources in a nonlinear way. One-to-many functions, like Concatenated ReLU (Shang et al., 2016), improve performance by applying multiple nonlinear functions to a single input. Finally, many-to-many functions, such as BatchNorm (Ioffe & Szegedy, 2015) and LayerNorm (Ba et al., 2016), induce powerful nonlinear relationships between their in- puts.
Most prior work has focused on proposing new activation functions (Maas et al., 2013; Agostinelli et al., 2014; He et al., 2015; Clevert et al., 2015; Hendrycks & Gimpel, 2016; Klambauer et al., 2017; Qiu & Cai, 2017; Zhou et al., 2017; Elfwing et al., 2017), but few studies, such as Xu et al. (2015), have systematically compared different activation functions. To the best of our knowledge, this is the ï¬rst study to compare scalar activation functions across multiple challenging datasets.
Our study shows that Swish consistently outperforms ReLU on deep models. The strong perfor- mance of Swish challenges conventional wisdom about ReLU. Hypotheses about the importance of the gradient preserving property of ReLU seem unnecessary when residual connections (He et al., 2016a) enable the optimization of very deep networks. A similar insight can be found in the fully at- tentional Transformer (Vaswani et al., 2017), where the intricately constructed LSTM cell (Hochre- iter & Schmidhuber, 1997) is no longer necessary when constant-length attentional connections are used. Architectural improvements lessen the need for individual components to preserve gradients.
# 7 CONCLUSION
In this work, we utilized automatic search techniques to discover novel activation functions that have strong empirical performance. We then empirically validated the best discovered activation function, which we call Swish and is deï¬ned as f (x) = x · sigmoid(βx). Our experiments used models and hyperparameters that were designed for ReLU and just replaced the ReLU activation function with Swish; even this simple, suboptimal procedure resulted in Swish consistently outperforming ReLU and other activation functions. We expect additional gains to be made when these models and hyperparameters are speciï¬cally designed with Swish in mind. The simplicity of Swish and its similarity to ReLU means that replacing ReLUs in any network is just a simple one line code change.
ACKNOWLEDGEMENTS
We thank Esteban Real, Geoffrey Hinton, Irwan Bello, Jascha Sohl-Dickstein, Jon Shlens, Kathryn Rough, Mohammad Norouzi, Navdeep Jaitly, Niki Parmar, Sam Smith, Simon Kornblith, Vijay Vasudevan, and the Google Brain team for help with this project.
# REFERENCES
Mart´ın Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al. Tensorï¬ow: A system for large-scale machine learning. In USENIX Symposium on Operating Systems Design and Implementation, volume 16, pp. 265â283, 2016.
Forest Agostinelli, Matthew Hoffman, Peter Sadowski, and Pierre Baldi. Learning activation functions to improve deep neural networks. arXiv preprint arXiv:1412.6830, 2014.
Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. In Advances in Neural Information Processing Systems, 2016.
Irwan Bello, Barret Zoph, Vijay Vasudevan, and Quoc V Le. Neural optimizer search with reinforcement learning. In International Conference on Machine Learning, pp. 459â468, 2017.
Han Cai, Tianyao Chen, Weinan Zhang, Yong Yu, and Jun Wang. Reinforcement learning for architecture search by network transformation. arXiv preprint arXiv:1707.04873, 2017.
Djork-Arn´e Clevert, Thomas Unterthiner, and Sepp Hochreiter. Fast and accurate deep network learning by exponential linear units (elus). arXiv preprint arXiv:1511.07289, 2015.
Yann N Dauphin, Angela Fan, Michael Auli, and David Grangier. Language modeling with gated convolutional networks. arXiv preprint arXiv:1612.08083, 2016.
Yan Duan, John Schulman, Xi Chen, Peter L Bartlett, Ilya Sutskever, and Pieter Abbeel. Rl2: Fast reinforce- ment learning via slow reinforcement learning. arXiv preprint arXiv:1611.02779, 2016.
Stefan Elfwing, Eiji Uchibe, and Kenji Doya. Sigmoid-weighted linear units for neural network function approximation in reinforcement learning. arXiv preprint arXiv:1702.03118, 2017.
Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. arXiv preprint arXiv:1703.03400, 2017.
Ian J Goodfellow, David Warde-Farley, Mehdi Mirza, Aaron Courville, and Yoshua Bengio. Maxout networks. In International Conference on Machine Learning, 2013.
David Ha, Andrew Dai, and Quoc V Le. Hypernetworks. arXiv preprint arXiv:1609.09106, 2016.
Richard HR Hahnloser, Rahul Sarpeshkar, Misha A Mahowald, Rodney J Douglas, and H Sebastian Seung. Digital selection and analogue ampliï¬cation coexist in a cortex-inspired silicon circuit. Nature, 405(6789): 947, 2000.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectiï¬ers: Surpassing human- level performance on imagenet classiï¬cation. In Proceedings of the IEEE international conference on com- puter vision, pp. 1026â1034, 2015.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770â778, 2016a.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. In European Conference on Computer Vision, pp. 630â645. Springer, 2016b.
Dan Hendrycks and Kevin Gimpel. Bridging nonlinearities and stochastic regularizers with gaussian error linear units. arXiv preprint arXiv:1606.08415, 2016.
Sepp Hochreiter and J¨urgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735â1780, 1997.
Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. Mobilenets: Efï¬cient convolutional neural networks for mobile vision ap- plications. arXiv preprint arXiv:1704.04861, 2017.
Gao Huang, Zhuang Liu, Kilian Q Weinberger, and Laurens van der Maaten. Densely connected convolutional networks. In Conference on Computer Vision and Pattern Recognition, 2017.
Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning, pp. 448â456, 2015.
Kevin Jarrett, Koray Kavukcuoglu, Yann LeCun, et al. What is the best multi-stage architecture for object recognition? In 2009 IEEE 12th International Conference on Computer Vision, 2009.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations, 2015.
G¨unter Klambauer, Thomas Unterthiner, Andreas Mayr, and Sepp Hochreiter. Self-normalizing neural net- works. arXiv preprint arXiv:1706.02515, 2017.
Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. Technical report, Technical report, University of Toronto, 2009.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classiï¬cation with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097â1105, 2012.
Andrew L Maas, Awni Y Hannun, and Andrew Y Ng. Rectiï¬er nonlinearities improve neural network acoustic models. In International Conference on Machine Learning, volume 30, 2013.
Antoine Miech, Ivan Laptev, and Josef Sivic. Learnable pooling with context gating for video classiï¬cation. arXiv preprint arXiv:1706.06905, 2017.
Devang K Naik and RJ Mammone. Meta-neural networks that learn by learning. In Neural Networks, 1992. IJCNN., International Joint Conference on, volume 1, pp. 437â442. IEEE, 1992.
Vinod Nair and Geoffrey E Hinton. Rectiï¬ed linear units improve restricted boltzmann machines. In Interna- tional Conference on Machine Learning, 2010.
Giambattista Parascandolo, Heikki Huttunen, and Tuomas Virtanen. Taming the waves: sine as activation function in deep neural networks. 2016.
Suo Qiu and Bolun Cai. Flexible rectiï¬ed linear units for improving convolutional neural networks. arXiv preprint arXiv:1706.08098, 2017.
Sachin Ravi and Hugo Larochelle. Optimization as a model for few-shot learning. 2016.
Esteban Real, Sherry Moore, Andrew Selle, Saurabh Saxena, Yutaka Leon Suematsu, Quoc Le, and Alex Kurakin. Large-scale evolution of image classiï¬ers. arXiv preprint arXiv:1703.01041, 2017.
Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, An- drej Karpathy, Aditya Khosla, Michael Bernstein, et al. Imagenet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211â252, 2015.
Jurgen Schmidhuber. Evolutionary principles in self-referential learning. On learning how to learn: The meta-meta-... hook.) Diploma thesis, Institut f. Informatik, Tech. Univ. Munich, 1987.
John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
Wenling Shang, Kihyuk Sohn, Diogo Almeida, and Honglak Lee. Understanding and improving convolutional neural networks via concatenated rectiï¬ed linear units. In International Conference on Machine Learning, pp. 2217â2225, 2016.
Rupesh Kumar Srivastava, Klaus Greff, and J¨urgen Schmidhuber. Highway networks. arXiv preprint arXiv:1505.00387, 2015.
Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Rethinking the incep- tion architecture for computer vision. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016.
Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, and Alexander A Alemi. Inception-v4, inception-resnet and the impact of residual connections on learning. In AAAI, pp. 4278â4284, 2017.
Sebastian Thrun and Lorien Pratt. Learning to learn. Springer Science & Business Media, 2012.
Tijmen Tieleman and Geoffrey Hinton. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural networks for machine learning, 4(2):26â31, 2012.
Aaron van den Oord, Nal Kalchbrenner, Lasse Espeholt, Oriol Vinyals, Alex Graves, et al. Conditional image generation with pixelcnn decoders. In Advances in Neural Information Processing Systems, pp. 4790â4798, 2016.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems, 2017.
Jane X Wang, Zeb Kurth-Nelson, Dhruva Tirumala, Hubert Soyer, Joel Z Leibo, Remi Munos, Charles arXiv preprint Blundell, Dharshan Kumaran, and Matt Botvinick. Learning to reinforcement learn. arXiv:1611.05763, 2016.
Yuhuai Wu, Saizheng Zhang, Ying Zhang, Yoshua Bengio, and Ruslan R Salakhutdinov. On multiplicative In Advances in Neural Information Processing Systems, pp. integration with recurrent neural networks. 2856â2864, 2016.
Bing Xu, Naiyan Wang, Tianqi Chen, and Mu Li. Empirical evaluation of rectiï¬ed activations in convolutional network. arXiv preprint arXiv:1505.00853, 2015.
Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. In British Machine Vision Conference, 2016.
Zhao Zhong, Junjie Yan, and Cheng-Lin Liu. Practical network blocks design with q-learning. arXiv preprint arXiv:1708.05552, 2017.
Guorui Zhou, Chengru Song, Xiaoqiang Zhu, Xiao Ma, Yanghui Yan, Xingya Dai, Han Zhu, Junqi Jin, Han Li, and Kun Gai. Deep interest network for click-through rate prediction. arXiv preprint arXiv:1706.06978, 2017.
Barret Zoph and Quoc V Le. Neural architecture search with reinforcement learning. In International Confer- ence on Learning Representations, 2016.
Barret Zoph, Vijay Vasudevan, Jonathon Shlens, and Quoc V Le. Learning transferable architectures for scal- able image recognition. arXiv preprint arXiv:1707.07012, 2017.
| {
"id": "1702.03118"
} |
1710.04087 | Word Translation Without Parallel Data | State-of-the-art methods for learning cross-lingual word embeddings have
relied on bilingual dictionaries or parallel corpora. Recent studies showed
that the need for parallel data supervision can be alleviated with
character-level information. While these methods showed encouraging results,
they are not on par with their supervised counterparts and are limited to pairs
of languages sharing a common alphabet. In this work, we show that we can build
a bilingual dictionary between two languages without using any parallel
corpora, by aligning monolingual word embedding spaces in an unsupervised way.
Without using any character information, our model even outperforms existing
supervised methods on cross-lingual tasks for some language pairs. Our
experiments demonstrate that our method works very well also for distant
language pairs, like English-Russian or English-Chinese. We finally describe
experiments on the English-Esperanto low-resource language pair, on which there
only exists a limited amount of parallel data, to show the potential impact of
our method in fully unsupervised machine translation. Our code, embeddings and
dictionaries are publicly available. | http://arxiv.org/pdf/1710.04087 | Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, Hervé Jégou | cs.CL | ICLR 2018 | null | cs.CL | 20171011 | 20180130 |
Published as a conference paper at ICLR 2018
# WORD TRANSLATION WITHOUT PARALLEL DATA
Alexis Conneau*†‡, Guillaume Lample*†§, Marc'Aurelio Ranzato†, Ludovic Denoyer§, Hervé Jégou† {aconneau,glample,ranzato,rvj}@fb.com ludovic.denoyer@upmc.fr
# ABSTRACT
State-of-the-art methods for learning cross-lingual word embeddings have relied on bilingual dictionaries or parallel corpora. Recent studies showed that the need for parallel data supervision can be alleviated with character-level information. While these methods showed encouraging results, they are not on par with their supervised counterparts and are limited to pairs of languages sharing a common alphabet. In this work, we show that we can build a bilingual dictionary between two languages without using any parallel corpora, by aligning monolingual word embedding spaces in an unsupervised way. Without using any character informa- tion, our model even outperforms existing supervised methods on cross-lingual tasks for some language pairs. Our experiments demonstrate that our method works very well also for distant language pairs, like English-Russian or English- Chinese. We ï¬nally describe experiments on the English-Esperanto low-resource language pair, on which there only exists a limited amount of parallel data, to show the potential impact of our method in fully unsupervised machine translation. Our code, embeddings and dictionaries are publicly available1.
# 1 INTRODUCTION
Most successful methods for learning distributed representations of words (e.g. Mikolov et al. (2013c;a); Pennington et al. (2014); Bojanowski et al. (2017)) rely on the distributional hypoth- esis of Harris (1954), which states that words occurring in similar contexts tend to have similar meanings. Levy & Goldberg (2014) show that the skip-gram with negative sampling method of Mikolov et al. (2013c) amounts to factorizing a word-context co-occurrence matrix, whose entries are the pointwise mutual information of the respective word and context pairs. Exploiting word co- occurrence statistics leads to word vectors that reï¬ect the semantic similarities and dissimilarities: similar words are close in the embedding space and conversely.
Mikolov et al. (2013b) ï¬rst noticed that continuous word embedding spaces exhibit similar structures across languages, even when considering distant language pairs like English and Vietnamese. They proposed to exploit this similarity by learning a linear mapping from a source to a target embedding space. They employed a parallel vocabulary of ï¬ve thousand words as anchor points to learn this mapping and evaluated their approach on a word translation task. Since then, several studies aimed at improving these cross-lingual word embeddings (Faruqui & Dyer (2014); Xing et al. (2015); Lazaridou et al. (2015); Ammar et al. (2016); Artetxe et al. (2016); Smith et al. (2017)), but they all rely on bilingual word lexicons.
Recent attempts at reducing the need for bilingual supervision (Smith et al., 2017) employ identical character strings to form a parallel vocabulary. The iterative method of Artetxe et al. (2017) gradu- ally aligns embedding spaces, starting from a parallel vocabulary of aligned digits. These methods are however limited to similar languages sharing a common alphabet, such as European languages. Some recent methods explored distribution-based approach (Cao et al., 2016) or adversarial training Zhang et al. (2017b) to obtain cross-lingual word embeddings without any parallel data. While these
*Equal contribution. Order has been determined with a coin flip. †Facebook AI Research ‡LIUM, University of Le Mans §Sorbonne Universités, UPMC Univ Paris 06, UMR 7606, LIP6 1https://github.com/facebookresearch/MUSE
approaches sound appealing, their performance is signiï¬cantly below supervised methods. To sum up, current methods have either not reached competitive performance, or they still require parallel data, such as aligned corpora (Gouws et al., 2015; Vulic & Moens, 2015) or a seed parallel lexicon (Duong et al., 2016).
In this paper, we introduce a model that either is on par, or outperforms supervised state-of-the-art methods, without employing any cross-lingual annotated data. We only use two large monolingual corpora, one in the source and one in the target language. Our method leverages adversarial training to learn a linear mapping from a source to a target space and operates in two steps. First, in a two- player game, a discriminator is trained to distinguish between the mapped source embeddings and the target embeddings, while the mapping (which can be seen as a generator) is jointly trained to fool the discriminator. Second, we extract a synthetic dictionary from the resulting shared embedding space and ï¬ne-tune the mapping with the closed-form Procrustes solution from Sch¨onemann (1966). Since the method is unsupervised, cross-lingual data can not be used to select the best model. To overcome this issue, we introduce an unsupervised selection metric that is highly correlated with the mapping quality and that we use both as a stopping criterion and to select the best hyper-parameters.
In summary, this paper makes the following main contributions:
⢠We present an unsupervised approach that reaches or outperforms state-of-the-art super- vised approaches on several language pairs and on three different evaluation tasks, namely word translation, sentence translation retrieval, and cross-lingual word similarity. On a standard word translation retrieval benchmark, using 200k vocabularies, our method reaches 66.2% accuracy on English-Italian while the best supervised approach is at 63.7%.
⢠We introduce a cross-domain similarity adaptation to mitigate the so-called hubness prob- lem (points tending to be nearest neighbors of many points in high-dimensional spaces). It is inspired by the self-tuning method from Zelnik-manor & Perona (2005), but adapted to our two-domain scenario in which we must consider a bi-partite graph for neighbors. This approach signiï¬cantly improves the absolute performance, and outperforms the state of the art both in supervised and unsupervised setups on word-translation benchmarks.
⢠We propose an unsupervised criterion that is highly correlated with the quality of the map- ping, that can be used both as a stopping criterion and to select the best hyper-parameters.
⢠We release high-quality dictionaries for 12 oriented languages pairs, as well as the corre- sponding supervised and unsupervised word embeddings.
⢠We demonstrate the effectiveness of our method using an example of a low-resource lan- guage pair where parallel corpora are not available (English-Esperanto) for which our method is particularly suited.
The paper is organized as follows. Section 2 describes our unsupervised approach with adversarial training and our reï¬nement procedure. We then present our training procedure with unsupervised model selection in Section 3. We report in Section 4 our results on several cross-lingual tasks for several language pairs and compare our approach to supervised methods. Finally, we explain how our approach differs from recent related work on learning cross-lingual word embeddings.
# 2 MODEL
In this paper, we always assume that we have two sets of embeddings trained independently on monolingual data. Our work focuses on learning a mapping between the two sets such that transla- tions are close in the shared space. Mikolov et al. (2013b) show that they can exploit the similarities of monolingual embedding spaces to learn such a mapping. For this purpose, they use a known dictionary of n = 5000 pairs of words {xi, yi}iâ{1,n}, and learn a linear mapping W between the source and the target space such that
W* = argmin_{W ∈ Md(R)} ||W X − Y||_F        (1)
where d is the dimension of the embeddings, Md(R) is the space of d à d matrices of real numbers, and X and Y are two aligned matrices of size d à n containing the embeddings of the words in the parallel vocabulary. The translation t of any source word s is deï¬ned as t = argmaxt cos(W xs, yt).
Figure 1: Toy illustration of the method. (A) There are two distributions of word embeddings, English words in red denoted by X and Italian words in blue denoted by Y , which we want to align/translate. Each dot represents a word in that space. The size of the dot is proportional to the frequency of the words in the training corpus of that language. (B) Using adversarial learning, we learn a rotation matrix W which roughly aligns the two distributions. The green stars are randomly selected words that are fed to the discriminator to determine whether the two word embeddings come from the same distribution. (C) The mapping W is further reï¬ned via Procrustes. This method uses frequent words aligned by the previous step as anchor points, and minimizes an energy function that corresponds to a spring system between anchor points. The reï¬ned mapping is then used to map all words in the dictionary. (D) Finally, we translate by using the mapping W and a distance metric, dubbed CSLS, that expands the space where there is high density of points (like the area around the word âcatâ), so that âhubsâ (like the word âcatâ) become less close to other word vectors than they would otherwise (compare to the same region in panel (A)).
In practice, Mikolov et al. (2013b) obtained better results on the word translation task using a simple linear mapping, and did not observe any improvement when using more advanced strategies like multilayer neural networks. Xing et al. (2015) showed that these results are improved by enforcing an orthogonality constraint on W. In that case, equation (1) boils down to the Procrustes problem, which advantageously offers a closed form solution obtained from the singular value decomposition (SVD) of Y X^T:

W* = argmin_{W ∈ Od(R)} ||W X − Y||_F = U V^T, with U Σ V^T = SVD(Y X^T).        (2)
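A compact NumPy sketch of this closed-form solution is given below; it is our illustration with synthetic vectors, not the authors' implementation, and simply checks that the Procrustes formula recovers a known orthogonal mapping.

```python
import numpy as np

def procrustes(X, Y):
    """Closed-form solution of Eq. (2): W* = U V^T with U S V^T = SVD(Y X^T)."""
    U, _, Vt = np.linalg.svd(Y @ X.T)
    return U @ Vt

rng = np.random.default_rng(0)
d, n = 300, 1000
X = rng.normal(size=(d, n))                   # columns: source word vectors
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))  # a random "true" orthogonal mapping
Y = Q @ X                                     # columns: paired target word vectors
W = procrustes(X, Y)
print(np.allclose(W, Q))                      # True: the mapping is recovered
```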
In this paper, we show how to learn this mapping W without cross-lingual supervision; an illustration of the approach is given in Fig. 1. First, we learn an initial proxy of W by using an adversarial criterion. Then, we use the words that match the best as anchor points for Procrustes. Finally, we improve performance over less frequent words by changing the metric of the space, which leads to spread more of those points in dense regions. Next, we describe the details of each of these steps.
2.1 DOMAIN-ADVERSARIAL SETTING
In this section, we present our domain-adversarial approach for learning W without cross-lingual supervision. Let X = {x1, ..., xn} and Y = {y1, ..., ym} be two sets of n and m word embeddings coming from a source and a target language respectively. A model is trained to discriminate between elements randomly sampled from W X = {W x1, ..., W xn} and Y. We call this model the discrim- inator. W is trained to prevent the discriminator from making accurate predictions. As a result, this is a two-player game, where the discriminator aims at maximizing its ability to identify the origin of an embedding, and W aims at preventing the discriminator from doing so by making W X and Y as similar as possible. This approach is in line with the work of Ganin et al. (2016), who proposed to learn latent representations invariant to the input domain, where in our case, a domain is represented by a language (source or target).
Discriminator objective We refer to the discriminator parameters as θ_D. We consider the probability P_{θ_D}(source = 1 | z) that a vector z is the mapping of a source embedding (as opposed to a target embedding) according to the discriminator. The discriminator loss can be written as:

L_D(θ_D | W) = −(1/n) Σ_{i=1}^{n} log P_{θ_D}(source = 1 | W x_i) − (1/m) Σ_{i=1}^{m} log P_{θ_D}(source = 0 | y_i).        (3)

Mapping objective In the unsupervised setting, W is now trained so that the discriminator is unable to accurately predict the embedding origins:

L_W(W | θ_D) = −(1/n) Σ_{i=1}^{n} log P_{θ_D}(source = 0 | W x_i) − (1/m) Σ_{i=1}^{m} log P_{θ_D}(source = 1 | y_i).        (4)
Learning algorithm To train our model, we follow the standard training procedure of deep ad- versarial networks of Goodfellow et al. (2014). For every input sample, the discriminator and the mapping matrix W are trained successively with stochastic gradient updates to respectively mini- mize LD and LW . The details of training are given in the next section.
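The sketch below (ours, heavily simplified) illustrates one such alternating update in PyTorch with random stand-in embeddings; the discriminator architecture, label smoothing, and input dropout used in the actual experiments are described in Section 3 and are omitted or simplified here.

```python
import torch
import torch.nn as nn

d, batch = 300, 32
W = nn.Linear(d, d, bias=False)                        # the mapping to be learned
disc = nn.Sequential(nn.Linear(d, 2048), nn.LeakyReLU(0.2),
                     nn.Linear(2048, 2048), nn.LeakyReLU(0.2),
                     nn.Linear(2048, 1))               # logits for P(source = 1 | z)
bce = nn.BCEWithLogitsLoss()
opt_d = torch.optim.SGD(disc.parameters(), lr=0.1)
opt_w = torch.optim.SGD(W.parameters(), lr=0.1)

x = torch.randn(batch, d)                              # sampled source embeddings
y = torch.randn(batch, d)                              # sampled target embeddings
labels = torch.cat([torch.ones(batch), torch.zeros(batch)])

# Discriminator step (Eq. 3): mapped source vectors are labeled 1, target vectors 0.
logits = torch.cat([disc(W(x).detach()), disc(y)]).squeeze(1)
loss_d = bce(logits, labels)
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Mapping step (Eq. 4): update W to fool the discriminator by flipping the labels.
logits = torch.cat([disc(W(x)), disc(y)]).squeeze(1)
loss_w = bce(logits, 1.0 - labels)
opt_w.zero_grad(); loss_w.backward(); opt_w.step()
```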
2.2 REFINEMENT PROCEDURE
The matrix W obtained with adversarial training gives good performance (see Table 1), but the results are still not on par with the supervised approach. In fact, the adversarial approach tries to align all words irrespective of their frequencies. However, rare words have embeddings that are less updated and are more likely to appear in different contexts in each corpus, which makes them harder to align. Under the assumption that the mapping is linear, it is then better to infer the global mapping using only the most frequent words as anchors. Besides, the accuracy on the most frequent word pairs is high after adversarial training.
To reï¬ne our mapping, we build a synthetic parallel vocabulary using the W just learned with ad- versarial training. Speciï¬cally, we consider the most frequent words and retain only mutual nearest neighbors to ensure a high-quality dictionary. Subsequently, we apply the Procrustes solution in (2) on this generated dictionary. Considering the improved solution generated with the Procrustes al- gorithm, it is possible to generate a more accurate dictionary and apply this method iteratively, similarly to Artetxe et al. (2017). However, given that the synthetic dictionary obtained using ad- versarial training is already strong, we only observe small improvements when doing more than one iteration, i.e., the improvements on the word translation task are usually below 1%.
2.3 CROSS-DOMAIN SIMILARITY LOCAL SCALING (CSLS)
In this subsection, our motivation is to produce reliable matching pairs between two languages: we want to improve the comparison metric such that the nearest neighbor of a source word, in the target language, is more likely to have as a nearest neighbor this particular source word.
Nearest neighbors are by nature asymmetric: y being a K-NN of x does not imply that x is a K-NN of y. In high-dimensional spaces (Radovanovi´c et al., 2010), this leads to a phenomenon that is detrimental to matching pairs based on a nearest neighbor rule: some vectors, dubbed hubs, are with high probability nearest neighbors of many other points, while others (anti-hubs) are not nearest neighbors of any point. This problem has been observed in different areas, from matching image features in vision (Jegou et al., 2010) to translating words in text understanding applications (Dinu et al., 2015). Various solutions have been proposed to mitigate this issue, some being reminiscent of pre-processing already existing in spectral clustering algorithms (Zelnik-manor & Perona, 2005).
However, most studies aiming at mitigating hubness consider a single feature distribution. In our case, we have two domains, one for each language. This particular case is taken into account by Dinu et al. (2015), who propose a pairing rule based on reverse ranks, and the inverted soft-max (ISF) by Smith et al. (2017), which we evaluate in our experimental section. These methods are not fully satisfactory because the similarity updates are different for the words of the source and target languages. Additionally, ISF requires to cross-validate a parameter, whose estimation is noisy in an unsupervised setting where we do not have a direct cross-validation criterion.
In contrast, we consider a bi-partite neighborhood graph, in which each word of a given dictionary is connected to its K nearest neighbors in the other language. We denote by NT(W xs) the neigh- borhood, on this bi-partite graph, associated with a mapped source word embedding W xs. All K elements of NT(W xs) are words from the target language. Similarly we denote by NS(yt) the neighborhood associated with a word t of the target language. We consider the mean similarity of a source embedding xs to its target neighborhood as
r_T(W x_s) = (1/K) Σ_{y_t ∈ N_T(W x_s)} cos(W x_s, y_t),        (5)

where cos(., .) is the cosine similarity. Likewise we denote by r_S(y_t) the mean similarity of a target word y_t to its neighborhood. These quantities are computed for all source and target word vectors with the efficient nearest neighbors implementation by Johnson et al. (2017). We use them to define a similarity measure CSLS(., .) between mapped source words and target words, as

CSLS(W x_s, y_t) = 2 cos(W x_s, y_t) − r_T(W x_s) − r_S(y_t).        (6)
Intuitively, this update increases the similarity associated with isolated word vectors. Conversely it decreases the ones of vectors lying in dense areas. Our experiments show that the CSLS signiï¬cantly increases the accuracy for word translation retrieval, while not requiring any parameter tuning.
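A direct NumPy transcription of Eqs. (5)-(6) is sketched below for small synthetic matrices (our illustration; the released implementation relies on the efficient nearest-neighbor library mentioned above). Vectors are assumed L2-normalized so dot products equal cosine similarities.

```python
import numpy as np

def csls_scores(Wx, Y, k=10):
    """Wx: (ns, d) mapped source vectors; Y: (nt, d) target vectors; Eq. (6) scores."""
    sims = Wx @ Y.T                                        # cosine similarities
    r_src = np.sort(sims, axis=1)[:, -k:].mean(axis=1)     # r_T(W x_s), Eq. (5)
    r_tgt = np.sort(sims, axis=0)[-k:, :].mean(axis=0)     # r_S(y_t)
    return 2 * sims - r_src[:, None] - r_tgt[None, :]

rng = np.random.default_rng(0)
Wx = rng.normal(size=(50, 300)); Wx /= np.linalg.norm(Wx, axis=1, keepdims=True)
Y  = rng.normal(size=(200, 300)); Y  /= np.linalg.norm(Y, axis=1, keepdims=True)
print(csls_scores(Wx, Y).argmax(axis=1)[:10])              # CSLS nearest neighbors
```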
# 3 TRAINING AND ARCHITECTURAL CHOICES
3.1 ARCHITECTURE
We use unsupervised word vectors that were trained using fastText2. These correspond to monolin- gual embeddings of dimension 300 trained on Wikipedia corpora; therefore, the mapping W has size 300 à 300. Words are lower-cased, and those that appear less than 5 times are discarded for training. As a post-processing step, we only select the ï¬rst 200k most frequent words in our experiments.
For our discriminator, we use a multilayer perceptron with two hidden layers of size 2048, and Leaky-ReLU activation functions. The input to the discriminator is corrupted with dropout noise with a rate of 0.1. As suggested by Goodfellow (2016), we include a smoothing coefï¬cient s = 0.2 in the discriminator predictions. We use stochastic gradient descent with a batch size of 32, a learning rate of 0.1 and a decay of 0.95 both for the discriminator and W . We divide the learning rate by 2 every time our unsupervised validation criterion decreases.
3.2 DISCRIMINATOR INPUTS
The embedding quality of rare words is generally not as good as the one of frequent words (Luong et al., 2013), and we observed that feeding the discriminator with rare words had a small, but not negligible negative impact. As a result, we only feed the discriminator with the 50,000 most frequent words. At each training step, the word embeddings given to the discriminator are sampled uniformly. Sampling them according to the word frequency did not have any noticeable impact on the results.
3.3 ORTHOGONALITY
Smith et al. (2017) showed that imposing an orthogonal constraint to the linear operator led to better performance. Using an orthogonal matrix has several advantages. First, it ensures that the monolingual quality of the embeddings is preserved. Indeed, an orthogonal matrix preserves the dot product of vectors, as well as their ℓ2 distances, and is therefore an isometry of the Euclidean space (such as a rotation). Moreover, it made the training procedure more stable in our experiments. In this work, we propose to use a simple update step to ensure that the matrix W stays close to an orthogonal matrix during training. Specifically, we alternate the update of our model with the following update rule on the matrix W:
W ← (1 + β)W − β(W W^T)W        (7)
where β = 0.01 is usually found to perform well. This method ensures that the matrix stays close to the manifold of orthogonal matrices after each update. In practice, we observe that the eigenvalues of our matrices all have a modulus close to 1, as expected.
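The snippet below (ours) illustrates Eq. (7) on a synthetic matrix and checks that repeated updates shrink the deviation of W W^T from the identity, i.e. pull W back toward the orthogonal manifold.

```python
import numpy as np

def orthogonalize_step(W, beta=0.01):
    return (1 + beta) * W - beta * (W @ W.T) @ W           # Eq. (7)

rng = np.random.default_rng(0)
d = 50
W = np.eye(d) + 0.01 * rng.normal(size=(d, d))             # slightly non-orthogonal
before = np.linalg.norm(W @ W.T - np.eye(d))
for _ in range(500):
    W = orthogonalize_step(W)
after = np.linalg.norm(W @ W.T - np.eye(d))
print(before, after)   # the deviation from orthogonality drops by orders of magnitude
```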
3.4 DICTIONARY GENERATION
The reï¬nement step requires to generate a new dictionary at each iteration. In order for the Procrustes solution to work well, it is best to apply it on correct word pairs. As a result, we use the CSLS method described in Section 2.3 to select more accurate translation pairs in the dictionary. To increase even more the quality of the dictionary, and ensure that W is learned from correct translation pairs, we only consider mutual nearest neighbors, i.e. pairs of words that are mutually nearest neighbors of each other according to CSLS. This signiï¬cantly decreases the size of the generated dictionary, but improves its accuracy, as well as the overall performance.
# 3.5 VALIDATION CRITERION FOR UNSUPERVISED MODEL SELECTION
Selecting the best model is a challenging, yet important task in the unsupervised setting, as it is not possible to use a validation set (using a validation set would mean that we possess parallel data). To
2Word vectors downloaded from: https://github.com/facebookresearch/fastText
Figure 2: Unsupervised model selection. Correlation between our unsupervised validation criterion (black line) and actual word translation accuracy (blue line). In this particular experiment, the selected model is at epoch 10. Observe how our criterion is well correlated with translation accuracy.
address this issue, we perform model selection using an unsupervised criterion that quantifies the closeness of the source and target embedding spaces. Specifically, we consider the 10k most frequent source words, and use CSLS to generate a translation for each of them. We then compute the average cosine similarity between these deemed translations, and use this average as a validation metric. We found that this simple criterion is better correlated with the performance on the evaluation tasks than optimal transport distances such as the Wasserstein distance (Rubner et al. (2000)). Figure 2 shows the correlation between the evaluation score and this unsupervised criterion (without stabilization by learning rate shrinkage). We use it as a stopping criterion during training, and also for hyper-parameter selection in all our experiments.
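The criterion itself is only a few lines on top of the csls_scores sketch given earlier; this version assumes the embedding rows are sorted by corpus frequency and unit-normalized.

```python
import numpy as np

def validation_criterion(mapped_src, tgt, n_words=10000, k=10):
    """Mean cosine similarity between frequent source words and their CSLS translations."""
    scores = csls_scores(mapped_src[:n_words], tgt, k)
    best = scores.argmax(axis=1)                             # CSLS translation of each source word
    cos = np.sum(mapped_src[:n_words] * tgt[best], axis=1)   # cosine, rows are unit-normalized
    return cos.mean()
```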
# 4 EXPERIMENTS
In this section, we empirically demonstrate the effectiveness of our unsupervised approach on several benchmarks, and compare it with state-of-the-art supervised methods. We first present the cross-lingual evaluation tasks that we consider to evaluate the quality of our cross-lingual word embeddings. Then, we present our baseline model. Last, we compare our unsupervised approach to our baseline and to previous methods. In the appendix, we offer a complementary analysis on the alignment of several sets of English embeddings trained with different methods and corpora.
# 4.1 EVALUATION TASKS
Word translation The task considers the problem of retrieving the translation of given source words. The problem with most available bilingual dictionaries is that they are generated using online tools like Google Translate, and do not take into account the polysemy of words. Failing to capture word polysemy in the vocabulary leads to a wrong evaluation of the quality of the word embedding space. Other dictionaries are generated using phrase tables of machine translation systems, but they are very noisy or trained on relatively small parallel corpora. For this task, we create high-quality
Table 1: Word translation retrieval P@1 for our released vocabularies in various language pairs (en-de, de-en, etc.). The rows compare methods with cross-lingual supervision and fastText embeddings (Procrustes - NN, Procrustes - ISF, Procrustes - CSLS) to methods without cross-lingual supervision and fastText embeddings (Adv - NN, Adv - CSLS, Adv - Refine - NN, Adv - Refine - CSLS). We consider 1,500 source test queries, and 200k target words for each language pair. We use fastText embeddings trained on Wikipedia. NN: nearest neighbors. ISF: inverted softmax. ("en" is English, "fr" is French, "de" is German, "ru" is Russian, "zh" is classical Chinese and "eo" is Esperanto.)
English to Italian (P@1 / P@5 / P@10) | Italian to English (P@1 / P@5 / P@10)

Methods with cross-lingual supervision (WaCky):
Mikolov et al. (2013b)†: 33.8 / 48.3 / 53.9 | 24.9 / 41.0 / 47.4
Dinu et al. (2015)†: 38.5 / 56.4 / 63.9 | 24.6 / 45.4 / 54.1
CCA†: 36.1 / 52.7 / 58.1 | 31.0 / 49.9 / 57.0
Artetxe et al. (2017): 39.7 / 54.7 / 60.5 | 33.8 / 52.4 / 59.1
Smith et al. (2017)†: 43.1 / 60.7 / 66.4 | 38.0 / 58.5 / 63.6
Procrustes - CSLS: 44.9 / 61.8 / 66.6 | 38.5 / 57.2 / 63.0

Methods without cross-lingual supervision (WaCky):
Adv - Refine - CSLS: 45.1 / 60.7 / 65.1 | 38.3 / 57.8 / 62.8

Methods with cross-lingual supervision (Wiki):
Procrustes - CSLS: 63.7 / 78.6 / 81.1 | 56.3 / 76.2 / 80.6

Methods without cross-lingual supervision (Wiki):
Adv - Refine - CSLS: 66.2 / 80.4 / 83.4 | 58.7 / 76.5 / 80.9

Table 2: English-Italian word translation average precisions (@1, @5, @10) from 1.5k source word queries using 200k target words. Results marked with the symbol † are from Smith et al. (2017). Wiki means the embeddings were trained on Wikipedia using fastText. Note that the method used by Artetxe et al. (2017) does not use the same supervision as other supervised methods, as they only use numbers in their initial parallel dictionary.
dictionaries of up to 100k pairs of words using an internal translation tool to alleviate this issue. We make these dictionaries publicly available as part of the MUSE library3.
We report results on these bilingual dictionaries, as well on those released by Dinu et al. (2015) to allow for a direct comparison with previous approaches. For each language pair, we consider 1,500 query source and 200k target words. Following standard practice, we measure how many times one of the correct translations of a source word is retrieved, and report precision@k for k = 1, 5, 10.
Cross-lingual semantic word similarity We also evaluate the quality of our cross-lingual word embeddings space using word similarity tasks. This task aims at evaluating how well the cosine similarity between two words of different languages correlates with a human-labeled score. We use the SemEval 2017 competition data (Camacho-Collados et al. (2017)) which provides large, high-quality and well-balanced datasets composed of nominal pairs that are manually scored according to a well-defined similarity scale. We report Pearson correlation.
Sentence translation retrieval Going from the word to the sentence level, we consider bag-of-words aggregation methods to perform sentence retrieval on the Europarl corpus. We consider 2,000 source sentence queries and 200k target sentences for each language pair and report the precision@k for k = 1, 5, 10, which accounts for the fraction of pairs for which the correct translation of the source words is in the k-th nearest neighbors. We use the idf-weighted average to merge word into sentence embeddings. The idf weights are obtained using other 300k sentences from Europarl.
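A sketch of this idf-weighted bag-of-words aggregation is shown below; the exact idf formula is not given above, so the log(N/df) variant and the normalization are assumptions.

```python
import numpy as np
from collections import Counter

def idf_weights(held_out_sentences):
    """idf estimated on a held-out corpus (a list of token lists)."""
    df = Counter(w for sent in held_out_sentences for w in set(sent))
    n = len(held_out_sentences)
    return {w: np.log(n / c) for w, c in df.items()}

def sentence_embedding(tokens, word_vec, idf):
    """idf-weighted average of word embeddings; out-of-vocabulary words are skipped."""
    vecs = [idf.get(w, 0.0) * word_vec[w] for w in tokens if w in word_vec]
    if not vecs:
        return None
    v = np.sum(vecs, axis=0)
    return v / (np.linalg.norm(v) + 1e-8)
```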
4.2 RESULTS AND DISCUSSION
In what follows, we present the results on word translation retrieval using our bilingual dictionaries in Table 1 and our comparison to previous work in Table 2 where we significantly outperform previous approaches. We also present results on the sentence translation retrieval task in Table 3 and the cross-lingual word similarity task in Table 4. Finally, we present results on word-by-word translation for English-Esperanto in Table 5.
Baselines In our experiments, we consider a supervised baseline that uses the solution of the Procrustes formula given in (2), and trained on a dictionary of 5,000 source words. This baseline can be combined with different similarity measures: NN for nearest neighbor similarity, ISF for Inverted SoftMax and the CSLS approach described in Section 2.3.
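For reference, the closed-form solution used by this baseline is the standard orthogonal Procrustes solution (Schönemann, 1966); a numpy sketch, with the paired source and target vectors stored as columns of X and Y:

```python
import numpy as np

def procrustes(X, Y):
    """W* = U V^T with U S V^T = SVD(Y X^T); minimizes ||W X - Y||_F over orthogonal W."""
    U, _, Vt = np.linalg.svd(Y @ X.T)
    return U @ Vt
```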
Cross-domain similarity local scaling This approach has a single parameter K defining the size of the neighborhood. The performance is very stable and therefore K does not need cross-validation: the results are essentially the same for K = 5, 10 and 50, therefore we set K = 10 in all experiments. In Table 1, we observe the impact of the similarity metric with the Procrustes supervised approach. Looking at the difference between Procrustes-NN and Procrustes-CSLS, one can see that CSLS
# 3https://github.com/facebookresearch/MUSE
English to Italian (P@1 / P@5 / P@10) | Italian to English (P@1 / P@5 / P@10)

Methods with cross-lingual supervision:
Mikolov et al. (2013b)†: 10.5 / 18.7 / 22.8 | 12.0 / 22.1 / 26.7
Dinu et al. (2015)†: 45.3 / 72.4 / 80.7 | 48.9 / 71.3 / 78.3
Smith et al. (2017)†: 54.6 / 72.7 / 78.2 | 42.9 / 62.2 / 69.2
Procrustes - NN: 42.6 / 54.7 / 59.0 | 53.5 / 65.5 / 69.5
Procrustes - CSLS: 66.1 / 77.1 / 80.7 | 69.5 / 79.6 / 83.5

Methods without cross-lingual supervision:
Adv - CSLS: 42.5 / 57.6 / 63.6 | 47.0 / 62.1 / 67.8
Adv - Refine - CSLS: 65.9 / 79.7 / 83.1 | 69.0 / 79.7 / 83.1

Table 3: English-Italian sentence translation retrieval. We report the average P@k from 2,000 source queries using 200,000 target sentences. We use the same embeddings as in Smith et al. (2017). Their results are marked with the symbol †.
provides a strong and robust gain in performance across all language pairs, with up to 7.2% in en-eo. We observe that Procrustes-CSLS is almost systematically better than Procrustes-ISF, while being computationally faster and not requiring hyper-parameter tuning. In Table 2, we compare our Procrustes-CSLS approach to previous models presented in Mikolov et al. (2013b); Dinu et al. (2015); Smith et al. (2017); Artetxe et al. (2017) on the English-Italian word translation task, on which state-of-the-art models have been already compared. We show that our Procrustes-CSLS approach obtains an accuracy of 44.9%, outperforming all previous approaches. In Table 3, we also obtain a strong gain in accuracy in the Italian-English sentence retrieval task using CSLS, from 53.5% to 69.5%, outperforming previous approaches by an absolute gain of more than 20%.
Impact of the monolingual embeddings For the word translation task, we obtained a significant boost in performance when considering fastText embeddings trained on Wikipedia, as opposed to previously used CBOW embeddings trained on the WaCky datasets (Baroni et al. (2009)), as can be seen in Table 2. Among the two factors of variation, we noticed that this boost in performance was mostly due to the change in corpora. The fastText embeddings, which incorporate more syntactic information about the words, obtained only two percent more accuracy compared to CBOW embeddings trained on the same corpus, out of the 18.8% gain. We hypothesize that this gain is due to the similar co-occurrence statistics of Wikipedia corpora. Figure 3 in the appendix shows results on the alignment of different monolingual embeddings and concurs with this hypothesis. We also obtained better results for monolingual evaluation tasks such as word similarities and word analogies when training our embeddings on the Wikipedia corpora.
Adversarial approach Table 1 shows that the adversarial approach provides a strong system for learning cross-lingual embeddings without parallel data. On the es-en and en-fr language pairs, Adv-CSLS obtains a P@1 of 79.7% and 77.8%, which is only 3.2% and 3.3% below the supervised approach. Additionally, we observe that most systems still obtain decent results on distant languages that do not share a common alphabet (en-ru and en-zh), for which methods exploiting identical character strings are just not applicable (Artetxe et al. (2017)). This method allows us to build a strong synthetic vocabulary using similarities obtained with CSLS. The gain in absolute accuracy observed with CSLS on the Procrustes method is even more important here, with differences between Adv-NN and Adv-CSLS of up to 8.4% on es-en. As a simple baseline, we tried to match the first two moments of the projected source and target embeddings, which amounts to solving W* = argmin_W ||(W X)^T (W X) − Y^T Y||_F and solving the sign ambiguity (Umeyama, 1988). This attempt was not successful, which we explain by the fact that this method tries to align only the first two moments, while adversarial training matches all the moments and can learn to focus on specific areas of the distributions instead of considering global statistics.
Refinement: closing the gap with supervised approaches The refinement step on the synthetic bilingual vocabulary constructed after adversarial training brings an additional and significant gain in performance, closing the gap between our approach and the supervised baseline. In Table 1, we observe that our unsupervised method even outperforms our strong supervised baseline on en-it and en-es, and is able to retrieve the correct translation of a source word with up to 83% accuracy. The better performance of the unsupervised approach can be explained by the strong similarity of co-occurrence statistics between the languages, and by the limitation in the supervised approach that uses a pre-defined fixed-size vocabulary (of 5,000 unique source words): in our case the refinement step can potentially use more anchor points. In Table 3, we also observe a strong gain in accuracy
SemEval 2017 (Pearson correlation): en-es / en-de / en-it

Methods with cross-lingual supervision:
NASARI: 0.64 / 0.60 / 0.65
our baseline: 0.72 / 0.72 / 0.71

Methods without cross-lingual supervision:
Adv: 0.69 / 0.70 / 0.67
Adv - Refine: 0.71 / 0.71 / 0.71

Table 4: Cross-lingual wordsim task. NASARI (Camacho-Collados et al. (2016)) refers to the official SemEval2017 baseline. We report Pearson correlation.
BLEU: en-eo / eo-en
Dictionary - NN: 6.1 / 11.9
Dictionary - CSLS: 11.1 / 14.3

Table 5: BLEU score on English-Esperanto. Although being a naive approach, word-by-word translation is enough to get a rough idea of the input sentence. The quality of the generated dictionary has a significant impact on the BLEU score.
(up to 15%) on sentence retrieval using bag-of-words embeddings, which is consistent with the gain observed on the word retrieval task.
Application to a low-resource language pair and to machine translation Our method is particularly suited for low-resource languages for which there only exists a very limited amount of parallel data. We apply it to the English-Esperanto language pair. We use the fastText embeddings trained on Wikipedia, and create a dictionary based on an online lexicon. The performance of our unsupervised approach on English-Esperanto is of 28.2%, compared to 29.3% with the supervised method. On Esperanto-English, our unsupervised approach obtains 25.6%, which is 1.3% better than the supervised method. The dictionary we use for that language pair does not take into account the polysemy of words, which explains why the results are lower than on other language pairs. People commonly report the P@5 to alleviate this issue. In particular, the P@5 for English-Esperanto and Esperanto-English is of 46.5% and 43.9% respectively.
To show the impact of such a dictionary on machine translation, we apply it to the English-Esperanto Tatoeba corpora (Tiedemann, 2012). We remove all pairs containing sentences with unknown words, resulting in about 60k pairs. Then, we translate sentences in both directions by doing word-by-word translation. In Table 5, we report the BLEU score with this method, when using a dictionary generated using nearest neighbors, and CSLS. With CSLS, this naive approach obtains 11.1 and 14.3 BLEU on English-Esperanto and Esperanto-English respectively. Table 6 in the appendix shows some examples of sentences in Esperanto translated into English using word-by-word translation. As one can see, the meaning is mostly conveyed in the translated sentences, but the translations contain some simple errors. For instance, "mi" is translated into "sorry" instead of "i", etc. The translations could easily be improved using a language model.
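The word-by-word decoding used here is deliberately naive; a sketch is given below (the pass-through behaviour for unknown words is our choice, since sentence pairs containing unknown words were removed above).

```python
def translate_word_by_word(sentence, dictionary):
    """Translate a tokenized sentence word by word with a bilingual dictionary."""
    return " ".join(dictionary.get(word, word) for word in sentence.split())
```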
# 5 RELATED WORK
Work on bilingual lexicon induction without parallel corpora has a long tradition, starting with the seminal works by Rapp (1995) and Fung (1995). Similar to our approach, they exploit the Harris (1954) distributional structure, but using discrete word representations such as TF-IDF vectors. Following studies by Fung & Yee (1998); Rapp (1999); Schafer & Yarowsky (2002); Koehn & Knight (2002); Haghighi et al. (2008); Irvine & Callison-Burch (2013) leverage statistical similarities between two languages to learn small dictionaries of a few hundred words. These methods need to be initialized with a seed bilingual lexicon, using for instance the edit distance between source and target words. This can be seen as prior knowledge, only available for closely related languages. There is also a large amount of studies in statistical decipherment, where the machine translation problem is reduced to a deciphering problem, and the source language is considered as a ciphertext (Ravi & Knight, 2011; Pourdamghani & Knight, 2017). Although initially not based on distributional semantics, recent studies show that the use of word embeddings can bring significant improvement in statistical decipherment (Dou et al., 2015).
The rise of distributed word embeddings has revived some of these approaches, now with the goal of aligning embedding spaces instead of just aligning vocabularies. Cross-lingual word embeddings can be used to extract bilingual lexicons by computing the nearest neighbor of a source word, but also allow other applications such as sentence retrieval or cross-lingual document classification (Klementiev et al., 2012). In general, they are used as building blocks for various cross-lingual language processing systems. More recently, several approaches have been proposed to learn bilingual dictionaries mapping from the source to the target space (Mikolov et al., 2013b; Zou et al., 2013; Faruqui
& Dyer, 2014; Ammar et al., 2016). In particular, Xing et al. (2015) showed that adding an orthogonality constraint to the mapping can significantly improve performance, and has a closed-form solution. This approach was further referred to as the Procrustes approach in Smith et al. (2017).
The hubness problem for cross-lingual word embedding spaces was investigated by Dinu et al. (2015). The authors added a correction to the word retrieval algorithm by incorporating a nearest neighbors reciprocity term. More similar to our cross-domain similarity local scaling approach, Smith et al. (2017) introduced the inverted-softmax to down-weight similarities involving often-retrieved hub words. Intuitively, given a query source word and a candidate target word, they estimate the probability that the candidate translates back to the query, rather than the probability that the query translates to the candidate.
Recent work by Smith et al. (2017) leveraged identical character strings in both source and target languages to create a dictionary with low supervision, on which they applied the Procrustes algorithm. Similar to this approach, recent work by Artetxe et al. (2017) used identical digits and numbers to form an initial seed dictionary, and performed an update similar to our refinement step, but iteratively until convergence. While they showed they could obtain good results using as little as twenty parallel words, their method still needs cross-lingual information and is not suitable for languages that do not share a common alphabet. For instance, the method of Artetxe et al. (2017) on our dataset does not work on the word translation task for any of the language pairs, because the digits were filtered out from the datasets used to train the fastText embeddings. This iterative EM-based algorithm initialized with a seed lexicon has also been explored in other studies (Haghighi et al., 2008; Kondrak et al., 2017).
There have been a few attempts to align monolingual word vector spaces with no supervision. Similar to our work, Zhang et al. (2017b) employed adversarial training, but their approach is different than ours in multiple ways. First, they rely on sharp drops of the discriminator accuracy for model selection. In our experiments, their model selection criterion does not correlate with the overall model performance, as shown in Figure 2. Furthermore, it does not allow for hyper-parameters tuning, since it selects the best model over a single experiment. We argue it is a serious limitation, since the best hyper-parameters vary significantly across language pairs. Despite considering small vocabularies of a few thousand words, their method obtained weak results compared to supervised approaches. More recently, Zhang et al. (2017a) proposed to minimize the earth-mover distance after adversarial training. They compare their results only to their supervised baseline trained with a small seed lexicon, which is one to two orders of magnitude smaller than what we report here.
# 6 CONCLUSION
In this work, we show for the first time that one can align word embedding spaces without any cross-lingual supervision, i.e., solely based on unaligned datasets of each language, while reaching or outperforming the quality of previous supervised approaches in several cases. Using adversarial training, we are able to initialize a linear mapping between a source and a target space, which we also use to produce a synthetic parallel dictionary. It is then possible to apply the same techniques proposed for supervised techniques, namely a Procrustean optimization. Two key ingredients contribute to the success of our approach: First we propose a simple criterion that is used as an effective unsupervised validation metric. Second we propose the similarity measure CSLS, which mitigates the hubness problem and drastically increases the word translation accuracy. As a result, our approach produces high-quality dictionaries between different pairs of languages, with up to 83.3% on the Spanish-English word translation task. This performance is on par with supervised approaches. Our method is also effective on the English-Esperanto pair, thereby showing that it works for low-resource language pairs, and can be used as a first step towards unsupervised machine translation.
# ACKNOWLEDGMENTS
We thank Juan Miguel Pino, Moustapha Cissé, Nicolas Usunier, Yann Ollivier, David Lopez-Paz, Alexandre Sablayrolles, and the FAIR team for useful comments and discussions.
REFERENCES

Waleed Ammar, George Mulcaire, Yulia Tsvetkov, Guillaume Lample, Chris Dyer, and Noah A Smith. Massively multilingual word embeddings. arXiv preprint arXiv:1602.01925, 2016.
Mikel Artetxe, Gorka Labaka, and Eneko Agirre. Learning principled bilingual mappings of word embeddings while preserving monolingual invariance. Proceedings of EMNLP, 2016.
Mikel Artetxe, Gorka Labaka, and Eneko Agirre. Learning bilingual word embeddings with (al- In Proceedings of the 55th Annual Meeting of the Association for most) no bilingual data. Computational Linguistics (Volume 1: Long Papers), pp. 451â462. Association for Computa- tional Linguistics, 2017.
Marco Baroni, Silvia Bernardini, Adriano Ferraresi, and Eros Zanchetta. The wacky wide web: a collection of very large linguistically processed web-crawled corpora. Language resources and evaluation, 43(3):209â226, 2009.
Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5: 135â146, 2017.
Jos´e Camacho-Collados, Mohammad Taher Pilehvar, and Roberto Navigli. Nasari: Integrating ex- plicit knowledge and corpus statistics for a multilingual representation of concepts and entities. Artiï¬cial Intelligence, 240:36â64, 2016.
Jose Camacho-Collados, Mohammad Taher Pilehvar, Nigel Collier, and Roberto Navigli. Semeval- 2017 task 2: Multilingual and cross-lingual semantic word similarity. Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval 2017), 2017.
Hailong Cao, Tiejun Zhao, Shu Zhang, and Yao Meng. A distribution-based model to learn bilingual word embeddings. Proceedings of COLING, 2016.
Moustapha Cisse, Piotr Bojanowski, Edouard Grave, Yann Dauphin, and Nicolas Usunier. Parseval networks: Improving robustness to adversarial examples. International Conference on Machine Learning, pp. 854â863, 2017.
Georgiana Dinu, Angeliki Lazaridou, and Marco Baroni. Improving zero-shot learning by mitigating the hubness problem. International Conference on Learning Representations, Workshop Track, 2015.
Qing Dou, Ashish Vaswani, Kevin Knight, and Chris Dyer. Unifying bayesian inference and vector space models for improved decipherment. 2015.
Long Duong, Hiroshi Kanayama, Tengfei Ma, Steven Bird, and Trevor Cohn. Learning crosslingual word embeddings without bilingual corpora. Proceedings of EMNLP, 2016.
Manaal Faruqui and Chris Dyer. Improving vector space word representations using multilingual correlation. Proceedings of EACL, 2014.
Pascale Fung. Compiling bilingual lexicon entries from a non-parallel english-chinese corpus. In Proceedings of the Third Workshop on Very Large Corpora, pp. 173â183, 1995.
Pascale Fung and Lo Yuen Yee. An ir approach for translating new words from nonparallel, compa- rable texts. In Proceedings of the 17th International Conference on Computational Linguistics - Volume 1, COLING â98, pp. 414â420. Association for Computational Linguistics, 1998.
Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, Franc¸ois Laviolette, Mario Marchand, and Victor Lempitsky. Domain-adversarial training of neural net- works. Journal of Machine Learning Research, 17(59):1â35, 2016.
Ian Goodfellow. Nips 2016 tutorial: Generative adversarial networks. arXiv preprint arXiv:1701.00160, 2016.
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. Advances in neural information processing systems, pp. 2672â2680, 2014.
Stephan Gouws, Yoshua Bengio, and Greg Corrado. Bilbowa: Fast bilingual distributed representa- tions without word alignments. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pp. 748â756, 2015.
Aria Haghighi, Percy Liang, Taylor Berg-Kirkpatrick, and Dan Klein. Learning bilingual lexicons from monolingual corpora. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics, 2008.
Zellig S Harris. Distributional structure. Word, 10(2-3):146â162, 1954.
Ann Irvine and Chris Callison-Burch. Supervised bilingual lexicon induction with multiple mono- lingual signals. In HLT-NAACL, 2013.
Herve Jegou, Cordelia Schmid, Hedi Harzallah, and Jakob Verbeek. Accurate image search us- ing the contextual dissimilarity measure. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(1):2â11, 2010.
Jeff Johnson, Matthijs Douze, and Herv´e J´egou. Billion-scale similarity search with gpus. arXiv preprint arXiv:1702.08734, 2017.
Alexandre Klementiev, Ivan Titov, and Binod Bhattarai. Inducing crosslingual distributed represen- tations of words. Proceedings of COLING, pp. 1459â1474, 2012.
Philipp Koehn and Kevin Knight. Learning a translation lexicon from monolingual corpora. In Proceedings of the ACL-02 workshop on Unsupervised lexical acquisition - Volume 9, pp. 9-16. Association for Computational Linguistics, 2002.
Grzegorz Kondrak, Bradley Hauer, and Garrett Nicolai. Bootstrapping unsupervised bilingual lexi- con induction. In EACL, 2017.
Angeliki Lazaridou, Georgiana Dinu, and Marco Baroni. Hubness and pollution: Delving into cross- space mapping for zero-shot learning. Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics, 2015.
Omer Levy and Yoav Goldberg. Neural word embedding as implicit matrix factorization. Advances in neural information processing systems, pp. 2177â2185, 2014.
Thang Luong, Richard Socher, and Christopher D Manning. Better word representations with re- cursive neural networks for morphology. CoNLL, pp. 104â113, 2013.
Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efï¬cient estimation of word represen- tations in vector space. Proceedings of Workshop at ICLR, 2013a.
Tomas Mikolov, Quoc V Le, and Ilya Sutskever. Exploiting similarities among languages for ma- chine translation. arXiv preprint arXiv:1309.4168, 2013b.
Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed representa- tions of words and phrases and their compositionality. Advances in neural information processing systems, pp. 3111â3119, 2013c.
Robert Parker, David Graff, Junbo Kong, Ke Chen, and Kazuaki Maeda. English gigaword. Linguistic Data Consortium, 2011.
Jeffrey Pennington, Richard Socher, and Christopher D Manning. Glove: Global vectors for word representation. Proceedings of EMNLP, 14:1532â1543, 2014.
N. Pourdamghani and K. Knight. Deciphering related languages. In EMNLP, 2017.
MiloËs Radovanovi´c, Alexandros Nanopoulos, and Mirjana Ivanovi´c. Hubs in space: Popular nearest neighbors in high-dimensional data. Journal of Machine Learning Research, 11(Sep):2487â2531, 2010.
Reinhard Rapp. Identifying word translations in non-parallel texts. In Proceedings of the 33rd Annual Meeting on Association for Computational Linguistics, ACL '95, pp. 320-322. Association for Computational Linguistics, 1995.
Reinhard Rapp. Automatic identiï¬cation of word translations from unrelated english and ger- man corpora. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics, ACL â99. Association for Computational Linguistics, 1999.
S. Ravi and K. Knight. Deciphering foreign language. In ACL, 2011.
Yossi Rubner, Carlo Tomasi, and Leonidas J Guibas. The earth moverâs distance as a metric for image retrieval. International journal of computer vision, 40(2):99â121, 2000.
Charles Schafer and David Yarowsky. Inducing translation lexicons via diverse similarity measures In Proceedings of the 6th Conference on Natural Language Learning - and bridge languages. Volume 20, COLING-02. Association for Computational Linguistics, 2002.
Peter H Sch¨onemann. A generalized solution of the orthogonal procrustes problem. Psychometrika, 31(1):1â10, 1966.
Samuel L Smith, David HP Turban, Steven Hamblin, and Nils Y Hammerla. Ofï¬ine bilingual word vectors, orthogonal transformations and the inverted softmax. International Conference on Learning Representations, 2017.
Jörg Tiedemann. Parallel data, tools and interfaces in OPUS. In Nicoletta Calzolari (Conference Chair), Khalid Choukri, Thierry Declerck, Mehmet Uur Doan, Bente Maegaard, Joseph Mariani, Asuncion Moreno, Jan Odijk, and Stelios Piperidis (eds.), Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12), Istanbul, Turkey, may 2012. European Language Resources Association (ELRA). ISBN 978-2-9517408-7-7.
Shinji Umeyama. An eigendecomposition approach to weighted graph matching problems. IEEE transactions on pattern analysis and machine intelligence, 10(5):695â703, 1988.
Ivan Vulic and Marie-Francine Moens. Bilingual word embeddings from non-parallel document- aligned data applied to bilingual lexicon induction. Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics (ACL 2015), pp. 719â725, 2015.
Chao Xing, Dong Wang, Chao Liu, and Yiye Lin. Normalized word embedding and orthogonal transform for bilingual word translation. Proceedings of NAACL, 2015.
Lihi Zelnik-manor and Pietro Perona. Self-tuning spectral clustering. In L. K. Saul, Y. Weiss, and L. Bottou (eds.), Advances in Neural Information Processing Systems 17, pp. 1601â1608. MIT Press, 2005.
Meng Zhang, Yang Liu, Huanbo Luan, and Maosong Sun. Earth moverâs distance minimization for unsupervised bilingual lexicon induction. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pp. 1924â1935. Association for Computational Lin- guistics, 2017a.
Meng Zhang, Yang Liu, Huanbo Luan, and Maosong Sun. Adversarial training for unsupervised bilingual lexicon induction. Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics, 2017b.
Will Y Zou, Richard Socher, Daniel M Cer, and Christopher D Manning. Bilingual word embed- dings for phrase-based machine translation. Proceedings of EMNLP, 2013.
# 7 APPENDIX
In order to gain a better understanding of the impact of using similar corpora or similar word embedding methods, we investigated merging two English monolingual embedding spaces using either Wikipedia or the Gigaword corpus (Parker et al. (2011)), and either Skip-Gram, CBOW or fastText methods (see Figure 3).
(a) skip-gram-seed1(Wiki) → skip-gram-seed2(Wiki)
(b) skip-gram(Wiki) → CBOW(Wiki)
(c) fastText(Wiki) → fastText(Giga) (d) skip-gram(Wiki) → fastText(Giga)
Figure 3: English to English word alignment accuracy. Evolution of word translation retrieval accuracy with regard to word frequency, using either Wikipedia (Wiki) or the Gigaword corpus (Giga), and either skip-gram, continuous bag-of-words (CBOW) or fastText embeddings. The model can learn to perfectly align embeddings trained on the same corpus but with different seeds (a), as well as embeddings learned using different models (overall, when employing CSLS which is more accurate on rare words) (b). However, the model has more trouble aligning embeddings trained on different corpora (Wikipedia and Gigaword) (c). This can be explained by the difference in co-occurrence statistics of the two corpora, particularly on the rarer words. Performance can be further deteriorated by using both different models and different types of corpus (d).
Source: mi kelkfoje parolas kun mia najbaro tra la barilo .
Word-by-word: sorry sometimes speaks with my neighbor across the barrier .
Reference: i sometimes talk to my neighbor across the fence .

Source: la viro malantaŭ ili ludas la pianon .
Word-by-word: the man behind they plays the piano .
Reference: the man behind them is playing the piano .

Source: bonvole protektu min kontraŭ tiuj malbonaj viroj .
Word-by-word: gratefully protects hi against those worst men .
Reference: please defend me from such bad men .

Table 6: Esperanto-English. Examples of fully unsupervised word-by-word translations. The translations reflect the meaning of the source sentences, and could potentially be improved using a simple language model.
| {
"id": "1701.00160"
} |
1710.03740 | Mixed Precision Training | Deep neural networks have enabled progress in a wide variety of applications.
Growing the size of the neural network typically results in improved accuracy.
As model sizes grow, the memory and compute requirements for training these
models also increases. We introduce a technique to train deep neural networks
using half precision floating point numbers. In our technique, weights,
activations and gradients are stored in IEEE half-precision format.
Half-precision floating numbers have limited numerical range compared to
single-precision numbers. We propose two techniques to handle this loss of
information. Firstly, we recommend maintaining a single-precision copy of the
weights that accumulates the gradients after each optimizer step. This
single-precision copy is rounded to half-precision format during training.
Secondly, we propose scaling the loss appropriately to handle the loss of
information with half-precision gradients. We demonstrate that this approach
works for a wide variety of models including convolution neural networks,
recurrent neural networks and generative adversarial networks. This technique
works for large scale models with more than 100 million parameters trained on
large datasets. Using this approach, we can reduce the memory consumption of
deep learning models by nearly 2x. In future processors, we can also expect a
significant computation speedup using half-precision hardware units. | http://arxiv.org/pdf/1710.03740 | Paulius Micikevicius, Sharan Narang, Jonah Alben, Gregory Diamos, Erich Elsen, David Garcia, Boris Ginsburg, Michael Houston, Oleksii Kuchaiev, Ganesh Venkatesh, Hao Wu | cs.AI, cs.LG, stat.ML | Published as a conference paper at ICLR 2018 | null | cs.AI | 20171010 | 20180215 |

arXiv:1710.03740v3 [cs.AI] 15 Feb 2018
# MIXED PRECISION TRAINING
# Sharan Narangâ, Gregory Diamos, Erich Elsenâ Baidu Research {sharan, gdiamos}@baidu.com
Paulius Micikeviciusâ, Jonah Alben, David Garcia, Boris Ginsburg, Michael Houston, Oleksii Kuchaiev, Ganesh Venkatesh, Hao Wu NVIDIA {pauliusm, alben, dagarcia, bginsburg, mhouston,
okuchaiev, gavenkatesh, skyw}@nvidia.com
# ABSTRACT
Increasing the size of a neural network typically improves accuracy but also increases the memory and compute requirements for training the model. We introduce methodology for training deep neural networks using half-precision floating point numbers, without losing model accuracy or having to modify hyper-parameters. This nearly halves memory requirements and, on recent GPUs, speeds up arithmetic. Weights, activations, and gradients are stored in IEEE half-precision format. Since this format has a narrower range than single-precision we propose three techniques for preventing the loss of critical information. Firstly, we recommend maintaining a single-precision copy of weights that accumulates the gradients after each optimizer step (this copy is rounded to half-precision for the forward- and back-propagation). Secondly, we propose loss-scaling to preserve gradient values with small magnitudes. Thirdly, we use half-precision arithmetic that accumulates into single-precision outputs, which are converted to half-precision before storing to memory. We demonstrate that the proposed methodology works across a wide variety of tasks and modern large scale (exceeding 100 million parameters) model architectures, trained on large datasets.
# 1 INTRODUCTION
Deep Learning has enabled progress in many different applications, ranging from image recognition (He et al., 2016a) to language modeling (Jozefowicz et al., 2016) to machine translation (Wu et al., 2016) and speech recognition (Amodei et al., 2016). Two trends have been critical to these results - increasingly large training data sets and increasingly complex models. For example, the neural network used in Hannun et al. (2014) had 11 million parameters which grew to approximately 67 million for bidirectional RNNs and further to 116 million for the latest forward only Gated Recurrent Unit (GRU) models in Amodei et al. (2016).
Larger models usually require more compute and memory resources to train. These requirements can be lowered by using reduced precision representation and arithmetic. Performance (speed) of any program, including neural network training and inference, is limited by one of three factors: arithmetic bandwidth, memory bandwidth, or latency. Reduced precision addresses two of these limiters. Memory bandwidth pressure is lowered by using fewer bits to to store the same number of values. Arithmetic time can also be lowered on processors that offer higher throughput for reduced precision math. For example, half-precision math throughput in recent GPUs is 2Ã to 8Ã higher than for single-precision. In addition to speed improvements, reduced precision formats also reduce the amount of memory required for training.
Modern deep learning training systems use single-precision (FP32) format. In this paper, we address the training with reduced precision while maintaining model accuracy. Speciï¬cally, we train vari-
# âEqual contribution â Now at Google Brain eriche@google.com
ous neural networks using IEEE half-precision format (FP16). Since FP16 format has a narrower dynamic range than FP32, we introduce three techniques to prevent model accuracy loss: maintain- ing a master copy of weights in FP32, loss-scaling that minimizes gradient values becoming zeros, and FP16 arithmetic with accumulation in FP32. Using these techniques we demonstrate that a wide variety of network architectures and applications can be trained to match the accuracy FP32 training. Experimental results include convolutional and recurrent network architectures, trained for classiï¬cation, regression, and generative tasks. Applications include image classiï¬cation, image generation, object detection, language modeling, machine translation, and speech recognition. The proposed methodology requires no changes to models or training hyper-parameters.
# 2 RELATED WORK
There have been a number of publications on training Convolutional Neural Networks (CNNs) with reduced precision. Courbariaux et al. (2015) proposed training with binary weights, all other ten- sors and arithmetic were in full precision. Hubara et al. (2016a) extended that work to also binarize the activations, but gradients were stored and computed in single precision. Hubara et al. (2016b) considered quantization of weights and activations to 2, 4 and 6 bits, gradients were real numbers. Rastegari et al. (2016) binarize all tensors, including the gradients. However, all of these approaches lead to non-trivial loss of accuracy when larger CNN models were trained for ILSVRC classiï¬ca- tion task (Russakovsky et al., 2015). Zhou et al. (2016) quantize weights, activations, and gradients to different bit counts to further improve result accuracy. This still incurs some accuracy loss and requires a search over bit width conï¬gurations per network, which can be impractical for larger models. Mishra et al. improve on the top-1 accuracy achieved by prior weight and activation quan- tizations by doubling or tripling the width of layers in popular CNNs. However, the gradients are still computed and stored in single precision, while quantized model accuracy is lower than that of the widened baseline. Gupta et al. (2015) demonstrate that 16 bit ï¬xed point representation can be used to train CNNs on MNIST and CIFAR-10 datasets without accuracy loss. It is not clear how this approach would work on the larger CNNs trained on large datasets or whether it would work for Recurrent Neural Networks (RNNs).
There have also been several proposals to quantize RNN training. He et al. (2016c) train quantized variants of the GRU (Cho et al., 2014) and Long Short Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) cells to use fewer bits for weights and activations, albeit with a small loss in accuracy. It is not clear whether their results hold for larger networks needed for larger datasets Hubara et al. (2016b) propose another approach to quantize RNNs without altering their structure. Another approach to quantize RNNs is proposed in Ott et al. (2016). They evaluate binary, ternary and exponential quantization for weights in various different RNN models trained for language modelling and speech recognition. All of these approaches leave the gradients unmodiï¬ed in single- precision and therefore the computation cost during back propagation is unchanged.
The techniques proposed in this paper are different from the above approaches in three aspects. First, all tensors and arithmetic for forward and backward passes use reduced precision, FP16 in our case. Second, no hyper-parameters (such as layer width) are adjusted. Lastly, models trained with these techniques do not incur accuracy loss when compared to single-precision baselines. We demonstrate that this technique works across a variety of applications using state-of-the-art models trained on large scale datasets.
# IMPLEMENTATION
We introduce the key techniques for training with FP16 while still matching the model accuracy of FP32 training session: single-precision master weights and updates, loss-scaling, and accumulating FP16 products into FP32. Results of training with these techniques are presented in Section 4.
# 3.1 FP32 MASTER COPY OF WEIGHTS
In mixed precision training, weights, activations and gradients are stored as FP16. In order to match the accuracy of the FP32 networks, an FP32 master copy of weights is maintained and updated with the weight gradient during the optimizer step. In each iteration an FP16 copy of the master weights is
(Figure 1 diagram: FP16 weights, produced from the FP32 master weights via float2half, are used for the FWD, BWD-Actv and BWD-Weight passes over FP16 activations and activation gradients; the resulting FP16 weight gradients drive the weight update applied to the FP32 master weights.)
Figure 1: Mixed precision training iteration for a layer.
used in the forward and backward pass, halving the storage and bandwidth needed by FP32 training. Figure 1 illustrates this mixed precision training process.
While the need for FP32 master weights is not universal, there are two possible reasons why a number of networks require it. One explanation is that updates (weight gradients multiplied by the learning rate) become too small to be represented in FP16 - any value whose magnitude is smaller than 2^-24 becomes zero in FP16. We can see in Figure 2b that approximately 5% of weight gradient values have exponents smaller than -24. These small valued gradients would become zero in the optimizer when multiplied with the learning rate and adversely affect the model accuracy. Using a single-precision copy for the updates allows us to overcome this problem and recover the accuracy.
Another explanation is that the ratio of the weight value to the weight update is very large. In this case, even though the weight update is representable in FP16, it could still become zero when the addition operation right-shifts it to align the binary point with the weight. This can happen when the magnitude of a normalized weight value is at least 2048 times larger than that of the weight update. Since FP16 has 10 bits of mantissa, the implicit bit must be right-shifted by 11 or more positions to potentially create a zero (in some cases rounding can recover the value). In cases where the ratio is larger than 2048, the implicit bit would be right-shifted by 12 or more positions. This will cause the weight update to become a zero which cannot be recovered. An even larger ratio will result in this effect for de-normalized numbers. Again, this effect can be counteracted by computing the update in FP32.
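Both effects are easy to reproduce with numpy's IEEE half-precision type; the particular values below are illustrative only.

```python
import numpy as np

w, dw = np.float16(1.0), np.float16(2.0 ** -12)   # update ~4096x smaller than the weight
print(w + dw == w)                      # True: the FP16 addition right-shifts the update away
print(np.float32(w) + np.float32(dw))   # 1.0002441: the update survives with an FP32 master copy
print(np.float16(1e-8))                 # 0.0: magnitudes below ~2^-24 flush to zero in FP16
```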
To illustrate the need for an FP32 master copy of weights, we use the Mandarin speech model (described in more detail in Section 4.3) trained on a dataset comprising of approximately 800 hours of speech data for 20 epochs. As shown in Figure 2a, we match FP32 training results when updating an FP32 master copy of weights after FP16 forward and backward passes, while updating FP16 weights results in 80% relative accuracy loss.
Even though maintaining an additional copy of weights increases the memory requirements for the weights by 50% compared with single precision training, the impact on overall memory usage is much smaller. For training, memory consumption is dominated by activations, due to larger batch sizes and activations of each layer being saved for reuse in the back-propagation pass. Since activations are also stored in half-precision format, the overall memory consumption for training deep neural networks is roughly halved.
3.2 LOSS SCALING
FP16 exponent bias centers the range of normalized value exponents to [-14, 15] while gradient values in practice tend to be dominated by small magnitudes (negative exponents). For example, consider Figure 3 showing the histogram of activation gradient values, collected across all layers during FP32 training of Multibox SSD detector network (Liu et al., 2015a). Note that much of the FP16 representable range was left unused, while many values were below the minimum representable range and became zeros. Scaling up the gradients will shift them to occupy more of the representable range and preserve values that are otherwise lost to zeros. This particular network diverges when gradients are not scaled, but scaling them by a factor of 8 (increasing the exponents by 3) is sufficient to match the accuracy achieved with FP32 training. This suggests that activation
(a) Training and validation (dev0) curves for Mandarin speech recognition model
(b) Gradient histogram for Mandarin training run
Figure 2: Figure 2a shows the results of three experiments: baseline (FP32), pseudo FP16 with FP32 master copy, and pseudo FP16 without FP32 master copy. Figure 2b shows the histogram for the exponents of weight gradients for Mandarin speech recognition training with FP32 weights. The gradients are sampled every 4,000 iterations during training for all the layers in the model.
Figure 3: Histogram of activation gradient values during the training of Multibox SSD network. Note that the bins on the x-axis cover varying ranges and there's a separate bin for zeros. For example, 2% of the values are in the [2^-34, 2^-32) range, 2% of values are in the [2^-24, 2^-23) range, and 67% of values are zero.
gradient values below 2^-27 in magnitude were irrelevant to the training of this model, but values in the [2^-27, 2^-24) range were important to preserve.
One efficient way to shift the gradient values into FP16-representable range is to scale the loss value computed in the forward pass, prior to starting back-propagation. By chain rule back-propagation ensures that all the gradient values are scaled by the same amount. This requires no extra operations during back-propagation and keeps the relevant gradient values from becoming zeros. Weight gradients must be unscaled before weight update to maintain the update magnitudes as in FP32 training. It is simplest to perform this unscaling right after the backward pass but before gradient clipping or any other gradient-related computations, ensuring that no hyper-parameters (such as gradient clipping threshold, weight decay, etc.) have to be adjusted.
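A sketch of one such iteration in PyTorch-style code is shown below, combining the FP32 master copy of Section 3.1 with a constant loss scale; the function and parameter names are ours, and a production implementation would also handle optimizer state and batch-norm statistics.

```python
import torch

def scaled_fp16_step(model_fp16, master_params, optimizer, loss, scale=128.0, clip=None):
    """One update: scaled FP16 backward pass, unscale into FP32, then update master weights."""
    (loss * scale).backward()                                 # gradients carry the scale factor
    for p16, p32 in zip(model_fp16.parameters(), master_params):
        p32.grad = p16.grad.detach().float() / scale          # unscale before any gradient math
        p16.grad = None
    if clip is not None:
        torch.nn.utils.clip_grad_norm_(master_params, clip)   # clipping sees FP32-scale values
    optimizer.step()                                          # optimizer holds the FP32 master copy
    for p16, p32 in zip(model_fp16.parameters(), master_params):
        p16.data.copy_(p32.data)                              # refresh the FP16 working weights
```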
There are several options to choose the loss scaling factor. The simplest one is to pick a constant scaling factor. We trained a variety of networks with scaling factors ranging from 8 to 32K (many networks did not require a scaling factor). A constant scaling factor can be chosen empirically or, if gradient statistics are available, directly by choosing a factor so that its product with the maximum absolute gradient value is below 65,504 (the maximum value representable in FP16). There is no downside to choosing a large scaling factor as long as it does not cause overflow during back-propagation - overflows will result in infinities and NaNs in the weight gradients which will irreversibly damage the weights after an update. Note that overflows can be efficiently detected by inspecting the computed weight gradients, for example, when weight gradient values are unscaled. One option is to skip the weight update when an overflow is detected and simply move on to the next iteration.
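Overflow skipping can be layered on top of the same sketch with a finiteness check on the unscaled weight gradients; the helper below is illustrative rather than part of the paper's recipe.

```python
import torch

def grads_are_finite(master_params):
    """False if any unscaled weight gradient contains an Inf or NaN (i.e. the scaled pass overflowed)."""
    return all(torch.isfinite(p.grad).all() for p in master_params if p.grad is not None)

# after unscaling and before optimizer.step():
#   if grads_are_finite(master_params): optimizer.step()
#   else: skip this iteration's update and move on to the next batch
```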
# 3.3 ARITHMETIC PRECISION
By and large neural network arithmetic falls into three categories: vector dot-products, reductions, and point-wise operations. These categories benefit from different treatment when it comes to reduced precision arithmetic. To maintain model accuracy, we found that some networks require that FP16 vector dot-product accumulates the partial products into an FP32 value, which is converted to FP16 before writing to memory. Without this accumulation in FP32, some FP16 models did not match the accuracy of the baseline models. Whereas previous GPUs supported only FP16 multiply-add operation, NVIDIA Volta GPUs introduce Tensor Cores that multiply FP16 input matrices and accumulate products into either FP16 or FP32 outputs (NVIDIA, 2017).
Large reductions (sums across elements of a vector) should be carried out in FP32. Such reductions mostly come up in batch-normalization layers when accumulating statistics and softmax layers. Both of the layer types in our implementations still read and write FP16 tensors from memory, performing the arithmetic in FP32. This did not slow down the training process since these layers are memory-bandwidth limited and not sensitive to arithmetic speed.
Point-wise operations, such as non-linearities and element-wise matrix products, are memory-bandwidth limited. Since arithmetic precision does not impact the speed of these operations, either FP16 or FP32 math can be used.
# 4 RESULTS
We have run experiments for a variety of deep learning tasks covering a wide range of deep learning models. We conducted the following experiments for each application:
⢠Baseline (FP32) : Single-precision storage is used for activations, weights and gradients. All arithmetic is also in FP32.
⢠Mixed Precision (MP): FP16 is used for storage and arithmetic. Weights, activations and gradients are stored using in FP16, an FP32 master copy of weights is used for updates. Loss-scaling is used for some applications. Experiments with FP16 arithmetic used Tensor Core operations with accumulation into FP32 for convolutions, fully-connected layers, and matrix multiplies in recurrent layers.
The Baseline experiments were conducted on NVIDIA's Maxwell or Pascal GPU. Mixed Precision experiments were conducted on Volta V100 that accumulates FP16 products into FP32. The mixed precision speech recognition experiments (Section 4.3) were conducted using Maxwell GPUs using FP16 storage only. This setup allows us to emulate the TensorCore operations on non-Volta hardware. A number of networks were trained in this mode to confirm that resulting model accuracies are equivalent to MP training run on Volta V100 GPUs. This is intuitive since MP arithmetic was accumulating FP16 products into FP32 before converting the result to FP16 on a memory write.
4.1 CNNS FOR ILSVRC CLASSIFICATION
We trained several CNNs for ILSVRC classification task (Russakovsky et al., 2015) using mixed precision: Alexnet, VGG-D, GoogLeNet, Inception v2, Inception v3, and pre-activation Resnet-50. In all of these cases we were able to match the top-1 accuracy of baseline FP32 training session using identical hyper-parameters. Networks were trained using Caffe (Jia et al., 2014) framework modified to use Volta TensorOps, except for Resnet50 which used PyTorch (Paszke et al., 2017).
Training schedules were used from public repositories, when available (training schedule for VGG-D has not been published). Top-1 accuracy on ILSVRC validation set are shown in Table 1. Baseline (FP32) accuracy in a few cases is different from published results due to single-crop testing and a simpler data augmentation. Our data augmentation in Caffe included random horizontal flipping and random cropping from 256x256 images, Resnet50 training in PyTorch used the full augmentation in the training script from PyTorch vision repository.
Table 1: ILSVRC12 classiï¬cation top-1 accuracy.
Model: Baseline / Mixed Precision (Reference)
AlexNet: 56.77% / 56.93% (Krizhevsky et al., 2012)
VGG-D: 65.40% / 65.43% (Simonyan and Zisserman, 2014)
GoogLeNet (Inception v1): 68.33% / 68.43% (Szegedy et al., 2015)
Inception v2: 70.03% / 70.02% (Ioffe and Szegedy, 2015)
Inception v3: 73.85% / 74.13% (Szegedy et al., 2016)
Resnet50: 75.92% / 76.04% (He et al., 2016b)
Loss-scaling technique was not required for successful mixed precision training of these networks. While all tensors in the forward and backward passes were in FP16, a master copy of weights was updated in FP32 as outlined in Section 3.1.
4.2 DETECTION CNNS
Object detection is a regression task, where bounding box coordinate values are predicted by the network (compared to classification, where the predicted values are passed through a softmax layer to convert them to probabilities). Object detectors also have a classification component, where probabilities for an object type are predicted for each bounding box. We trained two popular detection approaches: Faster-RCNN (Ren et al., 2015) and Multibox-SSD (Liu et al., 2015a). Both detectors used VGG-16 network as the backbone. Models and training scripts were from public repositories (Girshick; Liu). Mean average precision (mAP) was computed on Pascal VOC 2007 test set. Faster-RCNN was trained on VOC 2007 training set, whereas SSD was trained on a union of VOC 2007 and 2012 data, which is the reason behind baseline mAP difference in Table 2.
Table 2: Detection network average mean precision.
Model          Baseline   MP without loss-scale   MP with loss-scale
Faster R-CNN   69.1%      68.6%                   69.7%
Multibox SSD   76.9%      diverges                77.1%
As can be seen in Table 2, the SSD detector failed to train in FP16 without loss-scaling. When small gradient values are lost to zeros, as described in Section 3.2, poor weights are learned and training diverges. A loss-scaling factor of 8 recovers the relevant gradient values and mixed-precision training matches the FP32 mAP.
4.3 SPEECH RECOGNITION
We explore mixed precision training for speech data using the DeepSpeech 2 model for both English and Mandarin datasets. The model used for training on the English dataset consists of two 2D con- volution layers, three recurrent layers with GRU cells, 1 row convolution layer and Connectionist temporal classiï¬cation (CTC) cost layer (Graves et al., 2006). It has approximately 115 million pa- rameters. This model is trained on our internal dataset consisting of 6000 hours of English speech. The Mandarin model has a similar architecture with a total of 215 million parameters. The Man- darin model was trained on 2600 hours of our internal training set. For these models, we run the Baseline and Pseudo FP16 experiments. All the models were trained for 20 epochs using Nesterov Stochastic Gradient Descent (SGD). All hyper-parameters such as learning rate, annealing schedule and momentum were the same for baseline and pseudo FP16 experiments. Table 3 shows the results of these experiments on independent test sets.
Table 3: Character Error Rate (CER) using mixed precision training for speech recognition. English results are reported on the WSJ â92 test set. Mandarin results are reported on our internal test set.
Model/Dataset Baseline Mixed Precision English Mandarin
Similar to classiï¬cation and detection networks, mixed precision training works well for recurrent neural networks trained on large scale speech datasets. These speech models are the largest models trained using this technique. Also, the number of time-steps involved in training a speech model are unusually large compared to other applications using recurrent layers. As shown in table 3, Pseudo FP16 results are roughly 5 to 10% better than the baseline. This suggests that the half-precision storage format may act as a regularizer during training.
[Figure 4 plot: training perplexity vs. training iteration, comparing three FP32 reference runs with mixed-precision runs with and without loss-scaling (scale 1024).]
Figure 4: English to French translation network training perplexity, 3x1024 LSTM model with attention. Ref1, ref2 and ref3 represent three different FP32 training runs.
4.4 MACHINE TRANSLATION
For language translation we trained several variants of the model in TensorFlow tutorial for En- glish to French translation (Google). The model used word-vocabularies, 100K and 40K entries for English and French, respectively. The networks we trained had 3 or 5 layers in the encoder and decoder, each. In both cases a layer consisted of 1024 LSTM cells. SGD optimizer was used to train on WMT15 dataset. There was a noticeable variation in accuracy of different training sessions with the same settings. For example, see the three FP32 curves in Figure 4, which shows the 3-layer model. Mixed-precision with loss-scaling matched the FP32 results, while no loss-scaling resulted in a slight degradation in the results. The 5-layer model exhibited the same training behavior.
4.5 LANGUAGE MODELING
We trained an English language model, designated as bigLSTM (Jozefowicz et al., 2016), on the 1 billion word dataset. The model consists of two layers of 8192 LSTM cells with projection to a 1024-dimensional embedding. This model was trained for 50 epochs using the Adagrad optimizer. The vocabulary size is 793K words. During training, we use a sampled softmax layer with 8K negative samples. Batch size aggregated over 4 GPUs is 1024. To match FP32 perplexity, training this network with FP16 requires loss-scaling, as shown in Figure 5. Without loss scaling the training perplexity curve for FP16 training diverges, compared with the FP32 training, after 300K iterations. A scaling factor of 128 recovers all the relevant gradient values and the accuracy of FP16 training matches the baseline run.
# 4.6 DCGAN RESULTS
Generative Adversarial Networks (GANs) combine regression and discrimination tasks during train- ing. For image tasks, the generator network regresses pixel colors. In our case, the generator predicts three channels of 8-bit color values each. The network was trained to generate 128x128 pixel im- ages of faces, using DCGAN methodology (Radford et al., 2015) and CelebFaces dataset (Liu et al.,
[Figure 5 plot: training perplexity vs. iteration; mixed precision with loss scale 1 diverges, while loss scale 128 tracks the FP32 baseline.]
Figure 5: bigLSTM training perplexity
Figure 6: An uncurated set of face images generated by DCGAN. FP32 training (left) and mixed- precision training (right).
2015b). The generator had 7 layers of fractionally-strided convolutions, 6 with leaky ReLU activations and 1 with tanh. The discriminator had 6 convolutions and 2 fully-connected layers. All used leaky ReLU activations except for the last layer, which used sigmoid. Batch normalization was applied to all layers except the last fully-connected layer of the discriminator. The Adam optimizer was used to train for 100K iterations. A set of output images is shown in Figure 6. Note that we show a randomly selected set of output images, whereas GAN publications typically show a curated set of outputs by excluding poor examples. Unlike other networks covered in this paper, GANs do not have a widely-accepted quantification of their result quality. Qualitatively the outputs of FP32 and mixed-precision training appear comparable. This network did not require loss-scaling to match FP32 results.
# 5 CONCLUSIONS AND FUTURE WORK
Mixed precision training is an important technique that allows us to reduce the memory consump- tion as well as time spent in memory and arithmetic operations of deep neural networks. We have demonstrated that many different deep learning models can be trained using this technique with no loss in accuracy without any hyper-parameter tuning. For certain models with a large number of small gradient values, we introduce the gradient scaling method to help them converge to the same accuracy as FP32 baseline models.
DNN operations benchmarked with DeepBench1 on Volta GPU see 2-6x speedups compared to FP32 implementations if they are limited by memory or arithmetic bandwidth. Speedups are lower when operations are latency-limited. Full network training and inference speedups depend on library
# 1https://github.com/baidu-research/DeepBench
and framework optimizations for mixed precision and are a focus of future work (experiments in this paper were carried out with early versions of both libraries and frameworks).
We would also like to extend this work to include generative models like text-to-speech systems and deep reinforcement learning applications. Furthermore, automating loss-scaling factor selection would further simplify training with mixed precision. Loss-scaling factor could be dynamically increased or decreased by inspecting the weight gradients for overï¬ow, skipping weight updates when an overï¬ow is detected.
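A possible sketch of the automated loss-scaling suggested above: grow the scale while gradients stay finite, and back off (skipping the update) when an overflow is detected. The growth and backoff constants here are illustrative assumptions, not values from the paper.

```python
import math

class DynamicLossScaler:
    """Adjust the loss-scale factor by inspecting gradients for overflow (sketch)."""

    def __init__(self, init_scale=2.0 ** 15, growth_factor=2.0,
                 backoff_factor=0.5, growth_interval=2000):
        self.scale = init_scale
        self.growth_factor = growth_factor
        self.backoff_factor = backoff_factor
        self.growth_interval = growth_interval
        self._steps_since_overflow = 0

    def update(self, grads):
        """Return True if the weight update should be applied, False to skip it."""
        overflow = any(not math.isfinite(float(abs(g).max())) for g in grads)
        if overflow:
            self.scale *= self.backoff_factor      # shrink the scale and skip this update
            self._steps_since_overflow = 0
            return False
        self._steps_since_overflow += 1
        if self._steps_since_overflow % self.growth_interval == 0:
            self.scale *= self.growth_factor       # periodically try a larger scale again
        return True
```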
REFERENCES
D. Amodei, R. Anubhai, E. Battenberg, C. Case, J. Casper, B. Catanzaro, J. Chen, M. Chrzanowski, A. Coates, G. Diamos, et al. Deep speech 2: End-to-end speech recognition in english and In Proceedings of The 33rd International Conference on Machine Learning, pages mandarin. 173â182, 2016.
K. Cho, B. Van Merri¨enboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio. Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014.
M. Courbariaux, Y. Bengio, and J.-P. David. Binaryconnect: Training deep neural networks with binary weights during propagations. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 3123–3131. Curran Associates, Inc., 2015. URL http://papers.nips.cc/paper/5647-binaryconnect-training-deep-neural-networks-with-binary-weights-during-propagations.pdf.
R. Girshick. Faster r-cnn github repository. https://github.com/rbgirshick/ py-faster-rcnn.
Google. Tensorï¬ow tutorial: Sequence-to-sequence models. URL https://www. tensorflow.org/tutorials/seq2seq.
A. Graves, S. Fern´andez, F. Gomez, and J. Schmidhuber. Connectionist temporal classiï¬cation: labelling unsegmented sequence data with recurrent neural networks. In Proceedings of the 23rd international conference on Machine learning, pages 369â376. ACM, 2006.
S. Gupta, A. Agrawal, K. Gopalakrishnan, and P. Narayanan. Deep learning with limited numerical precision. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pages 1737â1746, 2015.
A. Hannun, C. Case, J. Casper, B. Catanzaro, G. Diamos, E. Elsen, R. Prenger, S. Satheesh, S. Sen- gupta, A. Coates, et al. Deep speech: Scaling up end-to-end speech recognition. arXiv preprint arXiv:1412.5567, 2014.
K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770â778, 2016a.
K. He, X. Zhang, S. Ren, and J. Sun. Identity mappings in deep residual networks. In ECCV, 2016b.
Q. He, H. Wen, S. Zhou, Y. Wu, C. Yao, X. Zhou, and Y. Zou. Effective quantization methods for recurrent neural networks. arXiv preprint arXiv:1611.10176, 2016c.
S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Comput., 9(8):1735â1780, Nov. 1997. ISSN 0899-7667. doi: 10.1162/neco.1997.9.8.1735. URL http://dx.doi.org/10. 1162/neco.1997.9.8.1735.
I. Hubara, M. Courbariaux, D. Soudry, R. El-Yaniv, and Y. Bengio. Binarized neural networks. In Advances in Neural Information Processing Systems, pages 4107â4115, 2016a.
I. Hubara, M. Courbariaux, D. Soudry, R. El-Yaniv, and Y. Bengio. Quantized neural net- works: Training neural networks with low precision weights and activations. arXiv preprint arXiv:1609.07061, 2016b.
S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In F. R. Bach and D. M. Blei, editors, ICML, volume 37 of JMLR Workshop and Conference Proceedings, pages 448–456. JMLR.org, 2015. URL http://dblp.uni-trier.de/db/conf/icml/icml2015.html#IoffeS15.
Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014.
R. Jozefowicz, O. Vinyals, M. Schuster, N. Shazeer, and Y. Wu. Exploring the limits of language modeling, 2016. URL https://arxiv.org/pdf/1602.02410.pdf.
A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems 25, pages 1097–1105. Curran Associates, Inc., 2012. URL http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf.
# W. Liu. Ssd github repository. https://github.com/weiliu89/caffe/tree/ssd.
W. Liu, D. Anguelov, D. Erhan, C. Szegedy, and S. E. Reed. Ssd: Single shot multibox detec- tor. CoRR, abs/1512.02325, 2015a. URL http://dblp.uni-trier.de/db/journals/ corr/corr1512.html#LiuAESR15.
Z. Liu, P. Luo, X. Wang, and X. Tang. Deep learning face attributes in the wild. In Proceedings of International Conference on Computer Vision (ICCV), 2015b.
A. Mishra, E. Nurvitadhi, J. Cook, and D. Marr. Wrpn: Wide reduced-precision networks. arXiv preprint arXiv:1709.01134, 2017.
NVIDIA. Nvidia tesla v100 gpu architecture. https://images.nvidia.com/content/ volta-architecture/pdf/Volta-Architecture-Whitepaper-v1.0.pdf, 2017.
J. Ott, Z. Lin, Y. Zhang, S.-C. Liu, and Y. Bengio. Recurrent neural networks with limited numerical precision. arXiv preprint arXiv:1608.06902, 2016.
A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer. Automatic differentiation in pytorch. 2017.
A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolu- tional generative adversarial networks. CoRR, abs/1511.06434, 2015. URL http://dblp. uni-trier.de/db/journals/corr/corr1511.html#RadfordMC15.
M. Rastegari, V. Ordonez, J. Redmon, and A. Farhadi. XNOR-Net: ImageNet Classiï¬cation Using Binary Convolutional Neural Networks, pages 525â542. Springer International Publishing, Cham, 2016. ISBN 978-3-319-46493-0. doi: 10.1007/978-3-319-46493-0 32. URL https://doi. org/10.1007/978-3-319-46493-0_32.
S. Ren, K. He, R. Girshick, and J. Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In Neural Information Processing Systems (NIPS), 2015.
O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei. ImageNet Large Scale Visual Recognition Chal- lenge. International Journal of Computer Vision (IJCV), 115(3):211â252, 2015. doi: 10.1007/ s11263-015-0816-y.
K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recogni- tion. arXiv preprint arXiv:1409.1556, 2014.
C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Ra- binovich. Going deeper with convolutions. In Computer Vision and Pattern Recognition (CVPR), 2015. URL http://arxiv.org/abs/1409.4842.
C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna. Rethinking the inception architec- ture for computer vision. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016.
Y. Wu, M. Schuster, Z. Chen, Q. V. Le, M. Norouzi, W. Macherey, M. Krikun, Y. Cao, Q. Gao, K. Macherey, et al. Googleâs neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144, 2016.
S. Zhou, Z. Ni, X. Zhou, H. Wen, Y. Wu, and Y. Zou. Dorefa-net: Training low bitwidth con- volutional neural networks with low bitwidth gradients. CoRR, abs/1606.06160, 2016. URL http://arxiv.org/abs/1606.06160.
# Rainbow: Combining Improvements in Deep Reinforcement Learning
# Matteo Hessel DeepMind
# Joseph Modayil DeepMind
# Hado van Hasselt DeepMind
# Tom Schaul DeepMind
# Georg Ostrovski DeepMind
# Will Dabney DeepMind
Dan Horgan DeepMind
# Bilal Piot DeepMind
# Mohammad Azar DeepMind
# David Silver DeepMind
# Abstract
The deep reinforcement learning community has made sev- eral independent improvements to the DQN algorithm. How- ever, it is unclear which of these extensions are complemen- tary and can be fruitfully combined. This paper examines six extensions to the DQN algorithm and empirically studies their combination. Our experiments show that the combina- tion provides state-of-the-art performance on the Atari 2600 benchmark, both in terms of data efï¬ciency and ï¬nal perfor- mance. We also provide results from a detailed ablation study that shows the contribution of each component to overall per- formance.
Introduction The many recent successes in scaling reinforcement learn- ing (RL) to complex sequential decision-making problems were kick-started by the Deep Q-Networks algorithm (DQN; Mnih et al. 2013, 2015). Its combination of Q-learning with convolutional neural networks and experience replay en- abled it to learn, from raw pixels, how to play many Atari games at human-level performance. Since then, many exten- sions have been proposed that enhance its speed or stability. Double DQN (DDQN; van Hasselt, Guez, and Silver 2016) addresses an overestimation bias of Q-learning (van Hasselt 2010), by decoupling selection and evaluation of the bootstrap action. Prioritized experience replay (Schaul et al. 2015) improves data efï¬ciency, by replaying more of- ten transitions from which there is more to learn. The du- eling network architecture (Wang et al. 2016) helps to gen- eralize across actions by separately representing state val- ues and action advantages. Learning from multi-step boot- strap targets (Sutton 1988; Sutton and Barto 1998), as used in A3C (Mnih et al. 2016), shifts the bias-variance trade- off and helps to propagate newly observed rewards faster to earlier visited states. Distributional Q-learning (Bellemare, Dabney, and Munos 2017) learns a categorical distribution of discounted returns, instead of estimating the mean. Noisy DQN (Fortunato et al. 2017) uses stochastic network layers for exploration. This list is, of course, far from exhaustive.
Each of these algorithms enables substantial performance improvements in isolation.
[Figure 1 plot: median human-normalized score vs. millions of frames for DQN, DDQN, Prioritized DDQN, Dueling DDQN, A3C, Distributional DQN, Noisy DQN, and Rainbow.]
Figure 1: Median human-normalized performance across 57 Atari games. We compare our integrated agent (rainbow- colored) to DQN (grey) and six published baselines. Note that we match DQNâs best performance after 7M frames, surpass any baseline within 44M frames, and reach sub- stantially improved ï¬nal performance. Curves are smoothed with a moving average over 5 points.
Since they do so by addressing radically different issues, and since they build on a shared framework, they could plausibly be combined. In some cases this has been done: Prioritized DDQN and Dueling DDQN both use double Q-learning, and Dueling DDQN was also combined with prioritized experience replay. In this paper we propose to study an agent that combines all the aforementioned ingredients. We show how these different ideas can be integrated, and that they are indeed largely complementary. In fact, their combination results in new state-of-the-art results on the benchmark suite of 57 Atari 2600 games from the Arcade Learning Environment (Bellemare et al. 2013), both in terms of data efficiency and of final performance. Finally we show results from ablation studies to help understand the contributions of the different components.
Background

Reinforcement learning addresses the problem of an agent learning to act in an environment in order to maximize a scalar reward signal. No direct supervision is provided to the agent; for instance, it is never directly told the best action.
Agents and environments. At each discrete time step $t = 0, 1, 2, \ldots$, the environment provides the agent with an observation $S_t$, the agent responds by selecting an action $A_t$, and then the environment provides the next reward $R_{t+1}$, discount $\gamma_{t+1}$, and state $S_{t+1}$. This interaction is formalized as a Markov Decision Process, or MDP, which is a tuple $(\mathcal{S}, \mathcal{A}, T, r, \gamma)$, where $\mathcal{S}$ is a finite set of states, $\mathcal{A}$ is a finite set of actions, $T(s, a, s') = P[S_{t+1} = s' \mid S_t = s, A_t = a]$ is the (stochastic) transition function, $r(s, a) = \mathbb{E}[R_{t+1} \mid S_t = s, A_t = a]$ is the reward function, and $\gamma \in [0, 1]$ is a discount factor. In our experiments MDPs will be episodic with a constant $\gamma_t = \gamma$, except on episode termination where $\gamma_t = 0$, but the algorithms are expressed in the general form. On the agent side, action selection is given by a policy $\pi$ that defines a probability distribution over actions for each state. From the state $S_t$ encountered at time $t$, we define the discounted return $G_t = \sum_{k=0}^{\infty} \gamma_t^{(k)} R_{t+k+1}$ as the discounted sum of future rewards collected by the agent, where the discount for a reward $k$ steps in the future is given by the product of discounts before that time, $\gamma_t^{(k)} = \prod_{i=1}^{k} \gamma_{t+i}$. An agent aims to maximize the expected discounted return by finding a good policy.
The policy may be learned directly, or it may be constructed as a function of some other learned quantities. In value-based reinforcement learning, the agent learns an estimate of the expected discounted return, or value, when following a policy $\pi$ starting from a given state, $v^\pi(s) = \mathbb{E}_\pi[G_t \mid S_t = s]$, or state-action pair, $q^\pi(s, a) = \mathbb{E}_\pi[G_t \mid S_t = s, A_t = a]$. A common way of deriving a new policy from a state-action value function is to act $\epsilon$-greedily with respect to the action values. This corresponds to taking the action with the highest value (the greedy action) with probability $(1-\epsilon)$, and to otherwise act uniformly at random with probability $\epsilon$. Policies of this kind are used to introduce a form of exploration: by randomly selecting actions that are sub-optimal according to its current estimates, the agent can discover and correct its estimates when appropriate. The main limitation is that it is difficult to discover alternative courses of action that extend far into the future; this has motivated research on more directed forms of exploration.
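As a concrete (illustrative) example, $\epsilon$-greedy action selection over a vector of estimated action values can be written as:

```python
import numpy as np

def epsilon_greedy(q_values, epsilon):
    """Pick a uniformly random action with probability epsilon, else the greedy one."""
    if np.random.rand() < epsilon:
        return np.random.randint(len(q_values))
    return int(np.argmax(q_values))
```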
Deep reinforcement learning and DQN. Large state and/or action spaces make it intractable to learn Q value estimates for each state and action pair independently. In deep reinforcement learning, we represent the various com- ponents of agents, such as policies Ï(s, a) or values q(s, a), with deep (i.e., multi-layer) neural networks. The parameters of these networks are trained by gradient descent to mini- mize some suitable loss function.
In DQN (Mnih et al. 2015) deep networks and reinforce- ment learning were successfully combined by using a con- volutional neural net to approximate the action values for a
given state $S_t$ (which is fed as input to the network in the form of a stack of raw pixel frames). At each step, based on the current state, the agent selects an action $\epsilon$-greedily with respect to the action values, and adds a transition $(S_t, A_t, R_{t+1}, \gamma_{t+1}, S_{t+1})$ to a replay memory buffer (Lin 1992) that holds the last million transitions. The parameters of the neural network are optimized by using stochastic gradient descent to minimize the loss
$\big(R_{t+1} + \gamma_{t+1} \max_{a'} q_{\bar\theta}(S_{t+1}, a') - q_\theta(S_t, A_t)\big)^2, \qquad (1)$
where t is a time step randomly picked from the replay memory. The gradient of the loss is back-propagated only into the parameters θ of the online network (which is also used to select actions); the term θ represents the parame- ters of a target network; a periodic copy of the online net- work which is not directly optimized. The optimization is performed using RMSprop (Tieleman and Hinton 2012), a variant of stochastic gradient descent, on mini-batches sam- pled uniformly from the experience replay. This means that in the loss above, the time index t will be a random time in- dex from the last million transitions, rather than the current time. The use of experience replay and target networks en- ables relatively stable learning of Q values, and led to super- human performance on several Atari games.
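For concreteness, a PyTorch-style sketch of this loss on a sampled minibatch is shown below; the batch layout (states, actions, rewards, per-transition discounts, next states) is an assumption for illustration, not the original implementation.

```python
import torch
import torch.nn.functional as F

def dqn_loss(q_net, target_net, batch):
    """TD loss of Equation (1) on a minibatch sampled from the replay memory (sketch)."""
    s, a, r, discount, s_next = batch               # discount is gamma_{t+1}, 0 at terminal
    with torch.no_grad():
        target = r + discount * target_net(s_next).max(dim=1).values
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    return F.mse_loss(q_sa, target)
```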
Extensions to DQN

DQN has been an important milestone, but several limitations of this algorithm are now known, and many extensions have been proposed. We propose a selection of six extensions that each have addressed a limitation and improved overall performance. To keep the size of the selection manageable, we picked a set of extensions that address distinct concerns (e.g., just one of the many addressing exploration).
Double Q-learning. Conventional Q-learning is affected by an overestimation bias, due to the maximization step in Equation 1, and this can harm learning. Double Q-learning (van Hasselt 2010) addresses this overestimation by decoupling, in the maximization performed for the bootstrap target, the selection of the action from its evaluation. It is possible to effectively combine this with DQN (van Hasselt, Guez, and Silver 2016), using the loss

$\big(R_{t+1} + \gamma_{t+1}\, q_{\bar\theta}(S_{t+1}, \operatorname{argmax}_{a'} q_\theta(S_{t+1}, a')) - q_\theta(S_t, A_t)\big)^2.$
This change was shown to reduce harmful overestimations that were present for DQN, thereby improving performance.
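Under the same assumed batch layout as before, a sketch of the double Q-learning loss only changes the target: the online network selects the bootstrap action and the target network evaluates it.

```python
import torch
import torch.nn.functional as F

def double_q_loss(q_net, target_net, batch):
    """Double Q-learning loss: decouple action selection from evaluation (sketch)."""
    s, a, r, discount, s_next = batch
    with torch.no_grad():
        best_a = q_net(s_next).argmax(dim=1, keepdim=True)                       # selection
        target = r + discount * target_net(s_next).gather(1, best_a).squeeze(1)  # evaluation
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    return F.mse_loss(q_sa, target)
```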
Prioritized replay. DQN samples uniformly from the re- play buffer. Ideally, we want to sample more frequently those transitions from which there is much to learn. As a proxy for learning potential, prioritized experience replay (Schaul et al. 2015) samples transitions with probability pt relative to the last encountered absolute TD error:
$p_t \propto \left| R_{t+1} + \gamma_{t+1} \max_{a'} q_{\bar\theta}(S_{t+1}, a') - q_\theta(S_t, A_t) \right|^{\omega},$
where Ï is a hyper-parameter that determines the shape of the distribution. New transitions are inserted into the replay
buffer with maximum priority, providing a bias towards re- cent transitions. Note that stochastic transitions might also be favoured, even when there is little left to learn about them.
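A minimal NumPy sketch of proportional prioritized sampling, including the importance-sampling weights that correct for the non-uniform sampling; the sum-tree structure used in practice for efficiency is omitted.

```python
import numpy as np

def sample_prioritized(priorities, batch_size, omega=0.5, beta=0.4):
    """Sample transition indices with probability proportional to priority^omega (sketch)."""
    p = np.asarray(priorities, dtype=np.float64) ** omega
    probs = p / p.sum()
    idx = np.random.choice(len(probs), size=batch_size, p=probs)
    weights = (len(probs) * probs[idx]) ** (-beta)   # importance-sampling correction
    weights /= weights.max()                         # normalize for stability
    return idx, weights
```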
Dueling networks. The dueling network is a neural network architecture designed for value based RL. It features two streams of computation, the value and advantage streams, sharing a convolutional encoder, and merged by a special aggregator (Wang et al. 2016). This corresponds to the following factorization of action values:

$q_\theta(s, a) = v_\eta(f_\xi(s)) + a_\psi(f_\xi(s), a) - \frac{\sum_{a'} a_\psi(f_\xi(s), a')}{N_{\text{actions}}},$
where $\xi$, $\eta$, and $\psi$ are, respectively, the parameters of the shared encoder $f_\xi$, of the value stream $v_\eta$, and of the advantage stream $a_\psi$; and $\theta = \{\xi, \eta, \psi\}$ is their concatenation.
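In code the aggregator is a single operation; a sketch assuming batched value and advantage outputs:

```python
import torch

def dueling_q_values(value, advantages):
    """Merge a [batch, 1] value stream and a [batch, n_actions] advantage stream (sketch)."""
    return value + advantages - advantages.mean(dim=1, keepdim=True)
```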
Multi-step learning. Q-learning accumulates a single re- ward and then uses the greedy action at the next step to boot- strap. Alternatively, forward-view multi-step targets can be used (Sutton 1988). We deï¬ne the truncated n-step return from a given state St as
$R_t^{(n)} = \sum_{k=0}^{n-1} \gamma_t^{(k)} R_{t+k+1}. \qquad (2)$
A multi-step variant of DQN is then defined by minimizing the alternative loss

$\big(R_t^{(n)} + \gamma_t^{(n)} \max_{a'} q_{\bar\theta}(S_{t+n}, a') - q_\theta(S_t, A_t)\big)^2.$
Multi-step targets with suitably tuned n often lead to faster learning (Sutton and Barto 1998).
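A plain-Python sketch of the truncated n-step return of Equation (2), together with the bootstrap term used in the multi-step loss:

```python
def n_step_return(rewards, discounts, n, bootstrap_value):
    """Compute R_t^(n) plus gamma_t^(n) * bootstrap_value (sketch).

    rewards[k] = R_{t+k+1} and discounts[k] = gamma_{t+k+1} for k = 0, ..., n-1.
    """
    g, cum_discount = 0.0, 1.0
    for k in range(n):
        g += cum_discount * rewards[k]
        cum_discount *= discounts[k]
    return g + cum_discount * bootstrap_value
```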
Distributional RL. We can learn to approximate the distribution of returns instead of the expected return. Recently Bellemare, Dabney, and Munos (2017) proposed to model such distributions with probability masses placed on a discrete support $z$, where $z$ is a vector with $N_{\text{atoms}} \in \mathbb{N}^+$ atoms, defined by $z^i = v_{\min} + (i-1)\,\frac{v_{\max} - v_{\min}}{N_{\text{atoms}} - 1}$ for $i \in \{1, \ldots, N_{\text{atoms}}\}$. The approximating distribution $d_t$ at time $t$ is defined on this support, with the probability mass $p^i_\theta(S_t, A_t)$ on each atom $i$, such that $d_t = (z, p_\theta(S_t, A_t))$. The goal is to update $\theta$ such that this distribution closely matches the actual distribution of returns.
To learn the probability masses, the key insight is that return distributions satisfy a variant of Bellman's equation. For a given state $S_t$ and action $A_t$, the distribution of the returns under the optimal policy $\pi^*$ should match a target distribution defined by taking the distribution for the next state $S_{t+1}$ and action $a^*_{t+1} = \pi^*(S_{t+1})$, contracting it towards zero according to the discount, and shifting it by the reward (or distribution of rewards, in the stochastic case). A distributional variant of Q-learning is then derived by first constructing a new support for the target distribution, and then minimizing the Kullback-Leibler divergence between the distribution $d_t$ and the target distribution $d'_t \equiv (R_{t+1} + \gamma_{t+1} z,\ p_{\bar\theta}(S_{t+1}, \bar{a}^*_{t+1}))$:

$D_{\mathrm{KL}}(\Phi_z d'_t \,\|\, d_t). \qquad (3)$
Here $\Phi_z$ is an L2-projection of the target distribution onto the fixed support $z$, and $\bar{a}^*_{t+1} = \operatorname{argmax}_a q_{\bar\theta}(S_{t+1}, a)$ is the greedy action with respect to the mean action values $q_{\bar\theta}(S_{t+1}, a) = z^\top p_{\bar\theta}(S_{t+1}, a)$ in state $S_{t+1}$.
As in the non-distributional case, we can use a frozen copy of the parameters $\bar\theta$ to construct the target distribution. The parametrized distribution can be represented by a neural network, as in DQN, but with $N_{\text{atoms}} \times N_{\text{actions}}$ outputs. A softmax is applied independently for each action dimension of the output to ensure that the distribution for each action is appropriately normalized.
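The projection $\Phi_z$ can be implemented as below; this NumPy sketch follows the categorical projection of Bellemare, Dabney, and Munos (2017), written with an explicit loop for clarity rather than speed.

```python
import numpy as np

def project_distribution(next_probs, rewards, discounts, support):
    """Project the shifted target distribution back onto the fixed support z (sketch).

    next_probs: [batch, n_atoms] masses of p(S_{t+1}, a*) on `support`
    rewards, discounts: [batch] arrays (discount is 0 at terminal states)
    support: [n_atoms] evenly spaced atoms from v_min to v_max
    """
    n_atoms = len(support)
    v_min, v_max = support[0], support[-1]
    delta_z = (v_max - v_min) / (n_atoms - 1)

    tz = np.clip(rewards[:, None] + discounts[:, None] * support[None, :], v_min, v_max)
    b = (tz - v_min) / delta_z                       # fractional index of each shifted atom
    lower, upper = np.floor(b).astype(int), np.ceil(b).astype(int)

    projected = np.zeros_like(next_probs)
    for i in range(next_probs.shape[0]):
        for j in range(n_atoms):
            if lower[i, j] == upper[i, j]:           # shifted atom lands exactly on the grid
                projected[i, lower[i, j]] += next_probs[i, j]
            else:                                    # split the mass between the neighbours
                projected[i, lower[i, j]] += next_probs[i, j] * (upper[i, j] - b[i, j])
                projected[i, upper[i, j]] += next_probs[i, j] * (b[i, j] - lower[i, j])
    return projected
```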
Noisy Nets. The limitations of exploring using e-greedy policies are clear in games such as Montezumaâs Revenge, where many actions must be executed to collect the first re- ward. Noisy Nets (Fortunato et al. 2017) propose a noisy linear layer that combines a deterministic and noisy stream,
$y = (b + Wx) + (b_{\text{noisy}} \odot \epsilon^b + (W_{\text{noisy}} \odot \epsilon^w) x), \qquad (4)$

where $\epsilon^b$ and $\epsilon^w$ are random variables, and $\odot$ denotes the element-wise product. This transformation can then be used in place of the standard linear $y = b + Wx$. Over time, the network can learn to ignore the noisy stream, but will do so at different rates in different parts of the state space, allowing state-conditional exploration with a form of self-annealing.
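A PyTorch sketch of such a noisy linear layer is given below. For brevity it samples independent Gaussian noise and uses a simplified initialization; the Rainbow agent described next uses factorised noise instead.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoisyLinear(nn.Module):
    """Linear layer with an additional learned noisy stream, as in Equation (4) (sketch)."""

    def __init__(self, in_features, out_features, sigma0=0.5):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(out_features, in_features).uniform_(-0.1, 0.1))
        self.bias = nn.Parameter(torch.zeros(out_features))
        init = sigma0 / in_features ** 0.5
        self.weight_noisy = nn.Parameter(torch.full((out_features, in_features), init))
        self.bias_noisy = nn.Parameter(torch.full((out_features,), init))

    def forward(self, x):
        eps_w = torch.randn_like(self.weight_noisy)   # fresh noise at every forward pass
        eps_b = torch.randn_like(self.bias_noisy)
        return F.linear(x,
                        self.weight + self.weight_noisy * eps_w,
                        self.bias + self.bias_noisy * eps_b)
```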
The Integrated Agent

In this paper we integrate all the aforementioned components into a single integrated agent, which we call Rainbow.

First, we replace the 1-step distributional loss (3) with a multi-step variant. We construct the target distribution by contracting the value distribution in $S_{t+n}$ according to the cumulative discount, and shifting it by the truncated $n$-step discounted return. This corresponds to defining the target distribution as $d^{(n)}_t = (R^{(n)}_t + \gamma^{(n)}_t z,\ p_{\bar\theta}(S_{t+n}, a^*_{t+n}))$. The resulting loss is

$D_{\mathrm{KL}}(\Phi_z d^{(n)}_t \,\|\, d_t),$
where, again, Φz is the projection onto z.
We combine the multi-step distributional loss with double Q-learning by using the greedy action in $S_{t+n}$ selected according to the online network as the bootstrap action $a^*_{t+n}$, and evaluating such action using the target network.
In standard proportional prioritized replay (Schaul et al. 2015) the absolute TD error is used to prioritize the tran- sitions. This can be computed in the distributional setting, using the mean action values. However, in our experiments all distributional Rainbow variants prioritize transitions by the KL loss, since this is what the algorithm is minimizing:
$p_t \propto D_{\mathrm{KL}}(\Phi_z d^{(n)}_t \,\|\, d_t).$
The KL loss as priority might be more robust to noisy stochastic environments because the loss can continue to de- crease even when the returns are not deterministic.
The network architecture is a dueling network architec- ture adapted for use with return distributions. The network
has a shared representation $f_\xi(s)$, which is then fed into a value stream $v_\eta$ with $N_{\text{atoms}}$ outputs, and into an advantage stream $a_\psi$ with $N_{\text{atoms}} \times N_{\text{actions}}$ outputs, where $a^i_\psi(f_\xi(s), a)$ will denote the output corresponding to atom $i$ and action $a$. For each atom $z^i$, the value and advantage streams are aggregated, as in dueling DQN, and then passed through a softmax layer to obtain the normalised parametric distributions used to estimate the returns' distributions:

$p^i_\theta(s, a) = \frac{\exp\big(v^i_\eta(\phi) + a^i_\psi(\phi, a) - \bar{a}^i(s)\big)}{\sum_j \exp\big(v^j_\eta(\phi) + a^j_\psi(\phi, a) - \bar{a}^j(s)\big)},$

where $\phi = f_\xi(s)$ and $\bar{a}^i(s) = \frac{1}{N_{\text{actions}}} \sum_{a'} a^i_\psi(\phi, a')$.
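A sketch of this head, assuming the value and advantage streams have already produced per-atom outputs:

```python
import torch.nn.functional as F

def rainbow_distribution(value_logits, adv_logits):
    """Per-atom dueling aggregation followed by a softmax over atoms (sketch).

    value_logits: [batch, 1, n_atoms], adv_logits: [batch, n_actions, n_atoms]
    returns p_theta(s, a): [batch, n_actions, n_atoms]
    """
    logits = value_logits + adv_logits - adv_logits.mean(dim=1, keepdim=True)
    return F.softmax(logits, dim=2)   # normalize over atoms, independently per action
```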
We then replace all linear layers with their noisy equiva- lent described in Equation (4). Within these noisy linear lay- ers we use factorised Gaussian noise (Fortunato et al. 2017) to reduce the number of independent noise variables.
Experimental Methods

We now describe the methods and setup used for configuring and evaluating the learning agents.
Evaluation Methodology. We evaluated all agents on 57 Atari 2600 games from the arcade learning environment (Bellemare et al. 2013). We follow the training and evalu- ation procedures of Mnih et al. (2015) and van Hasselt et al. (2016). The average scores of the agent are evaluated during training, every 1M steps in the environment, by suspending learning and evaluating the latest agent for 500K frames. Episodes are truncated at 108K frames (or 30 minutes of simulated play), as in van Hasselt et al. (2016).
Agentsâ scores are normalized, per game, so that 0% cor- responds to a random agent and 100% to the average score of a human expert. Normalized scores can be aggregated across all Atari levels to compare the performance of dif- ferent agents. It is common to track the median human nor- malized performance across all games. We also consider the number of games where the agentâs performance is above some fraction of human performance, to disentangle where improvements in the median come from. The mean human normalized performance is potentially less informative, as it is dominated by a few games (e.g., Atlantis) where agents achieve scores orders of magnitude higher than humans do. Besides tracking the median performance as a function of environment steps, at the end of training we re-evaluate the best agent snapshot using two different testing regimes. In the no-ops starts regime, we insert a random number (up to 30) of no-op actions at the beginning of each episode (as we do also in training). In the human starts regime, episodes are initialized with points randomly sampled from the initial portion of human expert trajectories (Nair et al. 2015); the difference between the two regimes indicates the extent to which the agent has over-ï¬t to its own trajectories.
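A common way to implement this per-game normalization (random and human reference scores per game are assumed to be available):

```python
def human_normalized_score(agent_score, random_score, human_score):
    """0% corresponds to random play, 100% to the average human expert score."""
    return 100.0 * (agent_score - random_score) / (human_score - random_score)
```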
Due to space constraints, we focus on aggregate results across games. However, in the appendix we provide full learning curves for all games and all agents, as well as de- tailed comparison tables of raw and normalized scores, in both the no-op and human starts testing regimes.
Hyper-parameter tuning. All Rainbowâs components have a number of hyper-parameters. The combinatorial space of hyper-parameters is too large for an exhaustive search, therefore we have performed limited tuning. For each component, we started with the values used in the paper that introduced this component, and tuned the most sensitive among hyper-parameters by manual coordinate descent.
DQN and its variants do not perform learning updates dur- ing the ï¬rst 200K frames, to ensure sufï¬ciently uncorrelated updates. We have found that, with prioritized replay, it is possible to start learning sooner, after only 80K frames.
DQN starts with an exploration $\epsilon$ of 1, corresponding to acting uniformly at random; it anneals the amount of exploration over the first 4M frames, to a final value of 0.1 (lowered to 0.01 in later variants). Whenever using Noisy Nets, we acted fully greedily ($\epsilon = 0$), with a value of 0.5 for the $\sigma_0$ hyper-parameter used to initialize the weights in the noisy stream¹. For agents without Noisy Nets, we used $\epsilon$-greedy but decreased the exploration rate faster than was previously used, annealing $\epsilon$ to 0.01 in the first 250K frames.
We used the Adam optimizer (Kingma and Ba 2014), which we found less sensitive to the choice of the learning rate than RMSProp. DQN uses a learning rate of $\alpha = 0.00025$. In all Rainbow's variants we used a learning rate of $\alpha/4$, selected among $\{\alpha/2, \alpha/4, \alpha/6\}$, and a value of $1.5 \times 10^{-4}$ for Adam's $\epsilon$ hyper-parameter.
For replay prioritization we used the recommended pro- portional variant, with priority exponent Ï of 0.5, and lin- early increased the importance sampling exponent β from 0.4 to 1 over the course of training. The priority exponent Ï was tuned comparing values of {0.4, 0.5, 0.7}. Using the KL loss of distributional DQN as priority, we have observed that performance is very robust to the choice of Ï.
The value of n in multi-step learning is a sensitive hyper-parameter of Rainbow. We compared values of n = 1, 3, and 5. We observed that both n = 3 and 5 did well initially, but overall n = 3 performed the best by the end.
The hyper-parameters (see Table 1) are identical across all 57 games, i.e., the Rainbow agent really is a single agent setup that performs well across all the games.
1The noise was generated on the GPU. Tensorï¬ow noise gen- eration can be unreliable on GPU. If generating the noise on the CPU, lowering Ï0 to 0.1 may be helpful.
Parameter                              Value
Min history to start learning          80K frames
Adam learning rate                     0.0000625
Exploration ε                          0.0
Noisy Nets σ0                          0.5
Target Network Period                  32K frames
Adam ε                                 1.5 × 10^-4
Prioritization type                    proportional
Prioritization exponent ω              0.5
Prioritization importance sampling β   0.4 → 1.0
Multi-step returns n                   3
Distributional atoms                   51
Distributional min/max values          [-10, 10]
Table 1: Rainbow hyper-parameters
[Figure 2 plots: number of games above the 20%, 50%, 100%, 200%, and 500% human-performance thresholds vs. millions of frames; the top row compares Rainbow to the baselines, the bottom row to its ablations.]
Figure 2: Each plot shows, for several agents, the number of games where they have achieved at least a given fraction of human performance, as a function of time. From left to right we consider the 20%, 50%, 100%, 200% and 500% thresholds. On the ï¬rst row we compare Rainbow to the baselines. On the second row we compare Rainbow to its ablations.
Analysis

In this section we analyse the main experimental results. First, we show that Rainbow compares favorably to several published agents. Then we perform ablation studies, comparing several variants of the agent, each corresponding to removing a single component from Rainbow.
mance. This allows us to identify where the overall improve- ments in performance come from. Note that the gap in per- formance between Rainbow and other agents is apparent at all levels of performance: the Rainbow agent is improving scores on games where the baseline agents were already good, as well as improving in games where baseline agents are still far from human performance.
Comparison to published baselines. In Figure 1 we com- pare the Rainbowâs performance (measured in terms of the median human normalized score across games) to the corre- sponding curves for A3C, DQN, DDQN, Prioritized DDQN, Dueling DDQN, Distributional DQN, and Noisy DQN. We thank the authors of the Dueling and Prioritized agents for providing the learning curves of these, and report our own re-runs for DQN, A3C, DDQN, Distributional DQN and Noisy DQN. The performance of Rainbow is signiï¬cantly better than any of the baselines, both in data efï¬ciency, as well as in ï¬nal performance. Note that we match ï¬nal per- formance of DQN after 7M frames, surpass the best ï¬nal performance of these baselines in 44M frames, and reach substantially improved ï¬nal performance.
In the ï¬nal evaluations of the agent, after the end of train- ing, Rainbow achieves a median score of 223% in the no-ops regime; in the human starts regime we measured a median score of 153%. In Table 2 we compare these scores to the published median scores of the individual baselines.
In Figure 2 (top row) we plot the number of games where an agent has reached some speciï¬ed level of human normal- ized performance. From left to right, the subplots show on how many games the different agents have achieved 20%, 50%, 100%, 200% and 500% human normalized perfor-
Learning speed. As in the original DQN setup, we ran each agent on a single GPU. The 7M frames required to match DQNâs ï¬nal performance correspond to less than 10 hours of wall-clock time. A full run of 200M frames cor- responds to approximately 10 days, and this varies by less than 20% between all of the discussed variants. The litera-
Agent                   no-ops   human starts
DQN                     79%      68%
DDQN (*)                117%     110%
Prioritized DDQN (*)    140%     128%
Dueling DDQN (*)        151%     117%
A3C (*)                 -        116%
Noisy DQN               118%     102%
Distributional DQN      164%     125%
Rainbow                 223%     153%
Table 2: Median normalized scores of the best agent snap- shots for Rainbow and baselines. For methods marked with an asterisk, the scores come from the corresponding publica- tion. DQNâs scores comes from the dueling networks paper, since DQNâs paper did not report scores for all 57 games. The others scores come from our own implementations.
[Figure 3 plot: median human-normalized score vs. millions of frames for Rainbow, DQN, and the six ablations.]
Figure 3: Median human-normalized performance across 57 Atari games, as a function of time. We compare our in- tegrated agent (rainbow-colored) to DQN (gray) and to six different ablations (dashed lines). Curves are smoothed with a moving average over 5 points.
ture contains many alternative training setups that improve performance as a function of wall-clock time by exploiting parallelism, e.g., Nair et al. (2015), Salimans et al. (2017), and Mnih et al. (2016). Properly relating the performance across such very different hardware/compute resources is non-trivial, so we focused exclusively on algorithmic vari- ations, allowing apples-to-apples comparisons. While we consider them to be important and complementary, we leave questions of scalability and parallelism to future work.
Ablation studies. Since Rainbow integrates several differ- ent ideas into a single agent, we conducted additional exper- iments to understand the contribution of the various compo- nents, in the context of this speciï¬c combination.
To gain a better understanding of the contribution of each component to the Rainbow agent, we performed ablation studies. In each ablation, we removed one component from the full Rainbow combination. Figure 3 shows a compari- son for median normalized score of the full Rainbow to six ablated variants. Figure 2 (bottom row) shows a more de- tailed breakdown of how these ablations perform relative to different thresholds of human normalized performance, and Figure 4 shows the gain or loss from each ablation for every game, averaged over the full learning run.
Prioritized replay and multi-step learning were the two most crucial components of Rainbow, in that removing ei- ther component caused a large drop in median performance. Unsurprisingly, the removal of either of these hurt early per- formance. Perhaps more surprisingly, the removal of multi- step learning also hurt ï¬nal performance. Zooming in on in- dividual games (Figure 4), we see both components helped
almost uniformly across games (the full Rainbow performed better than either ablation in 53 games out of 57).
Distributional Q-learning ranked immediately below the previous techniques for relevance to the agentâs perfor- mance. Notably, in early learning no difference is appar- ent, as shown in Figure 3, where for the ï¬rst 40 million frames the distributional-ablation performed as well as the full agent. However, without distributions, the performance of the agent then started lagging behind. When the results are separated relatively to human performance in Figure 2, we see that the distributional-ablation primarily seems to lags on games that are above human level or near it.
In terms of median performance, the agent performed better when Noisy Nets were included; when these are re- moved and exploration is delegated to the traditional e- greedy mechanism, performance was worse in aggregate (red line in Figure 3). While the removal of Noisy Nets pro- duced a large drop in performance for several games, it also provided small increases in other games (Figure 4).
In aggregate, we did not observe a signiï¬cant difference when removing the dueling network from the full Rainbow. The median score, however, hides the fact that the impact of Dueling differed between games, as shown by Figure 4. Figure 2 shows that Dueling perhaps provided some im- provement on games with above-human performance levels (# games > 200%), and some degradation on games with sub-human performance (# games > 20%).
Also in the case of double Q-learning, the observed differ- ence in median performance (Figure 3) is limited, with the component sometimes harming or helping depending on the game (Figure 4). To further investigate the role of double Q- learning, we compared the predictions of our trained agents to the actual discounted returns computed from clipped re- wards. Comparing Rainbow to the agent where double Q- learning was ablated, we observed that the actual returns are often higher than 10 and therefore fall outside the support of the distribution, spanning from â10 to +10. This leads to underestimated returns, rather than overestimations. We hy- pothesize that clipping the values to this constrained range counteracts the overestimation bias of Q-learning. Note, however, that the importance of double Q-learning may in- crease if the support of the distributions is expanded.
In the appendix, for each game we show ï¬nal performance and learning curves for Rainbow, its ablations, and baselines.
Discussion

We have demonstrated that several improvements to DQN can be successfully integrated into a single learning algorithm that achieves state-of-the-art performance. Moreover, we have shown that within the integrated algorithm, all but one of the components provided clear performance benefits. There are many more algorithmic components that we were not able to include, which would be promising candidates for further experiments on integrated agents. Among the many possible candidates, we discuss several below.
We have focused here on value-based methods in the Q-learning family. We have not considered purely policy- based RL algorithms such as trust-region policy optimisa-
[Figure 4 chart: per-game performance difference between Rainbow and each ablation (and DQN), across the 57 Atari games.]
Figure 4: Performance drops of ablation agents on all 57 Atari games. Performance is the area under the learning curve, normalized relative to the Rainbow agent and DQN. Two games where DQN outperforms Rainbow are omitted. The ablation leading to the strongest drop is highlighted for each game. The removal of either prioritization or multi-step learning reduces performance across most games, but the contribution of each component varies substantially per game.
tion (Schulman et al. 2015), nor actor-critic methods (Mnih et al. 2016; OâDonoghue et al. 2016).
A number of algorithms exploit a sequence of data to achieve improved learning efï¬ciency. Optimality tightening (He et al. 2016) uses multi-step returns to construct addi- tional inequality bounds, instead of using them to replace the 1-step targets used in Q-learning. Eligibility traces al- low a soft combination over n-step returns (Sutton 1988). However, sequential methods all leverage more computa- tion per gradient than the multi-step targets used in Rainbow. Furthermore, introducing prioritized sequence replay raises questions of how to store, replay and prioritise sequences.
Episodic control (Blundell et al. 2016) also focuses on data efï¬ciency, and was shown to be very effective in some domains. It improves early learning by using episodic mem- ory as a complementary learning system, capable of imme- diately re-enacting successful action sequences.
Besides Noisy Nets, numerous other exploration methods could also be useful algorithmic ingredients: among these Bootstrapped DQN (Osband et al. 2016), intrinsic motiva- tion (Stadie, Levine, and Abbeel 2015) and count-based ex- ploration (Bellemare et al. 2016). Integration of these alter- native components is fruitful subject for further research.
dates, without exploring alternative computational architec- tures. Asynchronous learning from parallel copies of the en- vironment, as in A3C (Mnih et al. 2016), Gorila (Nair et al. 2015), or Evolution Strategies (Salimans et al. 2017), can be effective in speeding up learning, at least in terms of wall- clock time. Note, however, they can be less data efï¬cient.
Hierarchical RL has also been applied with success to sev- eral complex Atari games. Among successful applications of HRL we highlight h-DQN (Kulkarni et al. 2016a) and Feu- dal Networks (Vezhnevets et al. 2017).
The state representation could also be made more efï¬- cient by exploiting auxiliary tasks such as pixel control or feature control (Jaderberg et al. 2016), supervised predic- tions (Dosovitskiy and Koltun 2016) or successor features (Kulkarni et al. 2016b).
To evaluate Rainbow fairly against the baselines, we have followed the common domain modiï¬cations of clipping re- wards, ï¬xed action-repetition, and frame-stacking, but these might be removed by other learning algorithm improve- ments. Pop-Art normalization (van Hasselt et al. 2016) al- lows reward clipping to be removed, while preserving a similar level of performance. Fine-grained action repetition (Sharma, Lakshminarayanan, and Ravindran 2017) enabled to learn how to repeat actions. A recurrent state network
In this paper we have focused on the core learning up-
(Hausknecht and Stone 2015) can learn a temporal state rep- resentation, replacing the ï¬xed stack of observation frames. In general, we believe that exposing the real game to the agent is a promising direction for future research.
References Bellemare, M. G.; Naddaf, Y.; Veness, J.; and Bowling, M. 2013. The arcade learning environment: An evaluation plat- form for general agents. J. Artif. Intell. Res. (JAIR) 47:253â 279. Bellemare, M. G.; Srinivasan, S.; Ostrovski, G.; Schaul, T.; Saxton, D.; and Munos, R. 2016. Unifying count-based exploration and intrinsic motivation. In NIPS. Bellemare, M. G.; Dabney, W.; and Munos, R. 2017. A dis- tributional perspective on reinforcement learning. In ICML. Blundell, C.; Uria, B.; Pritzel, A.; Li, Y.; Ruderman, A.; Leibo, J. Z.; Rae, J.; Wierstra, D.; and Hassabis, D. 2016. Model-Free Episodic Control. ArXiv e-prints. Dosovitskiy, A., and Koltun, V. 2016. Learning to act by predicting the future. CoRR abs/1611.01779. Fortunato, M.; Azar, M. G.; Piot, B.; Menick, J.; Osband, I.; Graves, A.; Mnih, V.; Munos, R.; Hassabis, D.; Pietquin, O.; Blundell, C.; and Legg, S. 2017. Noisy networks for exploration. CoRR abs/1706.10295. Hausknecht, M., and Stone, P. 2015. Deep recurrent Q- arXiv preprint learning for partially observable MDPs. arXiv:1507.06527. He, F. S.; Liu, Y.; Schwing, A. G.; and Peng, J. 2016. Learn- ing to play in a day: Faster deep reinforcement learning by optimality tightening. CoRR abs/1611.01606. Jaderberg, M.; Mnih, V.; Czarnecki, W. M.; Schaul, T.; Leibo, J. Z.; Silver, D.; and Kavukcuoglu, K. 2016. Rein- forcement learning with unsupervised auxiliary tasks. CoRR abs/1611.05397. Kingma, D. P., and Ba, J. 2014. Adam: A method for stochastic optimization. In Proceedings of the 3rd Interna- tional Conference on Learning Representations (ICLR). Kulkarni, T. D.; Narasimhan, K.; Saeedi, A.; and Tenen- baum, J. B. 2016a. Hierarchical deep reinforcement learn- ing: Integrating temporal abstraction and intrinsic motiva- tion. CoRR abs/1604.06057. Kulkarni, T. D.; Saeedi, A.; Gautam, S.; and Gershman, S. J. 2016b. Deep successor reinforcement learning. arXiv preprint arXiv:1606.02396. Lin, L.-J. 1992. Self-improving reactive agents based on reinforcement learning, planning and teaching. Machine Learning 8(3):293â321. Mnih, V.; Kavukcuoglu, K.; Silver, D.; Graves, A.; Antonoglou, I.; Wierstra, D.; and Riedmiller, M. A. 2013. Playing atari with deep reinforcement learning. CoRR abs/1312.5602. Mnih, V.; Kavukcuoglu, K.; Silver, D.; Rusu, A. A.; Veness, J.; Bellemare, M. G.; Graves, A.; Riedmiller, M.; Fidjeland, A. K.; Ostrovski, G.; Petersen, S.; Beattie, C.; Sadik, A.; Antonoglou, I.; King, H.; Kumaran, D.; Wierstra, D.; Legg,
S.; and Hassabis, D. 2015. Human-level control through deep reinforcement learning. Nature 518(7540):529â533. Mnih, V.; Badia, A. P.; Mirza, M.; Graves, A.; Lillicrap, T.; Harley, T.; Silver, D.; and Kavukcuoglu, K. 2016. Asyn- chronous methods for deep reinforcement learning. In In- ternational Conference on Machine Learning. Nair, A.; Srinivasan, P.; Blackwell, S.; Alcicek, C.; Fearon, R.; De Maria, A.; Panneershelvam, V.; Suleyman, M.; Beat- tie, C.; Petersen, S.; Legg, S.; Mnih, V.; Kavukcuoglu, K.; and Silver, D. 2015. Massively parallel methods for deep reinforcement learning. arXiv preprint arXiv:1507.04296. OâDonoghue, B.; Munos, R.; Kavukcuoglu, K.; and Mnih, V. 2016. Pgq: Combining policy gradient and q-learning. CoRR abs/1611.01626. Osband, I.; Blundell, C.; Pritzel, A.; and Roy, B. V. 2016. Deep exploration via bootstrapped dqn. In NIPS. Salimans, T.; Ho, J.; Chen, X.; and Sutskever, I. 2017. Evo- lution strategies as a scalable alternative to reinforcement learning. CoRR abs/1703.03864. Schaul, T.; Quan, J.; Antonoglou, I.; and Silver, D. 2015. Prioritized experience replay. In Proc. of ICLR. Schulman, J.; Levine, S.; Moritz, P.; Jordan, M.; and Abbeel, P. 2015. Trust region policy optimization. In Proceedings of the 32Nd International Conference on International Con- ference on Machine Learning - Volume 37, ICMLâ15, 1889â 1897. JMLR.org. Sharma, S.; Lakshminarayanan, A. S.; and Ravindran, 2017. Learning to repeat: Fine grained action rep- B. arXiv preprint etition for deep reinforcement learning. arXiv:1702.06054. Stadie, B. C.; Levine, S.; and Abbeel, P. 2015. Incentivizing exploration in reinforcement learning with deep predictive models. CoRR abs/1507.00814. Sutton, R. S., and Barto, A. G. 1998. Reinforcement Learn- ing: An Introduction. The MIT press, Cambridge MA. Sutton, R. S. 1988. Learning to predict by the methods of temporal differences. Machine learning 3(1):9â44. Tieleman, T., and Hinton, G. 2012. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent mag- nitude. COURSERA: Neural networks for machine learning 4(2):26â31. van Hasselt, H.; Guez, A.; Guez, A.; Hessel, M.; Mnih, V.; and Silver, D. 2016. Learning values across many orders of magnitude. In Advances in Neural Information Processing Systems 29, 4287â4295. van Hasselt, H.; Guez, A.; and Silver, D. 2016. Deep re- In Proc. of inforcement learning with double Q-learning. AAAI, 2094â2100. van Hasselt, H. 2010. Double Q-learning. In Advances in Neural Information Processing Systems 23, 2613â2621. Vezhnevets, A. S.; Osindero, S.; Schaul, T.; Heess, N.; Jader- berg, M.; Silver, D.; and Kavukcuoglu, K. 2017. Feu- dal networks for hierarchical reinforcement learning. CoRR abs/1703.01161.
Wang, Z.; Schaul, T.; Hessel, M.; van Hasselt, H.; Lanctot, M.; and de Freitas, N. 2016. Dueling network architec- tures for deep reinforcement learning. In Proceedings of The 33rd International Conference on Machine Learning, 1995â 2003.
# Appendix
Table 3 lists the preprocessing of environment frames, rewards and discounts introduced by DQN. Table 4 lists the additional hyper-parameters that Rainbow inherits from DQN and the other baselines considered in this paper. The hyper-parameters for which Rainbow uses non standard settings are instead listed in the main text. In the subsequent pages, we list the tables showing, for each game, the score achieved by Rainbow and several baselines in both the no-ops regime (Table 6) and the human-starts regime (Table 5). In Figures 5 and 6 we also plot, for each game, the learning curves of Rainbow, several baselines, and all ablation experiments. These learning curves are smoothed with a moving average over a window of 10.
Hyper-parameter             Value
Grey-scaling                True
Observation down-sampling   (84, 84)
Frames stacked              4
Action repetitions          4
Reward clipping             [-1, 1]
Terminal on loss of life    True
Max frames per episode      108K
Table 3: Preprocessing: the values of these hyper-parameters are the same used by DQN and its variants. They are listed here for completeness. Observations are grey-scaled and rescaled to 84 × 84 pixels. 4 consecutive frames are concatenated as each state's representation. Each action selected by the agent is repeated 4 times. Rewards are clipped between −1 and +1. In games where the player has multiple lives, transitions associated with the loss of a life are considered terminal. All episodes are capped after 108K frames.
Hyper-parameter | Value
Q network: channels | 32, 64, 64
Q network: filter size | 8 × 8, 4 × 4, 3 × 3
Q network: stride | 4, 2, 1
Q network: hidden units | 512
Q network: output units | Number of actions
Discount factor | 0.99
Memory size |
Replay period | every 4 agent steps
Minibatch size | 32
Table 4: Additional hyper-parameters: the values of these hyper-parameters are the same used by DQN and its variants. The network has 3 convolutional layers: with 32, 64 and 64 channels. The layers use 8 × 8, 4 × 4, 3 × 3 filters with strides of 4, 2, 1, respectively. The value and advantage streams of the dueling architecture both have a hidden layer with 512 units. The output layer of the network has a number of units equal to the number of actions available in the game. We use a discount factor of 0.99, which is set to 0 on terminal transitions. We perform a learning update every 4 agent steps, using mini-batches of 32 transitions.
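As an illustration of the architecture described in this caption, a hedged PyTorch sketch of the convolutional torso plus dueling value/advantage streams; it omits the noisy and distributional components used by Rainbow and is not the authors' implementation:

```python
import torch
import torch.nn as nn

class DuelingQNetwork(nn.Module):
    """Dueling Q-network with the layer sizes described in the Table 4 caption."""
    def __init__(self, num_actions, in_channels=4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
        )
        conv_out = 64 * 7 * 7  # spatial size after the three conv layers on 84x84 inputs
        self.value = nn.Sequential(nn.Linear(conv_out, 512), nn.ReLU(), nn.Linear(512, 1))
        self.advantage = nn.Sequential(nn.Linear(conv_out, 512), nn.ReLU(), nn.Linear(512, num_actions))

    def forward(self, x):
        h = self.conv(x / 255.0).flatten(start_dim=1)
        v, a = self.value(h), self.advantage(h)
        # Dueling aggregation: Q = V + A - mean(A)
        return v + a - a.mean(dim=1, keepdim=True)
```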
Game DQN A3C DDQN Prior. DDQN Duel. DDQN Distrib. DQN Noisy DQN Rainbow
alien amidar assault asterix asteroids atlantis bank heist battle zone beam rider berzerk bowling boxing breakout centipede chopper command crazy climber defender demon attack double dunk enduro ï¬shing derby freeway frostbite gopher gravitar hero ice hockey kangaroo krull kung fu master montezuma revenge ms pacman name this game phoenix pitfall pong private eye qbert road runner robotank seaquest skiing solaris space invaders star gunner surround tennis time pilot tutankham venture video pinball wizard of wor yars revenge zaxxon
634.0 178.4 3489.3 3170.5 1458.7 292491.0 312.7 23750.0 9743.2 493.4 56.5 70.3 354.5 3973.9 5017.0 98128.0 15917.5 12550.7 -6.0 626.7 -1.6 26.9 496.1 8190.4 298.0 14992.9 -1.6 4496.0 6206.0 20882.0 47.0 1092.3 6738.8 7484.8 -113.2 18.0 207.9 9271.5 35215.0 58.7 4216.7 -12142.1 1295.4 1293.8 52970.0 -6.0 11.1 4786.0 45.6 136.0 154414.1 1609.0 4577.5 4412.0
518.4 263.9 5474.9 22140.5 4474.5 911,091.0 970.1 12950.0 22707.9 817.9 35.1 59.8 681.9 3755.8 7021.0 112646.0 56533.0 113,308.4 -0.1 -82.5 18.8 0.1 190.5 10022.8 303.5 32464.1 -2.8 94.0 5560.0 28819.0 67.0 653.7 10476.1 52894.1 -78.5 5.6 206.9 15148.8 34216.0 32.8 2355.4 -10911.1 1956.0 15,730.5 138218.0 -9.7 -6.3 12,679.0 156.3 23.0 331628.1 17,244.0 7157.5 24,622.0
1033.4 169.1 6060.8 16837.0 1193.2 319688.0 886.0 24740.0 17417.2 1011.1 69.6 73.5 368.9 3853.5 3495.0 113782.0 27510.0 69803.4 -0.3 1216.6 3.2 28.8 1448.1 15253.0 200.5 14892.5 -2.5 11204.0 6796.1 30207.0 42.0 1241.3 8960.3 12366.5 -186.7 19.1 -575.5 11020.8 43156.0 59.1 14498.0 -11490.4 810.0 2628.7 58365.0 1.9 -7.8 6608.0 92.2 21.0 367823.7 6201.0 6270.6 8593.0
900.5 218.4 7,748.5 31,907.5 1,654.0 593,642.0 816.8 29,100.0 26,172.7 1,165.6 65.8 68.6 371.6 3,421.9 6,604.0 131,086.0 21,093.5 73,185.8 2.7 1,884.4 9.2 27.9 2,930.2 57,783.8 218.0 20,506.4 -1.0 10,241.0 7,406.5 31,244.0 13.0 1,824.6 11,836.1 27,430.1 -14.8 18.9 179.0 11,277.0 56,990.0 55.4 39,096.7 -10,852.8 2,238.2 9,063.0 51,959.0 -0.9 -2.0 7,448.0 33.6 244.0 374,886.9 7,451.0 5,965.1 9,501.0
1,486.5 172.7 3,994.8 15,840.0 2,035.4 445,360.0 1,129.3 31,320.0 14,591.3 910.6 65.7 77.3 411.6 4,881.0 3,784.0 124,566.0 33,996.0 56,322.8 -0.8 2,077.4 -4.1 0.2 2,332.4 20,051.4 297.0 15,207.9 -1.3 10,334.0 8,051.6 24,288.0 22.0 2,250.6 11,185.1 20,410.5 -46.9 18.8 292.6 14,175.8 58,549.0 62.0 37,361.6 -11,928.0 1,768.4 5,993.1 90,804.0 4.0 4.4 6,601.0 48.0 200.0 110,976.2 7,054.0 25,976.5 10,164.0
1,997.5 237.7 5,101.3 395,599.5 2,071.7 289,803.0 835.6 32,250.0 15,002.4 1,000.0 76.8 62.1 548.7 7,476.9 9,600.5 154,416.5 32,246.0 109,856.6 -3.7 2,133.4 -4.9 28.8 2,813.9 27,778.3 422.0 28,554.2 -0.1 9,555.5 6,757.8 33,890.0 130.0 2,064.1 11,382.3 31,358.3 -342.8 18.9 5,717.5 15,035.9 56,086.0 49.8 3,275.4 -13,247.7 2,530.2 6,368.6 67,054.5 4.5 22.6 7,684.5 124.3 462.0 455,052.7 11,824.5 8,267.7 15,130.0
533.3 148.0 5,124.3 8,277.3 4,078.1 303,666.5 955.0 26,985.0 15,241.5 670.8 79.3 66.3 423.3 4,214.4 8,778.5 98,576.5 18,037.5 25,207.8 -1.0 1,021.5 -3.7 27.1 418.8 13,131.0 250.5 2,454.2 -2.4 7,465.0 6,833.5 27,921.0 55.0 1,012.1 7,186.4 15,505.0 -154.4 18.0 5,955.4 9,176.6 35,376.5 50.9 2,353.1 -13,905.9 2,608.2 1,697.2 31,864.5 -3.1 -2.1 5,311.0 123.3 10.5 241,851.7 4,796.5 5,487.3 7,650.5
Rainbow: 6,022.9 202.8 14,491.7 280,114.0 2,249.4 814,684.0 826.0 52,040.0 21,768.5 1,793.4 39.4 54.9 379.5 7,160.9 10,916.0 143,962.0 47,671.3 109,670.7 -0.6 2,061.1 22.6 29.1 4,141.1 72,595.7 567.5 50,496.8 -0.7 10,841.0 6,715.5 28,999.8 154.0 2,570.2 11,686.5 103,061.6 -37.6 19.0 1,704.4 18,397.6 54,261.0 55.2 19,176.0 -11,685.8 2,860.7 12,629.0 123,853.0 7.0 -2.2 11,190.5 126.9 45.0 506,817.2 14,631.5 93,007.9 19,658.0

Table 5: Human Starts evaluation regime: Raw scores across all games, averaged over 200 testing episodes, from the agent snapshot that obtained the highest score during training. We report the published scores for DQN, A3C, DDQN, Dueling DDQN, and Prioritized DDQN. For Distributional DQN and Rainbow we report our own evaluations of the agents.
Game DQN DDQN Prior. DDQN Duel. DDQN Distrib. DQN Noisy DQN Rainbow
alien amidar assault asterix asteroids atlantis bank heist battle zone beam rider berzerk bowling boxing breakout centipede chopper command crazy climber defender demon attack double dunk enduro ï¬shing derby freeway frostbite gopher gravitar hero ice hockey kangaroo krull kung fu master montezuma revenge ms pacman name this game phoenix pitfall pong private eye qbert road runner robotank seaquest skiing solaris space invaders star gunner surround tennis time pilot tutankham venture video pinball wizard of wor yars revenge zaxxon
1620.0 978.0 4280.0 4359.0 1364.5 279987.0 455.0 29900.0 8627.5 585.6 50.4 88.0 385.5 4657.7 6126.0 110763.0 23633.0 12149.4 -6.6 729.0 -4.9 30.8 797.4 8777.4 473.0 20437.8 -1.9 7259.0 8422.3 26059.0 0.0 3085.6 8207.8 8485.2 -286.1 19.5 146.7 13117.3 39544.0 63.9 5860.6 -13062.3 3482.8 1692.3 54282.0 -5.6 12.2 4870.0 68.1 163.0 196760.4 2704.0 18089.9 5363.0
3747.7 1793.3 5393.2 17356.5 734.7 106056.0 1030.6 31700.0 13772.8 1225.4 68.1 91.6 418.5 5409.4 5809.0 117282.0 35338.5 58044.2 -5.5 1211.8 15.5 33.3 1683.3 14840.8 412.0 20130.2 -2.7 12992.0 7920.5 29710.0 0.0 2711.4 10616.0 12252.5 -29.9 20.9 129.7 15088.5 44127.0 65.1 16452.7 -9021.8 3067.8 2525.5 60142.0 -2.9 -22.8 8339.0 218.4 98.0 309941.9 7492.0 11712.6 10163.0
6,648.6 2,051.8 7,965.7 41,268.0 1,699.3 427,658.0 1,126.8 38,130.0 22,430.7 1,614.2 62.6 98.8 381.5 5,175.4 5,135.0 183,137.0 24,162.5 70,171.8 4.8 2,155.0 30.2 32.9 3,421.6 49,097.4 330.5 27,153.9 0.3 14,492.0 10,263.1 43,470.0 0.0 4,751.2 13,439.4 32,808.3 0.0 20.7 200.0 18,802.8 62,785.0 58.6 44,417.4 -9,900.5 1,710.8 7,696.9 56,641.0 2.1 0.0 11,448.0 87.2 863.0 406,420.4 10,373.0 16,451.7 13,490.0
4,461.4 2,354.5 4,621.0 28,188.0 2,837.7 382,572.0 1,611.9 37,150.0 12,164.0 1,472.6 65.5 99.4 345.3 7,561.4 11,215.0 143,570.0 42,214.0 60,813.3 0.1 2,258.2 46.4 0.0 4,672.8 15,718.4 588.0 20,818.2 0.5 14,854.0 11,451.9 34,294.0 0.0 6,283.5 11,971.1 23,092.2 0.0 21.0 103.0 19,220.3 69,524.0 65.3 50,254.2 -8,857.4 2,250.8 6,427.3 89,238.0 4.4 5.1 11,666.0 211.4 497.0 98,209.5 7,855.0 49,622.1 12,944.0
4,055.8 1,267.9 5,909.0 400,529.5 2,354.7 273,895.0 1,056.7 41,145.0 13,213.4 1,421.8 74.1 98.1 612.5 9,015.5 13,136.0 178,355.0 37,896.8 110,626.5 -3.8 2,259.3 9.1 33.6 3,938.2 28,841.0 681.0 33,860.9 1.3 12,909.0 9,885.9 43,009.0 367.0 3,769.2 12,983.6 34,775.0 -2.1 20.8 15,172.9 16,956.0 63,366.0 54.2 4,754.4 -14,959.8 5,643.1 6,869.1 69,306.5 6.2 23.6 7,875.0 249.4 1,107.0 478,646.7 15,994.5 16,608.6 18,347.5
2,394.9 1,608.0 5,198.6 12,403.8 4,814.1 329,010.0 1,323.0 32,050.0 12,534.0 837.3 77.3 83.3 459.1 4,355.8 9,519.0 118,768.0 23,083.0 24,950.1 -1.8 1,129.2 7.7 32.0 583.6 15,107.9 443.5 5,053.1 -2.1 12,117.0 9,061.9 34,099.0 0.0 2,501.6 8,332.4 16,974.3 -18.2 21.0 3,966.0 15,276.3 41,681.0 53.5 2,495.4 -16,307.3 3,204.5 2,145.5 34,504.5 -3.3 0.0 6,157.0 231.6 0.0 270,444.6 5,432.0 9,570.1 9,390.0
9,491.7 5,131.2 14,198.5 428,200.3 2,712.8 826,659.5 1,358.0 62,010.0 16,850.2 2,545.6 30.0 99.6 417.5 8,167.3 16,654.0 168,788.5 55,105.0 111,185.2 -0.3 2,125.9 31.3 34.0 9,590.5 70,354.6 1,419.3 55,887.4 1.1 14,637.5 8,741.5 52,181.0 384.0 5,380.4 13,136.0 108,528.6 0.0 20.9 4,234.0 33,817.5 62,041.0 61.4 15,898.9 -12,957.8 3,560.3 18,789.0 127,029.0 9.7 -0.0 12,926.0 241.0 5.5 533,936.5 17,862.5 102,557.0 22,209.5
Table 6: No-op starts evaluation regime: Raw scores across all games, averaged over 200 testing episodes, from the agent snapshot that obtained the highest score during training. We report the published scores for DQN, DDQN, Dueling DDQN, and Prioritized DDQN. For Distributional DQN and Rainbow we report our own evaluations of the agents. A3C is not listed since the paper did not report the scores for the no-ops regime.
[Figure 5: one learning-curve panel per game, with a shared legend for DQN, A3C, DDQN, Prioritized DDQN, Dueling DDQN, Distributional DQN, Noisy DQN, and Rainbow; see the caption below.]
Figure 5: Learning curves for Rainbow and the baselines discussed in the paper, for each individual game. Every curve is smoothed with a moving average of 10 to improve readability.
[Figure 6: one learning-curve panel per game for DQN, Rainbow, and the Rainbow ablations (no double, no n-steps, no distributional, no prioritized replay, no dueling, no noisy nets); see the caption below.]
Figure 6: Learning curves for Rainbow and its ablations, for each individual game. Every curve is smoothed with a moving average of 10 to improve readability. | {
"id": "1507.06527"
} |
1710.00459 | Deep Abstract Q-Networks | We examine the problem of learning and planning on high-dimensional domains
with long horizons and sparse rewards. Recent approaches have shown great
successes in many Atari 2600 domains. However, domains with long horizons and
sparse rewards, such as Montezuma's Revenge and Venture, remain challenging for
existing methods. Methods using abstraction (Dietterich 2000; Sutton, Precup,
and Singh 1999) have been shown to be useful in tackling long-horizon problems. We
combine recent techniques of deep reinforcement learning with existing
model-based approaches using an expert-provided state abstraction. We construct
toy domains that elucidate the problem of long horizons, sparse rewards and
high-dimensional inputs, and show that our algorithm significantly outperforms
previous methods on these domains. Our abstraction-based approach outperforms
Deep Q-Networks (Mnih et al. 2015) on Montezuma's Revenge and Venture, and
exhibits backtracking behavior that is absent from previous methods. | http://arxiv.org/pdf/1710.00459 | Melrose Roderick, Christopher Grimm, Stefanie Tellex | cs.LG, cs.AI | null | null | cs.LG | 20171002 | 20180825 | 8 1 0 2
g u A 5 2 ] G L . s c [
2 v 9 5 4 0 0 . 0 1 7 1 : v i X r a
# Deep Abstract Q-Networks
Melrose Roderick Carnegie Mellon University Pittsburgh, Pennsylvania mroderick@cmu.edu
Christopher Grimm University of Michigan Ann Arbor, Michigan crgrimm@umich.edu
Stefanie Tellex Brown University Providence, Rhode Island stefie10@cs.brown.edu
ABSTRACT We examine the problem of learning and planning on high-dimensional domains with long horizons and sparse rewards. Recent approaches have shown great successes in many Atari 2600 domains. However, domains with long horizons and sparse rewards, such as Montezuma's Revenge and Venture, remain challenging for existing methods. Methods using abstraction [5, 13] have been shown to be useful in tackling long-horizon problems. We combine recent techniques of deep reinforcement learning with existing model-based approaches using an expert-provided state abstraction. We construct toy domains that elucidate the problem of long horizons, sparse rewards and high-dimensional inputs, and show that our algorithm significantly outperforms previous methods on these domains. Our abstraction-based approach outperforms Deep Q-Networks [11] on Montezuma's Revenge and Venture, and exhibits backtracking behavior that is absent from previous methods.
# KEYWORDS Reinforcement Learning; Hierarchical Planning; Deep Learning
room is locked and the key is at a known location in another room in this domain. The agent must navigate through several rooms to find the key before retracing its steps to the door to unlock it. Learning to navigate each individual room is on its own challenging, but learning a policy to traverse multiple such rooms is much harder. While a complete solution is presently out of reach, there have been a number of promising attempts at improving the long-term planning of deep reinforcement learning agents. These approaches can be divided into two categories:
(1) Those that intrinsically motivate an agent to explore portions of the state-space that exhibit some form of novelty [3]. (2) Those that exploit some kind of abstraction to divide the learning problem into more manageable subparts [9, 15].
Both of these approaches suffer drawbacks. Novelty-based ap- proaches indeed encourage exploration. However, this intrinsic drive toward under-explored states tends to interfere with an agentâs ability to form long-term plans. As a result, the agent may be able to find the key in the rooms but is unable to make a plan to pick up the key and then use it to unlock the door.
ACM Reference Format: Melrose Roderick, Christopher Grimm, and Stefanie Tellex. 2018. Deep Abstract Q-Networks. In Proc. of the 17th International Conference on Au- tonomous Agents and Multiagent Systems (AAMAS 2018), Stockholm, Sweden, July 10â15, 2018, IFAAMAS, 8 pages.
1 INTRODUCTION Recent advances in deep learning have enabled the training of rein- forcement learning agents in high-dimensional domains. This was most popularly demonstrated by Mnih et al. [11] in their research into training Deep Q-Networks to play various Atari 2600 games. While the performance attained by Mnih et al. spans an impressive subset of the Atari 2600 library, several complicated games remain out of reach from existing techniques, including the notoriously difficult Montezumaâs Revenge (MR) and Venture. These anoma- lously difficult domains exhibit sparse reward signals and sprawling partially-observable mazes. The confluence of these traits produces difficult games beyond the capabilities of existing deep techniques to solve. In spite of these considerable challenges, these games are some of the closest analogs to real-world robotics problems since they require an agent to navigate a complex, unknown environment and manipulate objects to achieve long-term goals.
As an example of a long-horizon problem, consider a domain in which an agent is tasked with navigating through a series of cluttered rooms with only visual input. The door to enter the desired
Abstraction-based approaches focus on end-to-end learning of both the abstractions and the resulting sub-policies, and are hin- dered by an extremely difficult optimization problem that balances constructing a good abstraction while still exploring the state-space and learning the policies to navigate the abstraction while the ab- straction continues to change. Moreover, given the lack of strong theoretical underpinnings for the âgoodnessâ of an abstraction, lit- tle external guidance can be provided for any such optimization scheme.
To tackle domains with long horizons and sparse rewards, we propose the following method in which an experimenter provides a lightweight abstraction consisting of factored high-level states to the agent. We then employ the formalism of the Abstract Markov Decision Process (AMDP) [7] to divide a given domain into a sym- bolic, high-level representation for learning long-term policies and a pixel-based low-level representation to leverage the recent suc- cesses of deep-learning techniques. In our toy example, the high- level representation would be the current room of the agent and whether the agent has the key, and the low-level representation would be the pixel values of the image. The aforementioned fac- toring decomposes this symbolic, high-level state into collections of state-attributes with associated predicate functions in a manner similar to Object Oriented MDPs [6]. This factoring allows us to treat actions in our high-level domain as changes in attributes and predicates rather than as state-to-state transitions, while avoiding a combinatorial explosion in the action space as the number of objects increases. For example, once a key is retrieved, the agent should not have to re-learn how to navigate from room to room; holding a key should not generally change the way the agent navigates.
In this work, we detail our method for combining recent tech- niques of deep reinforcement learning with existing model-based approaches using an expert-provided state abstraction. We then illustrate the advantages of this method on toy versions of the room navigation task, which are designed to exhibit long horizons, sparse reward signals, and high-dimensional inputs. We show experimen- tally that our method outperforms Deep Q-Networks (DQN) and competing novelty-based techniques on these domains. Finally, we apply our approach to Atari 2600 [2] Montezumaâs Revenge (MR) and Venture and show it outperforms DQN and exhibits backtrack- ing behavior that is absent from previous methods.
2 RELATED WORK We now survey existing long-horizon learning approaches includ- ing abstraction, options, and intrinsic motivation.
Subgoals and abstraction are common approaches for decreasing problem horizons, allowing agents to more efficiently learn and plan on long-horizon domains. One of the earliest reinforcement learning methods using these ideas is MAXQ [5], which decomposes a flat MDP into a hierarchy of subtasks. Each subtask is accompanied by a subgoal to be completed. The policy for these individual subtasks is easier to compute than the entire task. Additionally, MAXQ constrains the choice of subtasks depending on the context or parent task. A key drawback to this method is that the plans are computed recursively, meaning the high-level learning algorithm must recur down into the subtrees at training time. This limitation forces the use of a single learning algorithm for both the high-level and low- level. Our approach avoids this problem, allowing us to use deep reinforcement learning algorithms on the low-level to handle the high-dimensional input and model-based algorithms on the high- level to create long-term plans and guide exploration.
Temporally extended actions [10] and options [13] are other commonly used approaches to decreasing problem horizons, which bundles reusable segments of plans into single actions that can be used alongside the environment actions. Learning these options for high-dimensional domains, such as Atari games, is challenging and has only recently been performed by Option-Critic [1]. Option- Critic, however, fails to show improvements in long-horizon do- mains, such as Montezumaâs Revenge and Venture. In our work we seek to learn both the sub-policies and the high-level policy.
Some existing approaches have sought to learn both the options and high-level policies in parallel. The hierarchical-DQN (h-DQN) [9] is a two-tiered agent using Deep Q-Learning. The h-DQN is di- vided into a low-level controller and a high-level meta-controller. It is important to note that these tiers operate on different timescales, with the meta-controller specifying long-term, manually-annotated goals for the controller to focus on completing in the short-term. These manually-annotated goals are similar to the abstraction we provide to our agent: the goals in our case would be adjacent high- level states. However, although this method does perform action ab- straction, it does not perform state abstraction. Thus, the high-level learner still must learn over a massive high-dimensional state-space. Our approach, on the other hand, takes advantage of both state and action abstraction, which greatly decreases the high-level state- space allowing us to use a model-based planner at the high-level. This pattern of a high-level entity providing goal-based rewards to
a low-level agent is also explored in Vezhnevets et al. [15] with the FeUdal Network. Unlike the h-DQN, the FeUdal Network does not rely on experimenter-provided goals, opting to learn a low-level Worker and a high-level Manager in parallel, with the Manager sup- plying a vector from a learned goal-embedding to the worker. While this method was able to achieve a higher score on Montezumaâs Revenge than previous methods, it fails to explore as many rooms as novelty-based methods. In contrast, our approach provides the abstraction to the agent, allowing us to leverage existing model- based exploration algorithms, such as R-Max [4], which enable our agent to create long-term plans to explore new rooms.
In addition to methods that rely on a goal-based form of reward augmentation, there has been work on generally motivating agents to explore their environment. Particularly, Bellemare et al. [3] de- rive a pseudo-count formula which approximates naively counting the number of times a state occurs. These pseudo-counts generalize well to high-dimensional spaces and illuminate the degree to which different states have been explored. Using this information, Belle- mare et al. [3] are able to produce a reward-bonus to encourage learning agents to visit underexplored states; this method is referred to as Intrinsic Motivation (IM). This approach is shown to explore large portions of MR (15/24 rooms). While this method is able to explore significantly better than DQN, it still fails to execute plans that required to complete MR, such as collecting keys to unlock doors.
For example, in MR, after collecting its first key, the agent ends its current life rather than retracing its steps and unlocking the door, allowing it to retain the key while returning to the starting location, much closer to the doors. This counterintuitive behavior occurs because the factorization of the state-space in Bellemare et al. [3] renders the presence of the key and the agentâs position independent, resulting in the pseudo-counts along the path back to the door still being relatively large when compared to states near the key. Thus, the corresponding exploration bonuses for backtracking are lower than those for remaining near the key. Therefore, if the environment terminated after a single life, this method would never learn to leave the first room. This phenomenon is illustrated in our single-life MR results in Figure 5. Similarly, in Venture once the IM agent has collected an item from one of the rooms, the novelty of that room encourages it to remain in that room instead of collecting all four items and thereby completing the level. In contrast, our method allows the agent to learn a different policy before it collects the key or item and after, in order to systematically find the key or item and explore farther without dying.
Schema Networks [8] used a model-based, object-oriented ap- proach to improve knowledge transfer across similar Atari do- mains, requiring much less experience to perform well in the novel domains. This method, however, is not able to learn from high- dimensional image data and provides no evidence of improving performance on long-horizon domains.
3 FRAMEWORK AND NOTATION The domains considered in this work are assumed to be Markov Decision Processes (MDPs), defined as the tuple:
⟨S, A, R, T, E⟩ (1)

where S is a set of states, A is a set of actions that can be taken, R(s, a, s′) is a function representing the reward incurred from transitioning from state s to state s′ by taking action a, T(s, a, s′) is a function representing the probability of transitioning from s to s′ by taking action a, and E ⊆ S is a set of terminal states that, once reached, prevent any future action. Under this formalism, an MDP represents an environment which is acted upon by an agent. The agent takes actions from the set A and receives a reward and an updated state from the environment. In reinforcement-learning problems, agents aim to learn policies, π(s) : S → A, to maximize their reward over time. Their success at this is typically measured as the discounted reward or value of acting under a policy from a given state:
$$V^{\pi}(s) = \mathbb{E}\left[\, r_t + \gamma r_{t+1} + \gamma^2 r_{t+2} + \cdots \,\right] \qquad (2)$$

where $(r_t)$ is a sequence of random variables representing the reward of an agent acting under policy π over time, and γ ∈ (0, 1) is a discount factor applied to future reward-signals.
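To make Eq. 2 concrete in the sparse-reward setting discussed throughout the paper, a small illustrative Python example (assuming γ = 0.99) of the discounted return of an episode whose only reward arrives 100 steps in the future:

```python
def discounted_return(rewards, gamma=0.99):
    """Discounted sum r_t + gamma * r_{t+1} + gamma^2 * r_{t+2} + ... from Eq. 2."""
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g

# A sparse-reward episode: a single reward of 1 arriving 100 steps in the future.
print(discounted_return([0.0] * 99 + [1.0]))  # 0.99 ** 99, roughly 0.37
```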
To allow our agent to learn and plan on an abstract level, we employ the Abstract Markov Decision Process (AMDP) formal- ism presented in Gopalan et al. [7]. An AMDP is a hierarchy of MDPs allowing for planning over environments at various levels of abstraction. Formally, a node in this hierarchy is defined as an augmented MDP tuple:
⟨S̃, Ã, T̃, R̃, Ẽ, F⟩.

where S̃, R̃ and Ẽ mirror the standard MDP components defined in Eq. 1, F : S ↦ S̃ is a state projection function that maps lower-level states in S to their abstract representations one level above in the hierarchy, S̃, and every ã ∈ Ã represents either another augmented MDP or a base environment action.
As a concrete example, consider an environment containing four connected rooms. A simple two-tiered AMDP hierarchy might treat entire rooms as abstract states that can be transitioned between. Each action at the high-level would be a low-level MDP with the goal of transitioning from one room to the next. The action-set for these MDPs would be environment-level actions (such as UP, DOWN, LEFT, RIGHT) and a reward function would be 1 for a successful transition and a 0 otherwise.
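A minimal sketch of how one node of such a hierarchy could be represented in Python; the field names are assumptions chosen for illustration, not the paper's data structures:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Set, Tuple

@dataclass
class AMDPNode:
    """One node of an AMDP hierarchy: an augmented MDP whose actions may
    themselves be lower-level MDPs, plus a projection F from ground states
    to this node's abstract states."""
    states: Set[Tuple]                   # abstract states S~
    actions: List[object]                # each entry: a primitive action or a child MDP
    transition: Dict[Tuple, float]       # T~(s, a, s') estimates, keyed by (s, a, s')
    reward: Dict[Tuple, float]           # R~(s, a, s') estimates, keyed by (s, a, s')
    terminals: Set[Tuple]                # E~
    project: Callable[[object], Tuple]   # F : S -> S~
```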
4 MODEL We now describe our hierarchical system for learning agents that ex- hibit long-term plans. Our approach involves learning two coupled agents simultaneously: a high-level L1-agent and a low-level L0- agent. The AMDP framework allows for more levels of abstraction, but we think 2 levels of abstraction is sufficient for our domains.
The L0-agent operates on states received directly from the envi- ronment and the L1-agent operates on an abstraction provided by the experimenter. This abstraction is intended to be coarse, meaning that only limited information about the environment is provided to the L1-agent and many environment states cluster into a sin- gle L1 state. The coarseness of the abstraction allows for minimal engineering on the part of the experimenter. We use the AMDP formalism described above, defining the L1-agentâs environment as the MDP, ⨠ËS, ËA, ËT , ËR, ËEâ©, and the L0-agentâs environment as the
MDP, ⟨S, A, T, R, E⟩. We also denote the state projection function mapping L0-states to corresponding L1-states as F : S ↦ S̃.
4.1 Abstract States and Actions To allow our agent to plan at a higher level, we project the ground level states (e.g. Atari frames) into a much lower-dimensional abstraction for the L1-agent. Similar to Object Oriented MDPs [6], the L1-agent's abstraction is specified by: a set of abstract states factored into attributes that represent independent state components and a set of predicate functions that are used to specify dependencies or interactions between particular values of the attributes. This information is provided to the agent in the form of a state projection function, F : S ↦ S̃, which grounds abstract states to sets of environment states. More precisely, let N ∈ Z+ be the number of attributes in each abstract state, M ∈ Z+ be the number of predicate functions and S̃ be the set of provided abstract states. For any s̃ ∈ S̃ we will alternatively write (s̃1, . . . , s̃N), to emphasize the N factors of s̃. We write (p1, . . . , pM) to denote the M predicate functions, where each pj : S̃ ↦ {0, 1} for j ∈ 1, . . . , M. For example, the L1 state space for MR (an Atari navigation task with rooms, doors, and keys) would consist of the attributes (Agent loc), (Num keys), (i'th Key collected), (j'th Door unlocked) and predicates (Near uncollected i'th Key), (Near unlocked j'th Door), (Near locked j'th Door with key) for all i and j.
This factorization prevents our state-action space from growing combinatorially in the number of objects. In an unfactored domain, an action that is taken with the intent of transitioning from state S1 to state S2 can be thought of symbolically as the ordered pair: (S1, S2). Since there is no predefined structure to S1 or S2, any variation in either state, however slight, mandates a new symbolic action. This is particularly expensive for agents acting across multiple levels of abstraction that need to explicitly learn how to perform each symbolic action on the low-level domain. We mitigate this learning-cost through the factorization imposed by our abstraction-attributes. For a given state (s̃1, . . . , s̃N) ∈ S̃, if we assume that each s̃i is independent then we can represent each L1-action ã ∈ Ã as the ordered set of intended attribute changes by performing ã. We refer to this representation as an attribute difference and define it formally as a tuple with N entries:
$$\mathrm{Diff}(\tilde{s}, \tilde{s}')_i = \begin{cases} \tilde{s}'_i & \text{if } \tilde{s}_i \neq \tilde{s}'_i \\ \varnothing & \text{otherwise} \end{cases} \qquad (3)$$
In practice, it is seldom the case that each of the abstract attributes is completely independent. To allow for modeling dependencies between certain attributes, we use the predicate functions described above and augment our previous notion of L1-actions with independent attributes, representing actions as tuples of attribute differences and evaluated predicate functions: (Diff(s, s′), p1(s), . . . , pL(s)) ∈ Ã. In our example from above, this allows the agent to have different transition dynamics for when the doors in the room are open or closed or when the key in the room has been collected or not. For rooms with no doors or keys, however, the transition dynamics remain constant for any configuration of unlocked doors and collected keys in the state.
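A small illustrative sketch of Eq. 3 and of the predicate-augmented action representation in Python; `action_key` and the example predicate are hypothetical names introduced here, not the authors' code:

```python
def attribute_diff(s, s_next):
    """Eq. 3: entry i is the new attribute value when it changed, else None."""
    return tuple(b if a != b else None for a, b in zip(s, s_next))

def action_key(s, s_next, predicates):
    """An L1 action identified by its attribute difference plus the
    evaluated predicates of the current abstract state."""
    return (attribute_diff(s, s_next), tuple(p(s) for p in predicates))

# Example for a Montezuma-like abstraction (room, num_keys, door_open):
s, s_next = ("room-1", 0, False), ("room-1", 1, False)
near_key = lambda state: state[1] == 0          # illustrative predicate
print(action_key(s, s_next, [near_key]))        # ((None, 1, None), (True,))
```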
4.2 Interactions Between L1 and L0 Agents In order for the L0 agents to learn to transition between L1 abstract states, we need to define the L0 reward function in terms of L1 abstract states. It is important to note that, much like in Kulkarni et al. [9], the L1-agent operates at a different temporal scale than the L0-agent. However, unlike Kulkarni et al. [9], the L0 and L1-agents operate on different state-spaces, so we need to define the reward and terminal functions for each. Suppose that the L1-agent is in state s̃init ∈ S̃ and takes action ã ∈ Ã. Further suppose that s̃goal ∈ S̃ is the intended result of applying action ã to state s̃init. This high-level action causes the execution of an L0-policy with the following modified terminal set and reward function:
$$E_{\text{episode}} = E \cup \{\, s \in S : F(s) \neq \tilde{s}_{\text{init}} \,\}$$
$$R_{\text{episode}}(s, a, s') = \begin{cases} 1 & \text{if } F(s') = \tilde{s}_{\text{goal}} \\ 0 & \text{otherwise} \end{cases} \qquad (4)$$
Notice that the L0 reward function ignores the ground-environment reward function, R. This information is instead passed to the L1 reward function. Denote the rewards accrued over T steps of the L0-episode as $\tilde{r} = \sum_{t=1}^{T} R_t$, denote whether the L0-environment terminated as ẽ, and denote the final L0-state as sterm. At the termination of the L0-episode, these quantities are returned to the L1-agent to provide a complete experience tuple (s̃init, ã, r̃, F(sterm), ẽ).
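A hedged sketch of executing one L1 action as an L0 episode under Eq. 4; the environment interface (`step`, `current_state`) is an assumption, and the 500-step cap mirrors the limit mentioned in Section 5.2:

```python
def run_l0_episode(env, policy, project, s_abstract_init, s_abstract_goal, max_steps=500):
    """Execute one L1 action as an L0 episode (Eq. 4): terminate when the agent
    leaves the initiating abstract state, give the L0 learner reward 1 only on
    reaching the goal abstract state, and return the accumulated ground reward
    and terminal flag separately for the L1 learner."""
    total_ground_reward, env_terminated = 0.0, False
    s = env.current_state()
    for _ in range(max_steps):
        a = policy(s)
        s, r, env_terminated = env.step(a)
        total_ground_reward += r
        if env_terminated or project(s) != s_abstract_init:
            break
    l0_reward = 1.0 if project(s) == s_abstract_goal else 0.0
    # Experience handed back to the L1-agent: (s~_init, a~, r~, F(s_term), e~)
    return l0_reward, total_ground_reward, project(s), env_terminated
```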
5 LEARNING In the previous sections, we defined the semantics of our AMDP hierarchy but did not specify the precise learning algorithms to be used for the L1 and L0-agents. Indeed, any reinforcement learn- ing algorithm could be used for either of these agents since each operates on a classical MDP. In our work, we chose to use a deep reinforcement learning method for the L0 learner to process the high-dimensional pixel input and a model-based algorithm for the L1 learner to exploit its long-term planning capabilities.
5.1 Low Level Learner As described above, every transition between two L1 states is repre- sented by an L0 AMDP. So, if there are multiple hundred L1 states and each one has a few neighboring states, there could be hun- dreds or thousands of L0 AMDPs. Each L0 AMDP could be solved using a vanilla DQN, but it would take millions of observations to train each one to learn since every DQN would have to learn from scratch. To avoid this high computational cost, we share all parameters, except for those in the last fully connected layer of our network, between policies. For each policy we use a different set of parameters for the final fully connected layer. This encourages sharing high-level visual features between policies and imposes that the behavior of an individual L0-policy is specified by these interchangeable, final-layer parameters. In our implementation, we used the Double DQN loss [14] with the Mixed Monte-Carlo update as it has been shown to improve performance on sparse-reward domains [12].
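A possible PyTorch sketch of this parameter sharing: a single convolutional torso and hidden layer shared by all L0 policies, with one interchangeable final linear head per L0 AMDP (in the full system, heads would be added on the fly as new abstract transitions are discovered). This is an illustration, not the released implementation, and it omits the Double DQN loss and Mixed Monte-Carlo update:

```python
import torch
import torch.nn as nn

class SharedMultiHeadDQN(nn.Module):
    """All L0 policies share the convolutional torso and hidden layer;
    each L0 AMDP gets its own final linear head."""
    def __init__(self, num_actions, num_heads, in_channels=4):
        super().__init__()
        self.torso = nn.Sequential(
            nn.Conv2d(in_channels, 32, 8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, 3, stride=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 512), nn.ReLU(),   # assumes 84x84 inputs
        )
        self.heads = nn.ModuleList(nn.Linear(512, num_actions) for _ in range(num_heads))

    def forward(self, frames, head_idx):
        # Q-values for the L0 AMDP selected by head_idx.
        return self.heads[head_idx](self.torso(frames / 255.0))
```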
Because we share all layers of the network between the DQNs, updating one network could change the output for another. This can sometimes lead to forgetting policies. To correct for this, we use an ϵ-greedy policy where we dynamically change epsilon based
Algorithm 1 Object-Oriented AMDP algorithm
1: procedure Learn
2:     S̃, Ã ← ∅
3:     while training do
4:         s ← current environment state
5:         if s ∉ S̃ then
6:             Add_State(s)
7:         end if
8:         a ← arg max_a Q(s, a)
9:         s′, r, t ← perform action a
10:        d_result ← Diff(s, s′)
11:        if ⟨d_result, p_1(s′), . . . , p_L(s′)⟩ ∉ Ã then
12:            Add_Action(d_result, p_1(s′), . . . , p_L(s′))
13:        end if
14:        add (s, a, s′, r, t) to transition table
15:        run Value_Iteration
16:    end while
17: end procedure
18: procedure Value_Iteration
19:    for some number of steps do
20:        for s ∈ S̃ do
21:            for a ∈ all applicable actions for s do
22:                s′ ← apply Diff of a to s
23:                Q_t(s, a) ← Σ_{d_j ∈ N(a)} T(a, d_j) [ R(a, d_j) + γ V_{t−1}(s′)(1 − E(a, d_j)) ]    ▷ Bellman update
24:            end for
25:            V_t(s) ← max_a Q_t(s, a)
26:        end for
27:    end for
28: end procedure
on how successful the L0 AMDP is. We measure the success of each L0 AMDP by periodically evaluating them (by setting ϵ = 0.01) and measuring the number of times the policy terminates at the goal state, s̃goal. We then set ϵ equal to 1 minus the proportion of the time the L0 AMDP succeeds when evaluated (with a minimum epsilon of 0.01). We found this allows the agent to keep exploring actions that were not yet learned or have been forgotten, while exploiting actions that have already been learned. However, when the transition cannot be consistently completed by a random policy, this method tends to fail.
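The adaptive exploration rate can be written in a few lines; a sketch, assuming periodic evaluation counts are tracked per L0 AMDP:

```python
def adaptive_epsilon(successes, evaluations, floor=0.01):
    """Exploration rate for one L0 AMDP: 1 minus its evaluated success rate,
    floored at 0.01, so reliable options exploit while unreliable ones explore."""
    if evaluations == 0:
        return 1.0
    success_rate = successes / evaluations
    return max(floor, 1.0 - success_rate)

print(adaptive_epsilon(18, 20))  # 0.1 for an option that succeeds 90% of the time
```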
5.2 High Level Learner For our L1-agent, we use a tabular R-Max learning agent [4]. We chose this reinforcement learning algorithm for our L1-agent as it constructs long-term plans to navigate to under-explored states. Particularly, every action Ëa â ËA is given an R-Max reward until that action has been tried some number of times. We chose 100 for this number to ensure that a random policy could discover all possible next abstract states.
It is possible for L1 actions to continue running forever if the agent never transitions between L1 states. Thus, in practice we only run an L1 action for a maximum of 500 steps.
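A sketch of the high-level Q computation combining the R-Max bonus with the Bellman update on line 23 of Algorithm 1; the value of R_MAX and the dict layout of `action_stats` are assumed bookkeeping choices, not the paper's:

```python
R_MAX, VISIT_THRESHOLD, GAMMA = 1.0, 100, 0.99

def q_value(V, action_stats, state, apply_diff):
    """Q(s, a) for the tabular high-level learner. `action_stats` is an assumed dict:
      {'count': int,
       'outcomes': {diff: {'count': int, 'reward': float, 'terminal': bool}}}"""
    if action_stats['count'] < VISIT_THRESHOLD:
        return R_MAX / (1.0 - GAMMA)                    # R-Max optimism until tried 100 times
    q = 0.0
    for diff, o in action_stats['outcomes'].items():
        p = o['count'] / action_stats['count']          # empirical T(a, d_j)
        r = o['reward'] / o['count']                    # empirical R(a, d_j)
        s_next = apply_diff(state, diff)
        bootstrap = 0.0 if o['terminal'] else V.get(s_next, 0.0)
        q += p * (r + GAMMA * bootstrap)
    return q
```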
Figure 1: Example of a non-Markovian abstraction. The tran- sition dynamics of room A depend on the side from which the agent enters the room.
5.3 Exploration for L1 and L0 Agents In this work, we assume the agent is given only the state projection function, F , minimizing the work the designer needs to do. However, this means that the agent must learn the transition dynamics of the L1 AMDP and build up the hierarchy on-the-fly.
To do so, our agent begins with an empty set of states and actions, ËS and ËA. Because we do not know the transition graph, every state needs to be sufficiently explored in order to find all neighbors. To aid in exploration, we give every state an explore action, which is simply an L0 AMDP with no goal state. Whenever a new state-state transition is discovered from Ës1 to Ës2, we add a new L1 AMDP action with the initial state Ës1 and goal state Ës2 to ËA. In practice, we limit each explore action to being executed Nexplore times. After being executed Nexplore times, we remove that explore action, assuming that it has been sufficiently explored. We use Nexplore = 100 in our experiments. The pseudo code is detailed in Algorithm 1.
6 CONSTRUCTING AN ABSTRACTION The main benefit of our abstractions is to shorten the reward hori- zon of the low-level learner. The guiding principal is to construct an abstraction such that L1-states encompass small collections of L0-states. This ensures that the L0-agents can reasonably experi- ence rewards from transitioning to all neighboring L1-states. It is crucial that the abstraction is as close to Markovian as possible: the transition dynamics for a state should not depend on the history of previous states. For example, imagine a four rooms domain where room A connects to rooms B and C (Figure 1). If for some reason there is an impassable wall in room A, then the agent can transition from A to B on one side of the wall and from A to C on the other side. So depending on how the agent entered the room (the history), the transition dynamics of room A would change. However, since the high-level learner has seen the agent transition from room B to A and A to C, it would think B and C are connected through A. The solution would be to divide room A into two smaller rooms split by the impassable barrier.
In our experiments, we split rooms up into smaller sectors in the abstraction to decrease the horizon for the L0 learners and, in some games, to retain the Markovian property of the abstraction. For Toy MR, these sectors were hand-made for each of the rooms (Figure 2c). We constructed the sectors such that there were more sectors on the "tight-ropes," areas that required many correct actions to traverse and where a single incorrect action would result in a terminal state. For the Atari experiments, we made square n × n grids of each of the rooms based on the coordinates of the agent: if the agent is in the top-left corner of the screen, it is in sector 1; if it is in the bottom-right corner, sector n² (Figure 3). For MR, we chose the grid to be 3 × 3. For Venture, we chose the grid to be 3 × 3 inside each of the rooms and 4 × 4 in the hallway, as the state-space in the hallway is much larger. We chose this particular gridding because it is both simple to implement and approximately Markovian across the game's different rooms. Note that any sufficiently fine-grained sector scheme would perform equivalently. Accordingly, our particular choice of sector scheme should be regarded as arbitrary. Other abstractions could be used as long as they are also approximately Markovian.
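The sector assignment reduces to integer division of the agent's pixel coordinates; a small illustrative sketch, assuming the standard 160 × 210 Atari screen in the example:

```python
def sector_of(x, y, screen_width, screen_height, n=3):
    """Map the agent's pixel position to one cell of an n x n grid over the room
    (3 x 3 for MR; 3 x 3 in Venture rooms and 4 x 4 in its hallway)."""
    col = min(n - 1, int(n * x / screen_width))
    row = min(n - 1, int(n * y / screen_height))
    return (col, row)

print(sector_of(10, 20, 160, 210, n=3))    # top-left corner -> (0, 0)
print(sector_of(159, 209, 160, 210, n=3))  # bottom-right corner -> (2, 2)
```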
7 EXPERIMENTS The aim of our experiments was to assess the effectiveness of our algorithm on complex domains that involve long horizons, sparse rewards, and high-dimensional inputs. We trained our agents for 50 million frames. As in Mnih et al. [11], every one million frames, we evaluated our agents for a half a million frames, recording the average episode reward over those evaluation frames. The source code of our implementation is available online1.
7.1 Baselines We chose two baselines to compare against our algorithm: Double DQN [14] and Pseudo-Count based IM [3], both using the Mixed Monte-Carlo return [12]. We chose Double DQN as it performed very well on many Atari games, but has not been optimized for exploration. We chose the IM agent as it explored the highest the number of rooms in Montezumaâs Revenge to the best of our knowl- edge. One of the key aspects to the success of this algorithm, that was not required for our algorithm, was giving the agent multi- ple lives, which was discussed in our Related Work section. We, therefore, also compared to the IM agent with this addition.
We tested our algorithm against these baselines in three different domains. It is important to note that we do provide the factorized state projection function and the set of predicate functions. How- ever, in many real world domains, there are natural decompositions of the low-level state into abstract components, such as the current room of the agent in the room navigation task.
For the toy domains and Single-Life MR (described below) we used our own implementation of pseudo-counts [3] as the authors were unwilling to provide their source code. Our implementation was not able to perform at the level of the results reported by Belle- mare et al., only discovering 7-10 rooms on Atari Montezumaâs Revenge in the time their implementation discovered 15 (50 million frames). Our implementation still explores more rooms than our baseline, Double DQN, which only discovered 2 rooms. We con- tacted other researchers who attempted to replicate these results, and they were likewise unable to. Bellemare et al., however, did kindly provide us with their raw results for Montezumaâs Revenge and Venture. We compared against these results, which were av- eraged over 5 trials. Due to our limited computing resources, our experiments were run for a single trial.
# 1Code: github.com/chrisgrimm/deep_abstract_q_network
# Four Rooms and Toy Montezumaâs Revenge
We constructed a toy version of the room navigation task: given a series of rooms, some locked by doors, navigate through the rooms to find the keys to unlock the doors and reach the goal room. In this domain, each room has a discrete grid layout. The rooms consist of keys (gold squares), doors (blue squares), impassible walls (black squares), and traps that end the episode if the agent runs into them (red squares). The state given to the agent is the pixel screen of the current room, rescaled to 84x84 and converted to gray-scale. We constructed two maps of rooms: Four Rooms and Toy Montezumaâs Revenge (Toy MR). Four Rooms consists of three maze-like rooms and one goal room (Figure 2b). Toy MR consists of 24 rooms designed to parallel the layout of the Atari Montezumaâs Revenge (Figure 2c). In the Four Rooms domain, the game terminates after 10 000 steps, while in Toy MR, there is no limit on the number of steps.
The abstraction provided to the agent consists of 10 attributes: the location of the agent, a Boolean for the state of each key (4 keys total) and each door (4 doors total), and the number of keys the agent had. The location of the agent consists of the current room and sector. We used sectors for Toy MR to decrease the horizon for each L0 learner (as detailed in the Section 6), but not for Four Rooms since it does not have deadly traps that hinder exploration. Although the sectors seem to divide much of the state-space, the low-level learners remain crucial to learning the policies to navigate around traps and transition between high-level states.
Our results (Four Rooms and Toy MR plots in Figure 5) show that for both domains, Double DQN and the IM agent failed to learn to complete the game, while our agent learned to consistently solve both toy problems. On the Toy MR domain, both agents fail to escape the first room when the agent is only provided one life. This reflects the issue with pseudo-counts for IM that we described previously: that the image is factored in a way that makes the key and agent pixels independent, with the result that the exploration bonuses of backtracking to the doors are lower than those of re- maining near the key. In contrast, our agent was not only able to explore all the rooms in Toy MR, but also to learn the complex task of collecting the key to unlock the first room, collecting two more keys from different rooms and then navigating to unlock the final two doors to the goal room (Figure 4).
We emphasize that this marked difference in performance is due to the different ways in which each method explores. Particularly, our DAQN technique is model-based at the high-level, allowing our coupled agents to quickly generate new long-term plans and execute them at the low-level. This is in contrast to IM, which must readjust large portions of the networkâs parameters in order to change long-term exploration policies.
7.3 Montezumaâs Revenge Atari 2600 Montezumaâs Revenge (MR) is an Atari game very similar to the rooms and doors toy problems: there is a series of rooms, some blocked by doors, and keys are spread throughout the game. There are also monsters to avoid, coins that give points, and time-based traps, such as bridges over lava pits that disappear and reappear on a timer.
(a) Example Screen (b) Map of Four Rooms
(c) Map of all rooms in Toy MR with color-coded sectors
Figure 2: 2a Example screen that is common across Four Rooms and Toy MR. The yellow square at the top left repre- sents that the agent is holding a key and the green bar on the right represents the agentâs remaining lives. 2b, 2c The map of all the rooms in Four Rooms and Toy MR. Blue squares are locked doors, yellow squares are keys that can unlock the doors, and the red squares are traps that result in a ter- minal state (or the loss of a life when playing with lives). The teal room with the âGâ is the goal room. Entering this room gives the agent a reward of 1 (the only reward in the game) and results in a terminal state. The sectors provided to the agent in Toy MR are color-coded.
Our abstraction had a similar state-space to Toy MR, consisting of 12 attributes: the location of the agent, a Boolean attribute for the presence of each key (4 keys total) and each door (6 doors total), and the number of keys. The location of the agent consists of the current room and sector. We created coarse sectors based on the agentâs location in a room by gridding each room into nine equal square regions. We prevented sector transitions while the agent was falling to avoid entering a sector and immediately dying from falling. As an example, consider the agent in Figure 3a. Figure 3b illustrates the sector that the agent occupies. The abstraction of this state would be: Room 1 (the starting room) and Sector (1, 2) with no keys collected or doors unlocked.
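For illustration, the 12-attribute abstract state for MR could be encoded as follows; the grouping of the key and door Booleans into tuples is a convenience assumption, not the paper's representation:

```python
from collections import namedtuple

# The 12 attributes described in the text: agent location (room and sector),
# one Boolean per key (4) and per door (6), and the number of keys held.
MRAbstractState = namedtuple(
    "MRAbstractState",
    ["room", "sector", "keys_collected", "doors_unlocked", "num_keys_held"],
)

# The example from the text: Room 1 and Sector (1, 2), nothing collected or unlocked.
start = MRAbstractState(
    room=1,
    sector=(1, 2),
    keys_collected=(False, False, False, False),
    doors_unlocked=(False, False, False, False, False, False),
    num_keys_held=0,
)
print(start)
```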
We also tested the DAQN on MR where the agent is only given a single life (i.e. the environment terminates after a single death). Normally in MR, when the agent dies, it returns to the location from which it entered the room (or the starting location in the first room) and retains the keys it has collected. Because of this, a valid policy for escaping the first room is to navigate to the key, collect it, and then purposefully end the life of the agent. This allows the agent to return to the starting location with the key and easily navigate to the adjacent doors. In this single life variant, the agent
(a) MR
(b) MR Sectors
(c) Venture
(d) Venture Sectors
Figure 3: 3a, 3c Example screens of Atari 2600 Montezumaâs Revenge (MR) and Venture. 3b, 3d Illustrations of the sectors we constructed for both a room in MR and the hallway in Venture. The sector the agent is currently occupying is in blue, the other possible sectors are in yellow.
Figure 4: Rooms discovered in the Toy MR domain using the Double DQN, DAQN, IM, and IM with a 5-lives variant of Toy MR (Intrinsic+L).
cannot exploit this game mechanic and, after collecting the key, must backtrack all the way to the starting location to unlock one of the doors. This comparison illustrates our algorithmâs ability to learn to separate policies for different tasks.
With lives, our algorithm did not discover as many rooms as the IM agent since our agent was not able to traverse the timing-based traps. These traps could not be traversed by random exploration, so our agent never learned that there is anything beyond these traps. Our agent discovered six rooms out of the total 24 â all the rooms that can be visited without passing these traps.
Our agent underperformed in Atari Montezumaâs Revenge (Mon- tezumaâs Revenge plot in Figure 5) because of timing based traps
that could not be easily represented in a discrete high-level state space. However, when we grant our agent only one life, our method greatly outperforms previous methods: not only was our agent able to escape the first room, but it also discovered five more, while the Double DQN and IM agents are not able to escape the first room (Single-Life MR plot in Figure 5). This is because the one-life setting necessitates backtracking-like behavior in a successful policy. As we mentioned before, the IM agent is incapable of learning to back- track and thus cannot perform in this setting. We emphasize that this inability arises on account of the pseudo-count probabilistic model treating the location of the agent and the presence of the key as independent. This property actively discourages the agent from backtracking because backtracking would lead to states with higher pseudo-counts and, thus, less intrinsic reward.
7.4 Venture Atari 2600 Venture is a game that consists of four rooms and a hallway. Every room contains one item. The agent must navigate through the hallway and the rooms, avoiding monsters, to collect these items. Once an item is collected and the agent leaves the room, that room becomes locked.
Our abstraction for this game consisted of 9 attributes: the lo- cation of the agent, a Boolean locked attribute for each room (4 rooms total), and a Boolean for whether the item in the current room has been collected (4 items total). The location of the agent consists of the current room and sector. Sectors were constructed with a coarse 3 Ã 3 gridding of each room and a 4 Ã 4 gridding of the hallway. As an example, in Figure 3c the agent is the the small pink dot at the bottom of the screen and Figure 3d shows the sector the agent occupies. In this state, the abstraction would be: Room 8 (the hallway) and Sector (1, 0) with no items collected.
In this experiment, we receive a much higher evaluation per- formance than both of our baselines (Venture plot in Figure 5), illustrating our agents ability to execute and learn long-term plans. At around 30 million frames, our agentâs performance greatly de- creases. This performance drop is due to our agent exploring further into new rooms and training the sub-policies to reach those new rooms. Since the sub-policies for exploitation are not trained during this time, as the DQN weights higher up in the network are updated to train the exploration sub-policies, the exploitation sub-policies are forgotten. Once the agent finishes exploring all L1 states, we would expect the agent would revisit those exploitation sub-policies and relearn them.
8 DISCUSSION AND FUTURE WORK In this paper, we presented a novel way of combining deep re- inforcement learning with tabular reinforcement learning using DAQN. The DAQN framework generally allows our agent to ex- plore much farther than previous methods on domains and exploit robust long-term policies.
In our experiments, we showed that our DAQN agent explores farther in most high-dimensional domains with long-horizons and sparse reward than competing approaches. This illustrates its ca- pacity to learn and execute long-term plans in such domains, suc- ceeding where these other approaches fail. Specifically, the DAQN was able to learn backtracking behavior, characteristic of long-term
Figure 5: Average reward in the Four Rooms, Toy MR, Atari MR, Single-Life Atari MR, and Atari Venture domains using the following models: DAQN (blue), Double DQN (green) and IM (orange). In Four Rooms and Toy MR, both IM and Double DQN fail to score an average reward above zero, and are thus overlapping. We use the raw IM and Double DQN data from Bellemare et al. [3] on Montezumaâs Revenge and Venture. All other plots show our implementationsâ results.
exploration, which is largely absent from existing state-of-the-art methods.
The main drawback to our approach is the requirement for a hand-annotated state-projection function that nicely divides the state-space. However, our method requires this function only to specify abstract states, rather than abstract transitions or policies, and thus demands minimal engineering on the part of the experimenter. In future work, we hope to learn this state-projection function as well. We are exploring methods to learn from human demonstration, as well as methods that learn only from a high-level reward function. Ultimately, we seek to create compositional agents that can learn layers of knowledge from experience to create new, more complex skills. We also plan to incorporate a motivated exploration algorithm, such as IM [3], with our L0 learner to address our difficulty with time-based traps in MR.
[5] Thomas G Dietterich. 2000. Hierarchical reinforcement learning with the MAXQ value function decomposition. J. Artif. Intell. Res.(JAIR) 13 (2000), 227â303. [6] Carlos Diuk, Andre Cohen, and Michael L Littman. 2008. An object-oriented representation for efficient reinforcement learning. In Proceedings of the 25th international conference on Machine learning. ACM, 240â247.
[7] Nakul Gopalan, Marie desJardins, Michael L. Littman, James MacGlashan, Shawn Squire, Stefanie Tellex, John Winder, and Lawson L.S. Wong. 2017. Planning with Abstract Markov Decision Processes. In International Conference on Automated Planning and Scheduling.
[8] Ken Kansky, Tom Silver, David A Mély, Mohamed Eldawy, Miguel Lázaro-Gredilla, Xinghua Lou, Nimrod Dorfman, Szymon Sidor, Scott Phoenix, and Dileep George. 2017. Schema Networks: Zero-shot Transfer with a Generative Causal Model of Intuitive Physics. arXiv preprint arXiv:1706.04317 (2017).
[9] Tejas D. Kulkarni, Karthik Narasimhan, Ardavan Saeedi, and Joshua B. Tenen- baum. 2016. Hierarchical Deep Reinforcement Learning: Integrating Temporal Abstraction and Intrinsic Motivation. In NIPS.
[10] Amy McGovern, Richard S Sutton, and Andrew H Fagg. 1997. Roles of macro- actions in accelerating reinforcement learning. In Grace Hopper celebration of women in computing, Vol. 1317.
Our approach also has the ability to expand the hierarchy to multiple levels of abstraction, allowing for additional agents to learn even more abstract high-level plans. In the problems we investigated in this work, a single level of abstraction was sufficient, allowing our agent to reason at the level of rooms and sectors. However, in longer horizon domains, such as inter-building navigation and many real- world robotics tasks, additional levels of abstraction would greatly decrease the horizon of the L1 learner and thus facilitate more efficient learning.
[11] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. 2015. Human-level control through deep reinforcement learning. Nature 518, 7540 (2015), 529â533.
[12] Georg Ostrovski, Marc G Bellemare, Aaron van den Oord, and Rémi Munos. 2017. Count-based exploration with neural density models. arXiv preprint arXiv:1703.01310 (2017).
[13] Richard S Sutton, Doina Precup, and Satinder Singh. 1999. Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial intelligence 112, 1-2 (1999), 181â211.
[14] Hado Van Hasselt, Arthur Guez, and David Silver. 2016. Deep Reinforcement Learning with Double Q-Learning.. In AAAI. 2094â2100.
[15] Alexander Sasha Vezhnevets, Simon Osindero, Tom Schaul, Nicolas Heess, Max Jaderberg, David Silver, and Koray Kavukcuoglu. 2017. FeUdal Networks for Hierarchical Reinforcement Learning. In ICML.
ACKNOWLEDGMENTS This material is based upon work supported by the National Science Foundation under grant numbers IIS-1426452, IIS-1652561, and IIS-1637614, DARPA under grant numbers W911NF-10-2-0016 and D15AP00102, and National Aeronautics and Space Administration under grant number NNX16AR61G.
REFERENCES [1] Pierre-Luc Bacon, Jean Harb, and Doina Precup. 2017. The Option-Critic Archi-
tecture.. In AAAI. 1726â1734.
[2] M. G. Bellemare, Y. Naddaf, J. Veness, and M. Bowling. 2013. The Arcade Learning Environment: An Evaluation Platform for General Agents. Journal of Artificial Intelligence Research 47 (jun 2013), 253â279.
[3] Marc G. Bellemare, Sriram Srinivasan, Georg Ostrovski, Tom Schaul, David Saxton, and Rémi Munos. 2016. Unifying Count-Based Exploration and Intrinsic Motivation. In NIPS.
[4] Ronen I Brafman and Moshe Tennenholtz. 2002. R-max-a general polynomial time algorithm for near-optimal reinforcement learning. Journal of Machine Learning Research 3, Oct (2002), 213â231. | {
"id": "1703.01310"
} |
1709.10089 | Overcoming Exploration in Reinforcement Learning with Demonstrations | Exploration in environments with sparse rewards has been a persistent problem
in reinforcement learning (RL). Many tasks are natural to specify with a sparse
reward, and manually shaping a reward function can result in suboptimal
performance. However, finding a non-zero reward is exponentially more difficult
with increasing task horizon or action dimensionality. This puts many
real-world tasks out of practical reach of RL methods. In this work, we use
demonstrations to overcome the exploration problem and successfully learn to
perform long-horizon, multi-step robotics tasks with continuous control such as
stacking blocks with a robot arm. Our method, which builds on top of Deep
Deterministic Policy Gradients and Hindsight Experience Replay, provides an
order of magnitude of speedup over RL on simulated robotics tasks. It is simple
to implement and makes only the additional assumption that we can collect a
small set of demonstrations. Furthermore, our method is able to solve tasks not
solvable by either RL or behavior cloning alone, and often ends up
outperforming the demonstrator policy. | http://arxiv.org/pdf/1709.10089 | Ashvin Nair, Bob McGrew, Marcin Andrychowicz, Wojciech Zaremba, Pieter Abbeel | cs.LG, cs.AI, cs.NE, cs.RO | 8 pages, ICRA 2018 | null | cs.LG | 20170928 | 20180225 |
arXiv:1709.10089v2 [cs.LG] 25 Feb 2018
# Overcoming Exploration in Reinforcement Learning with Demonstrations
Ashvin Nair12, Bob McGrew1, Marcin Andrychowicz1, Wojciech Zaremba1, Pieter Abbeel12
Abstract: Exploration in environments with sparse rewards has been a persistent problem in reinforcement learning (RL). Many tasks are natural to specify with a sparse reward, and manually shaping a reward function can result in suboptimal performance. However, finding a non-zero reward is exponentially more difficult with increasing task horizon or action dimensionality. This puts many real-world tasks out of practical reach of RL methods. In this work, we use demonstrations to overcome the exploration problem and successfully learn to perform long-horizon, multi-step robotics tasks with continuous control such as stacking blocks with a robot arm. Our method, which builds on top of Deep Deterministic Policy Gradients and Hindsight Experience Replay, provides an order of magnitude of speedup over RL on simulated robotics tasks. It is simple to implement and makes only the additional assumption that we can collect a small set of demonstrations. Furthermore, our method is able to solve tasks not solvable by either RL or behavior cloning alone, and often ends up outperforming the demonstrator policy.
# I. INTRODUCTION
RL has found signiï¬cant success in decision making for solving games, so what makes it more challenging to apply in robotics? A key difference is the difï¬culty of exploration, which comes from the choice of reward function and compli- cated environment dynamics. In games, the reward function is usually given and can be directly optimized. In robotics, we often desire behavior to achieve some binary objective (e.g., move an object to a desired location or achieve a certain state of the system) which naturally induces a sparse reward. Sparse reward functions are easier to specify and recent work suggests that learning with a sparse reward results in learned policies that perform the desired objective instead of getting stuck in local optima [1], [2]. However, exploration in an environment with sparse reward is difï¬cult since with random exploration, the agent rarely sees a reward signal.
The difï¬culty posed by a sparse reward is exacerbated by the complicated environment dynamics in robotics. For example, system dynamics around contacts are difï¬cult to model and induce a sensitivity in the system to small errors. Many robotics tasks also require executing multiple steps successfully over a long horizon, involve high dimensional control, and require generalization to varying task instances. These conditions further result in a situation where the agent so rarely sees a reward initially that it is not able to learn at all.
All of the above means that random exploration is not a tenable solution. Instead, in this work we show that we can use demonstrations as a guide for our exploration. To test our
method, we solve the problem of stacking several blocks at a given location from a random initial state. Stacking blocks has been studied before in the literature [3], [4] and exhibits many of the difï¬culties mentioned: long horizons, contacts, and requires generalizing to each instance of the task. We limit ourselves to 100 human demonstrations collected via teleoperation in virtual reality. Using these demonstrations, we are able to solve a complex robotics task in simulation that is beyond the capability of both reinforcement learning and imitation learning.
The primary contribution of this paper is to show that demonstrations can be used with reinforcement learning to solve complex tasks where exploration is difï¬cult. We introduce a simple auxiliary objective on demonstrations, a method of annealing away the effect of the demonstrations when the learned policy is better than the demonstrations, and a method of resetting from demonstration states that signiï¬cantly improves and speeds up training policies. By effectively incorporating demonstrations into RL, we short- circuit the random exploration phase of RL and reach nonzero rewards and a reasonable policy early on in training. Finally, we extensively evaluate our method against other commonly used methods, such as initialization with learning from demonstrations and ï¬ne-tuning with RL, and show that our method signiï¬cantly outperforms them.
# II. RELATED WORK
Learning methods for decision making problems such as robotics largely divide into two classes: imitation learning and reinforcement learning (RL). In imitation learning (also called learning from demonstrations) the agent receives be- havior examples from an expert and attempts to solve a task by copying the expertâs behavior. In RL, an agent attempts to maximize expected reward through interaction with the environment. Our work combines aspects of both to solve complex tasks.
Imitation Learning: Perhaps the most common form of imitation learning is behavior cloning (BC), which learns a policy through supervised learning on demonstration state- action pairs. BC has seen success in autonomous driving [5], [6], quadcopter navigation [7], locomotion [8], [9]. BC struggles outside the manifold of demonstration data. Dataset Aggregation (DAGGER) augments the dataset by interleaving the learned and expert policy to address this problem of accumulating errors [10]. However, DAGGER is difï¬cult to use in practice as it requires access to an expert during all of training, instead of just a set of demonstrations.
1 OpenAI, 2 University of California, Berkeley.
Fig. 1: We present a method using reinforcement learning to solve the task of block stacking shown above. The robot starts with 6 blocks labelled A through F on a table in random positions and a target position for each block. The task is to move each block to its target position. The targets are marked in the above visualization with red spheres which do not interact with the environment. These targets are placed in order on top of block A so that the robot forms a tower of blocks. This is a complex, multi-step task where the agent needs to learn to successfully manage multiple contacts to succeed. Frames from rollouts of the learned policy are shown. A video of our experiments can be found at: http://ashvin.me/demoddpg-website
Fundamentally, BC approaches are limited because they do not take into account the task or environment. Inverse reinforcement learning (IRL) [11] is another form of imita- tion learning where a reward function is inferred from the demonstrations. Among other tasks, IRL has been applied to navigation [12], autonomous helicopter ï¬ight [13], and manipulation [14]. Since our work assumes knowledge of a reward function, we omit comparisons to IRL approaches.
Reinforcement Learning: Reinforcement learning methods have been harder to apply in robotics, but are heavily investigated because of the autonomy they could enable. Through RL, robots have learned to play table tennis [15], swing up a cartpole, and balance a unicycle [16]. A renewal of interest in RL cascaded from success in games [17], [18], especially because of the ability of RL with large function approximators (i.e., deep RL) to learn control from raw pixels. Robotics has been more challenging in general but there has been significant progress. Deep RL has been applied to manipulation tasks [19], grasping [20], [21], opening a door [22], and locomotion [23], [24], [25]. However, results have been attained predominantly in simulation because of high sample complexity, typically caused by exploration challenges.
learning forward models naturally have trouble modelling the sharply discontinuous dynamics of contacts; although they can learn to place a block, it is a much harder problem to grasp the block in the ï¬rst place. One-shot Imitation [4] learns to stack blocks in a way that generalizes to new target conï¬gurations, but uses more than 100,000 demonstrations to train the system. A heavily shaped reward can be used to learn to stack a Lego block on another with RL [30]. In contrast, our method can succeed from fully sparse rewards and handle stacking several blocks.
Combining RL and Imitation Learning: Previous work has combined reinforcement learning with demonstrations. Demonstrations have been used to accelerate learning on classical tasks such as cart-pole swing-up and balance [31]. This work initialized policies and (in model-based methods) initialized forward models with demonstrations. Initializing policies from demonstrations for RL has been used for learning to hit a baseball [32] and for underactuated swing- up [33]. Beyond initialization, we show how to extract more knowledge from demonstrations by using them effectively throughout the entire training process.
Robotic Block Stacking: Block stacking has been studied from the early days of AI and robotics as a task that encapsulates many difï¬culties of more complicated tasks we want to solve, including multi-step planning and complex contacts. SHRDLU [26] was one of the pioneering works, but studied block arrangements only in terms of logic and natural language understanding. More recent work on task and motion planning considers both logical and physical aspects of the task [27], [28], [29], but requires domain- speciï¬c engineering. In this work we study how an agent the need of domain-speciï¬c can learn this task without engineering.
One RL method, PILCO [16] has been applied to a simple version of stacking blocks where the task is to place a block on a tower [3]. Methods such as PILCO based on
Our work is most similar to two recent approaches, Deep Q-Learning From Demonstrations (DQfD) [34] and DDPG From Demonstrations (DDPGfD) [2], which combine demonstrations with reinforcement learning. DQfD improves learning speed on Atari, including a margin loss which encourages the expert actions to have higher Q-values than all other actions. This loss can make improving upon the demonstrator policy impossible, which is not the case for our method. Prior work has previously explored improving beyond the demonstrator policy in simple environments by introducing slack variables [35], but our method uses a learned value to actively inform the improvement. DDPGfD solves simple robotics tasks akin to peg insertion using DDPG with demonstrations in the replay buffer. In contrast to this prior work, the tasks we consider exhibit additional difficulties that are of key interest in robotics: multi-step
behaviours, and generalization to varying goal states. While previous work focuses on speeding up already solvable tasks, we show that we can extend the state of the art in RL with demonstrations by introducing new methods to incorporate demonstrations.
# III. BACKGROUND
A. Reinforcement Learning
We consider the standard Markov Decision Process framework for picking optimal actions to maximize rewards over discrete timesteps in an environment E. We assume that the environment is fully observable. At every timestep t, an agent is in a state x_t, takes an action a_t, receives a reward r_t, and E evolves to state x_{t+1}. In reinforcement learning, the agent must learn a policy a_t = π(x_t) to maximize expected returns. We denote the return by R_t = Σ_{i=t}^{T} γ^{(i−t)} r_i, where T is the horizon that the agent optimizes over and γ is a discount factor for future rewards. The agent's objective is to maximize the expected return from the start distribution, J = E_{r_i, s_i∼E, a_i∼π}[R_0].
A variety of reinforcement learning algorithms have been developed to solve this problem. Many involve constructing an estimate of the expected return from a given state after taking an action:
Q^π(s_t, a_t) = E_{r_i, s_i∼E, a_i∼π}[R_t | s_t, a_t]   (1)

= E_{r_t, s_{t+1}∼E}[r_t + γ E_{a_{t+1}∼π}[Q^π(s_{t+1}, a_{t+1})]]   (2)

We call Q^π the action-value function. Equation 2 is a recursive version of Equation 1, and is known as the Bellman equation. The Bellman equation allows for methods to estimate Q that resemble dynamic programming.
# B. DDPG
Our method combines demonstrations with one such method: Deep Deterministic Policy Gradients (DDPG) [23]. DDPG is an off-policy model-free reinforcement learning algorithm for continuous control which can utilize large function approximators such as neural networks. DDPG is an actor-critic method, which bridges the gap between policy gradient methods and value approximation methods for RL. At a high level, DDPG learns an action-value function (critic) by minimizing the Bellman error, while simultaneously learning a policy (actor) by directly maxi- mizing the estimated action-value function with respect to the parameters of the policy.
Concretely, DDPG maintains an actor function Ï(s) with parameters θÏ, a critic function Q(s, a) with parameters θQ, and a replay buffer R as a set of tuples (st, at, rt, st+1) for each transition experienced. DDPG alternates between running the policy to collect experience and updating the parameters. Training rollouts are collected with extra noise for exploration: at = Ï(s) + N , where N is a noise process. During each training step, DDPG samples a minibatch consisting of N tuples from R to update the actor and critic networks. DDPG minimizes the following loss L w.r.t. θQ to update the critic:
y_i = r_i + γ Q(s_{i+1}, π(s_{i+1}))   (3)

L = (1/N) Σ_i (y_i − Q(s_i, a_i | θ^Q))^2   (4)
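As a concrete illustration of Eqs. 3–4, the sketch below computes the critic targets and the mean squared Bellman error for a minibatch with NumPy. The tiny linear `critic` and `actor` functions are placeholders standing in for the learned networks (they are not part of the original method), and the separate target networks are omitted here for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
state_dim, action_dim, batch, gamma = 10, 4, 8, 0.98

# Placeholder linear "networks"; in practice these are deep nets with params theta_Q, theta_pi.
W_Q = rng.normal(size=(state_dim + action_dim,))
W_pi = rng.normal(size=(state_dim, action_dim))

def critic(s, a):                       # Q(s, a | theta_Q)
    return np.concatenate([s, a], axis=-1) @ W_Q

def actor(s):                           # pi(s | theta_pi)
    return np.tanh(s @ W_pi)

# A minibatch of transitions (s_i, a_i, r_i, s_{i+1}) sampled from the replay buffer R.
s  = rng.normal(size=(batch, state_dim))
a  = rng.normal(size=(batch, action_dim))
r  = rng.normal(size=(batch,))
s2 = rng.normal(size=(batch, state_dim))

y = r + gamma * critic(s2, actor(s2))     # Eq. 3 (target networks omitted in this sketch)
L = np.mean((y - critic(s, a)) ** 2)      # Eq. 4: mean squared Bellman error
print(L)
```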
The actor parameters Î¸Ï are updated using the policy gradient:
∇_{θ_π} J = (1/N) Σ_i ∇_a Q(s, a | θ^Q)|_{s=s_i, a=π(s_i)} ∇_{θ_π} π(s | θ_π)|_{s_i}   (5)

To stabilize learning, the Q value in Equation 3 is usually computed using a separate network (called the target network) whose weights are an exponential average over time of the critic network. This results in smoother target values. Note that DDPG is a natural fit for using demonstrations. Since DDPG can be trained off-policy, we can use demonstration data as off-policy training data. We also take advantage of the action-value function Q(s, a) learned by DDPG to better use demonstrations.
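To complement Eq. 5, the hedged sketch below shows the two remaining pieces just described: the actor objective J estimated as the mean critic value at the actor's own actions (an autodiff framework would backpropagate this through the critic to get Eq. 5), and a target network maintained as an exponential average of the critic weights. The placeholder linear networks and the averaging rate `tau` are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
state_dim, action_dim, batch = 10, 4, 8

W_pi = rng.normal(size=(state_dim, action_dim))       # actor params theta_pi
W_Q = rng.normal(size=(state_dim + action_dim,))      # critic params theta_Q
W_Q_target = W_Q.copy()                               # target critic params

def actor(s):
    return np.tanh(s @ W_pi)

def critic(s, a, w):
    return np.concatenate([s, a], axis=-1) @ w

s = rng.normal(size=(batch, state_dim))

# Estimated actor objective: maximize Q at the actor's own actions (Eq. 5 is its gradient).
J_estimate = np.mean(critic(s, actor(s), W_Q))

# Target network: exponential average over time of the critic weights (smoother targets).
tau = 0.001                                           # assumed averaging rate
W_Q_target = tau * W_Q + (1.0 - tau) * W_Q_target
print(J_estimate)
```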
C. Multi-Goal RL
Instead of the standard RL setting, we train agents with parametrized goals, which lead to more general policies [36] and have recently been shown to make learning with sparse rewards easier [1]. Goals describe the task we expect the agent to perform; in our case they specify the desired positions of all objects. We sample the goal g at the beginning of every episode. The function approximators, here π and Q, take the current goal as an additional input.
D. Hindsight Experience Replay (HER)
To handle varying task instances and parametrized goals, we use Hindsight Experience Replay (HER) [1]. The key insight of HER is that even in failed rollouts where no reward was obtained, the agent can transform them into successful ones by assuming that a state it saw in the rollout was the actual goal. HER can be used with any off-policy RL algorithm assuming that for every state we can find a goal corresponding to this state (i.e. a goal which leads to a positive reward in this state).

For every episode the agent experiences, we store it in the replay buffer twice: once with the original goal pursued in the episode and once with the goal corresponding to the final state achieved in the episode, as if the agent intended on reaching this state from the very beginning.
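The double storage just described can be sketched as follows: every transition of an episode is written to the replay buffer once with the original goal and once with the goal relabeled to the final achieved state, with the sparse reward recomputed under the new goal. The `achieved` mapping from a state to the goal it realizes and the 5 cm threshold are illustrative assumptions for this sketch.

```python
import numpy as np

def sparse_reward(achieved_goal, goal, delta=0.05):
    # 0 if the achieved goal is within delta of the desired goal, -1 otherwise.
    return 0.0 if np.linalg.norm(achieved_goal - goal) < delta else -1.0

def her_store(episode, goal, replay_buffer, achieved):
    """episode: list of (s, a, s_next); achieved: maps a state to the goal it realizes."""
    final_goal = achieved(episode[-1][2])              # goal corresponding to the final state
    for s, a, s_next in episode:
        # Original goal pursued during the episode.
        replay_buffer.append((s, a, sparse_reward(achieved(s_next), goal), s_next, goal))
        # Hindsight goal: pretend the final achieved state was the intention all along.
        replay_buffer.append((s, a, sparse_reward(achieved(s_next), final_goal), s_next, final_goal))

# Tiny usage example with 2-D "states" that are themselves the achieved goals.
buffer = []
episode = [(np.zeros(2), 0, np.array([0.3, 0.0])),
           (np.array([0.3, 0.0]), 1, np.array([0.6, 0.0]))]
her_store(episode, goal=np.array([1.0, 0.0]), replay_buffer=buffer, achieved=lambda s: s)
print(len(buffer))   # 4 transitions: each stored under two goals
```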
# IV. METHOD
Our method combines DDPG and demonstrations in sev- eral ways to maximally use demonstrations to improve learning. We describe our method below and evaluate these ideas in our experiments.
A. Demonstration Buffer
First, we maintain a second replay buffer RD where we store our demonstration data in the same format as R. In each minibatch, we draw an extra ND examples from RD to use as off-policy replay data for the update step. These examples are included in both the actor and critic update. This idea has been introduced in [2].
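A minimal sketch of this mixed minibatch, assuming both buffers are simple Python lists: N transitions come from the agent's replay buffer R and an extra N_D from the demonstration buffer R_D, and both groups feed the actor and critic updates. The `is_demo` flags are an added convenience (the behavior cloning loss introduced next applies only to demonstration examples).

```python
import random

def sample_minibatch(R, R_D, N=1024, N_D=128):
    n, n_d = min(N, len(R)), min(N_D, len(R_D))
    # Off-policy replay data from the agent plus a fixed quota of demonstration transitions.
    batch = random.sample(R, n) + random.sample(R_D, n_d)
    is_demo = [False] * n + [True] * n_d
    return batch, is_demo

# Dummy buffers standing in for R and R_D.
R = [("agent_transition", i) for i in range(5000)]
R_D = [("demo_transition", i) for i in range(500)]
batch, is_demo = sample_minibatch(R, R_D)
print(len(batch), sum(is_demo))   # 1152 total, 128 of them demonstrations
```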
B. Behavior Cloning Loss
Second, we introduce a new loss computed only on the demonstration examples for training the actor.
L_BC = Σ_{i=1}^{N_D} ||π(s_i | θ_π) − a_i||^2   (6)
This loss is a standard loss in imitation learning, but we show that using it as an auxiliary loss for RL improves learning significantly. The gradient applied to the actor parameters θ_π is:
λ_1 ∇_{θ_π} J − λ_2 ∇_{θ_π} L_BC   (7)
(Note that we maximize J and minimize L_BC.) Using this loss directly prevents the learned policy from improving significantly beyond the demonstration policy, as the actor is always tied back to the demonstrations. Next, we show how to account for suboptimal demonstrations using the learned action-value function.
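The sketch below spells out Eqs. 6–7 on a demonstration minibatch: the cloning loss is the squared error between the actor's action and the demonstrator's action, and the actor update weights the RL objective gradient by λ_1 and the negative BC-loss gradient by λ_2. Only the loss value is computed here (autodiff omitted); the linear actor is a stand-in, while the λ values are the ones reported later in the training details.

```python
import numpy as np

rng = np.random.default_rng(2)
state_dim, action_dim, N_D = 10, 4, 128

W_pi = rng.normal(size=(state_dim, action_dim))   # actor parameters theta_pi

def actor(s):
    return np.tanh(s @ W_pi)

# Demonstration minibatch (s_i, a_i) drawn from R_D.
s_demo = rng.normal(size=(N_D, state_dim))
a_demo = rng.uniform(-1, 1, size=(N_D, action_dim))

# Eq. 6: behavior cloning loss on demonstration examples only.
L_BC = np.sum(np.sum((actor(s_demo) - a_demo) ** 2, axis=-1))

# Eq. 7 (conceptually): apply lambda1 * grad(J) - lambda2 * grad(L_BC) to theta_pi.
lambda1, lambda2 = 1e-3, 1.0 / N_D
print(L_BC, lambda1, lambda2)
```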
# C. Q-Filter
We account for the possibility that demonstrations can be suboptimal by applying the behavior cloning loss only to states where the critic Q(s, a) determines that the demon- strator action is better than the actor action:
L_BC = Σ_{i=1}^{N_D} ||π(s_i | θ_π) − a_i||^2 · 1_{Q(s_i, a_i) > Q(s_i, π(s_i))}   (8)

The gradient applied to the actor parameters is as in Equation 7. We label this method, using the behavior cloning loss and Q-filter, "Ours" in the following experiments.
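The Q-filter of Eq. 8 simply masks the per-example cloning loss by whether the critic scores the demonstrator's action above the actor's own action in that state. A hedged sketch, with toy linear actor/critic stand-ins:

```python
import numpy as np

rng = np.random.default_rng(3)
state_dim, action_dim, N_D = 10, 4, 128

W_pi = rng.normal(size=(state_dim, action_dim))
W_Q = rng.normal(size=(state_dim + action_dim,))

def actor(s):
    return np.tanh(s @ W_pi)

def critic(s, a):
    return np.concatenate([s, a], axis=-1) @ W_Q

s_demo = rng.normal(size=(N_D, state_dim))
a_demo = rng.uniform(-1, 1, size=(N_D, action_dim))

# Keep the cloning term only where the demonstrator action looks better than the actor's.
mask = critic(s_demo, a_demo) > critic(s_demo, actor(s_demo))       # indicator in Eq. 8
L_BC_filtered = np.sum(np.sum((actor(s_demo) - a_demo) ** 2, axis=-1) * mask)
print(mask.mean(), L_BC_filtered)
```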
D. Resets to demonstration states
To overcome the problem of sparse rewards in very long horizon tasks, we reset some training episodes using states and goals from demonstration episodes. Restarts from within demonstrations expose the agent to higher reward states during training. This method makes the additional assumption that we can restart episodes from a given state, as is true in simulation.

To reset to a demonstration state, we first sample a demonstration D = (x_0, u_0, x_1, u_1, ..., x_N, u_N) from the set of demonstrations. We then uniformly sample a state x_i from D. As in HER, we use the final state achieved in the demonstration as the goal. We roll out the trajectory with the given initial state and goal for the usual number of timesteps. At evaluation time, we do not use this procedure.

We label our method with the behavior cloning loss, Q-filter, and resets from demonstration states as "Ours, Resets" in the following experiments.
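A minimal sketch of the reset procedure: sample a demonstration, sample a state uniformly from it, and start the training episode from that state with the demonstration's final state as the goal. The `env.reset_to` interface and the `achieved` mapping are assumptions of this sketch; the only real requirement is the simulator's ability to restart from an arbitrary state.

```python
import random

def reset_from_demonstration(demonstrations, env, achieved):
    """demonstrations: list of trajectories [(x_0, u_0), ..., (x_N, u_N)];
    achieved: maps a state to the goal it realizes (as in HER)."""
    demo = random.choice(demonstrations)      # sample a demonstration D
    x_i, _ = random.choice(demo)              # uniformly sample a state x_i from D
    goal = achieved(demo[-1][0])              # final state of the demo acts as the goal
    env.reset_to(x_i)                         # assumed simulator capability
    return x_i, goal

# Usage with a stub environment:
class StubEnv:
    def reset_to(self, state):
        self.state = state

demos = [[(0.0, "u0"), (0.5, "u1"), (1.0, "u2")]]
print(reset_from_demonstration(demos, StubEnv(), achieved=lambda x: x))
```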
# V. EXPERIMENTAL SETUP
# A. Environments
We evaluate our method on several simulated MuJoCo [37] environments. In all experiments, we use a simulated 7-DOF Fetch Robotics arm with parallel grippers to manipulate one or more objects placed on a table in front of the robot.
The agent receives the positions of the relevant objects on the table as its observations. The control for the agent is continuous and 4-dimensional: 3 dimensions that specify the desired end-effector position1 and 1 dimension that speciï¬es the desired distance between the robot ï¬ngers. The agent is controlled at 50Hz frequency.
We collect demonstrations in a virtual reality environment. The demonstrator sees a rendering of the same observations as the agent, and records actions through an HTC Vive interface at the same frequency as the agent. We have the option to accept or reject a demonstration; we only accept demonstrations we judge to be mostly correct. The demonstrations are not optimal. The most extreme example is the "sliding" task, where only 7 of the 100 demonstrations are successful, but the agent still sees rewards for these demonstrations with HER.
# B. Training Details
To train our models, we use Adam [38] as the optimizer with learning rate 10^{-3}. We use N = 1024, N_D = 128, λ_1 = 10^{-3}, λ_2 = 1.0/N_D. The discount factor γ is 0.98. We use 100 demonstrations to initialize R_D. The function approximators π and Q are deep neural networks with ReLU activations and L2 regularization with the coefficient 5 × 10^{-3}. The final activation function for π is tanh, and the output value is scaled to the range of each action dimension. To explore during training, we sample random actions uniformly within the action space with probability 0.1 at every step, and the noise process N is uniform over ±10% of the maximum value of each action dimension. Task-specific information, including network architectures, is provided in the next section.
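For convenience, the settings listed above can be gathered into one configuration. The dictionary below is only a restatement of the reported values (the key names themselves are an assumption of this sketch).

```python
config = {
    "optimizer": "Adam",
    "learning_rate": 1e-3,
    "batch_size_N": 1024,          # transitions drawn from the agent replay buffer R
    "batch_size_ND": 128,          # extra transitions drawn from the demonstration buffer R_D
    "lambda1": 1e-3,               # weight on the policy-gradient term
    "lambda2": 1.0 / 128,          # weight on the behavior cloning loss (1/N_D)
    "gamma": 0.98,                 # discount factor
    "num_demonstrations": 100,
    "l2_regularization": 5e-3,
    "random_action_prob": 0.1,     # uniform random action with this probability
    "action_noise_fraction": 0.1,  # uniform noise within +/-10% of each action range
    "control_frequency_hz": 50,
}
```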
C. Overview of Experiments
We perform three sets of experiments. In Sec. VI, we provide a comparison to previous work. In Sec. VII we solve block stacking, a difï¬cult multi-step task with complex contacts that the baselines struggle to solve. In Sec. VIII we do ablations of our own method to show the effect of individual components.
VI. COMPARISON WITH PRIOR WORK
A. Tasks
We first show the results of our method on the simulated tasks presented in the Hindsight Experience Replay paper [1]. We apply our method to three tasks:
1) Pushing. A block placed randomly on the table must be moved to a target location on the table by the robot (fingers are blocked to avoid grasping).
2) Sliding. A puck placed randomly on the table must be moved to a given target location. The target is outside the robotâs reach so it must apply enough force that the puck reaches the target and stops due to friction. 3) Pick-and-place. A block placed randomly on the table must be moved to a target location in the air. Note
1In the 10cm x 10cm x 10cm cube around the current gripper position
Fig. 2: Baseline comparisons on tasks from [1]. Frames from the learned policy are shown above each task. Our method significantly outperforms the baselines. On the right plot, the HER baseline always fails.
that the original paper used a form of initializing from favorable states to solve this task. We omit this for our experiment but discuss and evaluate the initialization idea in an ablation.
the block through demonstrations. Providing demonstrations does not require expert knowledge of the learning system, which makes it a more compelling way to provide prior information.
As in the prior work, we use a fully sparse reward for this task. The agent is penalized if the object is not at its goal position:
r_t = 0 if ||x_i − g_i|| < δ, and r_t = −1 otherwise   (9)

where the threshold δ is 5 cm.
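Eq. 9 translates directly into code; a small sketch assuming the object position x and goal g are given as 3-D arrays, with δ = 5 cm:

```python
import numpy as np

def reward(x, g, delta=0.05):
    """Fully sparse reward of Eq. 9: 0 within delta of the goal, -1 otherwise."""
    return 0.0 if np.linalg.norm(x - g) < delta else -1.0

print(reward(np.array([0.42, 0.10, 0.03]), np.array([0.40, 0.10, 0.05])))   # 0.0
print(reward(np.array([0.10, 0.10, 0.03]), np.array([0.40, 0.10, 0.05])))   # -1.0
```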
# B. Results
Fig. 2 compares our method to HER without demonstrations and behavior cloning. Our method is significantly faster at learning these tasks than HER, and achieves significantly better policies than behavior cloning does. Measuring the number of timesteps to get to convergence, we exhibit a 4x speedup over HER in pushing, a 2x speedup over HER in sliding, and our method solves the pick-and-place task while the HER baseline cannot solve it at all.
# VII. MULTI-STEP EXPERIMENTS
A. Block Stacking Task
To show that our method can solve more complex tasks with longer horizon and sparser reward, we study the task of block stacking in a simulated environment as shown in Fig. 1 with the same physical properties as the previous experiments. Our experiments show that our approach can solve the task in full and learn a policy to stack 6 blocks with demonstrations and RL. To measure and communicate various properties of our method, we also show experiments on stacking fewer blocks, a subset of the full task.
We initialize the task with blocks at 6 random locations x_1 ... x_6. We also provide 6 goal locations g_1 ... g_6. To form a tower of blocks, we let g_1 = x_1 and g_i = g_{i−1} + (0, 0, 5cm) for i ∈ {2, . . . , 6}.
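The goal construction above is a one-liner per block; the sketch below builds the 6 goal locations from random initial block positions, stacking targets 5 cm apart on top of the first block. The table extent and block height used for sampling are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Random initial block locations x_1 ... x_6 on the table (z fixed at an assumed table height).
x = rng.uniform([0.3, -0.3, 0.025], [0.7, 0.3, 0.025], size=(6, 3))

g = np.empty_like(x)
g[0] = x[0]                                          # g_1 = x_1: first block stays put
for i in range(1, 6):
    g[i] = g[i - 1] + np.array([0.0, 0.0, 0.05])     # each target 5 cm above the previous
print(g)
```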
The pick-and-place task showcases the shortcoming of RL in sparse reward settings, even with HER. In pick-and-place, the key action is to grasp the block. If the robot could manage to grasp it a small fraction of the time, HER discovers how to achieve goals in the air and reinforces the grasping behavior. However, grasping the block with random actions is extremely unlikely. Our method pushes the policy towards demonstration actions, which are more likely to succeed.
In the HER paper, HER solves the pick-and-place task by initializing half of the rollouts with the gripper grasping the block. With this addition, pick-and-place becomes the easiest of the three tasks tested. This initialization is similar in spirit to our initialization idea, but takes advantage of the fact that pick-and-place with any goal can be solved starting from a block grasped at a certain location. This is not always true (for example, if there are multiple objects to be moved) and ï¬nding such a keyframe for other tasks would be dif- ï¬cult, requiring some engineering and sacriï¬cing autonomy. Instead, our method guides the exploration towards grasping
By stacking N blocks, we mean N blocks reach their target locations. Since the target locations are always on top of x_1, we start with the first block already in position. So stacking N blocks involves N − 1 pick-and-place actions. To solve stacking N, we allow the agent 50 · (N − 1) timesteps. This means that to stack 6 blocks, the robot executes 250 actions or 5 seconds.
We recorded 100 demonstrations to stack 6 blocks, and use subsets of these demonstrations as demonstrations for stacking fewer blocks. The demonstrations are not perfect; they include occasionally dropping blocks, but our method can handle suboptimal demonstrations. We still rejected more than half the demonstrations and excluded them from the demonstration data because we knocked down the tower of blocks when releasing a block.
# B. Rewards
Task Stack 2, Sparse Stack 3, Sparse Stack 4, Sparse Stack 4, Step Stack 5, Step Stack 6, Step Ours 99% 99% 1% 91% 49% 4% BC+ Ours, Resets HER 97% 65% 0% 65% 1% 89% - 54% 0% 73% - 50% - 32% BC HER 1% - 0% - - 0% - 0% - -

Fig. 3: Comparison of our method against baselines. The value reported is the median of the best performance (success rate) of all randomly seeded runs of each method.

Two different reward functions are used. To test the performance of our method under fully sparse reward, we reward the agent only if all blocks are at their goal positions:

r_t = min_i 1_{||x_i − g_i|| < δ}   (10)

The threshold δ is the size of a block, 5 cm. Throughout the paper we call this the "sparse" reward.

To enable solving the longer horizon tasks of stacking 4 or more blocks, we use the "step" reward:

r_t = −1 + Σ_i 1_{||x_i − g_i|| < δ}   (11)

Note the step reward is still very sparse; the robot only sees the reward change when it moves a block into its target location. We subtract 1 only to make the reward more interpretable, as in the initial state the first block is already at its target.

Regardless of the reward type, an episode is considered successful for computing success rate if all blocks are at their goal position in their final state.
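Both stacking rewards and the success criterion reduce to a per-block indicator. A sketch, assuming block positions and goals are arrays of shape (N, 3) and δ = 5 cm:

```python
import numpy as np

DELTA = 0.05   # block size, used as the threshold delta

def in_place(x, g):
    # Indicator 1_{||x_i - g_i|| < delta} for every block.
    return (np.linalg.norm(x - g, axis=-1) < DELTA).astype(float)

def sparse_reward(x, g):
    # Eq. 10: nonzero reward only when every block is at its goal.
    return in_place(x, g).min()

def step_reward(x, g):
    # Eq. 11: -1 plus the number of blocks currently at their goals.
    return -1.0 + in_place(x, g).sum()

def success(x, g):
    # Episode success: all blocks at their goal positions in the final state.
    return bool(in_place(x, g).all())

x = np.array([[0.5, 0.0, 0.025], [0.5, 0.0, 0.075], [0.5, 0.2, 0.025]])
g = np.array([[0.5, 0.0, 0.025], [0.5, 0.0, 0.075], [0.5, 0.0, 0.125]])
print(sparse_reward(x, g), step_reward(x, g), success(x, g))   # 0.0 1.0 False
```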
C. Network architectures
We use 4-layer networks with 256 hidden units per layer for π and Q for the HER tasks and stacking 3 or fewer blocks. For stacking 4 blocks or more, we use an attention mechanism [39] for the actor and a larger network. The attention mechanism uses a 3-layer network with 128 hidden units per layer to query the states and goals with one shared head. Once a state and goal is extracted, we use a 5-layer network with 256 hidden units per layer after the attention mechanism. Attention speeds up training slightly but does not change training outcomes.
D. Baselines
We compare our method to baselines on stacking 2 to 6 blocks.2 Ours: Refers to our method as described in section IV-C. Ours, Resets: Refers to our method as described in section IV-C with resets from demonstration states (Sec. IV-D). BC: This method uses behavior cloning to learn a policy. Given the set of demonstration transitions R_D, we train the
2Because of computational constraints, we were limited to 5 random seeds per method for stacking 3 blocks, 2 random seeds per method for stacking 4 and 5 blocks, and 1 random seed per method for stacking 6 blocks. Although we are careful to draw conclusions from few random seeds, the results are consistent with our collective experience training these models. We report the median of the random seeds everywhere applicable.
Fig. 4: Ablation results on stacking 3 blocks with a fully sparse reward. We run each method 5 times with random seeds. The bold line shows the median of the 5 runs while each training run is plotted in a lighter color. Note "No HER" is always at 0% success rate. Our method without resets learns faster than the ablations. Our method with resets initially learns faster but converges to a worse success rate.
policy π by supervised learning. Behavior cloning requires much less computation than RL. For a fairer comparison, we performed a large hyperparameter sweep over various network sizes, attention hyperparameters, and learning rates and report the success rate achieved by the best policy found. HER: This method is exactly the one described in Hindsight Experience Replay [1], using HER and DDPG. BC+HER: This method first initializes a policy (actor) with BC, then finetunes the policy with RL as described above.
# E. Results
We are able to learn much longer horizon tasks than the other methods, as shown in Fig. 3. The stacking task is extremely difficult using HER without demonstrations because the chance of grasping an object using random actions is close to 0. Initializing a policy with demonstrations and then running RL also fails since the actor updates depend on a reasonable critic and although the actor is pretrained, the critic is not. The pretrained actor weights are therefore destroyed in the very first epoch, and the result is no better than BC alone. We attempted variants of this method where initially the critic was trained from replay data. However, this also fails without seeing on-policy data.
The results with sparse rewards are very encouraging. We are able to stack 3 blocks with a fully sparse reward without resetting to the states from demonstrations, and 4 blocks with a fully sparse reward if we use resetting. With resets from demonstration states and the step reward, we are able to learn a policy to stack 6 blocks.
# VIII. ABLATION EXPERIMENTS
In this section we perform a series of ablation experiments to measure the importance of various components of our method. We evaluate our method on stacking 3 to 6 blocks. We perform the following ablations on the best performing of our models on each task: No BC Loss: This method does not apply the behavior cloning gradient during training. It still has access to demonstrations through the demonstration replay buffer.
Fig. 5: Ablation results on longer horizon tasks with a step reward. The upper row shows the success rate while the lower row shows the average reward at the final step of each episode obtained by different algorithms. For stacking 4 and 5 blocks, we use 2 random seeds per method. The median of the runs is shown in bold and each training run is plotted in a lighter color. Note that for stacking 4 blocks, the "No BC" method is always at 0% success rate. As the number of blocks increases, resets from demonstrations becomes more important to learn the task.
No Q-Filter: This method uses the standard behavioral cloning loss instead of the loss from Eq. 8, which means that the actor tries to mimic the demonstrator's behaviour regardless of the critic. No HER: Hindsight Experience Replay is not used.
# A. Behavior Cloning Loss

Without the behavior cloning loss, the method is significantly worse in every task we try. Fig. 4 shows the training curve for learning to stack 3 blocks with a fully sparse reward. Without the behavior cloning loss, the system is about 2x slower to learn. On longer horizon tasks, we do not achieve any success without this loss.

To see why, consider the training curves for stacking 4 blocks shown in Fig. 5. The "No BC" policy learns to stack only one additional block. Without the behavior cloning loss, the agent only has access to the demonstrations through the demonstration replay buffer. This allows it to view high-reward states and incentivizes the agent to stack more blocks, but there is a stronger disincentive: stacking the tower higher is risky and could result in lower reward if the agent knocks over a block that is already correctly placed. Because of this risk, which is fundamentally just another instance of the agent finding a local optimum in a shaped reward, the agent learns the safer behavior of pausing after achieving a certain reward. Explicitly weighting behavior cloning steps into gradient updates forces the policy to continue the task.

# B. Q-Filter

The Q-Filter is effective in accelerating learning and achieving optimal performance. Fig. 4 shows that the method without filtering is slower to learn. One issue with the behavior cloning loss is that if the demonstrations are suboptimal, the learned policy will also be suboptimal. Filtering by Q-value gives a natural way to anneal the effect of the demonstrations as it automatically disables the BC loss when a better action is found. However, it gives mixed results on the longer horizon tasks. One explanation is that in the step reward case, learning relies less on the demonstrations because the reward signal is stronger. Therefore, the training is less affected by suboptimal demonstrations.

C. Resets From Demonstrations

We find that initializing rollouts from within demonstration states greatly helps to learn to stack 5 and 6 blocks but hurts training with fewer blocks, as shown in Fig. 5. Note that even where resets from demonstration states helps the final success rate, learning takes off faster when this technique is not used. However, since stacking the tower higher is risky, the agent learns the safer behavior of stopping after achieving a certain reward. Resetting from demonstration states alleviates this problem because the agent regularly experiences higher rewards.

This method changes the sampled state distribution, biasing it towards later states. It also inflates the Q values unrealistically. Therefore, on tasks where the RL algorithm does not get stuck in solving a subset of the full problem, it could hurt performance.

# IX. DISCUSSION AND FUTURE WORK

We present a system to utilize demonstrations along with reinforcement learning to solve complicated multi-step tasks. We believe this can accelerate learning of many tasks, especially those with sparse rewards or other difficulties in exploration. Our method is very general, and can be applied on any continuous control task where a success criterion can be specified and demonstrations obtained.
An exciting future direction is to train policies directly on a physical robot. Fig. 2 shows that learning the pick-and- place task takes about 1 million timesteps, which is about 6 hours of real world interaction time. This can realistically be trained on a physical robot, short-cutting the simulation- reality gap entirely. Many automation tasks found in factories and warehouses are similar to pick-and-place but without the variation in initial and goal states, so the samples required could be much lower. With our method, no expert needs to be in the loop to train these systems: demonstrations can be collected by users without knowledge about machine learning or robotics and rewards could be directly obtained from human feedback.
A major limitation of this work is sample efficiency on solving harder tasks. While we could not solve these tasks with other learning methods, our method requires a large amount of experience which is impractical outside of simulation. To run these tasks on physical robots, the sample efficiency will have to be improved considerably. We also require demonstrations which are not easy to collect for all tasks. If demonstrations are not available but the environment can be reset to arbitrary states, one way to learn goal-reaching but avoid using demonstrations is to reuse successful rollouts as in [40].
Finally, our method of resets from demonstration states requires the ability to reset to arbitrary states. Although we can solve many long-horizon tasks without this ability, it is very effective for the hardest tasks. Resetting from demon- stration rollouts resembles curriculum learning: we solve a hard task by ï¬rst solving easier tasks. If the environment does not afford setting arbitrary states, then other curriculum methods will have to be used.
X. ACKNOWLEDGEMENTS We thank Vikash Kumar and Aravind Rajeswaran for valu- able discussions. We thank Sergey Levine, Chelsea Finn, and Carlos Florensa for feedback on initial versions of this paper. Finally, we thank OpenAI for providing a supportive research environment.
# REFERENCES
[1] M. Andrychowicz et al., âHindsight experience replay,â in Advances in neural information processing systems, 2017.
[2] M. VeËcer´ık et al., âLeveraging Demonstrations for Deep Reinforce- ment Learning on Robotics Problems with Sparse Rewards,â arXiv preprint arxiv:1707.08817, 2017.
[3] M. P. Deisenroth, C. E. Rasmussen, and D. Fox, âLearning to Control a Low-Cost Manipulator using Data-Efï¬cient Reinforcement Learning,â Robotics: Science and Systems, vol. VII, pp. 57â64, 2011. [4] Y. Duan et al., âOne-shot imitation learning,â in NIPS, 2017. [5] D. A. Pomerleau, âAlvinn: An autonomous land vehicle in a neural
network,â NIPS, pp. 305â313, 1989.
[6] M. Bojarski et al., âEnd to End Learning for Self-Driving Cars,â arXiv preprint arXiv:1604.07316, 2016.
[7] A. Giusti et al., âA Machine Learning Approach to Visual Perception of Forest Trails for Mobile Robots,â in IEEE Robotics and Automation Letters., 2015, pp. 2377â3766.
[8] J. Nakanishi et al., âLearning from demonstration and adaptation of biped locomotion,â in Robotics and Autonomous Systems, vol. 47, no. 2-3, 2004, pp. 79â91.
[9] M. Kalakrishnan et al., âLearning Locomotion over Rough Terrain using Terrain Templates,â in The 2009 IEEE/RSJ International Con- ference on Intelligent Robots and Systems, 2009.
[10] S. Ross, G. J. Gordon, and J. A. Bagnell, âA Reduction of Imitation Learning and Structured Prediction to No-Regret Online Learning,â in Proceedings of the 14th International Conference on Artiï¬cial Intelligence and Statistics (AISTATS), 2011.
[11] A. Ng and S. Russell, âAlgorithms for Inverse Reinforcement Learn- ing,â International Conference on Machine Learning (ICML), 2000.
[12] B. D. Ziebart et al., âMaximum Entropy Inverse Reinforcement Learning.â in AAAI Conference on Artiï¬cial Intelligence, 2008, pp. 1433â1438.
[13] P. Abbeel and A. Y. Ng, âApprenticeship learning via inverse rein- forcement learning,â in ICML, 2004, p. 1.
[14] C. Finn, S. Levine, and P. Abbeel, âGuided Cost Learning: Deep Inverse Optimal Control via Policy Optimization,â in ICML, 2016.
[15] J. Peters, K. M¨ulling, and Y. Alt¨un, âRelative Entropy Policy Search,â Artiï¬cial Intelligence, pp. 1607â1612, 2010.
[16] M. P. Deisenroth and C. E. Rasmussen, âPilco: A model-based and data-efï¬cient approach to policy search,â in ICML, 2011, pp. 465â472. [17] V. Mnih et al., âHuman-level control through deep reinforcement
learning,â Nature, vol. 518, no. 7540, pp. 529â533, 2015.
[18] D. Silver et al., âMastering the game of Go with deep neural networks and tree search,â Nature, vol. 529, no. 7587, pp. 484â489, Jan 2016. [19] S. Levine et al., âEnd-to-end training of deep visuomotor policies,â
CoRR, vol. abs/1504.00702, 2015.
[20] L. Pinto and A. Gupta, âSupersizing self-supervision: Learning to grasp from 50k tries and 700 robot hours,â arXiv preprint arXiv:1509.06825, 2015.
[21] S. Levine et al., âLearning hand-eye coordination for robotic grasping with deep learning and large-scale data collection,â arXiv preprint arXiv:1603.02199, 2016.
[22] S. Gu et al., âDeep Reinforcement Learning for Robotic Ma- nipulation with Asynchronous Off-Policy Updates,â arXiv preprint arXiv:1610.00633, 2016.
[23] T. P. Lillicrap et al., âContinuous control with deep reinforcement learning,â arXiv preprint arXiv:1509.02971, 2015.
[24] V. Mnih et al., âAsynchronous methods for deep reinforcement learn- ing,â in ICML, 2016.
[25] J. Schulman et al., âTrust region policy optimization,â in Proceedings of the twenty-ï¬rst international conference on Machine learning, 2015. [26] T. Winograd, Understanding Natural Language. Academic Press,
1972.
[27] L. P. Kaelbling and T. Lozano-Perez, âHierarchical task and motion planning in the now,â IEEE International Conference on Robotics and Automation, pp. 1470â1477, 2011.
[28] L. Kavraki et al., âProbabilistic roadmaps for path planning in high- dimensional conï¬guration spaces,â IEEE transactions on Robotics and Automation, vol. 12, no. 4, pp. 566â580, 1996.
[29] S. Srivastava et al., âCombined Task and Motion Planning Through an Extensible Planner-Independent Interface Layer,â in International Conference on Robotics and Automation, 2014.
[30] I. Popov et al., âData-efï¬cient Deep Reinforcement Learning for Dexterous Manipulation,â arXiv preprint arXiv:1704.03073, 2017. [31] S. Schaal, âRobot learning from demonstration,â Advances in Neural Information Processing Systems, no. 9, pp. 1040â1046, 1997. [32] J. Peters and S. Schaal, âReinforcement learning of motor skills with policy gradients,â Neural Networks, vol. 21, no. 4, pp. 682â697, 2008. [33] J. Kober and J. Peter, âPolicy search for motor primitives in robotics,â
in Advances in neural information processing systems, 2008.
[34] T. Hester et al., âLearning from Demonstrations for Real World Reinforcement Learning,â arXiv preprint arxiv:1704.03732, 2017. [35] B. Kim et al., âLearning from Limited Demonstrations,â Neural
Information Processing Systems., 2013.
[36] T. Schaul et al., âUniversal Value Function Approximators,â Proceed- ings of The 32nd International Conference on Machine Learning, pp. 1312â1320, 2015.
[37] E. Todorov, T. Erez, and Y. Tassa, âMuJoCo: A physics engine for model-based control,â in The IEEE/RSJ International Conference on Intelligent Robots and Systems, 2012.
[38] D. Kingma and J. Ba, âAdam: A method for stochastic optimization,â International Conference on Learning Representations (ICLR), 2015. [39] D. Bahdanau, K. Cho, and Y. Bengio, âNeural Machine Translation
by Jointly Learning to Align and Translate,â in ICLR, 2015.
[40] C. Florensa et al., âReverse Curriculum Generation for Reinforcement Learning,â in Conference on robot learning, 2017. | {
"id": "1509.02971"
} |
1709.08568 | The Consciousness Prior | A new prior is proposed for learning representations of high-level concepts
of the kind we manipulate with language. This prior can be combined with other
priors in order to help disentangling abstract factors from each other. It is
inspired by cognitive neuroscience theories of consciousness, seen as a
bottleneck through which just a few elements, after having been selected by
attention from a broader pool, are then broadcast and condition further
processing, both in perception and decision-making. The set of recently
selected elements one becomes aware of is seen as forming a low-dimensional
conscious state. This conscious state is combining the few concepts
constituting a conscious thought, i.e., what one is immediately conscious of at
a particular moment. We claim that this architectural and
information-processing constraint corresponds to assumptions about the joint
distribution between high-level concepts. To the extent that these assumptions
are generally true (and the form of natural language seems consistent with
them), they can form a useful prior for representation learning. A
low-dimensional thought or conscious state is analogous to a sentence: it
involves only a few variables and yet can make a statement with very high
probability of being true. This is consistent with a joint distribution (over
high-level concepts) which has the form of a sparse factor graph, i.e., where
the dependencies captured by each factor of the factor graph involve only very
few variables while creating a strong dip in the overall energy function. The
consciousness prior also makes it natural to map conscious states to natural
language utterances or to express classical AI knowledge in a form similar to
facts and rules, albeit capturing uncertainty as well as efficient search
mechanisms implemented by attention mechanisms. | http://arxiv.org/pdf/1709.08568 | Yoshua Bengio | cs.LG, cs.AI, stat.ML | null | null | cs.LG | 20170925 | 20191202 | 9 1 0 2
c e D 2 ] G L . s c [ 2 v 8 6 5 8 0 . 9 0 7 1 : v i X r a
# The Consciousness Prior
# Yoshua Bengio, Université de Montréal, Mila*
First posted October 15th 2017; revised, December 1, 2019
# Abstract
A new prior is proposed for learning representations of high-level concepts of the kind we manipulate with language. This prior can be combined with other priors in order to help disentangling abstract factors from each other. It is inspired by cognitive neuroscience theories of consciousness, seen as a bottleneck through which just a few elements, after having been selected by attention from a broader pool, are then broadcast and condition further processing, both in perception and decision-making. The set of recently selected elements one becomes aware of is seen as forming a low-dimensional conscious state. This con- scious state is combining the few concepts constituting a conscious thought, i.e., what one is immediately conscious of at a particular moment. We claim that this architectural and information-processing con- straint corresponds to assumptions about the joint distribution between high-level concepts. To the extent that these assumptions are generally true (and the form of natural language seems consistent with them), they can form a useful prior for representation learning. A low-dimensional thought or conscious state is analogous to a sentence: it involves only a few variables and yet can make a statement with very high probability of being true. This is consistent with a joint distribution (over high-level concepts) which has the form of a sparse factor graph, i.e., where the dependencies captured by each factor of the factor graph involve only very few variables while creating a strong dip in the overall energy function. Instead of making predictions in the sensory (e.g. pixel) space, one can thus make predictions in this high-level abstract space, which do not have to be limited to just the next time step but can relate events far away from each other in time. The consciousness prior also makes it natural to map conscious states to natu- ral language utterances or to express classical AI knowledge in a form similar to facts and rules, albeit capturing uncertainty as well as efï¬cient search mechanisms implemented by attention mechanisms.
# 1 Introduction
We propose here a new kind of prior for top-level abstract representations of concepts of the kind hu- mans manipulate with natural language, inspired by modern theories of consciousness such as the global workspace theory [Baars, 1988, 1997, 2002, Dehaene and Naccache, 2001, Dehaene et al., 2017] as a form of awareness [van Gulick, 2004], i.e., as deï¬ned by Locke, consciousness is âthe perception of what passes in a manâs own mindâ, or awareness of an external object or something within oneself (Wikipedia deï¬nition). The main contribution of this paper is proposing a machine learning justiï¬cation for an as- pect of this theory, stipulating that elements of a conscious thought are selected through an attention mechanism (such as the content-based attention mechanism we introduced in [Bahdanau et al., 2015]) and then broadcast to the rest of the brain, strongly inï¬uencing downstream perception and action as well as the content of the next conscious thought. The paper sees this as a computational mechanism which is consistent with a hypothesis about the form of the joint distribution between the type of high-level vari- ables which can form a conscious thought. Since a conscious thought only refers to very few variables at a time, we suggest that this corresponds to a form of knowledge representation which is factored into pieces involving a few variables at a time. From a probabilistic modeling point of view, this corresponds to a sparse factor graph. Each âfactor" captures the possibly strong dependency between a few variables. Although a variable can participate in many such factors, each factor links very few variables, similarly to words or concepts linked together in a sentence in natural language.
# *Also CIFAR Senior Fellow
# 2 System 2 Processing and Global Workspace Theory of Con- sciousness
For lack of a generally accepted deï¬nition of consciousness - because there are still many competing the- ories - we consider conscious aspects of cognition as those which humans can report about through lan- guage. We closely associate conscious processing to Kahnemanâs system 2 cognitive abilities [Kahneman, 2011]. System 1 tasks align well with the current successful applications of deep learning, e.g., low-level perception (and to a lesser extent low-level action) and intuitive knowledge (e.g. knowing that a particular Go move is good or that a given picture contains the image of a dog), i.e., knowledge which is difï¬cult to verbalize, and which can typically be applied very quickly (in less than a second). On the other hand, system 2 cognitive abilities are those which can can be described verbally, and thus includes the part of our cognitive abilities which we can communicate explicitly to a computer (typically as a sequence of computational steps), and include things like reasoning, planning and imagination. Typical system 2 tasks require a sequence of conscious steps, which also means that they tend to take more time than system 1 tasks. By this deï¬nition, system 2 abilities are closely related to consciousness.
Cognitive neuroscience has been investigating consciousness for several decades and a dominant fam- ily of theories on which this paper is anchored are those based on the Global Workspace Theory [Baars, 1988, 1997, 2002, Dehaene and Naccache, 2001, Dehaene et al., 2017]. This theory posits that we be- come aware of speciï¬c pieces of information which will momentarily form the content of working mem- ory. A conscious thought is thus a set of these elements of which we have become aware, joined together and made globally available to other computational processes taking place in the brain at an unconscious level. Consciousness thus provides a form of bottleneck for information which has a strong inï¬uence on decision-making (voluntary action), memory (we tend to very quickly forget what we have not been con- sciously aware of) and perception (we may be blind to elements of our sensory input which may distract us from the current focus of conscious attention).
There are other aspects of consciousness which the global workspace theory does not directly address, such as the notion of self and that of subjective perception, and we do not study them here. Instead, we are interested in the use of machine learning ideas and experiments as ways to formalize theories of consciousness (particularly the global workspace theory), identify advantages which they can bring to a learning agent (e.g. as a useful prior for speciï¬c aspects of the world), and as a way to test these theories via machine learning experiments measuring for example their effect on sample efï¬ciency (or the speed of learning) and out-of-distribution generalization.
# 3 Consciousness Prior Theory
We explain a machine learning framework for these ideas in more detail below, and place them in the context of a learning agent with goals (see Sutton and Barto [1998] for basic notions of reinforcement learning).
# 3.1 Extracting a Conscious State
Let xt be the observation at time t for a learning agent, and let ht be the high-level representation derived from xt (and from past observed values {xtâk} in the partially observable case). For example, ht could be the output of some kind of recurrent neural network (or RNN, with whatever architecture is appropriate) that reads the sequence of xt as input and produces an output ht at each time step:
h_t = F(x_t, h_{t−1})   (1)
where we call F the representation RNN or encoder and ht the unconscious representation state. We can think of ht as a very large vector or as a set containing all the possible elements which could be brought to consciousness via an attention mechanism.
A core objective for the learner is to learn good representations in ht, which disentangles abstract explanatory factors, in the sense that there exist a simple transformation of ht which can select the in- formation about a single factor (its value or uncertainty about it). With ht seen as a set, we can think of each element e â ht as one of the variables over which the learner needs to form a joint distribution in order to make sense of the high-level dependencies. These dependencies do not have to be limited to those between elements in the same ht: they could also relate elements arising at different time steps.
In contrast, we will define the conscious state c_t as a very low-dimensional set which is derived from h_t by a form of attention mechanism applied on h_t, taking into account the previous conscious state and memory as context:
c_t = C(h_t, c_{t−1}, m_{t−1}, z_t)   (2)
where zt is a random noise source and mt is the content of memory at time t. The memory content gets updated by possibly committing ct to memory:
m_t = M(m_{t−1}, c_t).   (3)
We do not explicitly put them in the notation but a realistic agent would also have goals as part of the context which conditions both the selection of unconscious items (in F ) and the update of the conscious state (in C), then seen as a search mechanism. Also, although we do not explore the architecture of memory mechanisms very much here, it is clear that different kinds of memory mechanisms exist in the brain, starting with short-term memory from which very recently accessed conscious elements can be retrieved, as well as longer-term memory, which contains a subset of the elements stored in short-term memory. The cognitive interpretation of the above equations is that the value of ct is a set of consciously accessed elements and corresponds to the content of a thought one is conscious of at time t. The conscious state ct is a very small subset of all the information available to us unconsciously, ht, but which has been brought to our awareness by a particular form of attention which picks several elements or projections from ht. The function C is the consciousness process and because of its random noise inputs, produces a random choice of the elements on which the attention gets focused. This is useful if we think of the consciousness process as a tool for exploring interpretations or plans or to sample predictions about the future or simply imagined scenarios. We can also think of the consciousness process as the tool to make a series of associations forming a coherent argument (for reasoning). It isolates particular high-level abstractions and extracts the information about each of them (some identifying information and attributes, a value, and uncertainty about it or even the fact that it is observed or not). This would happen if we think about a single factor, but in general C will aggregate a few (e.g. a handful) of such factors into a more complex and composed thought.
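To make Eqs. 1–3 concrete, here is a hedged sketch of one possible instantiation: the unconscious state h_t is a set of element vectors produced by an RNN-style update, and the consciousness process C scores each element by content-based attention against a query derived from the previous conscious state and memory, perturbs the scores with the noise z_t, and keeps the top-k elements. Every architectural detail (dimensions, the top-k rule, the averaging memory M) is an illustrative assumption, not part of the proposal itself.

```python
import numpy as np

rng = np.random.default_rng(0)
n_elements, d, k = 32, 16, 4            # h_t holds 32 candidate elements; c_t keeps 4

W_h = rng.normal(scale=0.1, size=(d, d))
W_x = rng.normal(scale=0.1, size=(d, d))
W_q = rng.normal(scale=0.1, size=(2 * d, d))

def F(x_t, h_prev):                     # Eq. 1: representation RNN (elementwise update)
    return np.tanh(h_prev @ W_h + x_t @ W_x)

def C(h_t, c_prev, m_prev, z_t):        # Eq. 2: attention selects a few elements of h_t
    query = np.tanh(np.concatenate([c_prev.mean(0), m_prev]) @ W_q)
    scores = h_t @ query + z_t          # content-based scores, perturbed by noise z_t
    top = np.argsort(scores)[-k:]       # keep the k most relevant elements
    return h_t[top]

def M(m_prev, c_t):                     # Eq. 3: toy memory as a running average of c_t
    return 0.9 * m_prev + 0.1 * c_t.mean(0)

h = rng.normal(size=(n_elements, d))
c = np.zeros((k, d))
m = np.zeros(d)
for t in range(5):
    x_t = rng.normal(size=(n_elements, d))   # per-element view of the observation
    h = F(x_t, h)
    c = C(h, c, m, z_t=0.1 * rng.normal(size=n_elements))
    m = M(m, c)
print(c.shape)   # (4, 16): a low-dimensional conscious state
```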
# 3.2 Sparse Factor Graphs
A factor graph is a way to represent the joint distribution between a set of variables. Let S = {V1, . . . Vn} be that set and P (S) be their joint distribution. In a factor graph, the joint is represented as a product of potential functions fj , each of which only depends on a subset Sj ⊂ S:
P (S) = ∏j fj(Sj) / Z (4)
where Z is a normalization constant. We call each fj a factor and it creates a direct dependency between the variables in Sj. Indirect dependencies exist between variables by following paths in the bipartite graph formed on one hand with the variables Vk and the factors fj (each associated with a subset Sj of variables).
Translated in probabilistic terms, the consciousness prior amounts to the assumption that the factor graph for the joint distribution between the elements in the set ht (or more generally for the set containing all of the elements in mt and all those one could think of in the future) is sparse1. This is because the cardinality of all Sj's is small. The motivation for this assumption comes from observing the structure of natural language (broken down into phrases, statements or sentences, each of which involves very few words) as well as the structure of formal knowledge representations such as the sets of facts and rules studied in classical symbolic / logic AI or in ontologies and knowledge graphs [Ehrlinger and Wöß, 2016]. In addition to being sparse, we believe that a related assumption can be made: most factors in the graph describe a strong dependency, i.e., one which makes low-entropy predictions (e.g. about some of the variables in Sj given the others). Since factor graphs are also generally understood as energy-based models (the logarithm of each potential function contributes an additive term in the energy function corresponding to the overall joint distribution), we can also say that each potential function creates a strong dip in the energy function. Otherwise, they would not be worth putting in the factor graph. This is related to the fact that we should think of this joint distribution as a very rough approximation of the world built by learning agents to help them plan, reason, imagine, etc.
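As a toy illustration of Eq. (4) and of the sparsity assumption, the sketch below builds an unnormalized joint from a handful of potential functions, each touching only two variables. The variable names, potential values and brute-force normalization are illustrative assumptions; as noted in the footnote, a realistic agent would not enumerate the full graph.

```python
# A toy sketch of Eq. (4): an unnormalized joint over binary variables defined by a
# *sparse* factor graph, where each potential f_j touches only a small subset S_j.
# The variables, factors and potential values below are illustrative assumptions.
import itertools
import math

variables = ["raining", "wet_ground", "umbrella_open"]

# Each factor: (subset S_j, potential f_j mapping an assignment of S_j to a positive value).
factors = [
    (("raining", "wet_ground"),    lambda r, w: 4.0 if r == w else 0.5),   # strong dependency
    (("raining", "umbrella_open"), lambda r, u: 3.0 if r == u else 1.0),
]

def unnormalized_p(assignment):
    """Product of potentials for a full assignment {variable: 0/1}."""
    p = 1.0
    for subset, f in factors:
        p *= f(*(assignment[v] for v in subset))
    return p

# Normalization constant Z by brute-force enumeration (only feasible for toy graphs).
Z = sum(unnormalized_p(dict(zip(variables, vals)))
        for vals in itertools.product([0, 1], repeat=len(variables)))

joint = {vals: unnormalized_p(dict(zip(variables, vals))) / Z
         for vals in itertools.product([0, 1], repeat=len(variables))}
print(max(joint, key=joint.get), math.isclose(sum(joint.values()), 1.0))
```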
An important purpose for the consciousness prior, from a machine learning point of view, is that it should help a learner discover an encoder which captures the kind of high-level variables which humans talk about when they communicate with language, since natural language statements naturally tend to
1and we probably do not want to represent that graph explicitly, and instead use conscious attention to selectively traverse and explore only relevant parts of it, in the context of given goals
3
satisfy both the sparsity requirement (each sentence involves few words) and the "strong dip" requirement (otherwise the statement is not worth communicating). In the quest to discover encoding functions which disentangle [Bengio, 2009, Bengio et al., 2013] high-level concepts from each other, we should see the consciousness prior as one of many tools to constrain the learner towards better high-level representations. Please note in passing that by "disentangled" we do not generally mean marginally independent (that would make all the top-level variables independent of each other), as in recent work on variational autoencoders [Higgins et al., 2017]. Indeed, notice how natural language concepts (like say "fork" and "knife") tend to not be independent of each other, but instead may be combined to form probable statements (like "she was eating with her knife and fork").
The analogy with natural language and with knowledge graphs, ontologies and formal declarative knowledge also suggests that new potential functions can be created as needed. Instead of having a large but fixed set of potential functions, what we have are mechanisms for creating new ones which "make sense" according to observations, reasoning, or imagination. Instead of enumerating all the possible potential functions, the brain may have the ability to instantiate new ones on the fly. This connects the previous section, which was about the attention mechanisms for selecting a small set of variables forming a conscious thought (ct) with the topic of this section, which is about the declarative knowledge formed by the set of potential functions each linking a few variables together. Whereas the sparse factor graph constraint is about the underlying beliefs about the world (when expressed with the high-level variables), the attention mechanisms used to build conscious thoughts are part of the inference mechanisms used to compute efficiently according to the consciousness prior.
# 3.3 Training Objectives
To capture the assumption that a conscious thought can encapsulate a statement about the future, we could introduce a verifier network which can match a current representation state ht with a past conscious state ct−k stored in memory mt−1:
V (ht, ct−k) → R (5)
which should be structured so that V (ht, ct−k) indicates the consistency of ct−k with ht, e.g., estimating the probability of the corresponding statement being true, given ht.
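A minimal sketch of such a verifier is given below, assuming a small MLP over the concatenation of ht and ct−k with a sigmoid output read as a consistency probability; the architecture and dimensions are illustrative choices, not specified by the text.

```python
# A minimal sketch of the verifier of Eq. (5): V(h_t, c_{t-k}) -> R scores how consistent a past
# conscious state (e.g. a prediction committed to memory) is with the current representation.
# The MLP form and all dimensions are assumptions for illustration.
import torch
import torch.nn as nn

class Verifier(nn.Module):
    def __init__(self, h_dim=64, c_dim=8):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(h_dim + c_dim, 32), nn.ReLU(),
            nn.Linear(32, 1),
        )

    def forward(self, h_t, c_past):
        # Sigmoid so the output can be read as the probability that the past
        # statement c_{t-k} is verified by (consistent with) h_t.
        return torch.sigmoid(self.score(torch.cat([h_t, c_past], dim=-1)))

v = Verifier()
prob = v(torch.randn(4, 64), torch.randn(4, 8))   # batch of 4 (h_t, c_{t-k}) pairs
print(prob.shape)  # torch.Size([4, 1])
```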
More generally, we would like to define an objective (or reward) function which embodies the idea that the attended (conscious) elements are useful, in a way which can be quantified and optimized, i.e., that the representation RNN and the attention mechanism which extracts ct from ht are trained to optimize this objective function. This can be in addition to other objectives such as being able to reconstruct the raw input or any other supervised, RL, or unsupervised objectives which we probably want to throw in.
There are two distinct mechanisms at play which contribute to map the high-level state representation to the objective function: (1) the attention mechanism (e.g. the consciousness RNN) which selects and combines a few elements from the high-level state representation into a low-dimensional "conscious sub-state" object (the current content of our consciousness), and (2) the predictions or actions which are derived from the sequence of these conscious sub-states. The second mechanism is easy to grasp and frame in standard ML practice, either in deep learning or RL, e.g. for supervised or unsupervised or RL tasks. For example, the attention mechanism could select elements B from the current representation state and choose to make a prediction about future elements A. Then to improve the quality of the prediction mechanism we may just want to maximize log P (A|B) or some proxy for it, e.g., using a variational auto-encoder [Kingma and Welling, 2014] objective or a conditional GAN [Mirza and Osindero, 2014] if one wants to sample accurately an A from B. Note again that such an objective function is not just used to learn the mapping from B to A (or to probabilities over the space of A values), but also drives the learning of the representation function itself (i.e., it is back-propagated into the representation RNN). However, this part of the objective function (e.g. predictive value, computed by V above) is not sufficient and in fact is not appropriate to train the attention mechanism itself (which variables A and B should be selected?). Indeed, if that was the driving objective for attention, the learner would always pick a pair (A, B) such that A is trivially predictable from B (and there are such aspects of reality which are trivially predictable yet do not help us to further understand the world and make sense of it or achieve our goals). It remains an open question what other objectives would be appropriate for learning how to attend to the most useful elements, but ultimately we should be able to use the actual RL reward of the learning agent using ct for taking decisions. Some form of mutual information, entropy or diversity may be needed so that the attention mechanism is stochastic and can choose a very diverse set of possible attended elements, so as to cover widely the possible variables A on which a prediction is made, i.e., the entropy of (A, B) pairs.
# 3.4 Naming Variables and Indirection
Content-based soft-attention or hard-attention mechanisms [Bahdanau et al., 2015, Xu et al., 2015] extract a value from a set of elements by taking a convex weighted sum of values from an input set of values. Those weights are the attention weights and they are computed by an attention mechanism which gives a larger weight on the element with the most appropriate "key", according to some context.
In standard neural networks without attention, a neuron i is identified by its position in its layer and the signal it sends to some other neuron j downstream does not need to be identified as coming from i. However, when attention mechanisms such as described above are used to provide an input value to j, the input could come from any of the elements over which attention is making a selection. Depending on the computation performed, it could thus be useful for downstream layers with attention mechanisms selecting their input to receive not just the weighted (selected) value but also information about the source of the information. We can think of that information as a variable name (and possibly other attributes which we can interpret as variable type), which complement the variable value. The idea of (key,value) pairs was used in memory augmented neural networks [Graves et al., 2014, Weston et al., 2014], although it is not clear if a distinction between keys and values exists in the brain, or if a general auto-associative mechanism is used instead.
When elements from the unconscious state ht are selected to enter the conscious state ct using content-based soft-attention [Bahdanau et al., 2015], it is not just a value which should be copied but also some "key" which identifies the origin of that value. Modern attention-based deep learning architectures such as Transformers [Vaswani et al., 2017] bind (key,value) pairs together precisely for that purpose. For example, the kind of verifier network discussed above needs to associate a (key,prediction) pair made in the past with a (key,realization) pair observed later. The key thus acts like a name and provides a form of indirection or reference. If the key and value were mixed up and the predicted value differs substantially from the observed value, a simple associative process might miss the opportunity to match these and thus provide a strong training signal (to correct the predictor). Another reason to represent keys separately from values is that the keys can be used to represent a form of type information, to help match the expected argument type of a downstream computation with an appropriate element selected by an attention mechanism. This is important in order to obtain systematic generalization [Lake and Baroni, 2017] and combinatorial properties omnipresent in natural language, making it easier to combine different pieces of neural hardware together dynamically, with keys being used to decide which information should be routed where. We could thus see the conscious state as a bottleneck to route such information across many different modules.
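The sketch below shows the kind of content-based soft attention over (key, value) pairs discussed here, returning the attended value together with a soft version of its key, so that downstream modules also receive the "name" of what was selected. The scaled dot-product form and the dimensions are assumptions borrowed from standard attention modules, not a prescription of the text.

```python
# Content-based soft attention over (key, value) pairs: the query selects a convex
# combination of values, and the matching (soft) key is returned alongside it.
import torch

def keyed_soft_attention(query, keys, values):
    # query: (d_k,), keys: (n, d_k), values: (n, d_v)
    scores = keys @ query / keys.shape[-1] ** 0.5          # content-based matching
    weights = torch.softmax(scores, dim=0)                 # convex attention weights
    attended_value = weights @ values                      # weighted sum of values
    attended_key = weights @ keys                          # soft "name" of the selected element
    return attended_key, attended_value, weights

keys = torch.randn(5, 16)        # one (key, value) pair per element of h_t
values = torch.randn(5, 32)
query = keys[2] + 0.1 * torch.randn(16)                   # a query close to element 2's key
k, v, w = keyed_soft_attention(query, keys, values)
print(w.argmax().item())                                   # most-attended element (likely 2)
```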
# 3.5 Connection to Language and Symbolic Knowledge Representation
We hypothesize that conscious processing of the kind described above could thus help the brain (and future machine learning systems) achieve better systematic generalization and combine concepts in fluent and combinatorial ways. The fact that we define consciousness in terms of verbal reporting may be important to note here. All this indeed suggests that there is a fairly simple transformation of conscious states into natural language sentences. Conversely, an externally provided sentence (heard or read) could also elicit an associated conscious state, although we postulate that the conscious state is generally a richer object than the uttered sentence, i.e., mapping from conscious states to sentences loses information (think about visual imagery, or artistic expression, which are difficult to put in words), and the same sentence could thus be interpreted differently depending on context and the particulars of the agent who reads that sentence. Formally, we could use another RNN to map a conscious state to an utterance ut:
ut = U (ct, ut−1). (6)
A learning agent which uses language could thus benefit from an additional regularization effect putting pressure on the encoder: the set of currently consciously attended elements should have a direct two-way mapping with natural language utterances which may be uttered by other agents, such as a human teacher. This would act as a weak form of supervision for the concepts produced by the encoder. A sentence focuses on just a handful of elements and concepts, unlike our full internal state. This imposes soft constraints on the representation function in that its individual elements or dimensions are more likely to correspond to concepts which can typically be expressed by a single word or phrase. Based on these arguments, it is reasonable to hypothesize that language may actually help humans build sharper internal representations (which are better disentangled) as well as facilitate learning (see the arguments around curriculum learning [Bengio et al., 2009] and cultural learning [Bengio, 2014]) and enable collaborative task-solving.
Along the same line, this research opens the door to the possibility of better connecting deep learning with classical symbolic AI and cognitive science, and moving deep learning from perception (where
it currently shines) to higher-level cognition and knowledge representation (where many questions remain open). For example, declarative knowledge is classically represented by facts and rules: each of them is a very sharp statement (true with high probability) about reality involving just a few concepts. Such a nugget of information or knowledge seems to fit well as a conscious state. Combining such conscious states sequentially in order to make more complex predictions and inferences or actions is basically what reasoning is about. However, pasting symbolic logic computations on top of a deep learning encoder might not succeed for several reasons. This would lose the ability to manipulate uncertainty as well as to represent the context-dependent effect of goals and background knowledge which deep learning with content-based attention can provide, in addition to the ability to improve generalization through distributed representations. Instead, we envision extensions of deep learning based on attention that implement conscious processing functionalities associated with system 2 tasks in humans. Progress in this direction would also address the often expressed concern about obtaining explanations from deep nets, since the approach proposed here would make it easier for a trained agent to communicate verbally its high-level state.
# 4 Considerations for Experimenting with the Consciousness Prior
Because this is a novel theory which may be developed in many different ways, it is important to start with simple toy experiments allowing one to test and evaluate qualitatively different approaches, such that the turnaround time for each experiment is very short and the analysis of the representations learned very easy (because we already have a preconceived idea of what concepts would be the most appropriate to disentangle).
Although working with natural language input would be likely to help the agent learn better and more abstract representations, it might be better to start with experiments with no linguistic input, to make sure that it is the training objective and the training framework alone which are leading to the discovery of the appropriate high-level concepts. For example, learning some form of intuitive physics is done by babies without the need for linguistic guidance. Similarly, although the consciousness prior could be used in supervised learning or task-oriented RL, testing its ability alone to discover high-level abstractions would be best done in the context of unsupervised RL, e.g., using an intrinsic reward which favours the discovery of how the environment works.
It would be more interesting for the learning task to involve meaningful abstractions which have a high predictive power. For example, consider predicting whether a pile of blocks will fall on or off a table. It involves a high-level discrete outcome which can be predicted easily, even if the details of where the blocks will fall are very difficult even for humans to predict. In that case, predicting the future at the pixel level would be extremely difficult because future states have high entropy, with a highly multi-modal distribution. However, some aspects of the future may have low entropy. If in addition, these aspects have a big impact on predicting what will come next (or on taking the right decisions now), then the consciousness prior should be very useful.
# Acknowledgements
The author wants to thank Philippe Beaudoin, Gerry (Tong) Che, William Fedus, Devon Hjelm and Anirudh Goyal for preliminary discussions about the consciousness prior, as well as funding from NSERC, CIFAR, the Canada Research Chairs, and the Open Philanthropy Project.
# References
Bernard J. Baars. A Cognitive Theory of Consciousness. Cambridge, MA: Cambridge University Press, 1988.
Bernard J. Baars. In the Theater of Consciousness. New York, NY: Oxford University Press, 1997.
Bernard J. Baars. The conscious access hypothesis: Origins and recent evidence, volume 6. 2002.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. In ICLR'2015, arXiv:1409.0473, 2015.
Yoshua Bengio. Learning deep architectures for AI. Now Publishers, 2009.
Yoshua Bengio. Deep learning and cultural evolution. In Proceedings of the Companion Publication of the 2014 Annual Conference on Genetic and Evolutionary Computation, pages 1–2. ACM, 2014. URL http://dl.acm.org/citation.cfm?id=2598395.
Yoshua Bengio, Jerome Louradour, Ronan Collobert, and Jason Weston. Curriculum learning. In ICML'09, 2009.
Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation learning: A review and new perspectives. IEEE Trans. Pattern Analysis and Machine Intelligence (PAMI), 35(8):1798–1828, 2013.
S. Dehaene and L. Naccache. Towards a cognitive neuroscience of consciousness: basic evidence and a workspace framework. Cognition, 79(1–2):1–37, 2001.
S. Dehaene, H. Lau, and S. Kouider. What is consciousness, and could machines have it? Science, 358(6362):486–492, 2017.
Lisa Ehrlinger and Wolfram Wöß. Towards a definition of knowledge graphs. SEMANTiCS (Posters, Demos, SuCCESS), 48, 2016.
Alex Graves, Greg Wayne, and Ivo Danihelka. Neural Turing machines. arXiv preprint arXiv:1410.5401, 2014.
Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. beta-vae: Learning basic visual concepts with a constrained variational framework. ICLR, 2(5):6, 2017.
Daniel Kahneman. Thinking, Fast and Slow. Macmillan, 2011.
Durk P. Kingma and Max Welling. Auto-encoding variational bayes. In Proceedings of the International Conference on Learning Representations (ICLR), 2014.
Brenden M Lake and Marco Baroni. Generalization without systematicity: On the compositional skills of sequence-to-sequence recurrent networks. arXiv preprint arXiv:1711.00350, 2017.
Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014.
Richard Sutton and Andrew Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.
Robert van Gulick. Consciousness. In Stanford Encyclopedia of Philosophy. 2004.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in neural information processing systems, pages 5998–6008, 2017.
Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. arXiv preprint arXiv:1410.3916, 2014.
Kelvin Xu, Jimmy Lei Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard S. Zemel, and Yoshua Bengio. Show, attend and tell: Neural image caption generation with visual attention. In ICML'2015, 2015.
| {
"id": "1711.00350"
} |
1709.06560 | Deep Reinforcement Learning that Matters | In recent years, significant progress has been made in solving challenging
problems across various domains using deep reinforcement learning (RL).
Reproducing existing work and accurately judging the improvements offered by
novel methods is vital to sustaining this progress. Unfortunately, reproducing
results for state-of-the-art deep RL methods is seldom straightforward. In
particular, non-determinism in standard benchmark environments, combined with
variance intrinsic to the methods, can make reported results tough to
interpret. Without significance metrics and tighter standardization of
experimental reporting, it is difficult to determine whether improvements over
the prior state-of-the-art are meaningful. In this paper, we investigate
challenges posed by reproducibility, proper experimental techniques, and
reporting procedures. We illustrate the variability in reported metrics and
results when comparing against common baselines and suggest guidelines to make
future results in deep RL more reproducible. We aim to spur discussion about
how to ensure continued progress in the field by minimizing wasted effort
stemming from results that are non-reproducible and easily misinterpreted. | http://arxiv.org/pdf/1709.06560 | Peter Henderson, Riashat Islam, Philip Bachman, Joelle Pineau, Doina Precup, David Meger | cs.LG, stat.ML | Accepted to the Thirthy-Second AAAI Conference On Artificial
Intelligence (AAAI), 2018 | null | cs.LG | 20170919 | 20190130 | arXiv:1709.06560v3 [cs.LG] 30 Jan 2019
# Deep Reinforcement Learning that Matters
Peter Henderson1*, Riashat Islam1,2*, Philip Bachman2, Joelle Pineau1, Doina Precup1, David Meger1 1 McGill University, Montreal, Canada 2 Microsoft Maluuba, Montreal, Canada {peter.henderson,riashat.islam}@mail.mcgill.ca, phbachma@microsoft.com {jpineau,dprecup}@cs.mcgill.ca, dmeger@cim.mcgill.ca
# Abstract
In recent years, significant progress has been made in solving challenging problems across various domains using deep reinforcement learning (RL). Reproducing existing work and accurately judging the improvements offered by novel methods is vital to sustaining this progress. Unfortunately, reproducing results for state-of-the-art deep RL methods is seldom straightforward. In particular, non-determinism in standard benchmark environments, combined with variance intrinsic to the methods, can make reported results tough to interpret. Without significance metrics and tighter standardization of experimental reporting, it is difficult to determine whether improvements over the prior state-of-the-art are meaningful. In this paper, we investigate challenges posed by reproducibility, proper experimental techniques, and reporting procedures. We illustrate the variability in reported metrics and results when comparing against common baselines and suggest guidelines to make future results in deep RL more reproducible. We aim to spur discussion about how to ensure continued progress in the field by minimizing wasted effort stemming from results that are non-reproducible and easily misinterpreted.
Introduction Reinforcement learning (RL) is the study of how an agent can interact with its environment to learn a policy which maximizes expected cumulative rewards for a task. Recently, RL has experienced dramatic growth in attention and interest due to promising results in areas like: controlling continuous systems in robotics (Lillicrap et al. 2015a), playing Go (Silver et al. 2016), Atari (Mnih et al. 2013), and competitive video games (Vinyals et al. 2017; Silva and Chaimowicz 2017). Figure 1 illustrates growth of the field through the number of publications per year. To maintain rapid progress in RL research, it is important that existing works can be easily reproduced and compared to accurately judge improvements offered by novel methods.
However, reproducing deep RL results is seldom straightforward, and the literature reports a wide range of results for the same baseline algorithms (Islam et al. 2017). Reproducibility can be affected by extrinsic factors (e.g. hyperparameters or codebases) and intrinsic factors (e.g.
*These two authors contributed equally
Copyright © 2018, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
Figure 1: Growth of published reinforcement learning papers. Shown are the number of RL-related publications (y-axis) per year (x-axis) scraped from Google Scholar searches.
effects of random seeds or environment properties). We investigate these sources of variance in reported results through a representative set of experiments. For clarity, we focus our investigation on policy gradient (PG) methods in continuous control. Policy gradient methods with neural network function approximators have been particularly successful in continuous control (Schulman et al. 2015a; 2017; Lillicrap et al. 2015b) and are competitive with value-based methods in discrete settings. We note that the diversity of metrics and lack of significance testing in the RL literature creates the potential for misleading reporting of results. We demonstrate possible benefits of significance testing using techniques common in machine learning and statistics.
Several works touch upon evaluating RL algorithms. Duan et al. (2016) benchmark several RL algorithms and provide the community with baseline implementations. Generalizable RL evaluation metrics are proposed in (Whiteson et al. 2011). Machado et al. (2017) revisit the Arcade Learning Environment to propose better evaluation methods in these benchmarks. However, while the question of reproducibility and good experimental practice has been examined in related fields (Wagstaff 2012; Boulesteix, Lauer, and Eugster 2013; Stodden, Leisch, and Peng 2014; Bouckaert and Frank 2004; Bouckaert 2004; Vaughan and Wawerla 2012), to the best of our knowledge this is the first work to address this important question in the context of deep RL.
In each section of our experimental analysis, we pose questions regarding key factors affecting reproducibility. We find that there are numerous sources of non-determinism when reproducing and comparing RL algorithms. To this end, we show that fine details of experimental procedure can be critical. Based on our experiments, we conclude with possible recommendations, lines of investigation, and points of discussion for future works to ensure that deep reinforcement learning is reproducible and continues to matter.
Technical Background This work focuses on several model-free policy gradient algorithms with publicly available implementations which appear frequently in the literature as baselines for comparison against novel methods. We experiment with Trust Region Policy Optimization (TRPO) (Schulman et al. 2015a), Deep Deterministic Policy Gradients (DDPG) (Lillicrap et al. 2015b), Proximal Policy Optimization (PPO) (Schulman et al. 2017), and Actor Critic using Kronecker-Factored Trust Region (ACKTR) (Wu et al. 2017). These methods have shown promising results in continuous control MuJoCo domain tasks (Todorov, Erez, and Tassa 2012) from OpenAI Gym (Brockman et al. 2016). Generally, they optimize $\rho(\theta, s_0) = \mathbb{E}_{\pi_\theta}\left[\sum_{t=0}^{\infty} \gamma^t r(s_t) \mid s_0\right]$, using the policy gradient theorem: $\frac{\partial \rho(\theta, s_0)}{\partial \theta} = \sum_s \mu_{\pi_\theta}(s \mid s_0) \sum_a \frac{\partial \pi_\theta(a \mid s)}{\partial \theta} Q_{\pi_\theta}(s, a)$. Here, $\mu_{\pi_\theta}(s \mid s_0) = \sum_{t=0}^{\infty} \gamma^t P(s_t = s \mid s_0)$ (Sutton et al. 2000). TRPO (Schulman et al. 2015a) and PPO (Schulman et al. 2017) use constraints and advantage estimation to perform this update, reformulating the optimization problem as: $\max_\theta \mathbb{E}_t\left[\frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\mathrm{old}}}(a_t \mid s_t)} A_t(s_t, a_t)\right]$. Here, $A_t$ is the generalized advantage function (Schulman et al. 2015b). TRPO uses conjugate gradient descent as the optimization method with a KL constraint: $\mathbb{E}_t\left[\mathrm{KL}\left[\pi_{\theta_{\mathrm{old}}}(\cdot \mid s_t), \pi_\theta(\cdot \mid s_t)\right]\right] \le \delta$. PPO reformulates the constraint as a penalty (or clipping objective). DDPG and ACKTR use actor-critic methods which estimate $Q(s, a)$ and optimize a policy that maximizes the Q-function based on Monte-Carlo rollouts. DDPG does this using deterministic policies, while ACKTR uses Kronecker-factored trust regions to ensure stability with stochastic policies.
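For reference, the clipped surrogate used by PPO, built from the probability ratio and advantage estimates described above, can be sketched as follows; tensor shapes, the clipping coefficient and the synthetic batch are illustrative, and this is not the OpenAI Baselines implementation used in the experiments.

```python
# Sketch of PPO's clipped surrogate: ratio = pi_theta(a_t|s_t) / pi_theta_old(a_t|s_t),
# multiplied by the advantage A_t, with the trust-region constraint turned into clipping.
import torch

def ppo_clip_objective(log_probs_new, log_probs_old, advantages, clip_eps=0.2):
    ratio = torch.exp(log_probs_new - log_probs_old)          # pi_theta / pi_theta_old
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    # PPO maximizes the minimum of the two terms (a pessimistic bound on the surrogate).
    return torch.min(unclipped, clipped).mean()

# Toy batch of timesteps; in practice these come from rollouts and a GAE estimator.
adv = torch.randn(128)
old_lp = torch.randn(128)
new_lp = old_lp + 0.05 * torch.randn(128, requires_grad=True)
loss = -ppo_clip_objective(new_lp, old_lp, adv)               # minimize the negative objective
loss.backward()
```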
Experimental Analysis We pose several questions about the factors affecting reproducibility of state-of-the-art RL methods. We perform a set of experiments designed to provide insight into the questions posed. In particular, we investigate the effects of: specific hyperparameters on algorithm performance if not properly tuned; random seeds and the number of averaged experiment trials; specific environment characteristics; differences in algorithm performance due to stochastic environments; differences due to codebases with most other factors held constant. For most of our experiments1, except for those comparing codebases, we generally use the OpenAI Baselines2 implementations of the following algorithms: ACKTR (Wu et al. 2017), PPO (Schulman et al. 2017), DDPG (Plappert et al. 2017), TRPO (Schulman et al. 2017). We use the Hopper-v1 and HalfCheetah-v1 MuJoCo (Todorov, Erez, and Tassa 2012) environments from OpenAI Gym (Brockman et al. 2016). These two environments provide contrasting dynamics (the former being more unstable).
1Specific details can be found in the supplemental and code can be found at: https://git.io/vFHnf
2https://www.github.com/openai/baselines
To ensure fairness we run five experiment trials for each evaluation, each with a different preset random seed (all experiments use the same set of random seeds). In all cases, we highlight important results here, with full descriptions of experimental setups and additional learning curves included in the supplemental material. Unless otherwise mentioned, we use default settings whenever possible, while modifying only the hyperparameters of interest. All results (including graphs) show mean and standard error across random seeds. We use multilayer perceptron function approximators in all cases. We denote the hidden layer sizes and activations as (N, M, activation). For default settings, we vary the hyperparameters under investigation one at a time. For DDPG we use a network structure of (64, 64, ReLU) for both actor and critic. For TRPO and PPO, we use (64, 64, tanh) for the policy. For ACKTR, we use (64, 64, tanh) for the actor and (64, 64, ELU) for the critic.
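For convenience, the default architectures listed above can be summarized as a small configuration dictionary; this is only a restatement of the text, not code taken from the experiments.

```python
# Default network settings described above (hidden layer sizes and activations per algorithm).
DEFAULT_ARCH = {
    "DDPG":  {"actor": ((64, 64), "relu"), "critic": ((64, 64), "relu")},
    "TRPO":  {"policy": ((64, 64), "tanh")},
    "PPO":   {"policy": ((64, 64), "tanh")},
    "ACKTR": {"actor": ((64, 64), "tanh"), "critic": ((64, 64), "elu")},
}
```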
Hyperparameters What is the magnitude of the effect hyperparameter settings can have on baseline performance?
Tuned hyperparameters play a large role in eliciting the best results from many algorithms. However, the choice of optimal hyperparameter configuration is often not consistent in related literature, and the range of values considered is often not reported3. Furthermore, poor hyperparameter selection can be detrimental to a fair comparison against baseline algorithms. Here, we investigate several aspects of hyperparameter selection on performance.
Network Architecture How does the choice of network architecture for the policy and value function approximation affect performance?
In (Islam et al. 2017), it is shown that policy network architecture can significantly impact results in both TRPO and DDPG. Furthermore, certain activation functions such as Rectified Linear Unit (ReLU) have been shown to cause worsened learning performance due to the "dying relu" problem (Xu et al. 2015). As such, we examine network architecture and activation functions for both policy and value function approximators. In the literature, similar lines of investigation have shown the differences in performance when comparing linear approximators, RBFs, and neural networks (Rajeswaran et al. 2017). Tables 1 and 2 summarize the final evaluation performance of all architectural variations after training on 2M samples (i.e. 2M timesteps in the environment). All learning curves and details on setup can be found in the supplemental material. We vary hyperparameters one at a time, while using a default setting for all others. We investigate three multilayer perceptron (MLP) architectures commonly seen in the literature: (64, 64), (100, 50, 25), and (400, 300). Furthermore, we vary the activation functions of both the value and policy networks across tanh, ReLU, and Leaky ReLU activations. Results Figure 2 shows how significantly performance can be affected by simple changes to the policy or value network
3A sampled literature review can be found in the supplemental.
Figure 2: Significance of Policy Network Structure and Activation Functions PPO (left), TRPO (middle) and DDPG (right).
Figure 3: DDPG reward rescaling on HalfCheetah-v1, with and without layer norm.
activations. We find that usually ReLU or Leaky ReLU activations perform the best across environments and algorithms. The effects are not consistent across algorithms or environments. This inconsistency demonstrates how interconnected network architecture is to algorithm methodology. For example, using a large network with PPO may require tweaking other hyperparameters such as the trust region clipping or learning rate to compensate for the architectural change4. This intricate interplay of hyperparameters is one of the reasons reproducing current policy gradient methods is so difficult. It is exceedingly important to choose an appropriate architecture for proper baseline results. This also suggests a possible need for hyperparameter agnostic algorithms (that is, algorithms that incorporate hyperparameter adaptation as part of the design) such that fair comparisons can be made without concern about improper settings for the task at hand.
Reward Scale How can the reward scale affect results? Why is reward rescaling used?
Reward rescaling has been used in several recent works (Duan et al. 2016; Gu et al. 2016) to improve results for DDPG. This involves simply multiplying the rewards generated from an environment by some scalar (r̂ = rσ̂) for training. Often, these works report using a reward scale of σ̂ = 0.1. In Atari domains, this is akin to clipping the rewards to [0, 1]. By intuition, in gradient based methods (as used in most deep RL) a large and sparse output scale can result in problems regarding saturation and inefficiency in learning (LeCun et al. 2012; Glorot and Bengio 2010; Vincent, de Brébisson, and Bouthillier 2015). Therefore clipping or rescaling rewards compresses the space of estimated
4We find that the KL divergence of updates with the large network (400, 300) seen in Figure 2 is on average 33.52 times higher than the KL divergence of updates with the (64, 64) network.
expected returns in action value function based methods such as DDPG. We run a set of experiments using reward rescaling in DDPG (with and without layer normalization) for insights into how this aspect affects performance.
Results Our analysis shows that reward rescaling can have a large effect (full experiment results can be found in the supplemental material), but results were inconsistent across environments and scaling values. Figure 3 shows one such example where reward rescaling affects results, causing a failure to learn in small settings below σ̂ = 0.01. In particular, layer normalization changes how the rescaling factor affects results, suggesting that these impacts are due to the use of deep networks and gradient-based methods. With the value function approximator tracking a moving target distribution, this can potentially affect learning in unstable environments where a deep Q-value function approximator is used. Furthermore, some environments may have untuned reward scales (e.g. the HumanoidStandup-v1 of OpenAI gym which can reach rewards in the scale of millions). Therefore, we suggest that this hyperparameter has the potential to have a large impact if considered properly. Rather than rescaling rewards in some environments, a more principled approach should be taken to address this. An initial foray into this problem is made in (van Hasselt et al. 2016), where the authors adaptively rescale reward targets with normalized stochastic gradient, but further research is needed.
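A sketch of the reward rescaling being discussed (r̂ = rσ̂), written as a Gym-style reward wrapper, is given below; the wrapper class and its use are illustrative assumptions rather than the rescaling code used in the DDPG implementations above.

```python
# Reward rescaling r_hat = r * sigma_hat applied as an environment wrapper.
import gym

class RewardScaleWrapper(gym.RewardWrapper):
    def __init__(self, env, scale=0.1):
        super().__init__(env)
        self.scale = scale

    def reward(self, reward):
        return reward * self.scale   # r_hat = r * sigma_hat

# Example (requires MuJoCo and the corresponding Gym environments to be installed):
# env = RewardScaleWrapper(gym.make("HalfCheetah-v1"), scale=0.1)
```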
Random Seeds and Trials Can random seeds drastically alter performance? Can one distort results by averaging an improper number of trials?
A major concern with deep RL is the variance in results due to environment stochasticity or stochasticity in the learning process (e.g. random weight initialization). As such, even averaging several learning results together across totally different random seeds can lead to the reporting of misleading results. We highlight this in the form of an experiment.
Algorithm TRPO (Schulman et al. 2015a) TRPO (Duan et al. 2016) TRPO (Schulman et al. 2017) PPO (Schulman et al. 2017) DDPG (Plappert et al. 2017) DDPG (Gu et al. 2016) DDPG (Duan et al. 2016) ACKTR (Wu et al. 2017) Environment Hopper-v1 HalfCheetah-v1 Hopper-v1 HalfCheetah-v1 Hopper-v1 HalfCheetah-v1 Hopper-v1 HalfCheetah-v1 Hopper-v1 HalfCheetah-v1 Hopper-v1 HalfCheetah-v1 Hopper-v1 HalfCheetah-v1 Hopper-v1 HalfCheetah-v1 400,300 2980 ± 35 1791 ± 224 1243 ± 55 738 ± 240 2909 ± 87 -155 ± 188 61 ± 33 -1180 ± 444 1419 ± 313 5579 ± 354 600 ± 126 2845 ± 589 506 ± 208 850 ± 41 2577 ± 529 2653 ± 408 64,64 2674 ± 227 1939 ± 140 1303 ± 89 834 ± 317 2828 ± 70 205 ± 256 2790 ± 62 2201 ± 323 1632 ± 459 4198 ± 606 593 ± 155 2771 ± 535 749 ± 271 1573 ± 385 1608 ± 66 2691 ± 231 100,50,25 3110 ± 78 2151 ± 27 1243 ± 55 850±378 2812 ± 88 306 ± 261 2592 ± 196 1314 ± 340 2142 ± 436 5600 ± 601 501 ± 129 1638 ± 624 629 ± 138 1224 ± 553 2287 ± 946 2498 ± 112 tanh 2674 ± 227 1939 ± 140 1303 ± 89 834 ± 317 2828 ± 70 205 ± 256 2790 ± 62 2201 ± 323 1491 ± 205 5325 ± 281 436 ± 48 1638 ± 624 354 ± 91 1311 ± 271 1608 ± 66 2621 ± 381 ReLU 2772 ± 211 3041 ± 161 1131 ± 65 784 ± 352 2941 ± 91 1045 ± 114 2695 ± 86 2971 ± 364 1632 ± 459 4198 ± 606 593 ± 155 2771 ± 535 749 ± 271 1573 ± 385 2835 ± 503 2160 ± 151 LeakyReLU - - 1341± 127 1139 ±364 2865 ± 189 778 ± 177 2587 ± 53 2895 ± 365 1384 ± 285 4094 ± 233 319 ± 127 1405± 511 - - 2718 ± 434 2691 ± 231
Table 1: Results for our policy architecture permutations across various implementations and algorithms. Final average ± standard error across 5 trials of returns across the last 100 trajectories after 2M training samples. For ACKTR, we use ELU activations instead of leaky ReLU.
Algorithm TRPO (Schulman et al. 2015a) TRPO (Schulman et al. 2017) PPO (Schulman et al. 2017) DDPG (Plappert et al. 2017) DDPG (Gu et al. 2016) DDPG (Duan et al. 2016) ACKTR (Wu et al. 2017) Environment Hopper-v1 HalfCheetah-v1 Hopper-v1 HalfCheetah-v1 Hopper-v1 HalfCheetah-v1 Hopper-v1 HalfCheetah-v1 Hopper-v1 HalfCheetah-v1 Hopper-v1 HalfCheetah-v1 Hopper-v1 HalfCheetah-v1 400,300 3011 ± 171 2355 ± 48 2909 ± 87 178 ± 242 2704 ± 37 1523 ± 297 1419 ± 312 5600 ± 601 523 ± 248 1373 ± 678 1208 ± 423 789 ± 91 152 ± 47 518 ± 632 64,64 2674 ± 227 1939 ± 140 2828 ± 70 205 ± 256 2790 ± 62 2201 ± 323 1632 ± 458 4197 ± 606 343 ± 34 1717 ± 508 394 ± 144 1095 ± 139 1930 ± 185 3018 ± 386 100,50,25 2782 ± 120 1673 ± 148 2812 ± 88 172 ± 257 2969 ± 111 1807 ± 309 1569 ± 453 4713 ± 374 345 ± 44 1868 ± 620 380 ± 65 988 ± 52 1589 ± 225 2554 ± 219 tanh 2674 ± 227 1939 ± 140 2828 ± 70 205 ± 256 2790 ± 62 2201 ± 323 971 ± 137 3908 ± 293 436 ± 48 1128 ± 511 354 ± 91 1311 ± 271 691 ± 55 2547 ± 172 ReLU 3104 ± 84 2281 ± 91 2829 ± 76 235 ± 260 2687 ± 144 1288 ± 12 852 ± 143 4197 ± 606 343 ± 34 1717 ± 508 394 ± 144 1095 ± 139 500 ± 379 3362 ± 682 LeakyReLU - - 3047 ± 68 325 ± 208 2748 ± 77 1227 ± 462 843 ± 160 5324 ± 280 - - - - 1930 ± 185 3018 ± 38
Table 2: Results for our value function (Q or V ) architecture permutations across various implementations and algorithms. Final average ± standard error across 5 trials of returns across the last 100 trajectories after 2M training samples. For ACKTR, we use ELU activations instead of leaky ReLU.
Figure 4: Performance of several policy gradient algorithms across benchmark MuJoCo environment suites
| Environment | DDPG | ACKTR | TRPO | PPO |
| --- | --- | --- | --- | --- |
| HalfCheetah-v1 | 5037 (3664, 6574) | 3888 (2288, 5131) | 1254.5 (999, 1464) | 3043 (1920, 4165) |
| Hopper-v1 | 1632 (607, 2370) | 2546 (1875, 3217) | 2965 (2854, 3076) | 2715 (2589, 2847) |
| Walker2d-v1 | 1582 (901, 2174) | 2285 (1246, 3235) | 3072 (2957, 3183) | 2926 (2514, 3361) |
| Swimmer-v1 | 31 (21, 46) | 50 (42, 55) | 214 (141, 287) | 107 (101, 118) |

Table 3: Bootstrap mean and 95% confidence bounds for a subset of environment experiments. 10k bootstrap iterations and the pivotal method were used.
Figure 5: TRPO on HalfCheetah-v1 using the same hyperparameter configurations averaged over two sets of 5 different random seeds each. The average 2-sample t-test across the entire training distribution resulted in t = −
Results We perform 10 experiment trials, for the same hyperparameter configuration, only varying the random seed across all 10 trials. We then split the trials into two sets of 5 and average these two groupings together. As shown in Figure 5, we find that the performance of algorithms can be drastically different. We demonstrate that the variance between runs is enough to create statistically different distributions just from varying random seeds. Unfortunately, in recent reported results, it is not uncommon for the top-N trials to be selected from among several trials (Wu et al. 2017; Mnih et al. 2016) or averaged over only a small number of trials (N < 5) (Gu et al. 2017; Wu et al. 2017). Our experiment with random seeds shows that this can be potentially misleading. Particularly for HalfCheetah, it is possible to get learning curves that do not fall within the same distribution at all, just by averaging different runs with the same hyperparameters, but different random seeds. While there can be no specific number of trials specified as a recommendation, it is possible that power analysis methods can be used to give a general idea to this extent as we will discuss later. However, more investigation is needed to answer this open problem.
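The seed-splitting comparison described here can be sketched as follows: run trials that differ only in the random seed, average two disjoint groups of five, and test whether their final returns plausibly come from the same distribution. The synthetic returns below are placeholders, not results from the paper.

```python
# Split N = 10 seed-only-varying trials into two groups of 5 and compare them.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
final_returns = rng.normal(loc=3000, scale=800, size=10)   # one final return per random seed

group_a, group_b = final_returns[:5], final_returns[5:]    # two "random averages (5 runs)"
t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=False)   # Welch's 2-sample t-test
print(group_a.mean(), group_b.mean(), t_stat, p_value)
```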
Environments How do the environment properties affect variability in reported RL algorithm performance?
To assess how the choice of evaluation environment can affect the presented results, we use our aforementioned default set of hyperparameters across our chosen testbed of algorithms and investigate how well each algorithm performs across an extended suite of continuous control tasks. For these experiments, we use the following environments from OpenAI Gym: Hopper-v1, HalfCheetah-v1, Swimmer-v1 and Walker2d-v1. The choice of environment often plays an important role in demonstrating how well a new proposed algorithm performs against baselines. In continuous control tasks, often the environments have random stochasticity, shortened trajectories, or different dynamic properties. We demonstrate that, as a result of these differences, algorithm performance can vary across environments and the best performing algorithm across all environments is not always clear. Thus it is increasingly important to present results for a wide range of
environments and not only pick those which show a novel work outperforming other methods.
Results As shown in Figure 4, in environments with stable dynamics (e.g. HalfCheetah-v1), DDPG outperforms all other algorithms. However, as dynamics become more unstable (e.g. in Hopper-v1) performance gains rapidly diminish. As DDPG is an off-policy method, exploration noise can cause sudden failures in unstable environments. Therefore, learning a proper Q-value estimation of expected returns is difficult, particularly since many exploratory paths will result in failure. Since failures in such tasks are characterized by shortened trajectories, a local optimum in this case would be simply to survive until the maximum length of the trajectory (corresponding to one thousand timesteps and similar reward due to a survival bonus in the case of Hopper-v1). As can be seen in Figure 4, DDPG with Hopper does exactly this. This is a clear example where showing only the favourable and stable HalfCheetah when reporting DDPG-based experiments would be unfair.
Furthermore, let us consider the Swimmer-v1 environment shown in Figure 4. Here, TRPO significantly outperforms all other algorithms. Due to the dynamics of the water-like environment, a local optimum for the system is to curl up and flail without proper swimming. However, this corresponds to a return of about 130. By reaching a local optimum, learning curves can indicate successful optimization of the policy over time, when in reality the returns achieved are not qualitatively representative of learning the desired behaviour, as demonstrated in video replays of the learned policy5. Therefore, it is important to show not only returns but demonstrations of the learned policy in action. Without understanding what the evaluation returns indicate, it is possible that misleading results can be reported which in reality only optimize local optima rather than reaching the desired behaviour.
Codebases Are commonly used baseline implementations comparable?
In many cases, authors implement their own versions of baseline algorithms to compare against. We investigate the OpenAI baselines implementation of TRPO as used in (Schulman et al. 2017), the original TRPO code (Schulman et al. 2015a), and the rllab (Duan et al. 2016) Tensorflow implementation of TRPO. We also compare the rllab Theano (Duan et al. 2016), rllabplusplus (Gu et al. 2016), and OpenAI baselines (Plappert et al. 2017) implementations of DDPG. Our goal is to draw attention to the variance due to implementation details across algorithms. We run a subset of our architecture experiments as with the OpenAI baselines implementations using the same hyperparameters as in those experiments6.
Results We find that implementation differences which are often not reflected in publications can have dramatic impacts on performance. This can be seen for our final evaluation performance after training on 2M samples in Tables 1 and 2, as well as a sample comparison in Figure 6. This
5https://youtu.be/lKpUQYjgm80 6Differences are discussed in the supplemental (e.g. use of different optimizers for the value function baseline). Leaky ReLU activations are left out to narrow the experiment scope.
Figure 6: TRPO codebase comparison using our default set of hyperparameters (as used in other experiments).
demonstrates the necessity that implementation details be enumerated, codebases packaged with publications, and that performance of baseline experiments in novel works matches the original baseline publication code.
# Reporting Evaluation Metrics
In this section we analyze some of the evaluation metrics commonly used in the reinforcement learning literature. In practice, RL algorithms are often evaluated by simply presenting plots or tables of average cumulative reward (average returns) and, more recently, of maximum reward achieved over a fixed number of timesteps. Due to the unstable nature of many of these algorithms, simply reporting the maximum returns is typically inadequate for fair comparison; even reporting average returns can be misleading as the range of performance across seeds and trials is unknown. Alone, these may not provide a clear picture of an algorithm's range of performance. However, when combined with confidence intervals, this may be adequate to make an informed decision given a large enough number of trials. As such, we investigate using the bootstrap and significance testing as in ML (Kohavi and others 1995; Bouckaert and Frank 2004; Nadeau and Bengio 2000) to evaluate algorithm performance. Online View vs. Policy Optimization An important distinction when reporting results is the online learning view versus the policy optimization view of RL. In the online view, an agent will optimize the returns across the entire learning process and there is not necessarily an end to the agent's trajectory. In this view, evaluations can use the average cumulative rewards across the entire learning process (balancing exploration and exploitation) as in (Hofer and Gimbert 2016), or can possibly use offline evaluation as in (Mandel et al. 2016). The alternate view corresponds to policy optimization, where evaluation is performed using a target policy in an offline manner. In the policy optimization view it is important to
run evaluations across the entire length of the task trajectory with a single target policy to determine the average returns that the target can obtain. We focus on evaluation methods for the policy optimization view (with offline evaluation), but the same principles can be applied to the online view.
Confidence Bounds The sample bootstrap has been a popular method to gain insight into a population distribution from a smaller sample (Efron and Tibshirani 1994). Bootstrap methods are particularly popular for A/B testing, and we can borrow some ideas from this field. Generally a bootstrap estimator is obtained by resampling with replacement many times to generate a statistically relevant mean and confidence bound. Using this technique, we can gain insight into what is the 95% confidence interval of the results from our section on environments. Table 3 shows the bootstrap mean and 95% confidence bounds on our environment experiments. Confidence intervals can vary wildly between algorithms and environments. We find that TRPO and PPO are the most stable with small confidence bounds from the bootstrap. In cases where confidence bounds are exceedingly large, it may be necessary to run more trials (i.e. increase the sample size). Power Analysis Another method to determine if the sample size must be increased is bootstrap power analysis (Tufféry 2011; Yuan and Hayashi 2003). If we use our sample and give it some uniform lift (for example, scaling uniformly by 1.25), we can run many bootstrap simulations and determine what percentage of the simulations result in statistically significant values with the lift. If there is a small percentage of significant values, a larger sample size is needed (more trials must be run). We do this across all environment experiment trial runs and indeed find that, in more unstable settings, the bootstrap power percentage leans towards insignificant results in the lift experiment. Conversely, in stable trials (e.g. TRPO on Hopper-v1) with a small sample size, the lift experiment shows that no more trials are needed to generate significant comparisons. These results are provided in the supplemental material.
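A sketch of both procedures, bootstrap confidence bounds and a bootstrap power analysis with a uniform lift, is given below. It uses the percentile bootstrap rather than the pivotal method cited for Table 3, and the sample values, iteration counts and the significance criterion in the power loop are illustrative assumptions.

```python
# Bootstrap mean with 95% confidence bounds, plus a simple bootstrap power analysis.
import numpy as np

rng = np.random.default_rng(0)
returns = rng.normal(loc=2500, scale=600, size=5)        # placeholder final returns of 5 trials

def bootstrap_mean_ci(x, iters=10_000, alpha=0.05):
    means = np.array([rng.choice(x, size=len(x), replace=True).mean() for _ in range(iters)])
    lo, hi = np.percentile(means, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return x.mean(), lo, hi

def bootstrap_power(x, lift=1.25, iters=200, inner=500, alpha=0.05):
    # Fraction of simulations where the lifted sample is significantly above the original mean.
    significant = 0
    for _ in range(iters):
        lifted = rng.choice(x * lift, size=len(x), replace=True)
        _, lo, _ = bootstrap_mean_ci(lifted, iters=inner, alpha=alpha)
        significant += lo > x.mean()
    return significant / iters

print(bootstrap_mean_ci(returns))
print(bootstrap_power(returns))
```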
Significance An important factor when deciding on an RL algorithm to use is the significance of the reported gains based on a given metric. Several works have investigated the use of significance metrics to assess the reliability of reported evaluation metrics in ML. However, few works in reinforcement learning assess the significance of reported metrics. Based on our experimental results which indicate that algorithm performance can vary wildly based simply on perturbations of random seeds, it is clear that some metric is necessary for assessing the significance of algorithm performance gains and the confidence of reported metrics. While more research and investigation is needed to determine the best metrics for assessing RL algorithms, we investigate an initial set of metrics based on results from ML.
In supervised learning, k-fold t-test, corrected resampled t-test, and other significance metrics have been discussed when comparing machine learning results (Bouckaert and Frank 2004; Nadeau and Bengio 2000). However, the assumptions pertaining to the underlying data with corrected metrics do not necessarily apply in RL. Further work is needed to investigate proper corrected significance tests for RL. Nonetheless, we explore several significance measures which give insight
into whether a novel algorithm is truly performing as the state-of-the-art. We consider the simple 2-sample t-test (sorting all final evaluation returns across N random trials with different random seeds); the Kolmogorov-Smirnov test (Wilcox 2005); and bootstrap percent differences with 95% confidence intervals. All calculated metrics can be found in the supplemental. Generally, we find that the significance values match up to what is to be expected. Take, for example, comparing Walker2d-v1 performance of ACKTR vs. DDPG. ACKTR performs slightly better, but this performance is not significant due to the overlapping confidence intervals of the two: t = 1.03, p = 0.334, KS = 0.40, p = 0.697, bootstrapped percent difference 44.47% (-80.62%, 111.72%).
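The three measures used here can be computed along the following lines; the synthetic return values stand in for per-seed final evaluation returns and are not the numbers reported above.

```python
# 2-sample t-test, Kolmogorov-Smirnov test, and bootstrapped percent difference with a 95% CI.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
algo_a = rng.normal(2285, 700, size=5)     # placeholder final returns for algorithm A
algo_b = rng.normal(1582, 650, size=5)     # placeholder final returns for algorithm B

t, p_t = stats.ttest_ind(algo_a, algo_b, equal_var=False)
ks, p_ks = stats.ks_2samp(algo_a, algo_b)

def bootstrap_percent_difference(a, b, iters=10_000, alpha=0.05):
    diffs = []
    for _ in range(iters):
        ra = rng.choice(a, size=len(a), replace=True).mean()
        rb = rng.choice(b, size=len(b), replace=True).mean()
        diffs.append(100.0 * (ra - rb) / abs(rb))
    lo, hi = np.percentile(diffs, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return np.mean(diffs), lo, hi

print(t, p_t, ks, p_ks, bootstrap_percent_difference(algo_a, algo_b))
```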
Discussion and Conclusion Through experimental methods focusing on PG methods for continuous control, we investigate problems with reproducibility in deep RL. We find that both intrinsic (e.g. random seeds, environment properties) and extrinsic sources (e.g. hyperparameters, codebases) of non-determinism can contribute to difficulties in reproducing baseline algorithms. Moreover, we find that highly varied results due to intrinsic sources bolster the need for using proper significance analysis. We propose several such methods and show their value on a subset of our experiments.
What recommendations can we draw from our experiments?
Based on our experimental results and investigations, we can provide some general recommendations. Hyperparameters can have significantly different effects across algorithms and environments. Thus it is important to find the working set which at least matches the original reported performance of baseline algorithms through standard hyperparameter searches. Similarly, new baseline algorithm implementations used for comparison should match the original codebase results if available. Overall, due to the high variance across trials and random seeds of reinforcement learning algorithms, many trials must be run with different random seeds when comparing performance. Unless random seed selection is explicitly part of the algorithm, averaging multiple runs over different random seeds gives insight into the population distribution of the algorithm performance on an environment. Similarly, due to these effects, it is important to perform proper significance testing to determine if the higher average returns are in fact representative of better performance.
We highlight several forms of significance testing and find that they give generally expected results when taking confidence intervals into consideration. Furthermore, we demonstrate that bootstrapping and power analysis are possible ways to gain insight into the number of trial runs necessary to make an informed decision about the significance of algorithm performance gains. In general, however, the most important step to reproducibility is to report all hyperparameters, implementation details, experimental setup, and evaluation methods for both baseline comparison methods and novel work. Without the publication of implementations and related details, wasted effort on reproducing state-of-the-art works will plague the community and slow down progress.
What are possible future lines of investigation?
Due to the significant effects of hyperparameters (particularly reward scaling), another possibly important line of future investigation is in building hyperparameter agnostic algorithms. Such an approach would ensure that there is no unfairness introduced from external sources when comparing algorithms agnostic to parameters such as reward scale, batch size, or network structure. Furthermore, while we investigate an initial set of significance metrics here, they may not be the best fit for comparing RL algorithms. Several works have begun investigating policy evaluation methods for the purposes of safe RL (Thomas and Brunskill 2016; Thomas, Theocharous, and Ghavamzadeh 2015), but further work is needed in significance testing and statistical analysis. Similar lines of investigation to (Nadeau and Bengio 2000; Bouckaert and Frank 2004) would be helpful to determine the best methods for evaluating performance gain significance.
How can we ensure that deep RL matters?
We discuss many different factors affecting reproducibility of RL algorithms. The sensitivity of these algorithms to changes in reward scale, environment dynamics, and random seeds can be considerable and varies between algorithms and settings. Since benchmark environments are proxies for real-world applications to gauge generalized algorithm performance, perhaps more emphasis should be placed on the applicability of RL algorithms to real-world tasks. That is, as there is often no clear winner among all benchmark environments, perhaps recommended areas of application should be demonstrated along with benchmark environment results when presenting a new algorithm. Maybe new methods should be answering the question: in what setting would this work be useful? This is something that is addressed for machine learning in (Wagstaff 2012) and may warrant more discussion for RL. As a community, we must not only ensure reproducible results with fair comparisons, but we must also consider what are the best ways to demonstrate that RL continues to matter.
Acknowledgements
We thank NSERC, CIFAR, the Open Philanthropy Project, and the AWS Cloud Credits for Research Program.
References Bouckaert, R. R., and Frank, E. 2004. Evaluating the replicability of signiï¬cance tests for comparing learning algorithms. In PAKDD, 3â12. Springer. Bouckaert, R. R. 2004. Estimating replicability of classiï¬er learning experiments. In Proceedings of the 21st International Conference on Machine Learning (ICML). Boulesteix, A.-L.; Lauer, S.; and Eugster, M. J. 2013. A plea for neutral comparison studies in computational sciences. PloS one 8(4):e61562. Brockman, G.; Cheung, V.; Pettersson, L.; Schneider, J.; Schulman, J.; Tang, J.; and Zaremba, W. 2016. OpenAI gym. arXiv preprint arXiv:1606.01540. Duan, Y.; Chen, X.; Houthooft, R.; Schulman, J.; and Abbeel, P. 2016. Benchmarking deep reinforcement learning for continuous control. In Proceedings of the 33rd International Conference on Machine Learning (ICML).
Efron, B., and Tibshirani, R. J. 1994. An introduction to the boot- strap. CRC press. Glorot, X., and Bengio, Y. 2010. Understanding the difï¬culty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artiï¬cial Intelligence and Statistics, 249â256. Gu, S.; Lillicrap, T.; Ghahramani, Z.; Turner, R. E.; and Levine, S. 2016. Q-prop: Sample-efï¬cient policy gradient with an off-policy critic. arXiv preprint arXiv:1611.02247. Gu, S.; Lillicrap, T.; Ghahramani, Z.; Turner, R. E.; Sch¨olkopf, B.; and Levine, S. 2017. Interpolated policy gradient: Merging on- policy and off-policy gradient estimation for deep reinforcement learning. arXiv preprint arXiv:1706.00387. Hofer, L., and Gimbert, H. 2016. Online reinforcement learning for real-time exploration in continuous state and action markov decision processes. arXiv preprint arXiv:1612.03780. Islam, R.; Henderson, P.; Gomrokchi, M.; and Precup, D. 2017. Reproducibility of benchmarked deep reinforcement learning tasks for continuous control. ICML Reproducibility in Machine Learning Workshop. Kohavi, R., et al. 1995. A study of cross-validation and bootstrap for accuracy estimation and model selection. In IJCAI, volume 14. LeCun, Y. A.; Bottou, L.; Orr, G. B.; and M¨uller, K.-R. 2012. Efï¬- cient backprop. In Neural Networks: Tricks of the Trade. Springer. Lillicrap, T. P.; Hunt, J. J.; Pritzel, A.; Heess, N.; Erez, T.; Tassa, Y.; Silver, D.; and Wierstra, D. 2015a. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971. Lillicrap, T. P.; Hunt, J. J.; Pritzel, A.; Heess, N.; Erez, T.; Tassa, Y.; Silver, D.; and Wierstra, D. 2015b. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971. Machado, M. C.; Bellemare, M. G.; Talvitie, E.; Veness, J.; Hausknecht, M.; and Bowling, M. 2017. Revisiting the arcade learning environment: Evaluation protocols and open problems for general agents. arXiv preprint arXiv:1709.06009. Mandel, T.; Liu, Y.-E.; Brunskill, E.; and Popovic, Z. 2016. Ofï¬ine Evaluation of Online Reinforcement Learning Algorithms. In AAAI. Mnih, V.; Kavukcuoglu, K.; Silver, D.; Graves, A.; Antonoglou, I.; Wierstra, D.; and Riedmiller, M. 2013. Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602. Mnih, V.; Badia, A. P.; Mirza, M.; Graves, A.; Lillicrap, T.; Harley, T.; Silver, D.; and Kavukcuoglu, K. 2016. Asynchronous methods for deep reinforcement learning. In International Conference on Machine Learning, 1928â1937. Nadeau, C., and Bengio, Y. 2000. Inference for the generalization error. In Advances in neural information processing systems. Plappert, M.; Houthooft, R.; Dhariwal, P.; Sidor, S.; Chen, R.; Chen, X.; Asfour, T.; Abbeel, P.; and Andrychowicz, M. 2017. Parameter space noise for exploration. arXiv preprint arXiv:1706.01905. Rajeswaran, A.; Lowrey, K.; Todorov, E.; and Kakade, S. 2017. Towards generalization and simplicity in continuous control. arXiv preprint arXiv:1703.02660. Schulman, J.; Levine, S.; Abbeel, P.; Jordan, M.; and Moritz, P. 2015a. Trust region policy optimization. In Proceedings of the 32nd International Conference on Machine Learning (ICML). Schulman, J.; Moritz, P.; Levine, S.; Jordan, M.; and Abbeel, P. 2015b. High-dimensional continuous control using generalized advantage estimation. arXiv preprint arXiv:1506.02438. Schulman, J.; Wolski, F.; Dhariwal, P.; Radford, A.; and Klimov, O. 2017. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347.
Silva, V. d. N., and Chaimowicz, L. 2017. Moba: a new arena for game ai. arXiv preprint arXiv:1705.10443. Silver, D.; Huang, A.; Maddison, C. J.; Guez, A.; Sifre, L.; Van Den Driessche, G.; Schrittwieser, J.; Antonoglou, I.; Panneershel- vam, V.; Lanctot, M.; et al. 2016. Mastering the game of go with deep neural networks and tree search. Nature 529(7587):484â489. Stadie, B. C.; Abbeel, P.; and Sutskever, I. 2017. Third-person imitation learning. arXiv preprint arXiv:1703.01703. Stodden, V.; Leisch, F.; and Peng, R. D. 2014. reproducible research. CRC Press. Sutton, R. S.; McAllester, D. A.; Singh, S. P.; and Mansour, Y. 2000. Policy gradient methods for reinforcement learning with func- tion approximation. In Advances in neural information processing systems. Thomas, P., and Brunskill, E. 2016. Data-efï¬cient off-policy policy evaluation for reinforcement learning. In International Conference on Machine Learning, 2139â2148. Thomas, P. S.; Theocharous, G.; and Ghavamzadeh, M. 2015. High- Conï¬dence Off-Policy Evaluation. In AAAI. Todorov, E.; Erez, T.; and Tassa, Y. 2012. Mujoco: A physics engine for model-based control. In 2012 IEEE/RSJ International Confer- ence on Intelligent Robots and Systems, IROS 2012, Vilamoura, Algarve, Portugal, October 7-12, 2012, 5026â5033. Tuff´ery, S. 2011. Data mining and statistics for decision making, volume 2. Wiley Chichester. van Hasselt, H. P.; Guez, A.; Hessel, M.; Mnih, V.; and Silver, D. 2016. Learning values across many orders of magnitude. In Advances in Neural Information Processing Systems, 4287â4295. Vaughan, R., and Wawerla, J. 2012. Publishing identiï¬able exper- iment code and conï¬guration is important, good and easy. arXiv preprint arXiv:1204.2235. Vincent, P.; de Br´ebisson, A.; and Bouthillier, X. 2015. Efï¬cient exact gradient update for training deep networks with very large sparse targets. In Advances in Neural Information Processing Sys- tems, 1108â1116. Vinyals, O.; Ewalds, T.; Bartunov, S.; Georgiev, P.; Vezhnevets, A. S.; Yeo, M.; Makhzani, A.; K¨uttler, H.; Agapiou, J.; Schrittwieser, J.; et al. 2017. Starcraft ii: A new challenge for reinforcement learning. arXiv preprint arXiv:1708.04782. Wagstaff, K. 2012. Machine learning that matters. arXiv preprint arXiv:1206.4656. Whiteson, S.; Tanner, B.; Taylor, M. E.; and Stone, P. 2011. Pro- tecting against evaluation overï¬tting in empirical reinforcement learning. In 2011 IEEE Symposium on Adaptive Dynamic Program- ming And Reinforcement Learning, ADPRL 2011, Paris, France, April 12-14, 2011, 120â127. Wilcox, R. 2005. Kolmogorovâsmirnov test. Encyclopedia of biostatistics. Wu, Y.; Mansimov, E.; Liao, S.; Grosse, R.; and Ba, J. 2017. Scal- able trust-region method for deep reinforcement learning using kronecker-factored approximation. arXiv preprint:1708.05144. Xu, B.; Wang, N.; Chen, T.; and Li, M. 2015. Empirical evaluation of rectiï¬ed activations in convolutional network. arXiv preprint arXiv:1505.00853. Yuan, K.-H., and Hayashi, K. 2003. Bootstrap approach to inference and power analysis based on three test statistics for covariance structure models. British Journal of Mathematical and Statistical Psychology 56(1):93â110.
# Supplemental Material
In this supplemental material, we include a detailed review of experiment configurations of related work with policy gradient methods in continuous control MuJoCo (Todorov, Erez, and Tassa 2012) environment tasks from OpenAI Gym (Brockman et al. 2016). We include a detailed list of the hyperparameters and reported metrics typically used in policy gradient literature in deep RL. We also include all our experimental results, with baseline algorithms DDPG (Lillicrap et al. 2015b), TRPO (Schulman et al. 2015a), PPO (Schulman et al. 2017) and ACKTR (Wu et al. 2017), as discussed in the paper. Our experimental results include figures with different hyperparameters (network architectures, activation functions) to highlight the differences these can have across algorithms and environments. Finally, as discussed in the paper, we include a discussion of significance metrics and show how these metrics can be useful for evaluating deep RL algorithms.
# Literature Reviews
# Hyperparameters
In this section, we include a list of hyperparameters that are reported in related literature, as shown in Table 4. Our analysis shows that often there is no consistency in the type of network architectures and activation functions that are used in related literature. As shown in the paper and from our experimental results in later sections, we find, however, that these hyperparameters can have a significant effect on the performance of algorithms across the benchmark environments typically used.
Table 4: Evaluation Hyperparameters of baseline algorithms reported in related literature
Related Work (Algorithm) DDPG TRPO PPO ACKTR Q-Prop (DDPG) Q-Prop (TRPO) IPG (TRPO) Param Noise (DDPG) Param Noise (TRPO) Benchmarking (DDPG) Benchmarking (TRPO) Policy Network 64x64 64x64 64x64 64x64 100x50x25 100x50x25 100x50x25 64x64 64x64 400x300 Policy Network Activation ReLU TanH TanH TanH TanH TanH TanH ReLU TanH ReLU Value Network 64x64 64x64 64x64 64x64 100x100 100x100 100x100 64x64 64x64 400x300 Value Network Activation ReLU TanH TanH ELU ReLU ReLU ReLU ReLU TanH ReLU Reward Scaling 1.0 - - - 0.1 - - - - 0.1 Batch Size 128 5k 2048 2500 64 5k 10k 128 5k 64 100x50x25 TanH 100x50x25 TanH - 25k
# Reported Results on Benchmarked Environments
We then demonstrate how experimental reported results, on two different environments (HalfCheetah-v1 and Hopper-v1), can vary across different related work that uses these algorithms for baseline comparison. We further show the results we get, using the same hyperparameter configuration, but using two different codebase implementations (note that these implementations are often used as baseline codebases to develop algorithms). We highlight that, depending on the codebase used, experimental results can vary significantly.
Table 5: Comparison with Related Reported Results with Hopper Environment Number of Iterations Average Return Max Average Return rllab 500 1183.3 - QProp 500 - 2486 IPG TRPO 500 - 500 - 3668.8 Our Results (rllab) 500 2021.34 3229.1
# Our Results (Baselines) 500 2965.3 3034.4
Table 6: Comparison with Related Reported Results with HalfCheetah Environment Environment Metric TRPO on HalfCheetah Environment Number of Iterations Average Return Max Average Return rllab 500 1914.0 - QProp 500 4734 IPG 500 - 2889 TRPO 500 - 4855 Our Results (rllab) 500 3576.08 5197 Our Results (Baselines) 500 1045.6 1045.6
Work (Mnih et al. 2016) (Schulman et al. 2017) (Duan et al. 2016) (Gu et al. 2017) (Lillicrap et al. 2015b) (Schulman et al. 2015a) (Wu et al. 2017) Number of Trials top-5 3-9 5 (5) 3 5 5 top-2, top-3
Table 7: Number of trials reported during evaluation in various works.
Reported Evaluation Metrics in Related Work
In Table 8 we show the evaluation metrics and reported results in further detail across related work.
Table 8: Reported Evaluation Metrics of baseline algorithms in related literature
Related Work (Algorithm) Environments Timesteps or Episodes or Iterations Evaluation Metrics PPO ACKTR Q-Prop (DDPG) Q-Prop (TRPO) IPG (TRPO) Param Noise (DDPG) Param Noise (TRPO) Benchmarking (DDPG) Benchmarking (TRPO) HalfCheetah Hopper HalfCheetah Hopper HalfCheetah Hopper HalfCheetah Hopper HalfCheetah Hopper HalfCheetah Hopper HalfCheetah Hopper HalfCheetah Hopper HalfCheetah Hopper 1M 1M 6k (eps) 5k (timesteps) 10k (eps) 1M 1M 500 iters (25k eps) 500 iters (925k eps) Average Return 1800 2200 2400 3500 6000 - 4000 - 3000 - 1800 500 3900 2400 2148 267 1914 1183 â¼ â¼ â¼ â¼ â¼ â¼ â¼ Max Return - - 7490 2604 4734 2486 2889 - - - - - - - - Std Error - - - - - - - - - - - - 702 43 150 120
â¼ â¼ â¼ â¼ â¼ â¼ â¼ â¼
Experimental Setup
In this section, we show a detailed analysis of our experimental results, using the same hyperparameter configurations used in related work. Experimental results are included for the OpenAI Gym (Brockman et al. 2016) Hopper-v1 and HalfCheetah-v1 environments, using the policy gradient algorithms DDPG, TRPO, PPO and ACKTR. Our experiments are done using the available codebases from OpenAI rllab (Duan et al. 2016) and OpenAI Baselines. Each of our experiments is performed over 5 experimental trials with different random seeds, and results are averaged over all trials. Unless explicitly specified otherwise (such as in hyperparameter modifications where we alter a hyperparameter under investigation), hyperparameters were as follows. All results (including graphs) show mean and standard error across random seeds.
⢠DDPG
â Policy Network: (64, relu, 64, relu, tanh); Q Network (64, relu, 64, relu, linear) â Normalized observations with running mean ï¬lter â Actor LR: 1e â 4; Critic LR: 1e â 3 â Reward Scale: 1.0 â Noise type: O-U 0.2 â Soft target update Ï = .01 â γ = 0.995 â batch size = 128 â Critic L2 reg 1e â 2
⢠PPO
â Policy Network: (64, tanh, 64, tanh, Linear) + Standard Deviation variable; Value Network (64, tanh, 64, tanh, linear) â Normalized observations with running mean ï¬lter â Timesteps per batch 2048 â clip param = 0.2 â entropy coeff = 0.0 â Optimizer epochs per iteration = 10 â Optimizer step size 3e â 4 â Optimizer batch size 64 â Discount γ = 0.995, GAE λ = 0.97 â learning rate schedule is constant
⢠TRPO
â Policy Network: (64, tanh, 64, tanh, Linear) + Standard Deviation variable; Value Network (64, tanh, 64, tanh, linear) â Normalized observations with running mean ï¬lter â Timesteps per batch 5000 â max KL=0.01 â Conjugate gradient iterations = 20 â CG damping = 0.1 â VF Iterations = 5 â VF Batch Size = 64 â VF Step Size = 1e â 3 â entropy coeff = 0.0 â Discount γ = 0.995, GAE λ = 0.97
⢠ACKTR
â Policy Network: (64, tanh, 64, tanh, Linear) + Standard Deviation variable; Value Network (64, elu, 64, elu, linear) â Normalized observations with running mean ï¬lter â Timesteps per batch 2500 â desired KL = .002 â Discount γ = 0.995, GAE λ = 0.97
Modifications to Baseline Implementations
To ensure fairness of comparison, we make several modifications to the existing implementations. First, we change evaluation in DDPG (Plappert et al. 2017) such that during evaluation at the end of an epoch, 10 full trajectories are evaluated. In the current implementation, only a partial trajectory is evaluated immediately after training, such that a full trajectory would be evaluated across several different policies; this corresponds more closely to the online view of evaluation, while we take a policy optimization view when evaluating algorithms.
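A minimal sketch of the evaluation scheme described above, using the pre-0.26 Gym step/reset interface; `policy.act` is a hypothetical deterministic policy interface, not the actual codebase's API.

```python
# Minimal sketch: evaluate a fixed policy for 10 full trajectories at the end of an epoch,
# so that every evaluated trajectory is generated by a single (frozen) policy.
import numpy as np

def evaluate_policy(policy, env, n_episodes=10, max_steps=1000):
    returns = []
    for _ in range(n_episodes):
        obs = env.reset()
        total_reward, done, steps = 0.0, False, 0
        while not done and steps < max_steps:
            action = policy.act(obs)            # hypothetical deterministic action interface
            obs, reward, done, _ = env.step(action)
            total_reward += reward
            steps += 1
        returns.append(total_reward)
    # Mean return and standard error over the evaluation episodes.
    return np.mean(returns), np.std(returns) / np.sqrt(n_episodes)
```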
Hyperparameters: Network Structures and Activation Functions
Below, we examine the significance of the network configurations used for the non-linear function approximators in policy gradient methods. Several related works have used different sets of network configurations (network sizes and activation functions). We use the reported network configurations from other works, and demonstrate the significance of the careful fine tuning that is required. We demonstrate results using the network activation functions ReLU, TanH and Leaky ReLU, where most papers use ReLU and TanH as activation functions without detailed reporting of the effect of these activation functions. We analyse the significance of using different activations in the policy and action-value networks. Previously, we included a detailed table showing average reward with standard error obtained for each of the hyperparameter configurations. In the results below, we show detailed results of how each of these policy gradient algorithms is affected by the choice of the network configuration.
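The sketch below shows how such configurations might be expressed; it uses PyTorch purely for illustration (the experiments here rely on TensorFlow/Theano codebases), and the input/output dimensions are hypothetical.

```python
# Minimal sketch: building an MLP policy/value network with a configurable
# hidden structure and activation, as varied in the experiments below.
import torch.nn as nn

ACTIVATIONS = {"relu": nn.ReLU, "tanh": nn.Tanh, "leaky_relu": nn.LeakyReLU}

def build_mlp(input_dim, output_dim, hidden_sizes=(64, 64), activation="tanh"):
    layers, last = [], input_dim
    for size in hidden_sizes:
        layers += [nn.Linear(last, size), ACTIVATIONS[activation]()]
        last = size
    layers.append(nn.Linear(last, output_dim))  # linear output layer
    return nn.Sequential(*layers)

# Example: the (100, 50, 25) tanh policy network used in some related work
# (dimensions below are hypothetical placeholders).
policy_net = build_mlp(input_dim=17, output_dim=6, hidden_sizes=(100, 50, 25), activation="tanh")
```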
Proximal Policy Optimization (PPO)
[Plots: Hopper-v1 and HalfCheetah-v1 average return vs. timesteps for PPO, varying the policy and value network activation (tanh, ReLU, leaky ReLU).]
Figure 7: PPO Policy and Value Network activation
Experimental results in Figures 7, 8, and 9 in this section show the effect of the policy network structures and activation functions in the Proximal Policy Optimization (PPO) algorithm.
[Plots: Hopper-v1 and HalfCheetah-v1 average return vs. timesteps for PPO, varying the policy network structure ((64,64), (100,50,25), (400,300)).]
Figure 8: PPO Policy Network structure
[Plots: Hopper-v1 and HalfCheetah-v1 average return vs. timesteps for PPO, varying the value network structure ((64,64), (100,50,25), (400,300)).]
Figure 9: PPO Value Network structure
# Actor Critic using Kronecker-Factored Trust Region (ACKTR)
[Plots: HalfCheetah-v1 and Hopper-v1 average return vs. timesteps for ACKTR, varying the policy network structure ((64,64), (100,50,25), (400,300)).]
Figure 10: ACKTR Policy Network structure
[Plots: HalfCheetah-v1 and Hopper-v1 average return vs. timesteps for ACKTR, varying the value network structure ((64,64), (100,50,25), (400,300)).]
Figure 11: ACKTR Value Network structure
[Plots: HalfCheetah-v1 and Hopper-v1 average return vs. timesteps for ACKTR, varying the policy network activation.]
Figure 12: ACKTR Policy Network Activation
[Plots: HalfCheetah-v1 and Hopper-v1 average return vs. timesteps for ACKTR, varying the value network activation.]
Figure 13: ACKTR Value Network Activation
We then similarly show the significance of these hyperparameters in the ACKTR algorithm. Our results show that the value network structure can have a significant effect on the performance of the ACKTR algorithm.
# Trust Region Policy Optimization (TRPO)
[Plots: HalfCheetah-v1 and Hopper-v1 average return vs. timesteps for TRPO, varying the policy network structure ((64,64), (100,50,25), (400,300)).]
Figure 14: TRPO Policy Network structure
[Plots: HalfCheetah-v1 and Hopper-v1 average return vs. timesteps for TRPO, varying the value network structure ((64,64), (100,50,25), (400,300)).]
Figure 15: TRPO Value Network structure
[Plots: HalfCheetah-v1 and Hopper-v1 average return vs. timesteps for TRPO, varying the policy network activation (tanh, ReLU, leaky ReLU).]
Figure 16: TRPO Policy and Value Network activation
[Plots: HalfCheetah-v1 and Hopper-v1 average return vs. timesteps for TRPO, varying the value network activation (tanh, ReLU, leaky ReLU).]
Figure 17: TRPO Policy and Value Network activation
In Figures 14, 15, 16, and 17 we show the effects of network structure on the OpenAI baselines implementation of TRPO. In this case, only the policy architecture seems to have a large effect on the algorithm's ability to learn.
Deep Deterministic Policy Gradient (DDPG)
[Plots: HalfCheetah and Hopper average return vs. timesteps for DDPG, varying the actor and critic network sizes (64x64, 100x50x25, 400x300).]
Figure 18: Policy or Actor Network Architecture experiments for DDPG on HalfCheetah and Hopper Environment
We further analyze the actor and critic network configurations for use in DDPG. As in the default configurations, we first use the ReLU activation function for policy networks, and examine the effect of different activations and network sizes for the critic networks. Similarly, keeping the critic network configuration at its default setting, we also examine the effect of actor network activation functions and network sizes.
[Plots: HalfCheetah and Hopper average return vs. timesteps for DDPG, varying the actor and critic network activations (ReLU, TanH, Leaky ReLU).]
Figure 19: Signiï¬cance of Value Function or Critic Network Activations for DDPG on HalfCheetah and Hopper Environment
Reward Scaling Parameter in DDPG
[Plots: Hopper-v1 average return vs. timesteps for DDPG under different reward scales, with and without layer norm.]
Figure 20: DDPG reward rescaling on Hopper-v1, with and without layer norm.
[Plots: HalfCheetah-v1 average return vs. timesteps for DDPG under different reward scales, with and without layer norm.]
Figure 21: DDPG reward rescaling on HalfCheetah-v1, with and without layer norm.
Several related works (Gu et al. 2016; 2017; Duan et al. 2016) have reported that for DDPG the reward scaling parameter often needs to be fine-tuned to stabilize the performance of DDPG. It can make a significant impact on the performance of DDPG based on the choice of environment. We examine several reward scaling parameters and demonstrate the effect this parameter can have on the stability and performance of DDPG, based on the HalfCheetah and Hopper environments. Our experiment results, as demonstrated in Figures 20 and 21, show that the reward scaling parameter indeed can have a significant impact on performance. Our results show that a very small or negligible reward scaling parameter can significantly degrade the performance of DDPG across all environments. Furthermore, a scaling parameter of 10 or 1 often performs well. Based on our analysis, we suggest that every time DDPG is reported as a baseline algorithm for comparison, the reward scaling parameter should be fine-tuned, specific to the algorithm.
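For concreteness, reward scaling can be applied as a thin wrapper around the environment; the sketch below is illustrative and is not taken from any of the benchmarked codebases.

```python
# Minimal sketch: scale the reward signal seen by the agent by a constant factor,
# leaving the rest of the environment interface unchanged.
class RewardScalingWrapper:
    def __init__(self, env, scale=0.1):
        self.env = env
        self.scale = scale

    def reset(self):
        return self.env.reset()

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        return obs, self.scale * reward, done, info  # only the reward is rescaled

# Usage (hypothetical): env = RewardScalingWrapper(gym.make("HalfCheetah-v1"), scale=0.1)
```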
# Batch Size in TRPO
[Plots: Hopper-v1 and HalfCheetah-v1 average return vs. timesteps for the original TRPO code, varying the batch size.]
Figure 22: TRPO (Schulman et al. 2015a) original code batch size experiments.
[Plots: Hopper-v1, HalfCheetah-v1, Walker2d-v1, and Reacher-v1 average return vs. timesteps for the baselines TRPO code, varying the batch size.]
Figure 23: TRPO (Schulman et al. 2017) baselines code batch size experiments.
We run batch size experiments using the original TRPO code (Schulman et al. 2015a) and the OpenAI baselines code (Schulman et al. 2017). The results, shown in Figure 22 and Figure 23, indicate that for both the HalfCheetah-v1 and Hopper-v1 environments, a batch size of 1024 for TRPO performs best, while performance degrades as the batch size is increased.
# Random Seeds
To determine how much random seeds can affect results, we run 10 trials in total on two environments using the default settings described previously, using the (Gu et al. 2016) implementation of DDPG and the (Duan et al. 2016) version of TRPO. We divide our trials randomly into 2 partitions and plot them in Figures 24 and 25. As can be seen, statistically different distributions can be attained just from the random seeds with the same exact hyperparameters. As we will discuss later, bootstrapping off of the sample can give an idea of how drastic this effect will be, though too small a bootstrap will still not give concrete enough results.
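The sketch below illustrates this protocol; `train_agent` is a hypothetical training routine standing in for the DDPG/TRPO implementations used here.

```python
# Minimal sketch: run N trials differing only in the random seed, then split them
# into two groups and compare the per-group average final performance.
import random
import numpy as np

def run_trial(seed, env_name="Hopper-v1"):
    random.seed(seed)
    np.random.seed(seed)
    # env.seed(seed) and the framework's own seeding would also be set here.
    return train_agent(env_name, seed)   # hypothetical: returns an array of average returns

seeds = list(range(10))
curves = np.stack([run_trial(s) for s in seeds])   # shape: (10, n_eval_points)
group_a, group_b = curves[:5], curves[5:]          # two partitions of 5 seeds each
print("Group A final return:", group_a[:, -1].mean(), "+/-", group_a[:, -1].std())
print("Group B final return:", group_b[:, -1].mean(), "+/-", group_b[:, -1].std())
```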
[Plots: HalfCheetah-v1 and Hopper-v1 average return vs. timesteps for TRPO, comparing two averages over different sets of 5 random seeds.]
Figure 24: Two different TRPO experiment runs, with same hyperparameter conï¬gurations, averaged over two splits of 5 different random seeds.
[Plots: HalfCheetah-v1 and Hopper-v1 average return vs. timesteps for DDPG, comparing two averages over different sets of 5 random seeds.]
Figure 25: Two different DDPG experiment runs, with same hyperparameter conï¬gurations, averaged over two splits of 5 different random seeds.
# Choice of Benchmark Continuous Control Environment
We previously demonstrated that the performance of policy gradient algorithms can be highly biased based on the choice of the environment. In this section, we include further results examining the impact the choice of environment can have. We show that no single algorithm can perform consistently better in all environments. This is often unlike the results we see with DQN networks in Atari domains, where results can often be demonstrated across a wide range of Atari games. Our results, for example, show that while TRPO can perform significantly better than other algorithms on the Swimmer environment, it may perform quite poorly on the HalfCheetah environment, and marginally better on the Hopper environment compared to PPO. We demonstrate our results using the OpenAI MuJoCo Gym environments including the Hopper, HalfCheetah, Swimmer and Walker environments. It is notable to see the varying performance these algorithms can have even in this small set of environment domains. The choice of reporting algorithm performance results can therefore often be biased based on the algorithm designer's experience with these environments.
[Plots: average return vs. timesteps for the compared algorithms on the Hopper, HalfCheetah, Walker, and Swimmer environments.]
Figure 26: Comparing Policy Gradients across various environments
Codebases
We include a detailed analysis of performance comparison, with different network structures and activations, based on the choice of the algorithm implementation codebase.
[Plots: HalfCheetah-v1 and Hopper-v1 average return vs. timesteps for the original TRPO code, varying the policy and value network structures.]
Figure 27: TRPO Policy and Value Network structure
[Plots: HalfCheetah-v1 and Hopper-v1 average return vs. timesteps for the original TRPO code, varying the policy and value network activations.]
Figure 28: TRPO Policy and Value Network activations.
[Plots: HalfCheetah-v1 and Hopper-v1 average return vs. timesteps for the rllab TRPO implementation, varying the policy network structure and activation.]
Figure 29: TRPO rllab Policy Structure and Activation
[Plots: Hopper-v1 and HalfCheetah-v1 average return vs. timesteps for the rllab++ DDPG implementation, varying the policy and value network structures.]
Figure 30: DDPG rllab++ Policy and Value Network structure
[Plots: HalfCheetah-v1 and Hopper-v1 average return vs. timesteps for the rllab++ DDPG implementation, varying the policy and value network activations.]
Figure 31: DDPG rllab++ Policy and Value Network activations.
Similarly, Figures 32 and 33 show the same network experiments for DDPG with the Theano implementation of rllab code (Duan et al. 2016).
[Plots: Hopper-v1 and HalfCheetah-v1 average return vs. timesteps for the rllab DDPG implementation, varying the policy and value network structures.]
Figure 32: DDPG rllab Policy and Value Network structure
[Plots: HalfCheetah-v1 and Hopper-v1 average return vs. timesteps for the rllab DDPG implementation, varying the policy and value network activations.]
Figure 33: DDPG rllab Policy and Value Network activations.
Often in related literature, there are different baseline codebases that people use for the implementation of algorithms. One such example is the TRPO algorithm. It is a commonly used policy gradient method for continuous control tasks, and there exist several implementations from
OpenAI Baselines (Plappert et al. 2017), OpenAI rllab (Duan et al. 2016) and the original TRPO codebase (Schulman et al. 2015a). In this section, we perform an analysis of the impact the choice of algorithm codebase can have on performance. Figures 27 and 28 summarize our results with TRPO policy networks and value networks, using the original TRPO codebase from (Schulman et al. 2015a). Figure 29 shows the results using the rllab implementation of TRPO with the same hyperparameters as our default experiments aforementioned. Note, we use a linear function approximator rather than a neural network due to the fact that the TensorFlow implementation of OpenAI rllab doesn't provide anything else. We note that this is commonly used in other works (Duan et al. 2016; Stadie, Abbeel, and Sutskever 2017), but may cause differences in performance. Furthermore, we leave out our value function network experiments due to this.
[Plots: HalfCheetah-v1 and Hopper-v1 average return vs. timesteps for DDPG across the Duan 2016, Gu 2016, and Plappert 2017 codebases.]
Figure 34: DDPG codebase comparison using our default set of hyperparameters (as used in other experiments).
[Plots: HalfCheetah-v1 and Hopper-v1 average return vs. timesteps for TRPO across the Schulman 2015, Schulman 2017, and Duan 2016 codebases.]
Figure 35: TRPO codebase comparison using our default set of hyperparameters (as used in other experiments).
Figure 35 shows a comparison of the TRPO implementations using the default hyperparameters as specified earlier in the supplemental. Note, the exception is that we use a larger batch size of 20k samples per batch for rllab and the original TRPO code, as optimized in a second set of experiments. Figures 30 and 31 show the same network experiments for DDPG with the rllab++ code (Gu et al. 2016). We can then compare the performance of the algorithm across 3 codebases (keeping all hyperparameters constant at the defaults), as can be seen in Figure 34.
# Significance
Our full results from significance testing with different metrics can be found in Table 9 and Table 10. Our bootstrap mean and confidence intervals can be found in Table 13. Bootstrap power analysis can be found in Table 14. To perform significance testing, we use our 5 sample trials to generate a bootstrap with 10k bootstraps. From this, confidence intervals can be obtained. For the t-test and KS-test, the average returns from the 5 trials are sorted and compared using the normal 2-sample versions of these tests. Scipy (https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.stats.ks_2samp.html, https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ttest_ind.html) and Facebook Bootstrapped (https://github.com/facebookincubator/bootstrapped) are used for the KS test, t-test, and bootstrap analysis. For power analysis, we attempt to determine if a sample is enough to gauge the significance of a 25% lift. This is commonly used in A/B testing (Tufféry 2011).
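The following sketch reproduces the flavor of this analysis with plain NumPy rather than the Facebook Bootstrapped package; the return values are illustrative placeholders, not numbers from the tables below.

```python
# Minimal sketch: bootstrap confidence interval for the mean final return of one
# algorithm, and a bootstrapped percent-difference comparison between two algorithms.
import numpy as np

rng = np.random.RandomState(0)

def bootstrap_ci(returns, n_boot=10000, alpha=0.05):
    returns = np.asarray(returns)
    means = [rng.choice(returns, size=len(returns), replace=True).mean() for _ in range(n_boot)]
    lo, hi = np.percentile(means, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return returns.mean(), lo, hi

def bootstrap_percent_diff(test, ctrl, n_boot=10000, alpha=0.05):
    test, ctrl = np.asarray(test), np.asarray(ctrl)
    diffs = []
    for _ in range(n_boot):
        t = rng.choice(test, size=len(test), replace=True).mean()
        c = rng.choice(ctrl, size=len(ctrl), replace=True).mean()
        diffs.append(100.0 * (t - c) / abs(c))
    lo, hi = np.percentile(diffs, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return np.mean(diffs), lo, hi

# Hypothetical final returns over 5 random seeds for two algorithms.
acktr = [3100.0, 2450.0, 2800.0, 2300.0, 2650.0]
ddpg = [2400.0, 1900.0, 2550.0, 1700.0, 2200.0]
print(bootstrap_ci(acktr))                   # mean and 95% CI
print(bootstrap_percent_diff(acktr, ddpg))   # % difference and 95% CI
```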
- DDPG ACKTR TRPO PPO DDPG - t = â1.85, p = 0.102 KS = 0.60, p = 0.209 -38.24 % (-75.42 %, -15.19 %) t = â4.59, p = 0.002 KS = 1.00, p = 0.004 -75.09 % (-86.44 %, -68.36 %) t = â2.67, p = 0.029 KS = 0.80, p = 0.036 -51.67 % (-80.69 %, -31.94 %) ACKTR t = 1.85, p = 0.102 KS = 0.60, p = 0.209 61.91 % (-32.27 %, 122.99 %) - t = â2.78, p = 0.024 KS = 0.80, p = 0.036 -59.67 % (-81.70 %, -46.84 %) t = â0.80, p = 0.448 KS = 0.60, p = 0.209 -21.75 % (-75.99 %, 11.68 %) TRPO t = 4.59, p = 0.002 KS = 1.00, p = 0.004 301.48 % (150.50 %, 431.67 %) t = 2.78, p = 0.024 KS = 0.80, p = 0.036 147.96 % (30.84 %, 234.60 %) - t = 2.12, p = 0.067 KS = 0.80, p = 0.036 94.04 % (2.73 %, 169.06 %) PPO t = 2.67, p = 0.029 KS = 0.80, p = 0.036 106.91 % (-37.62 %, 185.26 %) t = 0.80, p = 0.448 KS = 0.60, p = 0.209 27.79 % (-67.77 %, 79.56 %) t = â2.12, p = 0.067 KS = 0.80, p = 0.036 -48.46 % (-81.23 %, -32.05 %) -
Table 9: HalfCheetah Significance values and metrics for different algorithms. Rows in cells are: sorted 2-sample t-test, Kolmogorov-Smirnov test, bootstrap A/B comparison % difference with 95% confidence bounds.
- DDPG ACKTR TRPO PPO DDPG - t = 1.41, p = 0.196 KS = 0.60, p = 0.209 56.05 % (-87.98 %, 123.15 %) t = 2.58, p = 0.033 KS = 0.80, p = 0.036 81.68 % (-67.76 %, 151.64 %) t = 2.09, p = 0.070 KS = 0.80, p = 0.036 66.39 % (-67.80 %, 130.16 %) ACKTR t = â1.41, p = 0.196 KS = 0.60, p = 0.209 -35.92 % (-85.62 %, -5.38 %) - t = 1.05, p = 0.326 KS = 0.60, p = 0.209 16.43 % (-27.92 %, 41.17 %) t = 0.42, p = 0.686 KS = 0.40, p = 0.697 6.63 % (-33.54 %, 29.59 %) TRPO t = â2.58, p = 0.033 KS = 0.80, p = 0.036 -44.96 % (-78.82 %, -20.29 %) t = â1.05, p = 0.326 KS = 0.60, p = 0.209 -14.11 % (-37.17 %, 9.11 %) - t = â2.57, p = 0.033 KS = 0.60, p = 0.209 -8.42 % (-14.08 %, -2.97 %) PPO t = â2.09, p = 0.070 KS = 0.80, p = 0.036 -39.90 % (-77.12 %, -12.95 %) t = â0.42, p = 0.686 KS = 0.40, p = 0.697 -6.22 % (-31.58 %, 18.98 %) t = 2.57, p = 0.033 KS = 0.60, p = 0.209 9.19 % (2.37 %, 15.58 %) -
Table 10: Hopper Significance values and metrics for different algorithms. Rows in cells are: sorted 2-sample t-test, Kolmogorov-Smirnov test, bootstrap A/B comparison % difference with 95% confidence bounds.
- DDPG ACKTR TRPO PPO DDPG - t = 1.03, p = 0.334 KS = 0.40, p = 0.697 44.47 % (-80.62 %, 111.72 %) t = 4.04, p = 0.004 KS = 1.00, p = 0.004 94.24 % (-22.59 %, 152.61 %) t = 3.07, p = 0.015 KS = 0.80, p = 0.036 85.01 % (-31.02 %, 144.35 %) ACKTR t = â1.03, p = 0.334 KS = 0.40, p = 0.697 -30.78 % (-91.35 %, 1.06 %) - t = 1.35, p = 0.214 KS = 0.60, p = 0.209 34.46 % (-60.47 %, 77.32 %) t = 1.02, p = 0.338 KS = 0.60, p = 0.209 28.07 % (-65.67 %, 71.71 %) TRPO t = â4.04, p = 0.004 KS = 1.00, p = 0.004 -48.52 % (-70.33 %, -28.62 %) t = â1.35, p = 0.214 KS = 0.60, p = 0.209 -25.63 % (-61.28 %, 5.54 %) - t = â0.57, p = 0.582 KS = 0.40, p = 0.697 -4.75 % (-19.06 %, 10.02 %) PPO t = â3.07, p = 0.015 KS = 0.80, p = 0.036 -45.95 % (-70.85 %, -24.65 %) t = â1.02, p = 0.338 KS = 0.60, p = 0.209 -21.91 % (-61.53 %, 11.02 %) -
Table 11: Walker2d Significance values and metrics for different algorithms. Rows in cells are: sorted 2-sample t-test, Kolmogorov-Smirnov test, bootstrap A/B comparison % difference with 95% confidence bounds.
- DDPG ACKTR TRPO PPO DDPG - t = 2.18, p = 0.061 KS = 0.80, p = 0.036 57.34 % (-80.96 %, 101.11 %) t = 4.06, p = 0.004 KS = 1.00, p = 0.004 572.61 % (-73.29 %, 869.24 %) t = 8.33, p = 0.000 KS = 1.00, p = 0.004 237.97 % (-59.74 %, 326.85 %) ACKTR t = â2.18, p = 0.061 KS = 0.80, p = 0.036 -36.44 % (-61.04 %, -6.94 %) - t = 3.69, p = 0.006 KS = 1.00, p = 0.004 327.48 % (165.47 %, 488.66 %) t = 8.85, p = 0.000 KS = 1.00, p = 0.004 114.80 % (81.85 %, 147.33 %) TRPO t = â4.06, p = 0.004 KS = 1.00, p = 0.004 -85.13 % (-97.17 %, -77.95 %) t = â3.69, p = 0.006 KS = 1.00, p = 0.004 -76.61 % (-90.68 %, -70.06 %) - t = â2.39, p = 0.044 KS = 0.60, p = 0.209 -49.75 % (-78.58 %, -36.43 %) PPO t = â8.33, p = 0.000 KS = 1.00, p = 0.004 -70.41 % (-80.86 %, -56.52 %) t = â8.85, p = 0.000 KS = 1.00, p = 0.004 -53.45 % (-62.22 %, -47.30 %) t = 2.39, p = 0.044 KS = 0.60, p = 0.209 99.01 % (28.44 %, 171.85 %) -
Table 12: Swimmer Significance values and metrics for different algorithms. Rows in cells are: sorted 2-sample t-test, Kolmogorov-Smirnov test, bootstrap A/B comparison % difference with 95% confidence bounds.
Environment HalfCheetah-v1 Hopper-v1 Walker2d-v1 Swimmer-v1 DDPG 5037.26 (3664.11, 6574.01) 1632.13 (607.98, 2370.21) 1582.04 (901.66, 2174.66) 31.92 (21.68, 46.23) ACKTR 3888.85 (2288.13, 5131.96) 2546.89 (1875.79, 3217.98) 2285.49 (1246.00, 3235.96) 50.22 (42.47, 55.37) TRPO 1254.55 (999.52, 1464.86) 2965.33 (2854.66, 3076.00) 3072.97 (2957.94, 3183.10) 214.69 (141.52, 287.92) PPO 3043.1 (1920.4, 4165.86) 2715.72 (2589.06, 2847.93) 2926.92 (2514.83, 3361.43) 107.88 (101.13, 118.56)
Table 13: Envs bootstrap mean and 95% confidence bounds
Environment HalfCheetah-v1 Hopper-v1 Walker2d-v1 DDPG 100.00 % 0.00 % 0.00 % 60.90 % 10.00 % 29.10 % 89.50 % 0.00 % 10.50 % 89.97 % 0.00 % 10.03 % ACKTR 79.03 % 11.53 % 9.43 % 79.60 % 11.00 % 9.40 % 60.33 % 9.73 % 29.93 % 59.90 % 40.10 % 0.00 % TRPO 79.47 % 20.53 % 0.00 % 0.00 % 100.00 % 0.00 % 0.00 % 100.00 % 0.00 % 89.47 % 0.00 % 10.53 % PPO 61.07 % 10.50 % 28.43 % 0.00 % 100.00 % 0.00 % 59.80 % 31.27 % 8.93 % 40.27 % 59.73 % 0.00 % Swimmer-v1
Table 14: Power Analysis for predicted significance of 25% lift. Rows in cells are: % insignificant simulations, % positive significant, % negative significant.
1709.04546 | Normalized Direction-preserving Adam | Adaptive optimization algorithms, such as Adam and RMSprop, have shown better
optimization performance than stochastic gradient descent (SGD) in some
scenarios. However, recent studies show that they often lead to worse
generalization performance than SGD, especially for training deep neural
networks (DNNs). In this work, we identify the reasons that Adam generalizes
worse than SGD, and develop a variant of Adam to eliminate the generalization
gap. The proposed method, normalized direction-preserving Adam (ND-Adam),
enables more precise control of the direction and step size for updating weight
vectors, leading to significantly improved generalization performance.
Following a similar rationale, we further improve the generalization
performance in classification tasks by regularizing the softmax logits. By
bridging the gap between SGD and Adam, we also hope to shed light on why
certain optimization algorithms generalize better than others. | http://arxiv.org/pdf/1709.04546 | Zijun Zhang, Lin Ma, Zongpeng Li, Chuan Wu | cs.LG, stat.ML | null | null | cs.LG | 20170913 | 20180918
# NORMALIZED DIRECTION-PRESERVING ADAM
Zijun Zhang, Department of Computer Science, University of Calgary, zijun.zhang@ucalgary.ca
Lin Ma, School of Computer Science, Wuhan University, linmawhu@gmail.com
Zongpeng Li, Department of Computer Science, University of Calgary, zongpeng@ucalgary.ca
Chuan Wu, Department of Computer Science, The University of Hong Kong, cwu@cs.hku.hk
# ABSTRACT
Adaptive optimization algorithms, such as Adam and RMSprop, have shown better optimization performance than stochastic gradient descent (SGD) in some scenarios. However, recent studies show that they often lead to worse generalization performance than SGD, especially for training deep neural networks (DNNs). In this work, we identify the reasons that Adam generalizes worse than SGD, and develop a variant of Adam to eliminate the generalization gap. The proposed method, normalized direction-preserving Adam (ND-Adam), enables more precise control of the direction and step size for updating weight vectors, leading to significantly improved generalization performance. Following a similar rationale, we further improve the generalization performance in classification tasks by regularizing the softmax logits. By bridging the gap between SGD and Adam, we also hope to shed light on why certain optimization algorithms generalize better than others.
# 1 INTRODUCTION
In contrast with the growing complexity of neural network architectures (Szegedy et al., 2015; He et al., 2016; Hu et al., 2018), the training methods remain relatively simple. Most practical optimization methods for deep neural networks (DNNs) are based on the stochastic gradient descent (SGD) algorithm. However, the learning rate of SGD, as a hyperparameter, is often difficult to tune, since the magnitudes of different parameters vary widely, and adjustment is required throughout the training process.
To tackle this problem, several adaptive variants of SGD were developed, including Adagrad (Duchi et al., 2011), Adadelta (Zeiler, 2012), RMSprop (Tieleman & Hinton, 2012), and Adam (Kingma & Ba, 2015). These algorithms aim to adapt the learning rate to different parameters automatically, based on the statistics of the gradient. Although they usually simplify learning rate settings, and lead to faster convergence, it is observed that their generalization performance tends to be significantly worse than that of SGD in some scenarios (Wilson et al., 2017). This intriguing phenomenon may explain why SGD (possibly with momentum) is still prevalent in training state-of-the-art deep models, especially feedforward DNNs (Szegedy et al., 2015; He et al., 2016; Hu et al., 2018). Furthermore, recent work has shown that DNNs are capable of fitting noise data (Zhang et al., 2017), suggesting that their generalization capabilities are not the mere result of DNNs themselves, but are entwined with optimization (Arpit et al., 2017).
This work aims to bridge the gap between SGD and Adam in terms of the generalization performance. To this end, we identify two problems that may degrade the generalization performance of Adam, and show how these problems are (partially) avoided by using SGD with L2 weight decay. First, the updates of SGD lie in the span of historical gradients, whereas it is not the case for Adam. This difference has been discussed in rather recent literature (Wilson et al., 2017), where the authors show that adaptive methods can find drastically different but worse solutions than SGD.
Second, while the magnitudes of Adam parameter updates are invariant to rescaling of the gradient, the effect of the updates on the same overall network function still varies with the magnitudes of pa- rameters. As a result, the effective learning rates of weight vectors tend to decrease during training, which leads to sharp local minima that do not generalize well (Hochreiter & Schmidhuber, 1997).
To address these two problems of Adam, we propose the normalized direction-preserving Adam (ND-Adam) algorithm, which controls the update direction and step size in a more precise way. We show that ND-Adam is able to achieve signiï¬cantly better generalization performance than vanilla Adam, and matches that of SGD in image classiï¬cation tasks.
We summarize our contributions as follows:
• We observe that the directions of Adam parameter updates are different from those of SGD, i.e., Adam does not preserve the directions of gradients as SGD does. We fix the problem by adapting the learning rate to each weight vector, instead of each individual weight, such that the direction of the gradient is preserved.

• For both Adam and SGD without L2 weight decay, we observe that the magnitude of each vector's direction change depends on its L2-norm. We show that using SGD with L2 weight decay implicitly normalizes the weight vectors, and thus removes the dependence in an approximate manner. We fix the problem for Adam by explicitly normalizing each weight vector, and by optimizing only its direction, such that the effective learning rate can be precisely controlled.

• We further demonstrate that, without proper regularization, the learning signal backpropagated from the softmax layer may vary with the overall magnitude of the logits in an undesirable way. Based on this observation, we apply batch normalization or L2-regularization to the logits, which further improves the generalization performance in classification tasks.
In essence, our proposed methods, ND-Adam and regularized softmax, improve the generalization performance of Adam by enabling more precise control over the directions of parameter updates, the learning rates, and the learning signals.
The remainder of this paper is organized as follows. In Sec. 2, we identify two problems of Adam, and show how SGD with L2 weight decay partially avoids these problems. In Sec. 3, we further discuss and develop ND-Adam as a solution to the two problems. In Sec. 4, we propose regularized softmax to improve the learning signal backpropagated from the softmax layer. We provide empirical evidence for our analysis, and evaluate the performance of the proposed methods in Sec. 5.¹
# 2 BACKGROUND AND MOTIVATION
# 2.1 ADAPTIVE MOMENT ESTIMATION (ADAM)
Adaptive moment estimation (Adam) (Kingma & Ba, 2015) is a stochastic optimization method that applies individual adaptive learning rates to different parameters, based on the estimates of the first and second moments of the gradients. Specifically, for n trainable parameters, θ ∈ R^n, Adam maintains a running average of the first and second moments of the gradient w.r.t. each parameter as

m_t = β_1 m_{t−1} + (1 − β_1) g_t ,   (1a)

v_t = β_2 v_{t−1} + (1 − β_2) g_t^2 .   (1b)

Here, t denotes the time step, m_t, v_t ∈ R^n denote respectively the first and second moments, and β_1, β_2 ∈ R are the corresponding decay factors. Kingma & Ba (2015) further notice that, since m_0 and v_0 are initialized to 0's, they are biased towards zero during the initial time steps, especially when the decay factors are large (i.e., close to 1). Thus, for computing the next update, they need to be corrected as

m̂_t = m_t / (1 − β_1^t) ,   v̂_t = v_t / (1 − β_2^t) ,   (2)
1Code is available at https://github.com/zj10/ND-Adam.
where β_1^t, β_2^t are the t-th powers of β_1, β_2 respectively. Then, each parameter is updated as

θ_t = θ_{t−1} − α_t · m̂_t / (√v̂_t + ε) ,   (3)

where α_t is the global learning rate, and ε is a small constant to avoid division by zero. Note that the above computations between vectors are element-wise.

A distinguishing merit of Adam is that the magnitudes of parameter updates are invariant to rescaling of the gradient, as shown by the adaptive learning rate term, α_t / (√v̂_t + ε). However, there are two potential problems when applying Adam to DNNs.
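For concreteness, the per-parameter update described by Eq. (1a)-(3) can be sketched in NumPy as follows. The function name and its default hyperparameters are our own choices; this is a minimal illustration rather than a reference implementation.

```python
import numpy as np

def adam_update(theta, g, m, v, t, alpha=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam step for a parameter array theta with gradient g, following Eq. (1a)-(3)."""
    m = beta1 * m + (1 - beta1) * g          # first moment estimate, Eq. (1a)
    v = beta2 * v + (1 - beta2) * g ** 2     # second moment estimate, Eq. (1b)
    m_hat = m / (1 - beta1 ** t)             # bias correction, Eq. (2)
    v_hat = v / (1 - beta2 ** t)
    theta = theta - alpha * m_hat / (np.sqrt(v_hat) + eps)   # update, Eq. (3)
    return theta, m, v
```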
First, in some scenarios, DNNs trained with Adam generalize worse than that trained with stochas- tic gradient descent (SGD) (Wilson et al., 2017). Zhang et al. (2017) demonstrate that over- parameterized DNNs are capable of memorizing the entire dataset, no matter if it is natural data or meaningless noise data, and thus suggest much of the generalization power of DNNs comes from the training algorithm, e.g., SGD and its variants. It coincides with another recent work (Wilson et al., 2017), which shows that simple SGD often yields better generalization performance than adaptive gradient methods, such as Adam. As pointed out by the latter, the difference in the gen- eralization performance may result from the different directions of updates. Speciï¬cally, for each hidden unit, the SGD update of its input weight vector can only lie in the span of all possible input vectors, which, however, is not the case for Adam due to the individually adapted learning rates. We refer to this problem as the direction missing problem.
Second, while batch normalization (Ioffe & Szegedy, 2015) can signiï¬cantly accelerate the con- vergence of DNNs, the input weights and the scaling factor of each hidden unit can be scaled in inï¬nitely many (but consistent) ways, without changing the function implemented by the hidden unit. Thus, for different magnitudes of an input weight vector, the updates given by Adam can have different effects on the overall network function, which is undesirable. Furthermore, even when batch normalization is not used, a network using linear rectiï¬ers (e.g., ReLU, leaky ReLU) as acti- vation functions, is still subject to ill-conditioning of the parameterization (Glorot et al., 2011), and hence the same problem. We refer to this problem as the ill-conditioning problem.
# 2.2 L2 WEIGHT DECAY
L2 weight decay is a regularization technique frequently used with SGD. It often has a signiï¬cant effect on the generalization performance of DNNs. Despite its simplicity and crucial role in the training process, how L2 weight decay works in DNNs remains to be explained. A common jus- tiï¬cation is that L2 weight decay can be introduced by placing a Gaussian prior upon the weights, when the objective is to ï¬nd the maximum a posteriori (MAP) weights (Blundell et al.). How- ever, as discussed in Sec. 2.1, the magnitudes of input weight vectors are irrelevant in terms of the overall network function, in some common scenarios, rendering the variance of the Gaussian prior meaningless.
We propose to view L2 weight decay in neural networks as a form of weight normalization, which may better explain its effect on the generalization performance. Consider a neural network trained with the following loss function:
L̃(θ; D) = L(θ; D) + (λ/2) Σ_{i∈N} ||w_i||_2^2 ,   (4)

where L(θ; D) is the original loss function on training data D, λ is the weight decay coefficient, N is the set of all hidden units, and w_i denotes the input weights of hidden unit i, which are included in the trainable parameters, θ. For simplicity, we consider SGD updates without momentum. Therefore, the update of w_i at each time step is

Δw_i = −α ∂L̃/∂w_i = −α (∂L/∂w_i + λ w_i) ,   (5)

where α is the learning rate. As we can see from Eq. (5), the gradient magnitude of the L2 penalty is proportional to ||w_i||_2, and thus forms a negative feedback loop that stabilizes ||w_i||_2 to an equilibrium value. Empirically, we find that ||w_i||_2 tends to increase or decrease dramatically at the beginning of the training, and then varies mildly within a small range, which indicates ||w_i||_2 ≈ ||w_i + Δw_i||_2. In practice, we usually have ||Δw_i||_2 / ||w_i||_2 ≪ 1, thus Δw_i is approximately orthogonal to w_i, i.e., w_i · Δw_i ≈ 0.

Let (∂L/∂w_i)_∥ and (∂L/∂w_i)_⊥ be the vector projection and rejection of ∂L/∂w_i on w_i, which are defined as

(∂L/∂w_i)_∥ = ( (∂L/∂w_i) · w_i / ||w_i||_2 ) · w_i / ||w_i||_2 ,   (∂L/∂w_i)_⊥ = ∂L/∂w_i − (∂L/∂w_i)_∥ .   (6)

From Eq. (5) and (6), it is easy to show

||Δw_i||_2 / ||w_i||_2 ≈ √(2αλ) .   (7)

As discussed in Sec. 2.1, when batch normalization is used, or when linear rectifiers are used as activation functions, the magnitude of ||w_i||_2 becomes irrelevant; it is the direction of w_i that actually makes a difference in the overall network function. If L2 weight decay is not applied, the magnitude of w_i's direction change will decrease as ||w_i||_2 increases during the training process, which can potentially lead to overfitting (discussed in detail in Sec. 3.2). On the other hand, Eq. (7) shows that L2 weight decay implicitly normalizes the weights, such that the magnitude of w_i's direction change does not depend on ||w_i||_2, and can be tuned by the product of α and λ. In the following, we refer to ||Δw_i||_2 / ||w_i||_2 as the effective learning rate of w_i.
While L2 weight decay produces the normalization effect in an implicit and approximate way, we will show that explicitly doing so enables more precise control of the effective learning rate.
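As a rough numerical illustration of Eq. (7), the toy simulation below (our own setup, assuming zero-mean gradients that are roughly orthogonal to w in high dimensions; it is not the experiment reported in Sec. 5.1) tracks ||Δw||_2 / ||w||_2 under SGD with L2 weight decay and compares it to √(2αλ):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, lam, dim = 0.1, 1e-3, 256
w = rng.normal(size=dim)
ratios = []
for t in range(40000):
    g = 0.05 * rng.normal(size=dim)      # stand-in for dL/dw, roughly orthogonal to w in high dim
    dw = -alpha * (g + lam * w)          # SGD step with L2 weight decay, Eq. (5)
    ratios.append(np.linalg.norm(dw) / np.linalg.norm(w))
    w = w + dw
print("empirical ratio  :", np.mean(ratios[-10000:]))   # effective learning rate near equilibrium
print("sqrt(2*alpha*lam):", np.sqrt(2 * alpha * lam))   # prediction of Eq. (7)
```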
# 3 NORMALIZED DIRECTION-PRESERVING ADAM
We first present the normalized direction-preserving Adam (ND-Adam) algorithm, which essentially improves the optimization of the input weights of hidden units, while employing the vanilla Adam algorithm to update other parameters. Specifically, we divide the trainable parameters, θ, into two sets, θ^v and θ^s, such that θ^v = {w_i | i ∈ N} and θ^s = θ \ θ^v. Then we update θ^v and θ^s by different rules, as described by Alg. 1. The learning rates for the two sets of parameters are denoted by α^v_t and α^s_t respectively.

In Alg. 1, computing g_t(w_i) and w_{i,t} may take slightly more time compared to Adam, which however is negligible in practice. On the other hand, to estimate the second order moment of each w_i ∈ R^n, Adam maintains n scalars, whereas ND-Adam requires only one scalar, v_t(w_i), and thus reduces the memory overhead of Adam.
In the following, we address the direction missing problem and the ill-conditioning problem dis- cussed in Sec. 2.1, and explain Alg. 1 in detail. We show how the proposed algorithm jointly solves the two problems, as well as its relation to other normalization schemes.
# 3.1 PRESERVING GRADIENT DIRECTIONS
Assuming the stationarity of a hidden unitâs input distribution, the SGD update (possibly with mo- mentum) of the input weight vector is a linear combination of historical gradients, and thus can only lie in the span of the input vectors. Consequently, the input weight vector itself will eventually converge to the same subspace.
In contrast, the Adam algorithm adapts the global learning rate to each scalar parameter indepen- dently, such that the gradient of each parameter is normalized by a running average of its magnitudes, which changes the direction of the gradient. To preserve the direction of the gradient w.r.t. each input weight vector, we generalize the learning rate adaptation scheme from scalars to vectors.
Let gt (wi), mt (wi), vt (wi) be the counterparts of gt, mt, vt for vector wi. Since Eq. (1a) is a linear combination of historical gradients, it can be extended to vectors without any change; or equivalently, we can rewrite it for each vector as
m_t(w_i) = β_1 m_{t−1}(w_i) + (1 − β_1) g_t(w_i) .   (8)
Algorithm 1: Normalized direction-preserving Adam

/* Initialization */
t ← 0
for i ∈ N do
    w_{i,0} ← w_{i,0} / ||w_{i,0}||_2 ;  m_0(w_i) ← 0 ;  v_0(w_i) ← 0
/* Perform T iterations of training */
while t < T do
    t ← t + 1
    /* Update θ^v */
    for i ∈ N do
        ḡ_t(w_i) ← ∂L/∂w_{i,t−1}
        g_t(w_i) ← ḡ_t(w_i) − (ḡ_t(w_i) · w_{i,t−1}) w_{i,t−1}
        m_t(w_i) ← β_1 m_{t−1}(w_i) + (1 − β_1) g_t(w_i)
        v_t(w_i) ← β_2 v_{t−1}(w_i) + (1 − β_2) ||g_t(w_i)||_2^2
        m̂_t(w_i) ← m_t(w_i) / (1 − β_1^t) ;  v̂_t(w_i) ← v_t(w_i) / (1 − β_2^t)
        w_{i,t} ← w_{i,t−1} − α^v_t · m̂_t(w_i) / (√v̂_t(w_i) + ε)
        w_{i,t} ← w_{i,t} / ||w_{i,t}||_2
    /* Update θ^s using Adam */
    θ^s_t ← AdamUpdate(θ^s_{t−1}; α^s_t, β_1, β_2)
return θ_T
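The per-vector update of Alg. 1 can be transcribed into NumPy roughly as follows. This is our own sketch for a single unit-norm weight vector; it omits the Adam update of the scalar parameters θ^s, and the variable names are not taken from any released implementation.

```python
import numpy as np

def nd_adam_step(w, grad, m, v, t, alpha_v=0.05, beta1=0.9, beta2=0.999, eps=1e-8):
    """One ND-Adam step for a unit-norm weight vector w, following Alg. 1 / Eq. (8)-(13)."""
    g = grad - np.dot(grad, w) * w                # project the raw gradient onto the sphere, Eq. (12)
    m = beta1 * m + (1 - beta1) * g               # vector first moment, Eq. (8)
    v = beta2 * v + (1 - beta2) * np.dot(g, g)    # scalar second moment, Eq. (9)
    m_hat = m / (1 - beta1 ** t)                  # bias correction, Eq. (10)
    v_hat = v / (1 - beta2 ** t)
    w = w - alpha_v * m_hat / (np.sqrt(v_hat) + eps)   # unnormalized update, Eq. (13)
    w = w / np.linalg.norm(w)                     # re-normalize back onto the unit sphere
    return w, m, v

# example usage: w starts on the unit sphere, the moments start at zero
w = np.random.randn(128); w /= np.linalg.norm(w)
m, v = np.zeros_like(w), 0.0
w, m, v = nd_adam_step(w, np.random.randn(128), m, v, t=1)
```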
We then extend Eq. (1b) as

v_t(w_i) = β_2 v_{t−1}(w_i) + (1 − β_2) ||g_t(w_i)||_2^2 ,   (9)

i.e., instead of estimating the average gradient magnitude for each individual parameter, we estimate the average of ||g_t(w_i)||_2^2 for each vector w_i. In addition, we modify Eq. (2) and (3) accordingly as

m̂_t(w_i) = m_t(w_i) / (1 − β_1^t) ,   v̂_t(w_i) = v_t(w_i) / (1 − β_2^t) ,   (10)

and

w_{i,t} = w_{i,t−1} − α^v_t · m̂_t(w_i) / (√v̂_t(w_i) + ε) .   (11)
Here, Ëmt (wi) is a vector with the same dimension as wi, whereas Ëvt (wi) is a scalar. Therefore, when applying Eq. (11), the direction of the update is the negative direction of Ëmt (wi), and thus is in the span of the historical gradients of wi.
Despite the empirical success of SGD, a question remains as to why it is desirable to constrain the input weights in the span of the input vectors. A possible explanation is related to the manifold hypothesis, which suggests that real-world data presented in high dimensional spaces (e.g., images, audios, text) concentrates on manifolds of much lower dimensionality (Cayton, 2005; Narayanan & Mitter, 2010). In fact, commonly used activation functions, such as (leaky) ReLU, sigmoid, tanh, can only be activated (not saturating or having small gradients) by a portion of the input vectors, in whose span the input weights lie upon convergence. Assuming the local linearity of the manifolds of data or hidden-layer representations, constraining the input weights in the subspace that contains that portion of the input vectors, encourages the hidden units to form local coordinate systems on the corresponding manifold, which can lead to good representations (Rifai et al., 2011).
# 3.2 SPHERICAL WEIGHT OPTIMIZATION
The ill-conditioning problem occurs when the magnitude change of an input weight vector can be compensated by other parameters, such as the scaling factor of batch normalization, or the output
weight vector, without affecting the overall network function. Consequently, suppose we have two DNNs that parameterize the same function, but with some of the input weight vectors having differ- ent magnitudes, applying the same SGD or Adam update rule will, in general, change the network functions in different ways. Thus, the ill-conditioning problem makes the training process inconsis- tent and difï¬cult to control.
More importantly, when the weights are not properly regularized (e.g., without using L2 weight decay), the magnitude of w;,âs direction change will decrease as ||w;|| increases during the training process. As a result, the effective learning rate for w; tends to decrease faster than expected. The gradient noise introduced by large learning rates is crucial to avoid sharp minima (Smith & Le! (2018). And it is well known that sharp minima generalize worse than flat minima (Hochreiter &| Schmidhuber}| 1997).
As shown in Sec. when combined with SGD, L2 weight decay can alleviate the ill-conditioning problem by implicitly and approximately normalizing the weights. However, the approximation fails when ||2w;||) is far from the equilibrium due to improper initialization, or drastic changes in the magnitudes of the weight vectors. In addition, due to the direction missing problem, naively applying L2 weight decay to Adam does not yield the same effect as it does on SGD. In concurrent work, |Loshchilov & Hutterâ 2017ap address the problem by decoupling the weight decay and the optimization steps taken w.r.t. the loss function. However, their experimental results indicate that improving L2 weight decay alone cannot eliminate the generalization gap between Adam and SGD.
The ill-conditioning problem is also addressed by Neyshabur et al. (2015), by employing a geometry invariant to rescaling of weights. However, their proposed methods do not preserve the direction of gradient.
To address the ill-conditioning problem in a more principled way, we restrict the L2-norm of each w_i to 1, and only optimize its direction. In other words, instead of optimizing w_i in an n-dimensional space, we optimize w_i on an (n − 1)-dimensional unit sphere. Specifically, we first compute the raw gradient w.r.t. w_i, ḡ_t(w_i) = ∂L/∂w_i, and project the gradient onto the unit sphere as

g_t(w_i) = ḡ_t(w_i) − (ḡ_t(w_i) · w_{i,t−1}) w_{i,t−1} .   (12)

Here, ||w_{i,t−1}||_2 = 1. Then we follow Eq. (8)-(10), and replace Eq. (11) with

w̄_{i,t} = w_{i,t−1} − α^v_t · m̂_t(w_i) / (√v̂_t(w_i) + ε) ,   and   w_{i,t} = w̄_{i,t} / ||w̄_{i,t}||_2 .   (13)

In Eq. (12), we keep only the component that is orthogonal to w_{i,t−1}. However, m̂_t(w_i) is not necessarily orthogonal as well; moreover, even when m̂_t(w_i) is orthogonal to w_{i,t−1}, ||w_i||_2 can still increase according to the Pythagorean theorem. Therefore, we explicitly normalize w̄_{i,t} in Eq. (13), to ensure ||w_{i,t}||_2 = 1 after each update. Also note that, since w_{i,t−1} is a linear combination of its historical gradients, g_t(w_i) still lies in the span of the historical gradients after the projection in Eq. (12).
Compared to SGD with L2 weight decay, spherical weight optimization explicitly normalizes the weight vectors, such that each update to the weight vectors only changes their directions, and strictly keeps the magnitudes constant. As a result, the effective learning rate of a weight vector is
||Δw_{i,t}||_2 / ||w_{i,t−1}||_2 ≈ α^v_t · ||m̂_t(w_i)||_2 / √v̂_t(w_i) ,   (14)

which enables precise control over the learning rate of w_i through a single hyperparameter, α^v_t, rather than two as required by Eq. (7).

Note that it is possible to control the effective learning rate more precisely, by normalizing m̂_t(w_i) with ||m̂_t(w_i)||_2, instead of by √v̂_t(w_i). However, by doing so, we lose the information provided by ||m̂_t(w_i)||_2 at different time steps. In addition, since m̂_t(w_i) is less noisy than g_t(w_i), ||m̂_t(w_i)||_2 / √v̂_t(w_i) becomes small near convergence, which is considered a desirable property of Adam (Kingma & Ba, 2015). Thus, we keep the gradient normalization scheme intact.
We note the difference between various gradient normalization schemes and the normalization scheme employed by spherical weight optimization. As shown in Eq. (11), ND-Adam general- izes the gradient normalization scheme of Adam, and thus both Adam and ND-Adam normalize
the gradient by a running average of its magnitude. This, and other similar schemes (Hazan et al., 2015; Yu et al., 2017) make the optimization less susceptible to vanishing and exploding gradients. The proposed spherical weight optimization serves a different purpose. It normalizes each weight vector and projects the gradient onto a unit sphere, such that the effective learning rate can be con- trolled more precisely. Moreover, it provides robustness to improper weight initialization, since the magnitude of each weight vector is kept constant.
For nonlinear activation functions (without batch normalization), such as sigmoid and tanh, an extra scaling factor is needed for each hidden unit to express functions that require unnormalized weight vectors. For instance, given an input vector x, the activation of hidden unit i is then given by

y_i = φ(γ_i w_i · x + b_i) ,   (15)

where γ_i is the scaling factor, and b_i is the bias. Consequently, normalizing weight vectors does not limit the expressiveness of models.
# 3.3 RELATION TO WEIGHT NORMALIZATION AND BATCH NORMALIZATION
A related normalization and reparameterization scheme, weight normalization (Salimans & Kingma, 2016), has been developed as an alternative to batch normalization, aiming to accelerate the convergence of SGD optimization. We note the difference between spherical weight optimization and weight normalization. First, the weight vector of each hidden unit is not directly normalized in weight normalization, i.e., ||w_i||_2 ≠ 1 in general. At training time, the activation of hidden unit i is

y_i = φ( (γ_i / ||w_i||_2) w_i · x + b_i ) ,   (16)

which is equivalent to Eq. (15) for the forward pass. For the backward pass, the effective learning rate still depends on ||w_i||_2 in weight normalization, hence it does not solve the ill-conditioning problem. At inference time, both of these two schemes can merge w_i and γ_i into a single equivalent weight vector, w'_i = γ_i w_i, or w'_i = (γ_i / ||w_i||_2) w_i, respectively.
While spherical weight optimization naturally encompasses weight normalization, it can further benefit from batch normalization. When combined with batch normalization, Eq. (15) evolves into

y_i = φ(γ_i BN(w_i · x) + b_i) ,   (17)

where BN(·) represents the transformation done by batch normalization without scaling and shifting. Here, γ_i serves as the scaling factor for both the normalized weight vector and batch normalization.
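As an illustrative sketch (our own, with batch statistics computed over a single mini-batch and φ taken to be the sigmoid), the forward pass of Eq. (17) for one hidden unit can be written as:

```python
import numpy as np

def hidden_unit_forward(X, w, gamma, b, eps=1e-5):
    """y = phi(gamma * BN(w . x) + b) with a unit-norm weight vector, as in Eq. (17)."""
    w = w / np.linalg.norm(w)                        # spherical weight: only its direction matters
    z = X @ w                                        # pre-activations for a mini-batch X of shape (B, n)
    z = (z - z.mean()) / np.sqrt(z.var() + eps)      # BN(.) without scaling and shifting
    return 1.0 / (1.0 + np.exp(-(gamma * z + b)))    # phi chosen as sigmoid for this sketch

y = hidden_unit_forward(np.random.randn(32, 64), np.random.randn(64), gamma=1.0, b=0.0)
```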
# 4 REGULARIZED SOFTMAX
For multi-class classiï¬cation tasks, the softmax function is the de facto activation function for the output layer. Despite its simplicity and intuitive probabilistic interpretation, we observe a related problem to the ill-conditioning problem we have addressed. Similar to how different magnitudes of weight vectors result in different updates to the same network function, the learning signal back- propagated from the softmax layer varies with the overall magnitude of the logits.
Specifically, when using cross entropy as the surrogate loss with one-hot target vectors, the prediction is considered correct as long as arg max_{c∈C}(z_c) is the target class, where z_c is the logit before the softmax activation, corresponding to category c ∈ C. Thus, the logits can be positively scaled together without changing the predictions, whereas the cross entropy and its derivatives will vary with the scaling factor. Concretely, denoting the scaling factor by η, the gradients w.r.t. the logits are

∂L/∂z_ĉ = η exp(η z_ĉ) / Σ_{c∈C} exp(η z_c) − η ,   and   ∂L/∂z_c̄ = η exp(η z_c̄) / Σ_{c∈C} exp(η z_c) ,   (18)

where ĉ is the target class, and c̄ ∈ C \ {ĉ}.
For Adam and ND-Adam, since the gradient w.r.t. each scalar or vector are normalized, the absolute magnitudes of Eq. (18) are irrelevant. Instead, the relative magnitudes make a difference here. When
η is small, we have

lim_{η→0} |∂L/∂z_c̄| / |∂L/∂z_ĉ| = 1 / (|C| − 1) ,   (19)
which indicates that, when the magnitude of the logits is small, softmax encourages the logit of the target class to increase, while equally penalizing that of the other classes, regardless of the difference in Ëz . However, it is more reasonable to penalize more the logits that are Ëz } closer to Ëz, which are more likely to cause misclassiï¬cation.
On the other end of the spectrum, assuming no two logits are the same, we have

lim_{η→∞} |∂L/∂z_{c'}| / |∂L/∂z_ĉ| = 1 ,   and   lim_{η→∞} |∂L/∂z_{c''}| / |∂L/∂z_ĉ| = 0 ,   (20)

where c' = arg max_{c∈C\{ĉ}}(z_c), and c'' ∈ C \ {ĉ, c'}. Eq. (20) indicates that, when the magnitude of the logits is large, softmax penalizes only the largest logit of the non-target classes. In this case, although the logit that is most likely to cause misclassification is strongly penalized, the logits of the other non-target classes are ignored. As a result, the logits of the non-target classes tend to be similar at convergence, ignoring the fact that some classes are closer to each other than others. The latter case is related to the saturation problem of softmax discussed in the literature (Oland et al., 2017), where the focus is on the problem of small absolute gradient magnitude, which nevertheless does not affect Adam and ND-Adam.
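The two limits in Eq. (19) and (20) are easy to verify numerically. In the snippet below, the logits are arbitrary values chosen purely for illustration, and the gradients are computed up to the overall factor η in Eq. (18), which cancels in the ratios.

```python
import numpy as np

def logit_grads(z, target):
    """dL/dz for cross entropy with a one-hot target (up to the common factor eta in Eq. (18))."""
    p = np.exp(z - z.max()); p /= p.sum()
    g = p.copy(); g[target] -= 1.0
    return g

z = np.array([2.0, 1.0, 0.5, -1.0])           # hypothetical logits; class 0 is the target
for eta in (0.01, 1.0, 100.0):                # small, moderate, and large logit magnitudes
    g = logit_grads(eta * z, target=0)
    print(eta, np.round(np.abs(g[1:]) / np.abs(g[0]), 3))  # non-target gradients relative to the target's
```

For small η the printed ratios approach 1/(|C| − 1), and for large η only the largest non-target logit retains a ratio close to 1, matching Eq. (19) and (20).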
We propose two methods to exploit the prior knowledge that the magnitude of the logits should not be too small or too large. First, we can apply batch normalization to the logits. But instead of setting the γ_c's as trainable variables, we consider them as a single hyperparameter, γ_C, such that γ_c = γ_C for all c ∈ C. Tuning the value of γ_C can lead to a better trade-off between the two extremes described by Eq. (19) and (20). We observe in practice that the optimal value of γ_C tends to be the same for different optimizers or different network widths, but varies with network depth. We refer to this method as batch-normalized softmax (BN-Softmax).
Alternatively, since the magnitude of the logits tends to grow larger than expected (in order to mini- mize the cross entropy), we can apply L2-regularization to the logits by adding the following penalty to the loss function:
L_C = (λ_C / 2) Σ_{c∈C} z_c^2 ,   (21)
where λC is a hyperparameter to be tuned. Different from BN-Softmax, λC can also be shared by different networks of different depths.
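A minimal sketch (our own, in NumPy) of adding the penalty of Eq. (21) to the cross-entropy loss of a single example:

```python
import numpy as np

def loss_with_logit_penalty(z, target, lambda_c=0.001):
    """Cross entropy plus the L2 penalty on the logits from Eq. (21)."""
    z_shifted = z - z.max()
    log_probs = z_shifted - np.log(np.sum(np.exp(z_shifted)))
    cross_entropy = -log_probs[target]
    penalty = 0.5 * lambda_c * np.sum(z ** 2)   # (lambda_C / 2) * sum over classes of z_c^2
    return cross_entropy + penalty

print(loss_with_logit_penalty(np.array([3.0, 1.0, -2.0]), target=0))
```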
# 5 EXPERIMENTS
In this section, we provide empirical evidence for the analysis in Sec. 2.2, and evaluate the perfor- mance of ND-Adam and regularized softmax on CIFAR-10 and CIFAR-100.
# 5.1 THE EFFECT OF L2 WEIGHT DECAY
To empirically examine the effect of L2 weight decay, we train a wide residual network (WRN) (Zagoruyko & Komodakis, 2016b) of 22 layers, with a width of 7.5 times that of a vanilla ResNet. Using the notation suggested by Zagoruyko & Komodakis (2016b), we refer to this network as WRN-22-7.5. We train the network on the CIFAR-10 dataset (Krizhevsky & Hinton, 2009), with a small modiï¬cation to the original WRN architecture, and with a different learning rate anneal- ing schedule. Speciï¬cally, for simplicity and slightly better performance, we replace the last fully connected layer with a convolutional layer with 10 output feature maps. i.e., we change the layers after the last residual block from BN-ReLU-GlobalAvgPool-FC-Softmax to BN-ReLU-Conv-GlobalAvgPool-Softmax. In addition, for clearer comparisons, the learn- ing rate is annealed according to a cosine function without restart (Loshchilov & Hutter, 2017b; Gastaldi, 2017). We train the model for 80k iterations with a batch size of 128, similar to the set- tings used by Zagoruyko & Komodakis (Zagoruyko & Komodakis, 2016b). The experiments are based on a TensorFlow implementation of WRN (Wu, 2016).
As a common practice, we use SGD with a momentum of 0.9, the analysis for which is similar to that in Sec. 2.2. Due to the linearity of derivatives and momentum, Δw_i can be decomposed as Δw_i = Δw^1_i + Δw^2_i, where Δw^1_i and Δw^2_i are the components corresponding to the original loss function, L(·), and the L2 penalty term (see Eq. (4)), respectively. Fig. 1a shows the ratio between the scalar projection of Δw^1_i on Δw^2_i and ||Δw^2_i||_2, which indicates how the tendency of Δw^1_i to increase ||w_i||_2 is compensated by Δw^2_i. Note that Δw^2_i points in the negative direction of w_i, even when momentum is used, since the direction change of w_i is slow. As shown in Fig. 1a, at the beginning of the training, Δw^2_i dominates and quickly adjusts ||w_i||_2 to its equilibrium value. During the middle stage of the training, the projection of Δw^1_i on Δw^2_i, and Δw^2_i almost cancel each other. Then, towards the end of the training, the gradient of w_i diminishes rapidly, making Δw^2_i dominant again. Therefore, Eq. (7) holds more accurately during the middle stage of the training.
In Fig. 1b, we show how the effective learning rate varies in different hyperparameter settings. By Eq. (7), ||Δw_i||_2 / ||w_i||_2 is expected to remain the same as long as αλ stays constant, which is confirmed by the fact that the curve for α_0 = 0.1, λ = 0.001 overlaps with that for α_0 = 0.05, λ = 0.002. However, comparing the curve for α_0 = 0.1, λ = 0.001 with that for α_0 = 0.1, λ = 0.0005, we can see that the value of ||Δw_i||_2 / ||w_i||_2 does not change proportionally to αλ. On the other hand, by using ND-Adam, we can control the value of ||Δw_i||_2 / ||w_i||_2 more precisely by adjusting the learning rate for weight vectors, α^v. For the same training step, changes in α^v lead to approximately proportional changes in ||Δw_i||_2 / ||w_i||_2, as shown by the two curves corresponding to ND-Adam in Fig. 1b.
(a) Scalar projection of Δw^1_i on Δw^2_i, normalized by ||Δw^2_i||_2. (b) Relative magnitudes of weight updates, or effective learning rates.
Figure 1: An illustration of how L2 weight decay and ND-Adam control the effective learning rate. The results are obtained from the 5th layer of the network, and other layers show similar results.
# 5.2 PERFORMANCE EVALUATION
To compare the generalization performance of SGD, Adam, and ND-Adam, we train the same WRN- 22-7.5 network on the CIFAR-10 and CIFAR-100 datasets. For SGD and ND-Adam, we ï¬rst tune the hyperparameters for SGD (α0 = 0.1, λ = 0.001, momentum 0.9), then tune the initial learning rate of ND-Adam for weight vectors to match the effective learning rate to that of SGD, i.e., αv 0 = 0.05, as shown in Fig. 1b. While L2 weight decay can greatly affect the performance of SGD, it does not noticeably beneï¬t Adam in our experiments. For Adam and ND-Adam, β1 and β2 are set to the default values of Adam, i.e., β1 = 0.9, β2 = 0.999. Although the learning rate of Adam is usually set to a constant value, we observe better performance with the cosine learning rate schedule. The initial learning rate of Adam (α0), and that of ND-Adam for scalar parameters (αs 0) are both tuned to 0.001. We use horizontal ï¬ips and random crops for data augmentation, and no dropout is used.
We ï¬rst experiment with the use of trainable scaling parameters (γi) of batch normalization. As shown in Fig. 2, at convergence, the test accuracies of ND-Adam are signiï¬cantly improved upon that of vanilla Adam, and matches that of SGD. Note that at the early stage of training, the test accu- racies of Adam increase more rapidly than that of ND-Adam and SGD. However, the test accuracies remain at a high level afterwards, which indicates that Adam tends to quickly ï¬nd and get stuck in bad local minima that do not generalize well.
The average results of 3 runs are summarized in the ï¬rst part of Table 1. Interestingly, compared to SGD, ND-Adam shows slightly better performance on CIFAR-10, but worse performance on CIFAR-100. This inconsistency may be related to the problem of softmax discussed in Sec. 4, that there is a lack of proper control over the magnitude of the logits. But overall, given comparable ef- fective learning rates, ND-Adam and SGD show similar generalization performance. In this sense, the effective learning rate is a more natural learning rate measure than the learning rate hyperparam- eter.
Figure 2: Test accuracies of the same network trained with SGD, Adam, and ND-Adam. Details are shown in the first part of Table 1.

Figure 3: Magnitudes of softmax logits in different settings. Results of WRN-22-7.5 networks trained on CIFAR-10.
Next, we repeat the experiments with the use of BN-Softmax. As discussed in Sec. 3.2, γiâs can be removed from a linear rectiï¬er network, without changing the overall network function. Although this property does not strictly hold for residual networks due to the skip connections, we observe that when BN-Softmax is used, simply removing the scaling factors results in slightly better performance for all three algorithms. Thus, we only report results for this setting. The scaling factor of the logits, γC, is set to 2.5 for CIFAR-10, and 1 for CIFAR-100.
As shown in the second part of Table 1, while we obtain the best generalization performance with ND-Adam, the improvement is most prominent for Adam, and is relatively small for SGD. This discrepancy can be explained by comparing the magnitudes of softmax logits without regularization. As shown in Fig. 3, the magnitude of logits corresponding to Adam is much larger than that of ND- Adam and SGD, and therefore beneï¬ts more from the regularization.
Table 1: Test error rates of WRN-22-7.5 networks on CIFAR-10 and CIFAR-100. Based on a TensorFlow implementation of WRN.

| Method | CIFAR-10 Error (%) | CIFAR-100 Error (%) |
|---|---|---|
| BN w/ scaling factors | | |
| SGD | 4.61 | 20.60 |
| Adam | 6.14 | 25.51 |
| ND-Adam | 4.53 | 21.45 |
| BN w/o scaling factors, BN-Softmax | | |
| SGD | 4.49 | 20.18 |
| Adam | 5.43 | 22.48 |
| ND-Adam | 4.14 | 19.90 |

Table 2: Test error rates of WRN-22-7.5 and WRN-28-10 networks on CIFAR-10 and CIFAR-100. Based on the original implementation of WRN.

| Method | CIFAR-10 Error (%) | CIFAR-100 Error (%) |
|---|---|---|
| WRN-22-7.5 | | |
| SGD | 3.84 | 19.24 |
| ND-Adam | 3.70 | 19.30 |
| WRN-28-10 | | |
| SGD | 3.80 | 18.48 |
| ND-Adam | 3.70 | 18.42 |
While the TensorFlow implementation we use already provides an adequate test bed, we notice that it is different from the original implementation of WRN in several aspects. For instance, they use different nonlinearities (leaky ReLU vs. ReLU), and use different skip connections for down- sampling (average pooling vs. strided convolution). A subtle yet important difference is that, L2-
regularization is applied not only to weight vectors, but also to the scales and biases of batch normal- ization in the original implementation, which leads to better generalization performance. For further comparison between SGD and ND-Adam, we reimplement ND-Adam and test its performance on a PyTorch version of the original implementation (Zagoruyko & Komodakis, 2016a).
Due to the aforementioned differences, we use a slightly different hyperparameter setting in this experiment. Specifically, for SGD λ is set to 5e-4, while for ND-Adam λ is set to 5e-6 (L2-regularization for biases), and both α^v_0 and α^s_0 are set to 0.04. In this case, regularizing softmax does not yield improved performance for SGD, since the L2-regularization applied to the γ_i's and the last layer weights can serve a similar purpose. Thus, we only apply L2-regularized softmax for ND-Adam with λ_C = 0.001. The average results of 3 runs are summarized in Table 2. Note that the performance of SGD for WRN-28-10 is slightly better than that reported with the original implementation (i.e., 4.00 and 19.25), due to the modifications described in Sec. 5.1. In this experiment, SGD and ND-Adam show almost identical generalization performance.
# 6 CONCLUSION
We introduced ND-Adam, a tailored version of Adam for training DNNs, to bridge the general- ization gap between Adam and SGD. ND-Adam is designed to preserve the direction of gradient for each weight vector, and produce the regularization effect of L2 weight decay in a more precise and principled way. We further introduced regularized softmax, which limits the magnitude of soft- max logits to provide better learning signals. Combining ND-Adam and regularized softmax, we show through experiments signiï¬cantly improved generalization performance, eliminating the gap between Adam and SGD. From a high-level view, our analysis and empirical results suggest the need for more precise control over the training process of DNNs.
# REFERENCES
Devansh Arpit, StanisÅaw JastrzËebski, Nicolas Ballas, David Krueger, Emmanuel Bengio, Maxin- der S Kanwal, Tegan Maharaj, Asja Fischer, Aaron Courville, Yoshua Bengio, et al. A closer look at memorization in deep networks. In International Conference on Machine Learning, 2017.
Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, and Daan Wierstra. Weight uncertainty in neural networks. In International Conference on Machine Learning.
Lawrence Cayton. Algorithms for manifold learning. Univ. of California at San Diego Tech. Rep, pp. 1â17, 2005.
John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121â2159, 2011.
Xavier Gastaldi. Shake-shake regularization of 3-branch residual networks. In Workshop of Inter- national Conference on Learning Representations, 2017.
Xavier Glorot, Antoine Bordes, and Yoshua Bengio. Deep sparse rectiï¬er neural networks. International Conference on Artiï¬cial Intelligence and Statistics, pp. 315â323, 2011. In
Elad Hazan, Kï¬r Levy, and Shai Shalev-Shwartz. Beyond convexity: Stochastic quasi-convex opti- mization. In Advances in Neural Information Processing Systems, pp. 1594â1602, 2015.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recog- nition. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 770â778, 2016.
Sepp Hochreiter and Jürgen Schmidhuber. Flat minima. Neural Computation, 9(1):1â42, 1997.
Jie Hu, Li Shen, and Gang Sun. Squeeze-and-excitation networks. In IEEE Conference on Computer Vision and Pattern Recognition, 2018.
Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning, pp. 448â456, 2015.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations, 2015.
Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. Tech- nical report, University of Toronto, 2009.
Ilya Loshchilov and Frank Hutter. Fixing weight decay regularization in adam. arXiv preprint arXiv:1711.05101, 2017a.
Ilya Loshchilov and Frank Hutter. Sgdr: stochastic gradient descent with restarts. In International Conference on Learning Representations, 2017b.
Hariharan Narayanan and Sanjoy Mitter. Sample complexity of testing the manifold hypothesis. In Advances in Neural Information Processing Systems, pp. 1786â1794, 2010.
Behnam Neyshabur, Ruslan R Salakhutdinov, and Nati Srebro. Path-sgd: Path-normalized opti- mization in deep neural networks. In Advances in Neural Information Processing Systems, pp. 2422â2430, 2015.
Anders Oland, Aayush Bansal, Roger B Dannenberg, and Bhiksha Raj. Be careful what you arXiv preprint backpropagate: A case for linear output activations & gradient boosting. arXiv:1707.04199, 2017.
Salah Rifai, Yann N Dauphin, Pascal Vincent, Yoshua Bengio, and Xavier Muller. The manifold tangent classiï¬er. In Advances in Neural Information Processing Systems, pp. 2294â2302, 2011.
Tim Salimans and Diederik P Kingma. Weight normalization: A simple reparameterization to accel- erate training of deep neural networks. In Advances in Neural Information Processing Systems, pp. 901â909, 2016.
Samuel L Smith and Quoc V Le. A bayesian perspective on generalization and stochastic gradient descent. In International Conference on Learning Representations, 2018.
Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Du- mitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 1â9, 2015.
Tijmen Tieleman and Geoffrey Hinton. Lecture 6.5âRmsProp: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 2012.
Ashia C Wilson, Rebecca Roelofs, Mitchell Stern, Nathan Srebro, and Benjamin Recht. The In Advances in Neural In- marginal value of adaptive gradient methods in machine learning. formation Processing Systems, 2017.
Neal Wu. A tensorï¬ow implementation of wide residual networks, 2016. URL https:// github.com/tensorflow/models/tree/master/research/resnet.
Adams Wei Yu, Qihang Lin, Ruslan Salakhutdinov, and Jaime Carbonell. Normalized gradient with adaptive stepsize method for deep neural network training. arXiv preprint arXiv:1707.04822, 2017.
Sergey Zagoruyko and Nikos Komodakis. A pytorch implementation of wide residual networks, 2016a. URL https://github.com/szagoruyko/wide-residual-networks.
Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. arXiv preprint arXiv:1605.07146, 2016b.
Matthew D Zeiler. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701, 2012.
Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning requires rethinking generalization. In International Conference on Learning Rep- resentations, 2017. | {
"id": "1711.05101"
} |
1709.02755 | Simple Recurrent Units for Highly Parallelizable Recurrence | Common recurrent neural architectures scale poorly due to the intrinsic
difficulty in parallelizing their state computations. In this work, we propose
the Simple Recurrent Unit (SRU), a light recurrent unit that balances model
capacity and scalability. SRU is designed to provide expressive recurrence,
enable highly parallelized implementation, and comes with careful
initialization to facilitate training of deep models. We demonstrate the
effectiveness of SRU on multiple NLP tasks. SRU achieves 5--9x speed-up over
cuDNN-optimized LSTM on classification and question answering datasets, and
delivers stronger results than LSTM and convolutional models. We also obtain an
average of 0.7 BLEU improvement over the Transformer model on translation by
incorporating SRU into the architecture. | http://arxiv.org/pdf/1709.02755 | Tao Lei, Yu Zhang, Sida I. Wang, Hui Dai, Yoav Artzi | cs.CL, cs.NE | EMNLP | null | cs.CL | 20170908 | 20180907 | 8 1 0 2
# Simple Recurrent Units for Highly Parallelizable Recurrence
# Tao Lei1 1ASAPP Inc.
# Yu Zhang2
# 2Google Brain
# Sida I. Wang1,3
# Hui Dai1
# 3Princeton University
# Yoav Artzi1,4 4Cornell University
1{tao, hd}@asapp.com 3sidaw@cs.princeton.edu
2ngyuzh@google.com 4yoav@cs.cornell.edu
# Abstract
Common recurrent neural architectures scale poorly due to the intrinsic difï¬culty in par- allelizing their state computations. In this work, we propose the Simple Recurrent Unit (SRU), a light recurrent unit that balances model capacity and scalability. SRU is de- signed to provide expressive recurrence, en- able highly parallelized implementation, and comes with careful initialization to facili- tate training of deep models. We demon- strate the effectiveness of SRU on multiple SRU achieves 5â9x speed-up NLP tasks. over cuDNN-optimized LSTM on classiï¬ca- tion and question answering datasets, and de- livers stronger results than LSTM and convo- lutional models. We also obtain an average of 0.7 BLEU improvement over the Transformer model (Vaswani et al., 2017) on translation by incorporating SRU into the architecture.1
# Introduction
Recurrent neural networks (RNN) are at the core of state-of-the-art approaches for a large num- ber of natural language tasks, including machine translation (Cho et al., 2014; Bahdanau et al., 2015; Jean et al., 2015; Luong et al., 2015), lan- guage modeling (Zaremba et al., 2014; Gal and Ghahramani, 2016; Zoph and Le, 2016), opin- ion mining (Irsoy and Cardie, 2014), and situated language understanding (Mei et al., 2016; Misra et al., 2017; Suhr et al., 2018; Suhr and Artzi, 2018). Key to many of these advancements are architectures of increased capacity and computa- tion. For instance, the top-performing models for semantic role labeling and translation use eight re- current layers, requiring days to train (He et al., 2017; Wu et al., 2016b). The scalability of these models has become an important problem that im- pedes NLP research.
1Our code is available at https://github.com/ taolei87/sru.
The difï¬culty of scaling recurrent networks arises from the time dependence of state com- putation. In common architectures, such as Long Short-term Memory (LSTM; Hochreiter and Schmidhuber, 1997) and Gated Recurrent Units (GRU; Cho et al., 2014), the computation of each step is suspended until the complete ex- ecution of the previous step. This sequential de- pendency makes recurrent networks signiï¬cantly slower than other operations, and limits their ap- plicability. For example, recent translation mod- els consist of non-recurrent components only, such as attention and convolution, to scale model train- ing (Gehring et al., 2017; Vaswani et al., 2017).
In this work, we introduce the Simple Recurrent Unit (SRU), a unit with light recurrence that offers both high parallelization and sequence modeling capacity. The design of SRU is inspired by pre- vious efforts, such as Quasi-RNN (QRNN; Brad- bury et al., 2017) and Kernel NN (KNN; Lei et al., 2017), but enjoys additional beneï¬ts:
⢠SRU exhibits the same level of parallelism as convolution and feed-forward nets. This is achieved by balancing sequential dependence and independence: while the state compu- tation of SRU is time-dependent, each state dimension is independent. This simpliï¬ca- tion enables CUDA-level optimizations that parallelize the computation across hidden di- mensions and time steps, effectively using the full capacity of modern GPUs. Figure 1 com- pares our architectureâs runtimes to common architectures.
⢠SRU replaces the use of convolutions (i.e., n- gram ï¬lters), as in QRNN and KNN, with more recurrent connections. This retains modeling capacity, while using less compu- tation (and hyper-parameters).
Figure 1: Average processing time in milliseconds of a batch of 32 samples using cuDNN LSTM, word- level convolution conv2d (with ï¬lter width k = 2 and k = 3), and the proposed SRU. We vary the number of tokens per sequence (l) and feature dimension (d).
⢠SRU improves the training of deep recur- rent models by employing highway connec- tions (Srivastava et al., 2015) and a parame- ter initialization scheme tailored for gradient propagation in deep architectures.
We evaluate SRU on a broad set of problems, including text classiï¬cation, question answering, translation and character-level language model- ing. Our experiments demonstrate that light re- currence is sufï¬cient for various natural language tasks, offering a good trade-off between scala- bility and representational power. On classiï¬ca- tion and question answering datasets, SRU out- performs common recurrent and non-recurrent ar- chitectures, while achieving 5â9x speed-up com- pared to cuDNN LSTM. Stacking additional lay- ers further improves performance, while incurring relatively small costs owing to the cheap compu- tation of a single layer. We also obtain an average improvement of 0.7 BLEU score on the English to German translation task by incorporating SRU into Transformer (Vaswani et al., 2017).
# 2 Related Work
Improving on common architectures for sequence processing has recently received signiï¬cant atten- tion (Greff et al., 2017; Balduzzi and Ghifary, 2016; Miao et al., 2016; Zoph and Le, 2016; Lee et al., 2017). One area of research involves incor- porating word-level convolutions (i.e. n-gram ï¬l- ters) into recurrent computation (Lei et al., 2015; Bradbury et al., 2017; Lei et al., 2017). For ex- ample, Quasi-RNN (Bradbury et al., 2017) pro- poses to alternate convolutions and a minimal- ist recurrent pooling function and achieves sig- niï¬cant speed-up over LSTM. While Bradbury et al. (2017) focus on the speed advantages of the network, Lei et al. (2017) study the theoret-
ical characteristics of such computation and pos- sible extensions. Their results suggest that sim- pliï¬ed recurrence retains strong modeling capac- ity through layer stacking. This ï¬nding motivates the design of SRU for both high parallelization and representational power. SRU also relates to IRNN (Le et al., 2015), which uses an identity di- agonal matrix to initialize hidden-to-hidden con- nections. SRU uses point-wise multiplication for hidden connections, which is equivalent to using a diagonal weight matrix. This can be seen as a constrained version of diagonal initialization.
Various strategies have been proposed to scale network training (Goyal et al., 2017) and to speed up recurrent networks (Diamos et al., 2016; Shazeer et al., 2017; Kuchaiev and Ginsburg, 2017). For instance, Diamos et al. (2016) utilize hardware infrastructures by stashing RNN param- eters on cache (or fast memory). Shazeer et al. (2017) and Kuchaiev and Ginsburg (2017) im- prove the computation via conditional computing and matrix factorization respectively. Our imple- mentation for SRU is inspired by the cuDNN- optimized LSTM (Appleyard et al., 2016), but en- ables more parallelism â while cuDNN LSTM re- quires six optimization steps, SRU achieves more signiï¬cant speed-up via two optimizations.
The design of recurrent networks, such as SRU and related architectures, raises questions about representational power and interpretability (Chen et al., 2018; Peng et al., 2018). Balduzzi and Ghi- fary (2016) applies type-preserving transforma- tions to discuss the capacity of various simpliï¬ed RNN architectures. Recent work (Anselmi et al., 2015; Daniely et al., 2016; Zhang et al., 2016; Lei et al., 2017) relates the capacity of neural networks to deep kernels. We empirically demonstrate SRU can achieve compelling results by stacking multi- ple layers.
# 3 Simple Recurrent Unit
We present and explain the design of Simple Re- current Unit (SRU) in this section. A single layer of SRU involves the following computation:
f_t = σ(W_f x_t + v_f ⊙ c_{t−1} + b_f)   (1)

c_t = f_t ⊙ c_{t−1} + (1 − f_t) ⊙ (W x_t)   (2)

r_t = σ(W_r x_t + v_r ⊙ c_{t−1} + b_r)   (3)

h_t = r_t ⊙ c_t + (1 − r_t) ⊙ x_t   (4)
where W, W_f and W_r are parameter matrices and v_f, v_r, b_f and b_r are parameter vectors to be learnt during training. The complete architecture decomposes into two sub-components: a light recurrence (Equation 1 and 2) and a highway network (Equation 3 and 4).
The light recurrence component successively reads the input vectors xt and computes the se- quence of states ct capturing sequential informa- tion. The computation resembles other recurrent networks such as LSTM, GRU and RAN (Lee et al., 2017). Speciï¬cally, a forget gate ft controls the information ï¬ow (Equation 1) and the state vector ct is determined by adaptively averaging the previous state ctâ1 and the current observation Wxt according to ft (Equation 2).
One key design decision that differs from previous gated recurrent architectures is the way c_{t−1} is used in the sigmoid gate. Typically, c_{t−1} is multiplied with a parameter matrix to compute f_t, e.g., f_t = σ(W_f x_t + V_f c_{t−1} + b_f). However, the inclusion of V_f c_{t−1} makes it difficult to parallelize the state computation: each dimension of c_t and f_t depends on all entries of c_{t−1}, and the computation has to wait until c_{t−1} is fully computed. To facilitate parallelization, our light recurrence component uses a point-wise multiplication v_f ⊙ c_{t−1} instead. With this simplification, each dimension of the state vectors becomes independent and hence parallelizable.
The highway network component (Srivastava et al., 2015) facilitates gradient-based training of deep networks. It uses the reset gate r_t (Equation 3) to adaptively combine the input x_t and the state c_t produced from the light recurrence (Equation 4), where (1 − r_t) ⊙ x_t is a skip connection that allows the gradient to directly propagate to the previous layer. Such connections have been shown to improve scalability (Wu et al., 2016a; Kim et al., 2016; He et al., 2016; Zilly et al., 2017).
The combination of the two components makes the overall architecture simple yet expressive, and easy to scale due to enhanced parallelization and gradient propagation.
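A step-by-step NumPy transcription of Equations (1)-(4) for a single SRU layer is sketched below, following the equations as written. The naming is our own, and this naive per-step loop is only for illustration; the batched and fused implementation described in Section 3.1 is what makes SRU fast in practice.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sru_layer(X, W, Wf, Wr, vf, vr, bf, br, c0=None):
    """X: (L, d) input sequence; W, Wf, Wr: (d, d); vf, vr, bf, br: (d,). Returns h and c, each (L, d)."""
    L, d = X.shape
    c = np.zeros(d) if c0 is None else c0
    H, C = [], []
    for t in range(L):
        x = X[t]
        f = sigmoid(Wf @ x + vf * c + bf)     # forget gate, Eq. (1), uses c_{t-1}
        r = sigmoid(Wr @ x + vr * c + br)     # reset gate, Eq. (3), uses c_{t-1}
        c = f * c + (1 - f) * (W @ x)         # light recurrence, Eq. (2)
        h = r * c + (1 - r) * x               # highway connection, Eq. (4)
        C.append(c); H.append(h)
    return np.stack(H), np.stack(C)

d = 8
params = [0.1 * np.random.randn(d, d) for _ in range(3)] + [np.zeros(d) for _ in range(4)]
h, c = sru_layer(np.random.randn(5, d), *params)
```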
# 3.1 Parallelized Implementation
Despite the parallelization friendly design of SRU, a naive implementation which computes equations (1)â(4) for each step t sequentially would not achieve SRUâs full potential. We employ two op- timizations to enhance parallelism. The optimiza- tions are performed in the context of GPU / CUDA programming, but the general idea can be applied to other parallel programming models.
We re-organize the computation of equations (1)-(4) into two major steps. First, given the input sequence {x_1 · · · x_L}, we batch the matrix multiplications across all time steps. This significantly improves the computation intensity (e.g. GPU utilization). The batched multiplication is:

U^⊤ = [ W ; W_f ; W_r ] [x_1, x_2, · · · , x_L] ,

where L is the sequence length, U ∈ R^{L×3d} is the computed matrix and d is the hidden state size. When the input is a mini-batch of B sequences, U would be a tensor of size (L, B, 3d).
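As a sketch of this first step (our own simplification, with the three weight matrices stacked side by side), the batched projection can be expressed as a single large matrix product:

```python
import numpy as np

L, B, d = 128, 32, 256
X = np.random.randn(L, B, d)            # a mini-batch of input sequences
W_all = np.random.randn(d, 3 * d)       # W, W_f, W_r stacked side by side
U = (X.reshape(L * B, d) @ W_all).reshape(L, B, 3 * d)  # one matmul over all time steps and examples
print(U.shape)                          # (L, B, 3d), consumed by the element-wise recurrence
```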
The second step computes the remaining point- wise operations. Speciï¬cally, we compile all point-wise operations into a single fused CUDA kernel and parallelize the computation across each dimension of the hidden state. Algorithm 1 shows the pseudo code of the forward function. The com- plexity of this step is O(L · B · d) per layer, where L is the sequence length and B is the batch size. In contrast, the complexity of LSTM is O(L · B · d2) because of the hidden-to-hidden multiplications (e.g. Vhtâ1), and each dimension can not be in- dependently parallelized. The fused kernel also reduces overhead. Without it, operations such as sigmoid activation would each invoke a separate function call, adding kernel launching latency and more data moving costs.
The implementation of a bidirectional SRU is similar: the matrix multiplications of both direc- tions are batched, and the fused kernel handles and parallelizes both directions at the same time.
# 3.2 Initialization
Algorithm 1: Mini-batch version of the forward pass defined in Equations (1)-(4).

Indices: sequence length L, mini-batch size B, hidden state dimension d.
Input: input sequence batch x[l, i, j]; grouped matrix multiplication U[l, i, j']; initial state c_0[i, j]; parameters v_f[j], v_r[j], b_f[j] and b_r[j].
Output: output states h[·, ·, ·] and internal states c[·, ·, ·].

Initialize h[·, ·, ·] and c[·, ·, ·] as two L × B × d tensors.
for i = 1, · · · , B ;  j = 1, · · · , d do    // parallelize over each example i and dimension j
    c ← c_0[i, j]
    for l = 1, · · · , L do
        f ← σ( U[l, i, j + d] + v_f[j] × c + b_f[j] )
        c ← f × c + (1 − f) × U[l, i, j]
        r ← σ( U[l, i, j + d × 2] + v_r[j] × c + b_r[j] )
        h[l, i, j] ← r × c + (1 − r) × x[l, i, j]
        c[l, i, j] ← c

Proper parameter initialization can reduce gradient propagation difficulties and hence have a positive impact on the final performance. We now describe an initialization strategy tailored for SRU.
We start by adopting common initializations de- rived for feed-forward networks (Glorot and Ben- gio, 2010; He et al., 2015). The weights of param- eter matrices are drawn with zero mean and 1/d variance, for instance, via the uniform distribution [-\/3/d, +./3/d]. This ensures the output vari- ance remains approximately the same as the input variance after the matrix multiplication.
the light recurrence and highway computation would still reduce the variance of hidden representations by a factor of 1/3 to 1/2:
1/3 ≤ Var[h_t] / Var[x_t] ≤ 1/2 ,

and the factor converges to 1/2 in deeper layers (see Appendix A). This implies the output h_t and the gradient would vanish in deep models. To offset the problem, we introduce a scaling correction constant α in the highway connection

h_t = r_t ⊙ c_t + (1 − r_t) ⊙ x_t · α ,

where α is set to √3 such that Var[h_t] ≈ Var[x_t] at initialization. When the highway network is initialized with a non-zero bias b_r = b, the scaling constant α can be accordingly set as

α = √(1 + exp(b) × 2) .

Figure 2 compares the training progress with and without the scaling correction. See Appendix A for the derivation and more discussion.

Figure 2: Training curves of SRU on classification. The x-axis is the number of training steps and the y-axis is the training loss. Scaling correction improves the training progress, especially for deeper models with many stacked layers.
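A toy numerical check of the correction (our own setup, with the gates fixed at their rough expected value of 0.5 at initialization and a zero initial state) illustrates the effect of α on the output variance:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 10000, 256
X = rng.normal(size=(n, d))                                    # inputs with unit variance
W = rng.uniform(-np.sqrt(3.0 / d), np.sqrt(3.0 / d), (d, d))   # weights with variance 1/d
f = r = 0.5                                                    # sigmoid gates near 0.5 at init
c = (1 - f) * (X @ W)                                          # light recurrence with c_0 = 0
for alpha in (1.0, np.sqrt(3.0)):
    h = r * c + (1 - r) * X * alpha                            # highway with scaling correction
    print(alpha, np.var(h))                                    # closer to Var[x] = 1 with alpha = sqrt(3)
```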
# 4 Experiments

We evaluate SRU on several natural language processing tasks and perform additional analyses of the model. The set of tasks includes text classification, question answering, machine translation, and character-level language modeling. Training time on these benchmarks ranges from minutes (classification) to days (translation), providing a variety of computation challenges.
The main question we study is the performance- speed trade-off SRU provides in comparison to
Model Size CR SUBJ MR TREC MPQA SST Best reported results: Wang and Manning (2013) Kalchbrenner et al. (2014) Kim (2014) Zhang and Wallace (2017) Zhao et al. (2015) 82.1 - 85.0 84.7 86.3 93.6 - 93.4 93.7 95.5 79.1 - 81.5 81.7 83.1 - 93.0 93.6 91.6 92.4 86.3 - 89.6 89.6 93.3 - 86.8 88.1 85.5 - Our setup (default Adam, ï¬xed word embeddings): 360k CNN 352k LSTM QRNN (k=1) 165k QRNN (k=1) + highway 204k 83.1±1.6 82.7±1.9 83.5±1.9 84.0±1.9 92.7±0.9 92.6±0.8 93.4±0.6 93.4±0.8 78.9±1.3 79.8±1.3 82.0±1.0 82.1±1.2 93.2±0.8 93.4±0.9 92.5±0.5 93.2±0.6 89.2±0.8 89.4±0.7 90.2±0.7 89.6±1.2 85.1±0.6 88.1±0.8 88.2±0.4 88.9±0.2 SRU (2 layers) SRU (4 layers) SRU (8 layers) 204k 303k 502k 84.9±1.6 85.9±1.5 86.4±1.7 93.5±0.6 93.8±0.6 93.7±0.6 82.3±1.2 82.9±1.0 83.1±1.0 94.0±0.5 94.8±0.5 94.7±0.5 90.1±0.7 90.1±0.6 90.2±0.8 89.2±0.3 89.6±0.5 88.9±0.6 Time - - - - - 417 2409 345 371 320 510 879
Table 1: Test accuracies on classiï¬cation benchmarks (Section 4.1). The ï¬rst block presents best reported results of various methods. The second block compares SRU and other baselines given the same setup. For the SST dataset, we report average results of 5 runs. For other datasets, we perform 3 independent trials of 10-fold cross validation (3Ã10 runs). The last column compares the wall clock time (in seconds) to ï¬nish 100 epochs on the SST dataset.
other architectures. We stack multiple layers of SRU to directly substitute other recurrent, convo- lutional or feed-forward modules. We minimize hyper-parameter tuning and architecture engineer- ing for a fair comparison. Such efforts have a non- trivial impact on the results, which are beyond the scope of our experiments. Unless noted otherwise, the hyperparameters are set identical to prior work.
# 4.1 Text Classiï¬cation
Dataset We use six sentence classiï¬cation benchmarks: movie review sentiment (MR; Pang and Lee, 2005), sentence subjectivity (SUBJ; Pang and Lee, 2004), customer reviews polar- ity (CR; Hu and Liu, 2004), question type (TREC; Li and Roth, 2002), opinion polarity (MPQA; Wiebe et al., 2005), and the Stanford sentiment treebank (SST; Socher et al., 2013).2
Following Kim (2014), we use word2vec em- beddings trained on 100 billion Google News to- kens. For simplicity, all word vectors are normal- ized to unit vectors and are ï¬xed during training.
Setup We stack multiple SRU layers and use the last output state to predict the class label for a given sentence. We train for 100 epochs and use the validation (i.e., development) set to se- lect the best training epoch. We perform 10-fold
cross validation for datasets that do not have a standard train-evaluation split. The result on SST is averaged over ï¬ve independent trials. We use Adam (Kingma and Ba, 2014) with the default learning rate 0.001, a weight decay 0 and a hid- den dimension of 128.
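As a rough illustration of this setup, the sketch below stacks recurrent layers and feeds the last output state into a linear classifier trained with Adam at the default learning rate 0.001. It reuses the `MinimalSRULayer` sketched earlier rather than the authors' implementation, and the input projection from the fixed 300-dimensional word2vec vectors to the hidden size is our own assumption; data loading and the training loop are omitted.

```python
import torch
import torch.nn as nn

class SRUClassifier(nn.Module):
    # Stacked recurrent layers; the last output state predicts the class label.
    def __init__(self, emb_dim=300, hidden=128, num_layers=2, num_classes=2):
        super().__init__()
        self.proj = nn.Linear(emb_dim, hidden)   # assumed bridge from fixed word2vec vectors
        self.layers = nn.ModuleList(
            [MinimalSRULayer(hidden) for _ in range(num_layers)])
        self.drop = nn.Dropout(0.5)
        self.out = nn.Linear(hidden, num_classes)

    def forward(self, emb):                      # emb: (length, batch, emb_dim)
        h = self.proj(emb)
        for layer in self.layers:
            h, _ = layer(self.drop(h))
        return self.out(h[-1])                   # last output state -> class logits

model = SRUClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001, weight_decay=0)
```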
We compare SRU with a wide range of meth- ods on these datasets, including various convo- lutional models (Kalchbrenner et al., 2014; Kim, 2014; Zhang and Wallace, 2017) and a hierarchical sentence model (Zhao et al., 2015) reported as the state of the art on these datasets (Conneau et al., 2017). Their setups are not exactly the same as ours, and may involve more tuning on word em- beddings and other regularizations. We use the setup of Kim (2014) but do not ï¬ne-tune word embeddings and the learning method for simplic- ity. In addition, we directly compare against three baselines trained using our code base: a re- implementation of the CNN model of Kim (2014), a two-layer LSTM model and Quasi-RNN (Brad- bury et al., 2017). We use the ofï¬cial implemen- tation of Quasi-RNN and also implement a ver- sion with highway connection for a fair compar- ison. These baselines are trained using the same hyper-parameter conï¬guration as SRU.
2We use the binary version of SST dataset.
Results Table 1 compares the test results on the six benchmarks. We select the best number
Figure 3: Mean validation accuracies (y-axis) and standard deviations of the CNN, 2-layer LSTM and 2-layer SRU models. We plot the curves of the ï¬rst 100 epochs. X-axis is the training time used (in seconds). Timings are performed on NVIDIA GeForce GTX 1070 GPU, Intel Core i7-7700K Processor and cuDNN 7003.
reported in previous methods when multiple model variants were explored in their experiments. Despite our simple setup, SRU outperforms most previous methods and achieves comparable results compared to the state-of-the-art but more sophisticated model of Zhao et al. (2015). Figure 3 shows validation performance relative to training time for SRU, cuDNN LSTM and the CNN model. Our SRU implementation runs 5–9 times faster than cuDNN LSTM, and 6–40% faster than the CNN model of Kim (2014). On the movie review (MR) dataset for instance, SRU completes 100 training epochs within 40 seconds, while LSTM takes over 320 seconds.
We use the open source implementation of Doc- ument Reader in our experiments.4 We train mod- els for up to 100 epochs, with a batch size of 32 and a hidden dimension of 128. Following the author suggestions, we use the Adamax op- timizer (Kingma and Ba, 2014) and variational dropout (Gal and Ghahramani, 2016) during train- ing. We compare with two alternative recurrent components: the bidirectional LSTM adopted in the original implementation of Chen et al. (2017) and Quasi-RNN with highway connections for im- proved performance.
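Variational dropout (Gal and Ghahramani, 2016) reuses one dropout mask across all time steps of a sequence instead of resampling it at every step. The snippet below is a generic sketch of that idea written for this document; it is not taken from the Document Reader code base, and the shape convention (length, batch, dim) is an assumption.

```python
import torch

def variational_dropout(x, p=0.2, training=True):
    """Apply the same dropout mask at every time step.
    x: tensor of shape (length, batch, dim)."""
    if not training or p == 0.0:
        return x
    # One Bernoulli mask per (batch, dim), shared over the length dimension.
    mask = x.new_empty(1, x.size(1), x.size(2)).bernoulli_(1 - p) / (1 - p)
    return x * mask
```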
# 4.2 Question Answering
Dataset We use the Stanford Question Answer- ing Dataset (SQuAD; Rajpurkar et al., 2016). SQuAD is a large machine comprehension dataset that includes over 100K question-answer pairs ex- tracted from Wikipedia articles. We use the stan- dard train and development sets.
Setup We use the Document Reader model of Chen et al. (2017) as our base architecture for this task. The model is a combination of word- level bidirectional RNNs and attentions, providing a good testbed to compare our bidirectional SRU implementation with other RNN components.3
3The current state-of-the-art models (Seo et al., 2016; Wang et al., 2017) make use of additional components such
Results Table 2 summarizes the results on SQuAD. SRU achieves 71.4% exact match and 80.2% F1 score, outperforming the bidirectional LSTM model by 1.9% (EM) and 1.4% (F1) respectively. SRU also exhibits over 5x speed-up over LSTM and 53–63% reduction in total training time. In comparison with QRNN, SRU obtains 0.8% improvement on exact match and 0.6% on F1 score, and runs 60% faster. This speed improvement highlights the impact of the fused kernel (Algorithm 1). While the QRNN baseline involves a similar amount of computation, assembling all element-wise operations of both direc-
as character-level embeddings, which are not directly com- parable to the setup of Chen et al. (2017). However, these models can potentially beneï¬t from SRU since RNNs are in- corporated in the model architecture.
# 4https://github.com/hitvoice/DrQA
Model | # layers | Size | Dev EM | Dev F1 | Time/epoch (RNN) | Time/epoch (total)
LSTM (Chen et al., 2017) | 3 | 4.1m | 69.5 | 78.8 | 316s | 431s
QRNN (k=1) + highway | 4 | 2.4m | 70.1 ± 0.1 | 79.4 ± 0.1 | 113s | 214s
QRNN (k=1) + highway | 6 | 3.2m | 70.6 ± 0.1 | 79.6 ± 0.2 | 161s | 262s
SRU | 3 | 2.0m | 70.2 ± 0.3 | 79.3 ± 0.1 | 58s | 159s
SRU | 4 | 2.4m | 70.7 ± 0.1 | 79.7 ± 0.1 | 72s | 173s
SRU | 6 | 3.2m | 71.4 ± 0.1 | 80.2 ± 0.1 | 100s | 201s
Table 2: Exact match (EM) and F1 scores of various models on SQuAD (Section 4.2). We also report the total processing time per epoch and the time spent in RNN computations. SRU outperforms other models, and is more than ï¬ve times faster than cuDNN LSTM.
tions in SRU achieves better GPU utilization.
# 4.3 Machine Translation
Dataset We train translation models on the WMT English→German dataset, a standard benchmark for translation systems (Peitz et al., 2014; Li et al., 2014; Jean et al., 2015). The dataset consists of 4.5 million sentence pairs. We obtain the pre-tokenized dataset from the OpenNMT project (Klein et al., 2017). The sentences were tokenized using the word-piece model (Wu et al., 2016b), which generates a shared vocabulary of about 32,000 tokens. Newstest-2014 and newstest-2017 are provided and used as the validation and test sets.5
Setup We use the state-of-the-art Transformer model of Vaswani et al. (2017) as our base archi- tecture. In the base model, a single Transformer consists of a multi-head attention layer and a bot- tleneck feed-forward layer. We substitute the feed- forward network using our SRU implementation:
base: W · ReLU_layer(x) + b
ours: W · SRU_layer(x) + b .
The intuition is that SRU can better capture se- quential information as a recurrent network, and potentially achieve better performance while re- quiring fewer layers.
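A hedged sketch of this substitution is shown below: the position-wise feed-forward sublayer of a Transformer block is replaced by an SRU-style recurrent layer followed by the output projection, mirroring `W · SRU_layer(x) + b`. The `MinimalSRULayer` is the illustrative class from earlier, the input projection to the inner dimension is our own assumption about how the dimensions are matched, and the rest of the Transformer (attention, layer norm, residuals) is omitted.

```python
import torch
import torch.nn as nn

class FeedForwardSublayer(nn.Module):
    # base: W * ReLU_layer(x) + b
    def __init__(self, d_model=512, d_ff=2048):
        super().__init__()
        self.inner = nn.Linear(d_model, d_ff)
        self.out = nn.Linear(d_ff, d_model)

    def forward(self, x):                    # x: (length, batch, d_model)
        return self.out(torch.relu(self.inner(x)))

class SRUSublayer(nn.Module):
    # ours: W * SRU_layer(x) + b, with the recurrent layer replacing the ReLU bottleneck
    def __init__(self, d_model=512, d_sru=2048):
        super().__init__()
        self.proj_in = nn.Linear(d_model, d_sru)   # assumed bridge to the inner dimension
        self.sru = MinimalSRULayer(d_sru)
        self.out = nn.Linear(d_sru, d_model)

    def forward(self, x):
        h, _ = self.sru(self.proj_in(x))
        return self.out(h)
```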
We keep the model conï¬guration the same as Vaswani et al. (2017): the model dimension is dmodel = 512, the feed-forward and SRU layer has inner dimensionality dff = dsru = 2048, and posi- tional encoding (Gehring et al., 2017) is applied on
the input word embeddings. The base model with- out SRU has 6 layers, while we set the number of layers to 4 and 5 when SRU is added. Following the original setup, we use a dropout probability 0.1 for all components, except the SRU in the 5-layer model, for which we use a dropout of 0.2 as we observe stronger over-ï¬tting in training.
We use a single NVIDIA Tesla V100 GPU for each model. The published results were obtained using 8 GPUs in parallel, which provide a large effective batch size during training. To approximate the setup, we update the model parameters every 5×5120 tokens and use 16,000 warm-up steps following OpenNMT suggestions. We train each model for 40 epochs (250,000 steps), and perform 3 independent trials for each model configuration. A single run takes about 3.5 days with a Tesla V100 GPU.
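The large effective batch is approximated by accumulating gradients over several mini-batches before each optimizer step. The loop below is a generic sketch of that pattern (not the OpenNMT code); `model`, `optimizer` and `train_iter` are assumed to exist, and `accum_count = 5` matches the configuration above.

```python
accum_count = 5                          # update parameters every 5 token-batches of ~5120 tokens
optimizer.zero_grad()
for step, batch in enumerate(train_iter, start=1):
    loss = model(batch) / accum_count    # assumes model(batch) returns the training loss
    loss.backward()                      # gradients accumulate across mini-batches
    if step % accum_count == 0:
        optimizer.step()
        optimizer.zero_grad()
```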
Results Table 3 shows the translation results. When SRU is incorporated into the architecture, both the 4-layer and 5-layer model outperform the Transformer base model. For instance, our 5- layer model obtains an average improvement of 0.7 test BLEU score and an improvement of 0.5 BLEU score by comparing the best results of each model achieved across three runs. SRU also ex- hibits more stable performance, with smaller vari- ance over 3 runs. Figure 4 further compares the validation accuracy of different models. These re- sults conï¬rm that SRU is better at sequence mod- eling compared to the original feed-forward net- work (FFN), requiring fewer layers to achieve sim- ilar accuracy. Finally, adding SRU does not affect the parallelization or speed of Transformer â the 4-layer model exhibits 10% speed improvement,
5https://github.com/OpenNMT/ OpenNMT-tf/tree/master/scripts/wmt
Model | # layers | Size | BLEU (Valid) | BLEU (Test) | Speed (toks/sec) | Hours per epoch
Transformer (base) | 6 | 76m | 26.6±0.2 (26.9) | 27.6±0.2 (27.9) | 20k | 2.0
Transformer (+SRU) | 4 | 79m | 26.7±0.1 (26.8) | 27.8±0.1 (28.3) | 22k | 1.8
Transformer (+SRU) | 5 | 90m | 27.1±0.0 (27.2) | 28.3±0.1 (28.4) | 19k | 2.1
Table 3: English→German translation results (Section 4.3). We perform 3 independent runs for each configuration. We select the best epoch based on the valid BLEU score for each run, and report the average results and the standard deviation over 3 runs. In addition, we experiment with averaging model checkpoints and use the averaged version for evaluation, following (Vaswani et al., 2017). We show the best BLEU results achieved in brackets.
Figure 4: Mean validation accuracy (y-axis) of dif- ferent translation models after each training epoch (x-axis).
We compare various recurrent models and use a parameter budget similar to previous methods. In addition, we experiment with the factorization trick (Kuchaiev and Ginsburg, 2017) to reduce the total number of parameters without decreasing the performance. See details in Appendix B.
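The factorization trick replaces a large parameter matrix with the product of two much smaller factors; Appendix B gives the exact form used for SRU. The sketch below is a minimal, generic illustration of the idea (class name and dimensions are ours, not the authors' implementation).

```python
import torch.nn as nn

class FactorizedLinear(nn.Module):
    """Replace a (d_out x d_in) weight matrix with P^T Q,
    where both factors use a small projection dimension d_proj << d_in, d_out."""
    def __init__(self, d_in, d_out, d_proj):
        super().__init__()
        self.Q = nn.Linear(d_in, d_proj, bias=False)
        self.P = nn.Linear(d_proj, d_out, bias=False)

    def forward(self, x):
        return self.P(self.Q(x))

# e.g. a stacked SRU multiplication with projection dimension 512 (illustrative sizes)
# layer = FactorizedLinear(d_in=2048, d_out=3 * 2048, d_proj=512)
```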
Results Table 4 presents the results of SRU and other recurrent models. The 8-layer SRU model achieves validation and test bits per char- acter (BPC) of 1.21, outperforming previous best reported results of LSTM, QRNN and recurrent highway networks (RHN). Increasing the layer of SRU to 12 and using a longer context of 256 char- acters in training further improves the BPC to 1.19
while the 5-layer model is only 5% slower com- pared to the base model. We present more results and discussion in Appendix B.3.
# 4.5 Ablation Analysis
# 4.4 Character-level Language Modeling
We perform ablation analyses on SRU by succes- sively disabling different components:
Dataset We use Enwik8, a large dataset for character-level language modeling. Following standard practice, we use the first 90M characters for training and the remaining 10M split evenly for validation and test.
(1) Remove the point-wise multiplication term v ⊙ c_{t−1} in the forget and reset gates. The resulting variant involves less recurrence and has less representational capacity.
Setup Similar to previous work, we use a batch size of 128 and an unroll size of 100 for trun- cated backpropagation during training. We also experiment with an unroll size of 256 and a batch size of 64 such that each training instance has longer context. We use a non-zero highway bias br = â3 that is shown useful for training lan- guage model (Zilly et al., 2017). Previous meth- ods employ different optimizers and learning rate schedulers for training. For simplicity and consis- tency, we use the Adam optimizer and the same learning rate scheduling (i.e., Noam scheduling) as the translation experiments. We train a maxi- mum of 100 epochs (about 700,000 steps).
(2) Disable the scaling correction by setting the constant α = 1.
(3) Remove the skip connections.
We train model variants on the classiï¬cation and question answering datasets. Table 5 and Figure 5 conï¬rm the impact of our design decisions â re- moving these components result in worse classiï¬- cation accuracies and exact match scores.
# 5 Discussion
This work presents Simple Recurrent Unit (SRU), a scalable recurrent architecture that operates as fast as feed-forward and convolutional units. We
Model | Size | # layers | Unroll size | Valid | Test | Time

Best reported results:
MI-LSTM (Wu et al., 2016c) | 17m | 1 | 100 | - | 1.44 | -
HM-LSTM (Chung et al., 2016) | 35m | 3 | 100 | - | 1.32 | -
LSTM (Melis et al., 2017) | 46m | 4 | 50 | 1.28 | 1.30 | -
RHN (Zilly et al., 2017) | 46m | 10 | 50 | - | 1.27 | -
FS-LSTM (Mujika et al., 2017) | 47m | 4 | 100 | - | 1.25 | -
QRNN (Merity et al., 2018) | 26m | 4 | 200 | - | 1.33 | -
LSTM (Merity et al., 2018) | 47m | 3 | 200 | - | 1.23 | -

Our setup:
LSTM | 37m | 3 | 100 | 1.37 | 1.39 | 42min
LSTM | 37m | 6 | 100 | 1.35 | 1.38 | 48min
QRNN (k=1) | 37m | 6 | 100 | 1.36 | 1.38 | 30min
SRU | 37m | 6 | 100 | 1.29 | 1.30 | 28min
SRU | 37m | 10 | 100 | 1.26 | 1.27 | 29min
SRU (with projection) | 37m | 6 | 100 | 1.25 | 1.26 | 29min
SRU (with projection) | 47m | 8 | 100 | 1.21 | 1.21 | 39min
SRU (with projection) | 49m | 12 | 256 | 1.19 | 1.19 | 41min
Table 4: Validation and test BPCs of different recurrent models on Enwik8 dataset. The last column presents the training time per epoch. For SRU with projection, we set the projection dimension to 512.
Model | 4 layers | 6 layers
SRU (full) | 70.7 | 71.4
 − remove v ⊙ c_{t−1} | 70.6 | 71.1
 − remove α-scaling | 70.3 | 71.0
 − remove highway | 69.4 | 69.1
Table 5: Ablation analysis on SQuAD. Compo- nents are successively removed and the EM scores are averaged over 4 runs.
Figure 5: Ablation analysis on the classification datasets. Average validation results are presented. We compare the full SRU implementation (left blue), the variant without v ⊙ c_{t−1} multiplication (middle green) and the variant without highway connection (right yellow).
conï¬rm the effectiveness of SRU on multiple nat- ural language tasks ranging from classiï¬cation to translation. We open source our implementation to facilitate future NLP and deep learning research.
Trading capacity with layers SRU achieves high parallelization by simplifying the hidden-to- hidden dependency. This simpliï¬cation is likely to reduce the representational power of a single layer and hence should be balanced to avoid perfor- mance loss. However, unlike previous work that suggests additional computation (e.g., n-gram ï¬l- ters) within the layer (Balduzzi and Ghifary, 2016; Bradbury et al., 2017), we argue that increasing the depth of the model sufï¬ces to retain modeling capacity. Our empirical results on various tasks conï¬rm this hypothesis.
# Acknowledgement
We thank Alexander Rush and Yoon Kim for help with machine translation experiments, and Danqi Chen for help with SQuAD experiments. We thank Adam Yala, Howard Chen, Jeremy Wohlwend, Lili Yu, Kyle Swanson and Kevin Yang for providing useful feedback on the paper and the SRU implementation. A special thanks to Hugh Perkins for his support on the experimental environment setup and Runqi Yang for answering questions about his code.
# References
Fabio Anselmi, Lorenzo Rosasco, Cheston Tan, and Tomaso A. Poggio. 2015. Deep convolutional net- works are hierarchical kernel machines. CoRR, abs/1508.01084.
Jeremy Appleyard, Tomás Kociský, and Phil Blunsom. 2016. Optimizing performance of recurrent neural networks on gpus. CoRR, abs/1604.01946.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly In Proceedings of learning to align and translate. the International Conference on Learning Represen- tations.
David Balduzzi and Muhammad Ghifary. 2016. Strongly-typed recurrent neural networks. In Inter- national Conference on Machine Learning.
James Bradbury, Stephen Merity, Caiming Xiong, and Richard Socher. 2017. Quasi-recurrent neural net- works. In Proceedings of the International Confer- ence on Learning Representations.
Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading wikipedia to answer open- In Proceedings of the Annual domain questions. Meeting of the Association for Computational Lin- guistics.
Yining Chen, Sorcha Gilroy, Kevin Knight, and Jonathan May. 2018. Recurrent neural networks as In Proceedings of weighted language recognizers. the Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies.
Kyunghyun Cho, Bart van Merrienboer, Çağlar Gülçehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder–decoder for statistical machine translation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing.
Junyoung Chung, Sungjin Ahn, and Yoshua Bengio. 2016. Hierarchical multiscale recurrent neural net- works. CoRR, abs/1609.01704.
Alexis Conneau, Douwe Kiela, Holger Schwenk, Loïc Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. In Proceedings of the Conference on Empirical Methods in Natural Language Processing.
Amit Daniely, Roy Frostig, and Yoram Singer. 2016. Toward deeper understanding of neural networks: The power of initialization and a dual view on ex- pressivity. In Advances In Neural Information Pro- cessing Systems.
Greg Diamos, Shubho Sengupta, Bryan Catanzaro, Mike Chrzanowski, Adam Coates, Erich Elsen, Jesse Engel, Awni Hannun, and Sanjeev Satheesh. 2016. Persistent rnns: Stashing recurrent weights In International Conference on Machine on-chip. Learning.
Yarin Gal and Zoubin Ghahramani. 2016. A theoret- ically grounded application of dropout in recurrent neural networks. In Advances in Neural Information Processing Systems.
Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann Dauphin. 2017. Convolutional se- quence to sequence learning. In International Con- ference on Machine Learning.
Xavier Glorot and Yoshua Bengio. 2010. Understand- ing the difï¬culty of training deep feedforward neural networks. In Proceedings of the international con- ference on artiï¬cial intelligence and statistics.
Priya Goyal, Piotr Dollár, Ross B. Girshick, Pieter No- ordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. 2017. Ac- curate, large minibatch SGD: Training imagenet in 1 hour. CoRR, abs/1706.02677.
Klaus Greff, Rupesh Kumar Srivastava, Jan Koutník, Bas R. Steunebrink, and Jürgen Schmidhuber. 2017. LSTM: A search space odyssey. IEEE Transactions on Neural Networks and Learning Systems, 28.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2015. Delving deep into rectiï¬ers: Surpass- ing human-level performance on imagenet classiï¬- In Proceedings of the IEEE international cation. conference on computer vision.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recog- In Proceedings of the IEEE conference on nition. computer vision and pattern recognition.
Luheng He, Kenton Lee, Mike Lewis, and Luke Zettle- moyer. 2017. Deep semantic role labeling: What In Proceedings of the An- works and whatâs next. nual Meeting of the Association for Computational Linguistics.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9.
Minqing Hu and Bing Liu. 2004. Mining and summa- rizing customer reviews. In Proceedings of the tenth ACM SIGKDD international conference on Knowl- edge discovery and data mining.
Ozan Irsoy and Claire Cardie. 2014. Opinion mining with deep recurrent neural networks. In Proceedings of the Conference on Empirical Methods in Natural Language Processing.
Sébastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2015. On using very large tar- get vocabulary for neural machine translation. In Proceedings of the Annual Meeting of the Associa- tion for Computational Linguistics and the Interna- tional Joint Conference on Natural Language Pro- cessing.
Nal Kalchbrenner, Edward Grefenstette, and Phil Blun- som. 2014. A convolutional neural network for modelling sentences. In Proceedings of the Annual Meeting of the Association for Computational Lin- guistics.
Yoon Kim. 2014. Convolutional neural networks for In Proceedings of the Em- sentence classiï¬cation. pirical Methods in Natural Language Processing.
Yoon Kim, Yacine Jernite, David A Sontag, and Alexander M. Rush. 2016. Character-aware neural language models. In Proceedings of the AAAI Con- ference on Artiï¬cial Intelligence.
Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Repre- sentations.
Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander Rush. 2017. Opennmt: Open-source toolkit for neural machine translation. In Proceedings of ACL 2017, System Demonstra- tions.
Oleksii Kuchaiev and Boris Ginsburg. 2017. Factorization tricks for LSTM networks. CoRR, abs/1703.10722.
Quoc V. Le, Navdeep Jaitly, and Geoffrey E. Hinton. 2015. A simple way to initialize recurrent networks of rectiï¬ed linear units. CoRR, abs/1504.00941.
Kenton Lee, Omer Levy, and Luke S. Zettlemoyer. CoRR, Recurrent additive networks. 2017. abs/1705.07393.
Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2015. Molding cnns for text: non-linear, non-consecutive convolutions. In Proceedings of the Conference on Empirical Methods in Natural Language Process- ing. Association for Computational Linguistics.
Tao Lei, Wengong Jin, Regina Barzilay, and Tommi Jaakkola. 2017. Deriving neural architectures from sequence and graph kernels. International Confer- ence on Machine Learning.
Liangyou Li, Xiaofeng Wu, Santiago Cortes Vaillo, Jun Xie, Andy Way, and Qun Liu. 2014. The DCU- ICTCAS MT system at WMT 2014 on german- english translation task. In Proceedings of the Ninth Workshop on Statistical Machine Translation.
Xin Li and Dan Roth. 2002. Learning question classi- ï¬ers. In Proceedings of the international conference on Computational linguistics-Volume 1. Association for Computational Linguistics.
Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention- In Empirical based neural machine translation. Methods in Natural Language Processing. Associ- ation for Computational Linguistics.
Hongyuan Mei, Mohit Bansal, and R. Matthew Walter. 2016. What to talk about and how? selective gener- ation using lstms with coarse-to-ï¬ne alignment. In Proceedings of the Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies.
Gábor Melis, Chris Dyer, and Phil Blunsom. 2017. On the state of the art of evaluation in neural language models. CoRR, abs/1707.05589.
Stephen Merity, Nitish Shirish Keskar, and Richard Socher. 2018. An analysis of neural language mod- eling at multiple scales. CoRR, abs/1803.08240.
Yajie Miao, Jinyu Li, Yongqiang Wang, Shi-Xiong Zhang, and Yifan Gong. 2016. Simplifying long short-term memory acoustic models for fast training and decoding. In IEEE International Conference on Acoustics, Speech and Signal Processing.
Dipendra Misra, John Langford, and Yoav Artzi. 2017. Mapping instructions and visual observations to ac- In Proceedings tions with reinforcement learning. of the Conference on Empirical Methods in Natural Language Processing.
Asier Mujika, Florian Meier, and Angelika Steger. 2017. Fast-slow recurrent neural networks. In Ad- vances in Neural Information Processing Systems.
Bo Pang and Lillian Lee. 2004. A sentimental edu- cation: Sentiment analysis using subjectivity sum- marization based on minimum cuts. In Proceedings of the annual meeting on Association for Computa- tional Linguistics.
Bo Pang and Lillian Lee. 2005. Seeing stars: Ex- ploiting class relationships for sentiment categoriza- tion with respect to rating scales. In Proceedings of the annual meeting on association for computational linguistics.
Stephan Peitz, Joern Wuebker, Markus Freitag, and Hermann Ney. 2014. The RWTH aachen german- english machine translation system for wmt 2014. In Proceedings of the Ninth Workshop on Statistical Machine Translation.
Hao Peng, Roy Schwartz, Sam Thomson, and Noah A. In Empirical Smith. 2018. Rational recurrences. Methods in Natural Language Processing.
P. Rajpurkar, J. Zhang, K. Lopyrev, and P. Liang. 2016. Squad: 100,000+ questions for machine comprehen- sion of text. In Empirical Methods in Natural Lan- guage Processing.
Min Joon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2016. Bidirectional at- tention ï¬ow for machine comprehension. CoRR, abs/1611.01603.
Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, and Jeff Dean. 2017. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. arXiv preprint arXiv:1701.06538.
Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Y. Ng, and Christopher Potts. 2013. Recursive deep mod- els for semantic compositionality over a sentiment treebank. In Proceedings of the Conference on Em- pirical Methods in Natural Language Processing.
Rupesh K Srivastava, Klaus Greff, and Jürgen Schmid- huber. 2015. Training very deep networks. In Ad- vances in neural information processing systems.
Alane Suhr and Yoav Artzi. 2018. Situated mapping of sequential instructions to actions with single-step In Proceedings of the Annual reward observation. Meeting of the Association for Computational Lin- guistics.
Alane Suhr, Srinivasan Iyer, and Yoav Artzi. 2018. Learning to map context-dependent sentences to ex- ecutable formal queries. In Proceedings of the Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems.
Sida Wang and Christopher Manning. 2013. Fast In International Conference on dropout training. Machine Learning.
Wenhui Wang, Nan Yang, Furu Wei, Baobao Chang, and Ming Zhou. 2017. Gated self-matching net- works for reading comprehension and question an- swering. In Proceedings of the Annual Meeting of the Association for Computational Linguistics.
Janyce Wiebe, Theresa Wilson, and Claire Cardie. 2005. Annotating expressions of opinions and emo- tions in language. Language resources and evalua- tion.
Huijia Wu, Jiajun Zhang, and Chengqing Zong. 2016a. An empirical exploration of skip connections for se- In Proceedings of the Interna- quential tagging. tional Conference on Computational Linguisticss.
Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Łukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016b. Google's neural machine translation system: Bridging the gap between human and machine translation. CoRR, abs/1609.08144.
Yuhuai Wu, Saizheng Zhang, Ying Zhang, Yoshua Bengio, and Ruslan R Salakhutdinov. 2016c. On multiplicative integration with recurrent neural net- works. In Advances in Neural Information Process- ing Systems.
Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. Recurrent neural network regularization. 2014. CoRR, abs/1409.2329.
Yingjie Zhang and Byron C. Wallace. 2017. A sensi- tivity analysis of (and practitionersâ guide to) convo- lutional neural networks for sentence classiï¬cation. In Proceedings of the International Joint Conference on Natural Language Processing.
Yuchen Zhang, Jason D. Lee, and Michael I. Jordan. 2016. ℓ1-regularized neural networks are improperly learnable in polynomial time. In International Conference on Machine Learning.
Han Zhao, Zhengdong Lu, and Pascal Poupart. 2015. Self-adaptive hierarchical sentence model. In Pro- ceedings of the International Joint Conference on Artiï¬cial Intelligence.
Julian Georg Zilly, Rupesh Kumar Srivastava, Jan KoutnÃk, and Jürgen Schmidhuber. 2017. Recurrent highway networks. In International Conference on Machine Learning.
Barret Zoph and Quoc V. Le. 2016. Neural archi- tecture search with reinforcement learning. CoRR, abs/1611.01578.
# A Parameter Initialization Derivation
Following the derivation of Glorot and Kaiming initialization (Glorot and Bengio, 2010; He et al., 2015), we assume the values of each input vector xt are i.i.d with zero mean and a small variance:
E[x_{t,i}] = 0,   Var[x_{t,i}] ≪ 1   ∀ 1 ≤ i ≤ d .
We initialize each weight matrix with zero mean and a variance of 1/d. After a matrix multiplica- tion y = Wxt, each value yi would have
E[y_i] = E[∑_j w_{i,j} x_{t,j}] = 0
Var[y_i] = ∑_j Var[w_{i,j}] · Var[x_{t,j}] = Var[x]
which means the scale of the values after matrix multiplication remains the same.
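This can be checked numerically with a few lines of NumPy (an illustrative sanity check written for this document, not part of the paper):

```python
import numpy as np

d = 1024
rng = np.random.default_rng(0)
W = rng.uniform(-np.sqrt(3.0 / d), np.sqrt(3.0 / d), size=(d, d))  # Var[w] = 1/d
x = rng.normal(0.0, 0.1, size=(d, 10000))                          # small input variance
y = W @ x
print(x.var(), y.var())   # the two variances come out approximately equal
```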
A.1 Computing Var[ct] Let ft,i be the i-th entry of the forget gate ft:
f_{t,i} = σ(w_{f,i}^⊤ x_t + v_{f,i} c_{t−1,i} + b_{f,i}) .
The pre-activation value will be sufï¬ciently close to 0 because the parameters are initialized with zero mean and small variance and the bias value is initially 0. As a result,
E[f_{t,i}] = σ(0) = 0.5 .
The state value ct,i is computed according to
c_{t,i} = f_{t,i} · c_{t−1,i} + (1 − f_{t,i}) · (w_i^⊤ x_t) ,
Substituting the expectation of ft,i in, we get:6
c_{t,i} ≈ w_i^⊤ ( x_t/2 + x_{t−1}/4 + x_{t−2}/8 + · · · )
Therefore, E[c_{t,i}] = 0 as E[w^⊤x] = 0. The variance of c_{t,i} however depends on the correlation between input vectors. When the input vectors are independent:
Var[c_{t,i}] = Var[w_i^⊤ x] · (1/4 + 1/16 + 1/64 + · · ·) ≈ Var[w_i^⊤ x] · 1/3 = Var[x]/3 .
However, the two vectors in the input sequence, for instance x_i and x_j, are not necessarily independent, for example because two words in an input
We are ignoring the correlation between f_{t,i} and f_{t′,i} here because their variance is small.
Figure 6: Empirical estimation of the variance ra- tio Var[ct]/Var[xt] at each layer in a randomly initialized SRU model. We use the pre-trained word2vec embeddings as input, resulting an ini- tial ratio slightly higher than 1/3. As expected, the ratio increases to 1 in deep layers.
sentence are often correlated. When the input vec- tors are perfectly correlated xt = xtâ1 = · · · = x, on the other hand,
Var[c_{t,i}] = Var[w_i^⊤ x] = Var[x] .
In practice, multiple SRU layers are stacked to construct a deep network. The internal state ct and ht would be a weighted combination of inputs {x1 · · · xt}, which will increase the correlation of the state vectors at different steps. These state vec- tors are again fed into the next layer, and keep in- creasing the correlation. As a result, we expect the actual ratio between the variance of ct and that of the input of the current layer xt lies between the two derived values,
1/3 ≤ Var[c]/Var[x] ≤ 1 ,    (5)
and would ï¬nally converge to the upper bound value of 1. Figure 6 conï¬rms our expectation by computing the empirical value of Var[c]/Var[x] in deep SRU networks.
A.2 Computing Var[ht] Given the result in Equation (5), we proceed to compute Var[ht]. The i-th entry of ht is similarly computed as
h_{t,i} = r_{t,i} · c_{t,i} + (1 − r_{t,i}) · x_{t,i} ,
where
r_{t,i} = σ(w_{r,i}^⊤ x_t + v_{r,i} c_{t−1,i} + b_{r,i}) .
The highway reset gate is not necessarily initialized with a zero bias. Let the initial bias be b and u = w_{r,i}^⊤ x_t + v_{r,i} c_{t−1,i} denote the rest of the terms in the sigmoid function. We have E[u] = 0 and Var[u] ≪ 1 because x_t and c_{t−1} have small variance.
We approximate the value of rt,i using its Taylor expansion at u = 0:
r_{t,i} = σ(u + b) ≈ e^b/(e^b + 1) + e^b/(e^b + 1)^2 · u ,
E[r_{t,i}^2] ≈ e^{2b}/(e^b + 1)^2 + e^{2b}/(e^b + 1)^4 · E[u^2] .
We can ignore the term with u^2 since Var[u] ≪ 1, which gives us
E[r_{t,i}^2] ≈ e^{2b}/(e^b + 1)^2 .
Substituting this result in Var[ht,i],
Var[h_{t,i}] = E[r_{t,i}^2 c_{t,i}^2 + (1 − r_{t,i})^2 x_{t,i}^2] ≈ e^{2b}/(e^b + 1)^2 · Var[c] + 1/(e^b + 1)^2 · Var[x]    (6)
Since from (5) we have Var[x]/3 ≤ Var[c] ≤ Var[x], we get the bound of Var[h_{t,i}]:
(e^{2b} + 3) / (3(e^b + 1)^2) ≤ Var[h]/Var[x] ≤ (e^{2b} + 1) / (e^b + 1)^2 ,
which is equivalent to
1/3 ≤ Var[h]/Var[x] ≤ 1/2
when b = 0.
# A.3 Computing the Scaling Constant α
Finally, we compute the scaling constant α (Sec- tion 3.2). Using the result in Equation (6), when α is introduced we get:
Var[h_{t,i}] = e^{2b}/(e^b + 1)^2 · Var[c] + α^2/(e^b + 1)^2 · Var[x] ≈ (e^{2b} + α^2)/(e^b + 1)^2 · Var[x] ,
as Var[c] ≈ Var[x] according to Equation (5) and the empirical evaluation (Figure 6). This implies e^{2b} + α^2 = (1 + e^b)^2 if we want Var[h] ≈ Var[x]. By solving for α we have
α = √(1 + 2 · e^b) ,
and α = √3 when b = 0.
# B Experimental Details
We include additional experimental setup and re- sults in this section.
# B.1 Classiï¬cation
The data and pre-processing code are obtained from the code repository of Harvard NLP.7
We use a batch size of 32 and a dropout proba- bility of 0.5 for all models. In addition, we incre- ment the dropout to 0.55 or 0.6 for the 8-layer SRU model. Following the implementation of (Kim, 2014), out-of-vocabulary words that are not in the pre-trained embeddings are initialized with ran- dom vectors with values from [â0.25, 0.25].
# B.2 Question Answering
We use a word embedding dropout of 0.5 and a re- current dropout of 0.2. In the setup of Chen et al. (2017), the bi-LSTM models concatenates the out- put of each layer and feed it to subsequent layers. This helps the gradient propagation and improves the ï¬nal performance. With highway connection, this is no longer necessary. In SRU and Q-RNN (with highway), only the output of the last layer is given to subsequent layers.
# B.3 Machine Translation
We use the OpenNMT PyTorch implementation for the translation experiments. Table 6 shows the list of configuration options used for training. For evaluation, we use beam size 5 and length penalty 0.6.
-layers 4 to 6 | -share_embedding
-rnn_size 512 | -position_encoding
-word_vec_size 512 | -param_init 0
-batch_type tokens | -max_grad_norm 0
-normalization tokens | -dropout 0.1
-batch_size 5120 | -label_smoothing 0.1
-accum_count 5 | -epoch 40
-optim adam | -param_init_glorot
-learning_rate 2 | -adam_beta2 0.998
-decay_method noam | -warmup_steps 16000
Table 6: Translation training conï¬guration.
7https://github.com/harvardnlp/ sent-conv-torch
Epoch Transformer base Valid Test w/ SRU (4 layer) Valid Test w/ SRU (5 layer) Valid Test 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 26.1 26.2 26.1 26.2 26.2 26.3 26.5 26.4 26.4 26.4 26.5 26.4 26.5 26.5 26.4 26.4 26.5 26.5 26.5 26.5 26.6 27.3 27.3 27.4 27.4 27.4 27.4 27.5 27.6 27.6 27.5 27.7 27.6 27.5 27.5 27.6 27.6 27.6 27.5 27.6 27.6 27.6 26.2 26.3 26.3 26.4 26.4 26.4 26.5 26.4 26.4 26.4 26.4 26.6 26.5 26.5 26.5 26.5 26.5 26.5 26.5 26.7 26.6 27.6 27.7 27.8 27.7 27.8 27.7 27.7 27.6 27.7 27.8 27.8 27.7 27.8 27.8 27.9 27.9 27.8 27.8 28.0 27.8 27.9 26.6 26.6 26.7 26.8 26.7 26.6 26.7 26.8 26.7 26.8 26.9 26.9 26.9 27.1 26.9 26.9 26.9 26.9 27.0 27.0 27.0 27.9 28.1 28.0 28.1 28.0 28.1 28.1 28.1 28.2 28.2 28.1 28.3 28.3 28.3 28.2 28.2 28.3 28.2 28.2 28.2 28.2
Table 7: Average BLEU scores after each epoch.
# B.4 Character-level Language Modeling
We train all models using a weight decay of 10^−7 and a gradient clipping of 0.3. We set the learning rate factor of Noam scheduling to 3 and the warmup steps to 32,000. We tune the dropout probability from {0.2, 0.3}.
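For reference, Noam scheduling ramps the learning rate up linearly during warm-up and then decays it with the inverse square root of the step count; the formula itself comes from Vaswani et al. (2017) and is not restated in this appendix. The sketch below is a generic implementation with the factor 3 and 32,000 warm-up steps used here; the model dimension of 512 is an illustrative assumption.

```python
def noam_lr(step, d_model=512, factor=3.0, warmup=32000):
    # lr = factor * d_model^-0.5 * min(step^-0.5, step * warmup^-1.5)
    step = max(step, 1)
    return factor * d_model ** -0.5 * min(step ** -0.5, step * warmup ** -1.5)
```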
The factorization trick is implemented as follows. Recall that the batched multiplication of SRU is computed as
Figure 7: Training and validation perplexity curves of the base model and two SRU models.
[W ; W_f ; W_r] [x_1, x_2, · · · , x_L] .
Table 7 shows the averaged BLEU score of each model from 20th to 40th epoch. The improve- ment over the Transformer base model is consis- tent across different epochs.
Figure 7 plots the training and validation per- plexity of three models. With a higher dropout (0.2) used for the SRU, the 5-layer model gets con- sistent lower validation perplexity over the base model and the 4-layer model. We also see that models with SRU exhibit much faster training progress with much lower training perplexity, sug- gesting the models could be tuned better with fur- ther training regularization.
The stacked parameter matrix on the left is re-parameterized by a low-rank factorization,
[W ; W_f ; W_r] = P^⊤ Q ,
where Q ∈ R^{d′×d_in} and P ∈ R^{d′×d_out} are two new parameter matrices to be learned, and d′ is the projection dimension that is much smaller than the input and output dimensions of the SRU. | {
"id": "1701.06538"
} |
1709.02349 | A Deep Reinforcement Learning Chatbot | We present MILABOT: a deep reinforcement learning chatbot developed by the
Montreal Institute for Learning Algorithms (MILA) for the Amazon Alexa Prize
competition. MILABOT is capable of conversing with humans on popular small talk
topics through both speech and text. The system consists of an ensemble of
natural language generation and retrieval models, including template-based
models, bag-of-words models, sequence-to-sequence neural network and latent
variable neural network models. By applying reinforcement learning to
crowdsourced data and real-world user interactions, the system has been trained
to select an appropriate response from the models in its ensemble. The system
has been evaluated through A/B testing with real-world users, where it
performed significantly better than many competing systems. Due to its machine
learning architecture, the system is likely to improve with additional data. | http://arxiv.org/pdf/1709.02349 | Iulian V. Serban, Chinnadhurai Sankar, Mathieu Germain, Saizheng Zhang, Zhouhan Lin, Sandeep Subramanian, Taesup Kim, Michael Pieper, Sarath Chandar, Nan Rosemary Ke, Sai Rajeshwar, Alexandre de Brebisson, Jose M. R. Sotelo, Dendi Suhubdy, Vincent Michalski, Alexandre Nguyen, Joelle Pineau, Yoshua Bengio | cs.CL, cs.AI, cs.LG, cs.NE, stat.ML, I.5.1; I.2.7 | 40 pages, 9 figures, 11 tables | null | cs.CL | 20170907 | 20171105 | 7 1 0 2
# A Deep Reinforcement Learning Chatbot
Iulian V. Serban, Chinnadhurai Sankar, Mathieu Germain, Saizheng Zhang, Zhouhan Lin, Sandeep Subramanian, Taesup Kim, Michael Pieper, Sarath Chandar, Nan Rosemary Ke, Sai Rajeshwar, Alexandre de Brebisson, Jose M. R. Sotelo, Dendi Suhubdy, Vincent Michalski, Alexandre Nguyen, Joelle Pineau1,2 and Yoshua Bengio2 Montreal Institute for Learning Algorithms, Montreal, Quebec, Canada
# Abstract
We present MILABOT: a deep reinforcement learning chatbot developed by the Montreal Institute for Learning Algorithms (MILA) for the Amazon Alexa Prize competition. MILABOT is capable of conversing with humans on popular small talk topics through both speech and text. The system consists of an ensemble of natural language generation and retrieval models, including template-based models, bag-of-words models, sequence-to-sequence neural network and latent variable neural network models. By applying reinforcement learning to crowdsourced data and real-world user interactions, the system has been trained to select an appropriate response from the models in its ensemble. The system has been evaluated through A/B testing with real-world users, where it performed signiï¬cantly better than many competing systems. Due to its machine learning architecture, the system is likely to improve with additional data.
# Introduction
Dialogue systems and conversational agents - including chatbots, personal assistants and voice- control interfaces - are becoming ubiquitous in modern society. Examples of these include personal assistants on mobile devices, technical support help over telephone lines, as well as online bots selling anything from fashion clothes and cosmetics to legal advice and self-help therapy. However, building intelligent conversational agents remains a major unsolved problem in artiï¬cial intelligence research.
In 2016, Amazon.com Inc proposed an international university competition with the goal of building a socialbot: a spoken conversational agent capable of conversing coherently and engagingly with humans on popular topics, such as entertainment, fashion, politics, sports, and technology. The socialbot converses through natural language speech through Amazonâs Echo device (Stone & Soper 2014). This article describes the models, experiments and ï¬nal system (MILABOT) developed by our team at University of Montreal.3 Our main motivation for participating has been to help advance artiï¬cial intelligence research. To this end, the competition has provided a special opportunity for training and testing state-of-the-art machine learning algorithms with real users (also known as machine learning in the wild) in a relatively unconstrained setting. The ability to experiment with real users is unique in the artiï¬cial intelligence community, where the vast majority of work consists of experiments on ï¬xed datasets (e.g. labeled datasets) and software simulations (e.g. game engines). In addition, the computational resources, technical support and ï¬nancial support provided by Amazon has helped scale up our system and test the limits of state-of-the-art machine learning methods. Among other things, this support has enabled us to crowdsource 200, 000 labels on Amazon Mechanical Turk and to maintain over 32 dedicated Tesla K80 GPUs for running our live system.
1School of Computer Science, McGill University. 2CIFAR Fellow. 3Our team is called MILA Team, where MILA stands for the Montreal Institute for Learning Algorithms.
Our socialbot is based on a large-scale ensemble system leveraging deep learning and reinforcement learning. We develop a new set of deep learning models for natural language retrieval and generation â including recurrent neural networks, sequence-to-sequence models and latent variable models â and evaluate them in the context of the competition. These models are combined into an ensemble, which generates a candidate set of dialogue responses. Further, we apply reinforcement learning â including value function and policy gradient methods â to train the system to select an appropriate response from the models in its ensemble. In particular, we propose a novel reinforcement learning procedure, based on estimating a Markov decision process. Training is carried out on crowdsourced data and on interactions recorded between real-world users and a preliminary version of the system. The trained systems yield substantial improvements in A/B testing experiments with real-world users.
In the competition semi-ï¬nals, our best performing system reached an average user score of 3.15 on a scale 1 â 5, with a minimal number of hand-crafted states and rules and without engaging in non-conversational activities (such as playing games or taking quizzes).4 The performance of this best system is substantially better than the average of all the teams in the competition semi-ï¬nals. Further, the same system averaged a high 14.5 â 16.0 turns per dialogue, which is also signiï¬cantly higher than the average of all the teams in the competition semi-ï¬nals, as well as the ï¬nalist teams. This improvement in back-and-forth exchanges between the user and system suggests that our system is likely to be the most engaging system among all systems in the competition. Finally, the system is bound to improve with additional data, as nearly all system components are learnable.
# 2 System Overview
Early work on dialogue systems (Weizenbaum 1966, Colby 1981, Aust et al. 1995, McGlashan et al. 1992, Simpson & Eraser 1993) were based mainly on states and rules hand-crafted by human experts. Modern dialogue systems typically follow a hybrid architecture, combining hand-crafted states and rules with statistical machine learning algorithms (Suendermann-Oeft et al. 2015, JurËcÃËcek et al. 2014, Bohus et al. 2007, Williams 2011). Due to the complexity of human language, however, it will probably never be possible to enumerate states and rules required for building a socialbot capable of conversing with humans on open-domain, popular topics. In contrast to such rule-based systems, our core approach is built entirely on statistical machine learning. We believe that this is the most plausible path to artiï¬cially intelligent conversational agents. The system architecture we propose aims to make as few assumptions as possible about the process of understanding and generating natural human language. As such, the system utilizes only a small number of hand-crafted states and rules. However, every system component has been designed to be optimized (trained) using machine learning algorithms. These system components will be trained ï¬rst independently on massive datasets and then jointly on real-world user interactions. This way, the system will learn all relevant states and rules for conducting open-domain conversations implicitly. Given an adequate amount of examples, such a system should outperform systems based on hand-crafted states and rules. Further, the system will continue to improve in perpetuity with additional data.
Our system architecture is inspired by the success of ensemble-based machine learning systems. These systems consist of many independent sub-models combined intelligently together. Examples of such ensemble systems include the winner of the Netï¬ix Prize (Koren et al. 2009), utilizing hundreds of machine learning models to predict user movie preferences, and IBM Watson (Ferrucci et al. 2010), the ï¬rst machine learning system to win the quiz game Jeopardy! in 2011. More recently, Google observed substantial improvements building an ensemble-based neural machine translation system (Wu et al. 2016).
Our system consists of an ensemble of response models. The response models take as input a dialogue and output a response in natural language text. In addition, the response models may also output one or several scalar values, indicating their internal conï¬dence. As will be explained later, the response models have been engineered to generate responses on a diverse set of topics using a variety of strategies.
4Throughout the semi-ï¬nals we carried out several A/B testing experiments to evaluate different variants of our system (see Section 5). The score 3.15 is based on the best performing system in the period between July 29th and August 6th, 2017. The score is not based on the leaderboard, which averages the scores of all the variants of our system (including a supervised learning system and a heuristic baseline system).
Figure 1: Dialogue manager control ï¬ow.
The dialogue manager is responsible for combining the response models together. As input, the dialogue manager expects to be given a dialogue history (i.e. all utterances recorded in the dialogue so far, including the current user utterance) and conï¬dence values of the automatic speech recognition system (ASR conï¬dences). To generate a response, the dialogue manager follows a three-step procedure. First, it uses all response models to generate a set of candidate responses. Second, if there exists a priority response in the set of candidate responses (i.e. a response which takes precedence over other responses), this response will be returned by the system.5 For example, for the question "What is your name?", the response "I am an Alexa Prize socialbot" is a priority response. Third, if there are no priority responses, the response is selected by the model selection policy. For example, the model selection policy may select a response by scoring all candidate responses and picking the highest-scored response. The overall process is illustrated in Figure 1.
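The three-step procedure can be sketched as follows. This is a schematic paraphrase of the control flow in Figure 1 written for this document; the helper names (`respond`, `score`, the candidate attributes) and the fallback phrasing are hypothetical, not the system's actual API.

```python
def generate_response(dialogue_history, asr_confidence, response_models,
                      selection_policy, asr_threshold=0.0):
    # If the speech recognition confidence is too low, ask the user to repeat
    # (hypothetical phrasing).
    if asr_confidence < asr_threshold:
        return "Sorry, could you repeat that?"

    # Step 1: every response model proposes a candidate response.
    candidates = [m.respond(dialogue_history) for m in response_models]

    # Step 2: a priority response, if any, takes precedence (model ordering breaks ties).
    priority = [c for c in candidates if c.priority]
    if priority:
        return priority[0].text

    # Step 3: otherwise the selection policy scores the candidates and picks one.
    scores = [selection_policy.score(dialogue_history, c) for c in candidates]
    return candidates[scores.index(max(scores))].text
```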
When the ASR conï¬dences are below a certain threshold, the system requests the user to repeat their last utterance. Otherwise, the system does not utilize the ASR conï¬dences. In particular, neither the response models nor the model selection policy make use of the ASR conï¬dences. Nevertheless, the ASR system is far from perfect. Therefore, it is very plausible that the system could be improved signiï¬cantly by conditioning the response models and model selection policy on the ASR conï¬dences.
# 3 Response Models
There are 22 response models in the system, including retrieval-based neural networks, generation- based neural networks, knowledge base question answering systems and template-based systems. Examples of candidate model responses are shown in Table 1. This section aims to summarize these response models. First, the template-based models are described.
# 3.1 Template-based Models
We start by describing the template-based response models in the system.
Alicebot: Alicebot uses a set of AIML (artiï¬cial intelligence markup language) templates to produce a response given the dialogue history and user utterance (Wallace 2009, Shawar & Atwell 2007). We use the freely available Alice kernel available at www.alicebot.org. By default all templates generate non-priority responses, so we conï¬gure templates related to the socialbotâs name, age and location to output priority responses. We modify a few templates further to make them consistent with the challenge (e.g. to avoid obscene language and to encourage the user to discuss certain topics, such as news, politics and movies). The majority of templates remain unchanged.
The Alicebot model also outputs a scalar conï¬dence score. Since the AIML templates repeat the userâs input utterance, they are not always correct sentences. Therefore, we use a string-based rules to deter- mine if the response constitutes a correct sentence. If the response is correct sentence, it returns a high
5An ordering of the models decides which response to return in case there are multiple priority responses.
Table 1: Example dialogues and corresponding candidate responses generated by response models. The response of the ï¬nal system is marked in bold.
Dialogue SYSTEM: Hi. This is an Alexa Prize socialbot. USER: Hi SYSTEM: Do you have pets? USER: Yes, I have two rabbits. Dialogue ... USER: Letâs talk about movies. SYSTEM: Whatâs your favorite movie? USER: star wars movie
Candidate Responses BoWEscapePlan: I donât have an answer for this. BoWFactGenerator: Did you know that male rabbits are called bucks, females are does? GRUQuestionGenerator: How about the American family of it? Initiatorbot: Do you believe in love at ï¬rst sight? LSTMClassiï¬erMSMarco: Mar 23, 2017 Two Parts: Learning What to Feed Your Rabbit Learning How to ... DualEncoderRedditNews: bye doggie DualEncoderRedditPolitics: Whoâs edger u shboys? Alicebot: Hurrah! Two is a good number of rabbits. Elizabot: OK, but can you elaborate a bit? VHREDRedditMovies: Hmm. Nope, never heard of it. VHREDSubtitles: Would you like to see them? HelpResponse: I would really like to talk about news, politics or movies. Candidate Responses BoWEscapePlan: I donât have an answer for this. BoWFactGenerator: Hereâs an interesting fact. The international telephone dialing code for Antarctica is 672. GRUQuestionGenerator: how about your New York City? Initiatorbot: How was your day? LSTMClassiï¬erMSMarco: A third Anthology ï¬lm will be released in 2020. DualEncoderRedditNews: Now whisper it to me , one more time bby. DualEncoderRedditPolitics: hahaha on mobile this ainât even close. I wish I could show you but this looks awful Iâm sorry man. Alicebot: What did you like about the robots in that movie? Elizabot: How does that make you feel? Evi: Star Wars movie a movie in the Star Wars series. VHREDRedditMovies: Oh please. Please. Pleeeease. Let this happen. VHREDSubtitles: What? HelpResponse: I would really like to talk about news, politics or movies.
conï¬dence and otherwise it returns a low conï¬dence score. This process is illustrated in Algorithm 1.
Algorithm 1: Alicebot
1 input: dialogue history
2 response ← apply AIML templates to dialogue history
3 if response is correct sentence then
4    if response is given priority then
5        confidence ← 1.0
6    else
7        confidence ← 0.5
8 else
9    confidence ← 0.0
10 output: response, priority, confidence
Elizabot Similar to Alicebot, the Elizabot model performs string matching to select an answer from a set of templates. The model is based on the famous Eliza system, designed to mimic a Rogerian (Weizenbaum 1966).6 Therefore, in contrast with Alicebot, most of Elizabotâs psychotherapist. responses are personal questions which are meant to engage the user to continue the conversation.
# 6We use the implementation available at: https://gist.github.com/bebraw/273706.
Here are two example templates:
1. "I am (.*)" â "Did you come to me because you are ..."
2. "What (.*)" â "Why do you ask?"
The ellipses mark the parts of the response sentence which will be replaced with text from the userâs utterance. The model detects the appropriate template and selects the corresponding response (if there are multiple templates, then a template is selected at random). The model then runs the template response through a set of reï¬ections to better format the string for a response (e.g. "Iâd" â "you would", "your" â "my").
# Algorithm 2: Initiatorbot
1 input: dialogue history
2 if Initiatorbot was triggered in one of last two turns then
3    return ""
4 else if user did not give a greeting then
5    return a non-priority response with a random initiator phrase
6 else
7    return a priority response with a random initiator phrase
Initiatorbot The Initiatorbot model acts as a conversation starter: it asks the user an open-ended question to get the conversation started and increase the engagement of the user. We wrote 40 question phrases for the Initiatorbot. Examples of phrases include "What did you do today?", "Do you have pets?" and "What kind of news stories interest you the most?". As a special case, the model can also start the conversation by stating an interesting fact. In this case, the initiator phrase is "Did you know that <fact>?", where fact is replaced by a statement. The set of facts is the same as used by the BoWFactGenerator model, described later.
Before returning a response, Initiatorbot ï¬rst checks that it hasnât already been triggered in the last two turns of the conversation. If the user gives a greeting (e.g. "hi"), then Initiatorbot will return a response with priority. This is important because we observed that greetings often indicate the beginning of a conversation, where the user does not have a particular topic they would like to talk about. By asking a question, the system takes the initiative (i.e. control of the dialogue). The procedure is detailed in Algorithm 2.
Storybot The Storybot model outputs a short ï¬ction story at the request of the user. We implemented this model as we observed that many users were asking the socialbot to tell stories.7 Storybot determines if the user requested a story by checking if there was both a request word (e.g. say, tell.) and story-type word in the utterance (e.g. story, tale). The response states the storyâs title and author followed by the story body. For example, one set of responses from this model follows the pattern "Alright, let me tell you the story <story_title> <story_body> by <story_author>" where <story_title> is the title of the story, <story_body> is the main text and <story_author> is the name of the storyâs author. The stories were scraped from the website: www.english-for-students.com.
An example story is:
** The Ant and The Grasshopper ** The ants worked hard in summer. They sorted food for winter. At that time, a grasshopper remained idle. When winter came, the ants had enough to eat. But, the grasshopper had nothing to eat. He had to starve. He went to the ants and begged for foods. The ants asked in return, "What did you do in summer?" He replied, "I idled away my time during summer". The ant replied, "Then you must starve in winter." MORAL: Never be idle.
The Storybot is the only component in the system performing a non-conversational activity. It is triggered only when a user speciï¬cally asks for a story, and in that case its response is a priority
7Requests for telling stories is possibly a side-effect of userâs interacting with bots from other teams, which often emphasized non-conversational activities, such as telling stories and playing quizzes and word games.
response. Otherwise, the Storybot response model is never triggered. Further, the rest of the system will not encourage the user to request stories.
# 3.2 Knowledge Base-based Question Answering
Evibot The Evibot response model forwards the userâs utterance to Amazonâs question-answering web-service Evi: www.evi.com. Evi was designed primarily to handle factual questions. There- fore, Evibot returns a priority response for direct questions, deï¬ned as user utterances contain- ing a wh-word (e.g. "who", "what"), and otherwise returns a non-priority or, possibly, an empty If the query is a direct question and contains non-stop words, Evibot will follow a response. three step procedure to generate its response. First, Evibot forwards a query to www.evi.com containing the whole user utterance, and returns the resulting answer if its valid. If that fails, Evibot applies NLTKâs named entity processor (Bird et al. 2009) to the query to ï¬nd sub- queries with named entities. For each subphrase that contains a named entity, Evibot forwards queries to www.evi.com, and returns the result upon a valid response. Finally, if the previ- ous two steps fail, Evibot forwards queries for every subquery without named entities, and re- turns either a valid response or an empty response. The procedure is detailed in Algorithm 3.
Algorithm 3: Evibot

    input: dialogue history
    query ← last user utterance
    has-wh-words ← true if utterance contains a wh-word, otherwise false
    has-only-stop-words ← true if utterance only has stop words, otherwise false
    if has-only-stop-words and not has-wh-words then
        return ""
    evi-response ← send query to www.evi.com
    priority ← true if has-wh-words and evi-response is valid, otherwise false
    if evi-response is valid then
        return evi-response, priority
    else if has-wh-words then
        priority ← has-wh-words
        subentities ← entities extracted from query using NLTK's named entity processor
        subphrases ← list of subphrases with entities
        for subphrase in subphrases do
            evi-response ← send subphrase to www.evi.com
            if evi-response is valid then
                return evi-response, priority
        subphrases ← list of all subphrases
        for subphrase in subphrases do
            evi-response ← send subphrase to www.evi.com
            if evi-response is valid then
                return evi-response, priority
    else
        return ""
BoWMovies The BoWMovies model is a template-based response model, which handles questions in the movie domain. The model has a list of entity names and tags (e.g. movie plot and release year). The model searches the user's utterance for known entities and tags. Entities are identified by string matching. This is done in a cascading order, by giving first preference to movie title matches, then actor name matches, and finally director name matches. Tags are also identified by string matching. However, if exact string matching fails for tags, then identification is performed by word embedding similarity. If both an entity and a tag are present, the agent will dispatch an API call to one of several data sources to retrieve the data item for the selected query type. The agent is limited by the data available in the APIs to which it has access. The model's responses follow predefined templates.
Movie titles, actor names, and director names are extracted from the Internet Movie Database (IMDB). Movie descriptions are taken from Google Knowledge Graph's API. Other movie title queries are
directed to the Open Movie Database (OMDB).8 For actor and director queries, the Wikidata API is used. First, a search for actor and director names is done on a Wikidata JSON dump.
As described earlier, the model uses word embeddings to match tags. These word embeddings are trained using Word2Vec on movie plot summaries and actor biographies extracted from the IMDB database (Mikolov et al. 2013).
Algorithm 4: BoWMovies - ComputeResponse

    input: dialogue history
    entity ← entity contained both in last user utterance and list of movie titles, actors or directors
    if no entity then
        entity ← entity contained in previous user utterances and movie titles, actors or directors
    if no entity then
        return ""
    if entity is a movie title then
        response ← ComputeEntityResponse(entity, movie title)
    else if entity is an actor name then
        response ← ComputeEntityResponse(entity, actor name)
    else if entity is a director name then
        response ← ComputeEntityResponse(entity, director name)
    return response
Algorithm 5: BoWMovies - ComputeEntityResponse

    input: entity and entity type
    tag ← string matching tag, where tag is valid for the entity type (movie title, actor name, director name)
    if no tag then
        tag ← word embedding matching tag, where tag is a single word and valid for the entity type
    if no tag then
        tag ← word embedding matching tag, where tag is multiple words and valid for the entity type
    if no tag then
        return ""
    api-response ← call external API with query (entity, tag)
    response ← template with api-response inserted
    return response
# 3.3 Retrieval-based Neural Networks
VHRED models: The system contains several VHRED models, sequence-to-sequence models with Gaussian latent variables trained as variational auto-encoders (Serban et al. 2017, Kingma & Welling 2014, Rezende et al. 2014). The models are trained using the same procedure as Serban et al. (2017). A comparison between VHRED and other generative sequence-to-sequence models is provided by Serban et al. (2016). The trained VHRED models generate candidate responses as follows. First, a set of K model responses are retrieved from a dataset using cosine similarity between the current dialogue history and the dialogue history in the dataset, based on bag-of-words TF-IDF Glove word embeddings (Pennington et al. 2014).9 An approximation of the log-likelihood for each of the K retrieved responses is computed by VHRED, and the response with the highest log-likelihood is returned. The system has 4 VHRED models based on datasets scraped from Reddit, one VHRED model based on news articles and one VHRED model based on movie subtitles:
8See www.omdbapi.com. This should not be confused with IMDB.
9We use the Glove embeddings trained on Wikipedia 2014 + Gigaword 5: https://nlp.stanford.edu/projects/glove/.
⢠VHREDRedditPolitics trained on https://www.reddit.com/r/politics and extracting responses from all Reddit datasets with K = 10,
⢠VHREDRedditNews trained on Reddit https://www.reddit.com/r/news and extracting responses from all Reddit datasets with K = 20,
⢠VHREDRedditSports trained on Reddit https://www.reddit.com/r/sports and ex- tracting responses from all Reddit datasets with K = 20,
⢠VHREDRedditMovies trained on Reddit https://www.reddit.com/r/movies and ex- tracting responses from all Reddit datasets with K = 20,
VHREDWashingtonPost10 trained on Reddit https://www.reddit.com/r/politics
and extracting responses from user comments to WashingtonPost news articles, and
⢠VHREDSubtitles11 using the movie subtitles dataset SubTle (Ameixa et al. 2014) with K = 10.
In particular, VHREDRedditPolitics and VHREDWashingtonPost use a different retrieval procedure. These two models use a logistic regression model to score the responses instead of the approximate log-likelihood. The logistic regression model is trained on a set of 7500 Reddit threads and candidate responses annotated by Amazon Mechanical Turk workers on a Likert-type scale 1–5. The candidate responses are selected from other Reddit threads according to cosine similarity w.r.t. Glove word embeddings. The label collection and training procedure for the logistic regression model are similar to the procedures described in Section 4. For each response, the logistic regression model takes as input the VHRED log-likelihood score, as well as several other input features, and outputs a scalar-valued score. Even though the logistic regression model did improve the appropriateness of responses selected for Reddit threads, VHREDRedditPolitics is used extremely rarely in the final system (see Section 4). This suggests that training a model to rerank responses based on labeled Reddit threads and responses did not help improve overall system performance.
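A simplified sketch of the retrieve-then-rerank procedure used by the VHRED models is shown below; the toy hashing-based embedding and the `response_score` callable (standing in for the VHRED log-likelihood approximation) are assumptions for illustration only.

```python
import numpy as np

def embed(text, dim=100):
    """Toy bag-of-words embedding; the real system averages Glove vectors."""
    v = np.zeros(dim)
    for w in text.lower().split():
        v[hash(w) % dim] += 1.0
    return v

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def retrieve_and_rerank(dialogue_history, corpus, response_score, k=20):
    """corpus: list of (stored_history, stored_response) pairs.
    response_score: callable standing in for the VHRED log-likelihood."""
    query = embed(dialogue_history)
    # Step 1: retrieve the K most similar stored dialogue histories.
    top_k = sorted(corpus, key=lambda pair: cosine(query, embed(pair[0])),
                   reverse=True)[:k]
    candidates = [response for _, response in top_k]
    # Step 2: return the candidate with the highest model score.
    return max(candidates, key=lambda r: response_score(dialogue_history, r))
```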
SkipThought Vector Models: The system contains a SkipThought Vector model (Kiros et al. 2015) trained on the BookCorpus dataset (Zhu et al. 2015) and on the SemEval 2014 Task 1 (Marelli et al. 2014). The model was trained using the same procedure as Kiros et al. (2015) and is called SkipThoughtBooks.
SkipThoughtBooks ensures that the system complies with the Amazon Alexa Prize competition rules. One rule, introduced early in the competition, is that socialbots were not supposed to state their own opinions related to political or religious topics. If a user wishes to discuss such topics, the socialbots should proceed by asking questions or stating facts. SkipThoughtBooks also handles idiosyncratic issues particular to the Alexa platform. For example, many users did not understand the purpose of a socialbot and asked our socialbot to play music. In this case, the system should instruct the user to exit the socialbot application and then play music.
SkipThoughtBooks follows a two-step procedure to generate its response. The first step compares the user's last utterance to a set of trigger phrases. If a match is found, the model returns a corresponding priority response.12 For example, if the user says "What do you think about Donald trump?", the model will return a priority response, such as "Sometimes, truth is stranger than fiction.". A match is found if: 1) the SkipThought Vector model's semantic relatedness score between the user's last utterance and a trigger phrase is above a predefined threshold, and 2) the user's last utterance contains keywords relevant to the trigger phrase.13 In total, there are 315 trigger phrases (most are paraphrases of each other) and 35 response sets.
If the model did not find a match in the first step, it proceeds to the second step. In this step, the model selects its response from among all Reddit dataset responses. As before, a set of K model responses are retrieved using cosine similarity. The model then returns the response with the highest semantic relatedness score.
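The first (trigger-phrase) step can be sketched roughly as follows; the trigger table, threshold value and `relatedness` function are hypothetical placeholders for the SkipThought-based relatedness score and the actual trigger lists.

```python
import random

# Hypothetical trigger table; the real system has 315 phrases and 35 response sets.
TRIGGERS = [
    {
        "phrases": ["what do you think about donald trump"],
        "keywords": {"trump", "donald"},
        "responses": ["Sometimes, truth is stranger than fiction."],
    },
]

def trigger_response(utterance, relatedness, threshold=0.8):
    """relatedness(a, b) -> float, e.g. computed from SkipThought vectors."""
    tokens = set(utterance.lower().split())
    for trigger in TRIGGERS:
        related = max(relatedness(utterance, p) for p in trigger["phrases"])
        # Some triggers have no keywords; then matching uses relatedness only.
        keywords_ok = (not trigger["keywords"]) or bool(tokens & trigger["keywords"])
        if related >= threshold and keywords_ok:
            return {"text": random.choice(trigger["responses"]), "priority": True}
    return None  # fall through to the second (retrieval) step
```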
Dual Encoder Models: The system contains two Dual Encoder retrieval models (Lowe et al. 2015, Lowe, Pow, Serban, Charlin, Liu & Pineau 2017), DualEncoderRedditPolitics and DualEncoderRedditNews. Both models are composed of two sequence encoders ENC_Q and ENC_R with a single
10For VHREDWashingtonPost, the K responses are extracted based on the cosine similarity between the current dialogue and the news article keywords. K varies depending on the number of user comments within a set of news articles above a certain cosine similarity threshold.
11For VHREDSubtitles, cosine similarity is computed based on one-hot vectors for each word. 12Trigger phrases may have multiple responses. In this case, a response is selected at random. 13Some trigger phrases do not have keywords. In this case, matching is based only on semantic relatedness.
LSTM recurrent layer used to encode the dialogue history and a candidate response. The score for a candidate response is computed by a bilinear mapping of the dialogue history embedding and the candidate response embedding, as in Lowe et al. (2015). The models are trained using the method proposed by Lowe et al. (2015). In principle, it is also possible to use early stopping based on a separate model trained on a domain similar to our target domain (Lowe et al. 2016). The response with the highest score is returned from a set of K = 50 candidate responses, retrieved using TF-IDF cosine similarity based on Glove word embeddings. The model DualEncoderRedditPolitics is trained on the Reddit https://www.reddit.com/r/politics dataset and extracts responses from all Reddit datasets. The model DualEncoderRedditNews is trained on the Reddit https://www.reddit.com/r/news dataset and extracts responses from all Reddit datasets.
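A rough sketch of the bilinear scoring step follows; the bag-of-words `encode` function stands in for the trained LSTM encoders ENC_Q and ENC_R, and the matrix M would be learned rather than random.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 64
M = rng.normal(scale=0.1, size=(DIM, DIM))  # bilinear mapping (learned in practice)

def encode(text):
    """Stand-in for ENC_Q / ENC_R; the real system uses LSTM encoders."""
    v = np.zeros(DIM)
    for w in text.lower().split():
        v[hash(w) % DIM] += 1.0
    return v / (np.linalg.norm(v) + 1e-8)

def dual_encoder_score(history, candidate):
    # score = h^T M r, a bilinear form of the two embeddings
    return float(encode(history) @ M @ encode(candidate))

def best_candidate(history, candidates):
    return max(candidates, key=lambda c: dual_encoder_score(history, c))
```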
Bag-of-words Retrieval Models: The system contains three bag-of-words retrieval models based on TF-IDF Glove word embeddings (Pennington et al. 2014) and Word2Vec embeddings (Mikolov et al. 2013).14 Similar to the VHRED models, these models retrieve the response with the highest cosine similarity. The BoWWashingtonPost model retrieves user comments from WashingtonPost news articles using Glove word embeddings. The model BoWTrump retrieves responses from a set of Twitter tweets scraped from Donald Trump's profile: https://twitter.com/realDonaldTrump. This model also uses Glove word embeddings and it only returns a response when at least one relevant keyword or phrase is found in the user's utterance (e.g. when the word "Trump" is mentioned by the user). The list of trigger keywords and phrases includes: "donald", "trump", "potus", "president of the united states", "president of the us", "hillary", "clinton", "barack", and "obama". The model BoWFactGenerator retrieves responses from a set of about 2500 interesting and fun facts, including facts about animals, geography and history. The model uses Word2Vec word embeddings. The model BoWGameofThrones retrieves responses from a set of quotes scraped from https://twitter.com/ThroneQuotes using Glove word embeddings. Tweets from this source were manually inspected and cleaned to remove any tweets that were not quotes from the series. As in the BoWTrump model, we use a list of trigger phrases to determine if the model's output is relevant to the user's utterance. We populate this list with around 80 popular character names, place names and family names, which are largely unique to the domain. We also added a few aliases to account for alternative speech transcriptions of these named entities. Some phrases include: "ned stark", "jon snow", "john snow", "samwell tarly", "hodor", "dothraki" and so on.15
# 3.4 Retrieval-based Logistic Regression
BoWEscapePlan: The system contains a response model, called BoWEscapePlan, which returns a response from a set of 35 topic-independent, generic pre-defined responses, such as "Could you repeat that again", "I don't know" and "Was that a question?". Its main purpose is to maintain user engagement and keep the conversation going, when other models are unable to provide meaningful responses. This model uses a logistic regression classifier to select its response based on a set of higher-level features.
To train the logistic regression classifier, we annotated 12,000 user utterances and candidate response pairs for appropriateness on a Likert-type scale 1–5. The user utterances were extracted from interactions between Alexa users and a preliminary version of the system. The candidate responses were sampled at random from BoWEscapePlan's response list. The label collection and training procedure for the logistic regression model are similar to the procedures described in Section 4. The logistic regression model is trained with log-likelihood on a training set, with early-stopping on a development set, and evaluated on the testing set. However, the trained model's performance was poor. It obtained a Pearson correlation coefficient of 0.05 and a Spearman's rank correlation coefficient of 0.07. This indicates that the logistic regression model is only slightly better at selecting a topic-independent, generic response compared to selecting a response at uniform random. Future work should investigate collecting more labeled data and pre-training the logistic regression model.
# 3.5 Search Engine-based Neural Networks
The system contains a deep classifier model, called LSTMClassifierMSMarco, which chooses its response from a set of search engine results. The system searches the web with the last user utterance
14We use the pre-trained Word2Vec embeddings: https://code.google.com/archive/p/word2vec/.
15This model was implemented after the competition ended, but is included here for completeness.
as query, and retrieves the first 10 search snippets. The retrieved snippets are preprocessed by stripping trailing words, removing unnecessary punctuation and truncating to the last full sentence. The model uses a bidirectional LSTM to separately map the last dialogue utterance and the snippet to their own embedding vectors. The resulting two representations are concatenated and passed through an MLP to predict a scalar value between 0 and 1 indicating how appropriate the snippet is as a response to the utterance.
The model is trained as a binary classification model on the Microsoft Marco dataset with cross-entropy to predict the relevancy of a snippet given a user query (Nguyen et al. 2016). Given a search query and a search snippet, the model must output one when the search snippet is relevant and otherwise zero. Search queries and ground truth search snippets are taken as positive samples, while other search snippets are selected at random as negative samples. On this task, the model is able to reach a prediction accuracy of 72.96% w.r.t. the Microsoft Marco development set.
The system is able to use search APIs from various search engines including Google, Bing, and AIFounded (Im 2017). In the current model, we choose Google as the search engine, since qualitative inspection showed that this retrieved the most appropriate responses.
# 3.6 Generation-based Neural Networks
The system contains a generative recurrent neural network language model, called GRUQuestionGenerator, which can generate follow-up questions word-by-word, conditioned on the dialogue history. The input to the model consists of three components: a one-hot vector of the current word, a binary question label and a binary speaker label. The model contains two GRU layers (Cho et al. 2014) and a softmax output layer. The model is trained on Reddit Politics and Reddit News conversations, wherein posts were labeled as questions by detecting question marks. We use the optimizer Adam (Kingma & Ba 2015), and perform early stopping by checking the perplexity on the validation set. For generation, we first condition the model on a short question template (e.g. "How about", "What about", "How do you think of", "What is your opinion of"), and then generate the rest of the question by sampling from the model with the question label clamped to one. The generation procedure stops once a question mark is detected. Further, the length of the question is controlled by tuning the temperature of the softmax layer. Due to speed requirements, only two candidate responses are generated and the best one w.r.t. log-likelihood of the first 10 words is returned.
# 4 Model Selection Policy
After generating the candidate response set, the dialogue manager uses a model selection policy to select the response it returns to the user. The dialogue manager must select a response which increases the satisfaction of the user for the entire dialogue. It must make a trade-off between immediate and long-term user satisfaction. For example, suppose the user asks to talk about politics. If the dialogue manager chooses to respond with a political joke, the user may be pleased for one turn. Afterwards, however, the user may be disappointed with the system's inability to debate political topics. Instead, if the dialogue manager chooses to respond with a short news story, the user may be less pleased for one turn. However, the news story may influence the user to follow up with factual questions, which the system may be better adept at handling. To make the trade-off between immediate and long-term user satisfaction, we consider selecting the appropriate response as a sequential decision making problem. This section describes five approaches to learn the model selection policy. These approaches are all evaluated with real-world users in the next section.
We use the reinforcement learning framework (Sutton & Barto 1998). The dialogue manager is an agent, which takes actions in an environment in order to maximize rewards. For each time step t = 1, . . . , T, the agent observes the dialogue history h_t and must choose one of K actions (responses): a_t^1, . . . , a_t^K. After taking an action, the agent receives a reward r_t and is transferred to the next state h_{t+1} (which includes the user's next response). Then, the agent is provided with a new set of K actions: a_{t+1}^1, . . . , a_{t+1}^K. The agent aims to maximize the discounted sum of rewards

R = Σ_{t=1}^{T} γ^t r_t,    (1)

which is referred to as the expected cumulative return (or simply expected return). The parameter γ ∈ (0, 1] is a discount factor.
An issue specific to our setting is that the set of actions changes depending on the state (dialogue history). This happens because the candidate responses are generated by response models, which also depend on the dialogue history. In addition, the response models are not deterministic. This means the set of candidate responses is likely to be different every time the agent encounters the same state h_t.16 This is in contrast to certain reinforcement learning problems, such as learning to play Atari 2600 games, where the set of actions is fixed given the state. To simplify notation, we will fix the number of actions to K henceforth.
Action-value Parametrization: We use two different approaches to parametrize the agent's policy. The first approach is based on an action-value function, defined by parameters θ:

Q_θ(h_t, a_t^k) ∈ ℝ   for k = 1, . . . , K,    (2)
which estimates the expected return of taking action a_t^k (candidate response k) given dialogue history h_t and given that the agent will continue to use the same policy afterwards. Given Q_θ, the agent chooses the action with highest expected return:

π_θ(h_t) = argmax_k Q_θ(h_t, a_t^k).    (3)
The use of an action-value function for selecting dialogue responses is closely related to the recent work by Lowe, Noseworthy, Serban, Angelard-Gontier, Bengio & Pineau (2017), where a model is learned to predict the quality of a dialogue system response. However, in our case, Q_θ is only conditioned on the dialogue context. On the other hand, the model proposed by Lowe, Noseworthy, Serban, Angelard-Gontier, Bengio & Pineau (2017) is conditioned both on the dialogue context and on a human reference response. The action-value function is also related to the work by Yu et al. (2016), who learn an evaluation model, which is used to train a reinforcement learning agent to select appropriate dialogue response strategies.
Stochastic Policy Parametrization: The second approach instead parameterizes the policy as a discrete distribution over actions. Let θ be the parameters. The agent selects its action by sampling:
π_θ(a_t^k | h_t) = exp(f_θ(h_t, a_t^k)/λ) / Σ_{a'} exp(f_θ(h_t, a')/λ)   for k = 1, . . . , K,    (4)

where f_θ(h_t, a_t^k) is a scalar-valued scoring function with parameters θ, which assigns a score to each candidate response a_t^k given h_t. The parameter λ is called the temperature and controls the entropy of the distribution. The higher λ is, the more uniform the selection of actions will be. The stochastic policy can be transformed into a deterministic (greedy) policy by selecting the action with highest probability:

π_θ^greedy(h_t) = argmax_k π_θ(a_t^k | h_t) = argmax_k f_θ(h_t, a_t^k).    (5)
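A small numpy sketch of eqs. (4) and (5), assuming the candidate scores f_θ(h_t, a_t^k) have already been computed:

```python
import numpy as np

def policy_probs(scores, lam=1.0):
    """Softmax with temperature lam over candidate scores f_theta(h_t, a_t^k)."""
    logits = np.asarray(scores, dtype=float) / lam
    p = np.exp(logits - logits.max())
    return p / p.sum()

def stochastic_action(scores, lam=1.0, rng=None):
    rng = rng or np.random.default_rng()
    p = policy_probs(scores, lam)
    return int(rng.choice(len(p), p=p))   # eq. (4): sample an action index

def greedy_action(scores):
    return int(np.argmax(scores))         # eq. (5): pick the highest score

# Higher temperature -> selection closer to uniform over the K candidates.
example = stochastic_action([2.0, 1.0, 0.5], lam=5.0)
```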
Scoring Model: The action-value function Q_θ(h_t, a_t^k) and the scoring function f_θ(h_t, a_t^k) are closely related. Both functions yield a ranking over the actions; higher values imply higher expected returns. When Q_θ(h_t, a_t^k) = f_θ(h_t, a_t^k), the action-value function policy in eq. (3) is equivalent to the greedy policy in eq. (5). For simplicity, we will use the same parametrization for both Q_θ(h_t, a_t^k) and f_θ(h_t, a_t^k). Therefore, we let both functions take the same features as input and process them using the same neural network architecture. We will refer to both functions as the scoring model.
The next section describes the input features for the scoring model.
# 4.1 Input Features
As input to the scoring model we compute 1458 features based on the given dialogue history and candidate response. The input features are based on a combination of word embeddings, dialogue acts, part-of-speech tags, unigram word overlap, bigram word overlap and model-specific features:

Word embeddings of response: Average of the candidate response word embeddings (Mikolov et al. 2013).17
16In general, since some response models only output responses for certain user utterances, the number of candidate responses also changes depending on the state.
17We use the pre-trained Word2Vec embeddings: https://code.google.com/archive/p/word2vec/.
Word embeddings of last user utterance: Average of the last user utterance word embeddings.

Word embeddings of context: Average of the word embeddings of the last six utterances in the dialogue context.

Word embeddings of user context: Average of the word embeddings of the last three user utterances in the dialogue context.

Word embedding similarity metrics: The Embedding Average, Embedding Extrema and Embedding Greedy similarity metrics described by Liu et al. (2016). Each similarity metric is computed between 1) the last user utterance and the candidate response, 2) the last six utterances in the dialogue and the candidate response, 3) the last three user utterances in the dialogue and the candidate response, 4) the last six utterances in the dialogue and the candidate response with stop-words removed, and 5) the last three user utterances in the dialogue and the candidate response with stop-words removed.

Response model class: A one-hot vector with size equal to the number of response models, where entry i is equal to 1.0 when the candidate response was generated by the model class with index i.

Part-of-speech tags of response: The part-of-speech tags for the candidate response are estimated using a maximum entropy tagger trained on the Penn Treebank corpus. The sequence of part-of-speech tags is then mapped to a one-hot vector, which constitutes the input feature.

Dialogue act and model class: The outer product between a one-hot vector representing the dialogue act (we consider 10 types of dialogue acts) and a one-hot vector indicating the model class (Stolcke et al. 2000).

Word overlap: 1.0 when one or more non-stop-words overlap between the candidate response and the last user utterance, and otherwise zero.

Bigram overlap (response): 1.0 when a bigram (two consecutive tokens) exists both in the candidate response and in the last user utterance, and otherwise zero.

Bigram overlap (context): 1.0 when a bigram exists both in the candidate response and in one of the last utterances in the dialogue context, and otherwise zero.

Named-entity overlap (response): 1.0 when a named entity (an upper-cased word, which is not a stop-word) exists both in the candidate response and in the last user utterance, and otherwise zero.

Named-entity overlap (context): 1.0 when a named entity exists both in the candidate response and in one of the last utterances in the dialogue context, and otherwise zero.

Generic response: 1.0 when the candidate response consists of only stop-words or words shorter than 3 characters, and otherwise zero.

Wh-word response: 1.0 when the candidate response contains a wh-word (e.g. what, where, and so on), and otherwise zero.

Wh-word context: 1.0 when the last user utterance contains a wh-word, and otherwise zero.

Intensifier word response: 1.0 when the candidate response contains an intensifier word (e.g. amazingly, crazy, and so on), and otherwise zero.
Intensifier word context: 1.0 when the last user utterance contains an intensifier word, and otherwise zero.

Unigram response: A set of binary features which are 1.0 when the candidate response contains a specific word (including the words I, you and thanks), and otherwise zero.

Negation response: 1.0 when the candidate response contains a negation word, such as not or n't, and otherwise zero.

Non-stop-words response: 1.0 when the candidate response contains a non-stop-word, and otherwise zero.
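A few of the binary overlap features above can be sketched as follows; the stop-word and wh-word lists are illustrative and much shorter than those used in the system.

```python
STOP_WORDS = {"the", "a", "an", "is", "are", "you", "i", "to", "of", "and"}
WH_WORDS = {"who", "what", "where", "when", "why", "which", "how"}

def bigrams(tokens):
    return set(zip(tokens, tokens[1:]))

def overlap_features(last_user_utterance, candidate_response):
    u = last_user_utterance.lower().split()
    r = candidate_response.lower().split()
    non_stop_u = set(u) - STOP_WORDS
    non_stop_r = set(r) - STOP_WORDS
    return {
        # 1.0 if any non-stop-word is shared between response and last utterance
        "word_overlap": float(bool(non_stop_u & non_stop_r)),
        "bigram_overlap_response": float(bool(bigrams(u) & bigrams(r))),
        "wh_word_response": float(bool(set(r) & WH_WORDS)),
        "wh_word_context": float(bool(set(u) & WH_WORDS)),
        # 1.0 if the response contains only stop-words or very short words
        "generic_response": float(all(w in STOP_WORDS or len(w) < 3 for w in r)),
    }
```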
We do not include features based on the confidences of the speech recognition system, for experimental reasons. Speech recognition errors are a confounding factor in experiments with real-world users. Speech recognition errors are likely to affect user satisfaction. If features based on speech recognition confidences were included, one policy might learn to handle speech recognition errors better than another policy. In turn, this could make that policy perform better w.r.t. overall user satisfaction. However, that would be an effect caused by the imperfect speech recognition system, and would not reflect user satisfaction under a perfect speech recognition system. Excluding these features as input to the scoring model helps minimize this confounding effect. Nevertheless, even if these features are excluded, it should be noted that speech recognition errors still constitute a substantial confounding factor in our later experiments. Lastly, for the same reasons, none of the response models utilize speech recognition confidences.
In principle, it is possible to compute input features by encoding the dialogue context and candidate response using Recurrent Neural Networks (RNNs) or Convolutional Neural Networks (ConvNets) (Socher et al. 2013, Blunsom et al. 2014, Cho et al. 2014, Yu et al. 2014, Kiros et al. 2015). However, these models are known to require training on large corpora in order to achieve acceptable performance, which we do not have access to. In addition, we need to keep the scoring model's execution time under 150ms. Otherwise, the slowdown in response time could frustrate the user and lower the overall user satisfaction. This rules out large RNNs and ConvNets for the Amazon Alexa Prize competition, since these would require more computational runtime. However, future dialogue systems utilizing larger datasets should consider large-scale models.
# 4.2 Model Architecture
This section describes the scoring model's architecture. The scoring model is a five-layered neural network. The first layer is the input, consisting of the 1458 features described in the previous section. The second layer contains 500 hidden units, computed by applying a linear transformation followed by the rectified linear activation function (Nair & Hinton 2010, Glorot et al. 2011) to the input layer units. The third layer contains 20 hidden units, computed by applying a linear transformation to the preceding layer units. Similar to matrix factorization, this layer compresses the 500 hidden units down to 20 hidden units. The fourth layer contains 5 output units, which are probabilities (i.e. all values are positive and sum to one). These output units are computed by applying a linear transformation to the preceding layer units followed by a softmax transformation. This layer corresponds to the Amazon Mechanical Turk labels, which will be described in the next sub-section. The fifth layer is the final output scalar, computed by applying a linear transformation to the units in the third and fourth layers. The model is illustrated in Figure 2.
Before settling on this architecture, we experimented both with deeper and more shallow models. However, we found that both the deeper models and the more shallow models performed worse. Nevertheless, future work should explore alternative architectures.
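For concreteness, a minimal numpy sketch of the forward pass through this architecture is given below, with randomly initialised weights; in the Supervised AMT variant described in Section 4.3, the weights mapping the 5 class probabilities to the scalar output are fixed to [1.0, 2.0, 3.0, 4.0, 5.0].

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(scale=0.01, size=(1458, 500)), np.zeros(500)
W2, b2 = rng.normal(scale=0.01, size=(500, 20)), np.zeros(20)
W3, b3 = rng.normal(scale=0.01, size=(20, 5)), np.zeros(5)
W4_softmax = np.array([1.0, 2.0, 3.0, 4.0, 5.0])    # fixed in Supervised AMT
W4_skip, b4 = rng.normal(scale=0.01, size=20), 0.0   # skip connection weights

def score(x):
    h1 = np.maximum(0.0, x @ W1 + b1)                 # 500 hidden units, ReLU
    h2 = h1 @ W2 + b2                                 # 20 hidden units, linear
    logits = h2 @ W3 + b3
    p = np.exp(logits - logits.max()); p /= p.sum()   # 5 AMT class probabilities
    return float(p @ W4_softmax + h2 @ W4_skip + b4)  # scalar output with skip

example = score(rng.normal(size=1458))
```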
We use five different machine learning approaches to learn the scoring model. These are described next.
# 4.3 Supervised AMT: Learning with Crowdsourced Labels
This section describes the first approach to learning the scoring model, which is based on estimating the action-value function using supervised learning on crowdsourced labels. This approach also serves as initialization for the approaches discussed later.
Figure 2: Computational graph for the scoring model, used for the model selection policies based on both action-value function and stochastic policy parametrizations. The model consists of an input layer with 1458 features, a hidden layer with 500 hidden units, a hidden layer with 20 hidden units, a softmax layer with 5 output probabilities (corresponding to the five AMT labels in Section 4.3), and a scalar-valued output layer. The dashed arrow indicates a skip connection.
Crowdsourcing: We use Amazon Mechanical Turk (AMT) to collect data for training the scoring model. We follow a setup similar to Liu et al. (2016). We show human evaluators a dialogue along with 4 candidate responses, and ask them to score how appropriate each candidate response is on a 1-5 Likert-type scale. The score 1 indicates that the response is inappropriate or does not make sense, 3 indicates that the response is acceptable, and 5 indicates that the response is excellent and highly appropriate.
Our setup only asks human evaluators to rate the overall appropriateness of the candidate responses. In principle, we could choose to evaluate other aspects of the candidate responses. For example, we could evaluate fluency. However, fluency ratings would not be very useful since most of our models retrieve their responses from existing corpora, which contain mainly fluent and grammatically correct responses. As another example, we could evaluate topical relevancy. However, we choose not to evaluate such criteria since it is known to be difficult to reach high inter-annotator agreement on them (Liu et al. 2016). In fact, it is well known that even asking for a single overall rating tends to produce only a fair agreement between human evaluators (Charras et al. 2016); disagreement between annotators tends to arise either when the dialogue context is short and ambiguous, or when the candidate response is only partially relevant and acceptable.
The dialogues are extracted from interactions between Alexa users and preliminary versions of our system. Only dialogues where the system does not have a priority response were extracted (when there is a priority response, the dialogue manager must always return the priority response). About 3/4 of these dialogues were sampled at random, and the remaining 1/4 dialogues were sampled at random excluding identical dialogues.18 For each dialogue, the corresponding candidate responses are created by generating candidate responses from the response models.
We preprocess the dialogues and candidate responses by masking out profanities and swear words with stars (e.g. we map "fuck" to "****").19 Furthermore, we anonymize the dialogues and candidate responses by replacing first names with randomly selected gender-neutral names (for example, "Hi John" could be mapped to "Hello Casey"). Finally, the dialogues are truncated to the last 4 utterances and last 500 words. This reduces the cognitive load of the annotators. Examples from the crowdsourcing task are shown in Figure 3, Figure 4 and Figure 5. The dialogue example shown in Figure 5 is a fictitious example.
18Sampling at random is advantageous for our goal, because it ensures that candidate responses to frequent user statements and questions tend to be annotated by more turkers. This increases the average annotation accuracy for such utterances, which in turn increases the scoring model's accuracy for such utterances.
19The masking is not perfect. Therefore, we also instruct turkers that the task may contain profane and
obscene language. Further, it should also be noted that Amazon Mechanical Turk only employs adults.
Figure 3: Consent screen for Amazon Mechanical Turk human intelligence tasks (HITs).
Figure 4: Instructions screen for Amazon Mechanical Turk human intelligence tasks (HITs).
Figure 5: Annotation screen for Amazon Mechanical Turk human intelligence tasks (HITs). The dialogue text is a fictitious example.
We inspected the annotations manually. We observed that annotators tended to frequently overrate topic-independent, generic responses. Such responses may be considered acceptable for a single turn in a conversation, but are likely to be detrimental when repeated over and over again. In particular, annotators tended to overrate responses generated by the response models Alicebot, Elizabot, VHREDSubtitles and BoWEscapePlan. Responses generated by these models are often acceptable or good, but the majority of them are topic-independent, generic sentences. Therefore, for these response models, we mapped all labels 5 ("excellent") to 4 ("good"). Furthermore, for responses consisting of only stop-words, we decreased the labels by one level (e.g. 4 is mapped to 3). Finally, the BoWMovies response model suffered from a bug during the label collection period. Therefore, we decreased all labels given to BoWMovies responses to be at most 2 ("poor").
In total, we collected 199,678 labels. We split these into training (train), development (dev) and testing (test) datasets consisting of 137,549, 23,298 and 38,831 labels, respectively.
Training: We optimize the scoring model w.r.t. log-likelihood (cross-entropy) to predict the 4th layer, which represents the AMT label classes. Formally, we optimize the parameters θ:

θ̂ = argmax_θ Σ_{x,y} log P_θ(y | x),    (6)

where x are the input features, y is the corresponding AMT label class (a one-hot vector) and P_θ(y|x) is the model's predicted probability of y given x, computed in the second last layer of the scoring model. We use the first-order gradient-descent optimizer Adam (Kingma & Ba 2015). We experiment with a variety of hyper-parameters, and select the best hyper-parameter combination based on the log-likelihood of the dev set. For the first hidden layer, we experiment with layer sizes in the set: {500, 200, 50}. For the second hidden layer, we experiment with layer sizes in the set: {50, 20, 5}. We use L2 regularization on all model parameters, except for bias parameters. We experiment with L2 regularization coefficients in the set: {10.0, 1.0, 10^{-1}, . . . , 10^{-9}}. Unfortunately, we do not have labels to train the last layer. Therefore, we fix the parameters of the last layer to the vector [1.0, 2.0, 3.0, 4.0, 5.0]. In other words, we assign a score of 1.0 for the label very poor, a score of 2.0 for the label poor, a score of 3.0 for the label acceptable, a score of 4.0 for the label good and a
score of 5.0 for the label excellent. As this model was trained on crowdsourced data from Amazon Mechanical Turk (AMT), we call this model Supervised AMT.
Table 2: Scoring model evaluation on the Amazon Mechanical Turk test set w.r.t. Pearson correlation coefficient, Spearman's rank correlation coefficient and mean squared error.

| Model | Pearson | Spearman | Mean squared error |
| --- | --- | --- | --- |
| Average Predictor | 0.00 | 0.00 | 1.30 |
| Supervised AMT | 0.40 | 0.38 | 1.10 |
Figure 6: Amazon Mechanical Turk class frequencies on the test set w.r.t. different policies.
Table 2 shows the performance w.r.t. Pearson correlation coefficient, Spearman's rank correlation coefficient and mean squared error. The metrics are computed after linearly transforming the AMT class categories to the scalar output score (i.e. by taking the dot-product between the one-hot class vector and the vector [1.0, 2.0, 3.0, 4.0, 5.0]). The Average Predictor is a baseline model, which always predicts with the average output score. As shown, Supervised AMT achieves a Pearson correlation coefficient of 0.40, a Spearman's rank correlation coefficient of 0.38 and a significant reduction in mean squared error. This indicates Supervised AMT performs significantly better than the baseline.
Figure 6 shows the performance w.r.t. each AMT label class. In addition to Supervised AMT, the figure shows the performance of three baseline policies: 1) Random, which selects a response at random, 2) Alicebot, which selects an Alicebot response if available and otherwise selects a response at random, and 3) Evibot + Alicebot, which selects an Evibot response if available and otherwise selects an Alicebot response. For each policy, the figure shows the percentage of responses selected by the policy belonging to a particular AMT label class. In one end of the spectrum, we observe that Supervised AMT has a ~30% point reduction compared to Random in responses belonging to the "very poor" class. For the same AMT label class, Supervised AMT has a reduction of ~10% points compared to Alicebot and Evibot + Alicebot. In the other end of the spectrum, we observe that Supervised AMT performs significantly better than the three baselines w.r.t. the classes "good" and "excellent". In particular, Supervised AMT reaches ~8% responses belonging to the class "excellent". This is more than double compared to all three baseline policies. This demonstrates that Supervised AMT has learned to select "good" and "excellent" responses, while avoiding "very poor" and "poor" responses.
Overall, the results show that Supervised AMT improves substantially over all baseline policies. Nevertheless, ~46% of the Supervised AMT responses belong to the classes "very poor" and "poor". This implies that there is ample room for improving both Supervised AMT and the set of candidate responses (i.e. the system's response models).
# 4.4 Supervised Learned Reward: Learning with a Learned Reward Function
In the first scoring model, Supervised AMT, we fixed the last output layer weights to [1.0, 2.0, 3.0, 4.0, 5.0]. In other words, we assigned a score of 1.0 for very poor responses, 2.0 for poor responses, 3.0 for acceptable responses, and so on. It is not clear whether this score is correlated with scores given by real-world Alexa users, which is what we ultimately want to optimize the system for. This section describes another approach, which remedies this problem by learning to predict the Alexa user scores based on previously recorded dialogues.
Learned Reward Function: Let h_t be a dialogue history and let a_t be the corresponding response, given by the system at time t. We aim to learn a linear regression model, g_φ, which predicts the corresponding return (Alexa user score) at the current dialogue turn:

g_φ(h_t, a_t) ∈ [1, 5],    (7)

where φ are the model parameters. We call this a reward model, since it directly models the Alexa user score, which we aim to maximize. Let {(h_t^d, a_t^d)}_{t=1}^{T^d} denote the dialogue history and system actions recorded for dialogue d = 1, . . . , D. Let R^d ∈ [1, 5] denote the observed real-valued return for dialogue d.
Specifically, we set R^d to be the Alexa user score given at the end of dialogue d. It is optional for users to give a score; users are prompted to give a score at the end, but they may opt out by stopping the application. Although not all users give scores, we do not consider examples without scores.20 Furthermore, users are encouraged to give a score in the range 1–5. The majority of users give whole number (integer) scores, but some users give decimal scores (e.g. 3.5). Therefore, we treat R^d as a real-valued number in the range 1–5.
We learn φ by minimizing the squared error between the model's prediction and the observed return:

φ̂ = argmin_φ Σ_d Σ_t (g_φ(h_t^d, a_t^d) − R^d)^2.    (8)

As before, we optimize the model parameters with mini-batch stochastic gradient descent. We use L2 regularization with coefficients in the set {10.0, 1.0, 0.1, 0.01, 0.001, 0.0001, 0.00001, 0.0}, and we select the coefficient with the smallest squared error on a hold-out dataset.
As input to the reward model we compute 23 features based on the dialogue history and a candidate response. As training data is scarce, we use only higher-level features:
AMT label class: A vector indicating the probability of the AMT label classes for the candidate response, computed using Supervised AMT, as well as the probability that the candidate response has priority. If the candidate response has priority, the vector is zero in all entries, except the last entry corresponding to the priority class: [0.0, 0.0, 0.0, 0.0, 0.0, 1.0].

Generic response: A binary feature, which is 1.0 when the response only contains stop-words and otherwise zero.

Response length: The number of words in the response, and the square root of the number of words in the response.

Dialogue act: A one-hot vector, indicating whether the last user utterance's dialogue act is a request, a question, a statement or contains profanity (Stolcke et al. 2000).
20By ignoring dialogues without Alexa user scores, we introduce a significant bias in our reward model. In particular, it seems likely that the users who did not provide a score either found the system to be very poor or to lack particular functions/features they expected (e.g. non-conversational activities, such as playing games or taking quizzes). A related problem arises in medical statistics, when patients undergo a treatment and, later, their outcome is not observed.
Sentiment class: A one-hot vector, indicating whether the last user utterance's sentiment is negative, neutral or positive.

Generic user utterance: A binary feature, which is 1.0 when the last user utterance only contains stop-words, and otherwise zero.

User utterance length: The number of words in the last user utterance, and the square root of the number of words in the last user utterance.

Confusion indicator: A binary feature, which is 1.0 when the last user utterance is very short (less than three words) and contains at least one word indicating the user is confused (e.g. "what", "silly", "stupid"), and otherwise zero.

Dialogue length: The number of dialogue turns so far, as well as the square root and logarithm of the number of dialogue turns.
In total, our dataset for training the reward model has 4340 dialogues. We split this into a training set with 3255 examples and a test set with 1085 examples.
To increase data efficiency, we learn an ensemble model through a variant of the bagging technique (Breiman 1996). We create 5 new training sets, which are shuffled versions of the original training set. Each shuffled dataset is split into a sub-training set and a sub-hold-out set. The sub-hold-out sets are created such that the examples in one set do not overlap with other sub-hold-out sets. A reward model is trained on each sub-training set, with its hyper-parameters selected on the sub-hold-out set. This increases data efficiency by allowing us to re-use the sub-hold-out sets for training, which would otherwise not have been used. The final reward model is an ensemble, where the output is an average of the underlying linear regression models.
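A simplified sketch of this bagging-style ensemble is given below, using ridge regression members; hyper-parameter selection on the sub-hold-out sets and the non-overlap constraint between them are omitted for brevity.

```python
import numpy as np

def fit_ridge(X, y, l2=1.0):
    # Closed-form ridge regression: (X^T X + l2 I)^{-1} X^T y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + l2 * np.eye(d), X.T @ y)

def fit_ensemble(X, y, n_members=5, holdout_frac=0.2, seed=0):
    rng, n = np.random.default_rng(seed), len(y)
    members = []
    for _ in range(n_members):
        idx = rng.permutation(n)
        train_idx = idx[int(holdout_frac * n):]   # sub-training split
        members.append(fit_ridge(X[train_idx], y[train_idx]))
    return members

def predict_ensemble(members, X):
    preds = np.stack([X @ w for w in members])
    return np.clip(preds.mean(axis=0), 1.0, 5.0)  # Alexa scores lie in [1, 5]
```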
The reward model obtains a mean squared error of 0.96 and a Spearman's rank correlation coefficient of 0.19 w.r.t. the real Alexa user scores on the test set. In comparison, a model predicting with the average user score obtains a mean squared error of 0.99 and (because it outputs a constant) a Spearman's rank correlation coefficient of zero. Although the reward model is better than predicting the average, its correlation is relatively low. There are two reasons for this. First, the amount of training data is very small. This makes it difficult to learn the relationships between the features and the Alexa user scores. Second, the Alexa user scores are likely to have high variance because they are influenced by many different factors. The score of the user may be determined by a single turn in the dialogue (e.g. a single misunderstanding at the end of the dialogue could result in a very low user score, even if all the previous turns in the dialogue were excellent). The score of the user may be affected by the accuracy of the speech recognition module. More speech recognition errors will inevitably lead to frustrated users. In a preliminary study, we found that Spearman's rank correlation coefficient between the speech recognition confidences and the Alexa user scores was between 0.05 and 0.09. In comparison to correlations with other factors, this implies that speech recognition performance plays an important role in determining user satisfaction.21 In addition, extrinsic factors are likely to have a substantial influence on the user scores. The user scores are likely to depend not only on the dialogue, but also on the user's profile (e.g. whether the user is an adult or a child), the environment (e.g. whether the user is alone with the system or several users are taking turns conversing with the system), the user's expectations towards the system before starting the conversation (e.g. whether the system is capable of playing games) and the emotional state of the user (e.g. the user's mood).
Training: To prevent overfitting, we do not train the scoring model (action-value function) from scratch with the reward model as target. Instead, we first initialize the model with the parameters of the Supervised AMT scoring model, and then fine-tune it with the reward model outputs to minimize the squared error:

θ̂ = argmin_θ Σ_d Σ_t (f_θ(h_t^d, a_t^d) − g_φ(h_t^d, a_t^d))^2.    (9)

As before, we optimize the model parameters with stochastic gradient descent using Adam. As training this model does not depend on AMT labels, training is carried out on recorded dialogues. We train on several thousand recorded dialogue examples, where about 80% are used for training and about 20% are used as a hold-out set. No regularization is used. We early stop on the squared error of
21This was confirmed by manual inspection of the conversation logs, where the majority of conversations had several speech recognition errors. In conversations with an excessive number of speech recognition errors (perhaps due to noisy environments), the users' utterances clearly showed frustration with the system.
the hold-out dataset w.r.t. Alexa user scores predicted by the reward model. As this scoring model was trained with a learned reward function, we call it Supervised Learned Reward.
# 4.5 Off-policy REINFORCE
As discussed earlier, one way to parametrize the policy is as a discrete probability distribution over actions. This parametrization allows us to learn the policy directly from recorded dialogues through a set of methods known as policy gradient methods. This section describes one such approach.
Off-policy Reinforcement Learning: We use a variant of the classical REINFORCE algorithm (Williams 1992, Precup 2000, Precup et al. 2001), which we call Off-policy REINFORCE. Recall eq. (4), where the policy's distribution over actions is parametrized as a softmax function applied to a function f_θ with parameters θ. As before, let {(h_t^d, a_t^d)}_{t=1}^{T^d} and R^d denote the recorded dialogues, where h_t^d is the dialogue history for dialogue d at time t, a_t^d is the agent's action for dialogue d at time t and R^d is the return for dialogue d. Let D be the number of dialogues and let T^d be the number of turns in dialogue d. Further, let θ_d be the parameters of the stochastic policy π_{θ_d} used during dialogue d. The Off-policy REINFORCE algorithm updates the policy parameters θ by:
Δθ ∝ c_t^d ∇_θ log π_θ(a_t^d | h_t^d) R^d,   where d ∼ Uniform(1, D) and t ∼ Uniform(1, T^d),    (10)

where c_t^d is the importance weight ratio:

c_t^d = Π_{t'=1}^{t} π_θ(a_{t'}^d | h_{t'}^d) / Π_{t'=1}^{t} π_{θ_d}(a_{t'}^d | h_{t'}^d).    (11)

This ratio corrects for the discrepancy between the learned policy π_θ and the policy under which the data was collected π_{θ_d} (sometimes referred to as the behaviour policy). It up-weights examples with high probability under the learned policy and down-weights examples with low probability under the learned policy.
The intuition behind the algorithm can be illustrated by analogy with learning from trial and error. When an example has a high return (i.e. high user score), the term ∇_θ log π_θ(a_t^d | h_t^d) R^d will be a vector pointing in a direction that increases the probability of taking action a_t^d. On the other hand, when an example has a low return (i.e. low user score), the term ∇_θ log π_θ(a_t^d | h_t^d) R^d will be a vector close to zero or a vector pointing in the opposite direction, hence decreasing the probability of taking action a_t^d. The importance ratio c_t^d is known to exhibit very high, possibly infinite, variance (Precup et al. 2001). Therefore, we truncate the products in the numerator and denominator to only include the current time step t:
c_{t,trunc.}^d := π_θ(a_t^d | h_t^d) / π_{θ_d}(a_t^d | h_t^d).    (12)
This induces bias in the learning process, but also acts as a regularizer.
Reward Shaping: As mentioned before, one problem with the Off-policy REINFORCE algorithm presented in eq. (10) is that it suffers from high variance (Precup et al. 2001). The algorithm uses the return, observed only at the very end of an episode, to update the policy's action probabilities for all intermediate actions in an episode. With a small number of examples, the variance in the gradient estimator is overwhelming and this could easily lead the agent to over-estimate the utility of poor actions and, vice versa, to under-estimate the utility of good actions. One remedy for this problem is reward shaping, where the reward at each time step is estimated using an auxiliary function (Ng et al. 1999). For our purpose, we propose a simple variant of reward shaping which takes into account the sentiment of the user. When the user responds with a negative sentiment (e.g. an angry comment), we will assume that the preceding action was highly inappropriate and assign it a reward of zero. Given a dialogue d, at each time t we assign reward r_t^d:

r_t^d := 0 if the user utterance at time t + 1 has negative sentiment, and r_t^d := R^d / T^d otherwise.    (13)
With reward shaping and truncated importance weights, the learning update becomes:
Δθ ∝ c_{t,trunc.}^d ∇_θ log π_θ(a_t^d | h_t^d) r_t^d,   where d ∼ Uniform(1, D), t ∼ Uniform(1, T^d).    (14)
Off-policy Evaluation: To evaluate the policy, we estimate the expected return (Precup 2000):
E_{π_θ}[R] ≈ Σ_{d,t} c_{t,trunc.}^d r_t^d.    (15)
Furthermore, by substituting r_t^d with a constant reward of 1.0 for each time step, we can compute the estimated number of time steps per episode under the policy. As will be discussed later, this is an orthogonal metric based on which we can analyse and evaluate each policy. However, this estimate does not include the number of priority responses, since there are no actions for the agent to take when there is a priority response.
Training: We initialize the policy model with the parameters of Supervised AMT, and then train the parameters w.r.t. eq. (14) with stochastic gradient descent using Adam. We use a set of a few thousand dialogues recorded between Alexa users and a preliminary version of the system. About 60% of these examples are used for training, and about 20% are used for development and testing. To reduce the risk of overfitting, we only train the weights related to the second last layer using off-policy REINFORCE. We use a random grid search with different hyper-parameters, which include the temperature parameter λ and the learning rate. We select the hyper-parameters with the highest expected return on the development set.
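For illustration, a single update of eq. (14) with the truncated importance weight of eq. (12) can be sketched as follows, assuming (for simplicity) a linear scoring function over the input features rather than the full neural scoring model, in which only the second-last layer is trained.

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

def reinforce_update(theta, features, action, reward, behaviour_probs,
                     lam=1.0, lr=0.01):
    """features: (K, d) matrix of input features for the K candidate responses;
    behaviour_probs: action probabilities under the data-collection policy."""
    scores = features @ theta                          # f_theta(h_t, a_t^k)
    probs = softmax(scores / lam)                      # pi_theta(a | h_t), eq. (4)
    c_trunc = probs[action] / behaviour_probs[action]  # truncated ratio, eq. (12)
    # Gradient of log pi_theta(a_t | h_t) w.r.t. theta for a linear scorer.
    grad_log_pi = (features[action] - probs @ features) / lam
    # Shaped reward r_t^d weights the update, as in eq. (14).
    return theta + lr * c_trunc * grad_log_pi * reward
```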
# 4.6 Off-policy REINFORCE with Learned Reward Function
Similar to the Supervised Learned Reward policy, we may use the reward model for training with the Off-policy REINFORCE algorithm. This section describes how we combine the two approaches.
Reward Shaping with Learned Reward Model: We use the reward model to compute a new estimate for the reward at each time step in each dialogue:
r_t^d := 0 if the user utterance at time t + 1 has negative sentiment, and r_t^d := g_φ(h_t, a_t) otherwise.    (16)
This is substituted into eq. (14) for training and into eq. (15) for evaluation.
Training: As with Off-policy REINFORCE, we initialize the policy model with the parameters of the Supervised AMT model, and then train the parameters w.r.t. eq. (14) with mini-batch stochastic gradient descent using Adam. We use the same set of dialogues and split as Off-policy REINFORCE. As before, to reduce the risk of overfitting, we only train the weights related to the second last layer using this method. We use a random grid search over different hyper-parameters, which include the temperature parameter λ and the learning rate, and select the hyper-parameters with the highest expected return on the development set. In this case, the expected return is computed according to the learned reward model. As this policy uses the learned reward model, we call it Off-policy REINFORCE Learned Reward.
# 4.7 Q-learning with the Abstract Discourse Markov Decision Process
The approaches described so far have each their own advantages and disadvantages. One way to quantify their differences is through a decomposition known as the bias-variance trade-off. At one end of the spectrum, the Supervised AMT policy has low variance, because it was trained with hundreds of thousands of human annotations at the level of each model response. However, for the same reason, Supervised AMT incurs a substantial bias, because the human annotations do not reï¬ect the real user satisfaction for an entire conversation. At the other end of the spectrum, Off-policy REINFORCE suffers from high variance, because it was trained with only a few thousand dialogues and corresponding user scores. To make matters worse, the user scores are affected by many external factors (e.g. user proï¬le, user expectations, and so on) and occur at the granularity of an entire conversation. Nevertheless, this method incurs low bias because it directly optimizes the objective metric we care about (i.e. the user score).22 By utilizing a learned reward function, Supervised
22Due to truncated importance weights, however, the off-policy REINFORCE training procedure is still biased.
Learned Reward and Off-policy REINFORCE Learned Reward suffer less from bias, but since the learned reward function has its own variance component, they are both bound to have higher variance. In general, ï¬nding the optimal trade-off between bias and variance can be notoriously difï¬cult. In this section we propose a novel method for trading off bias and variance by learning the policy from simulations in an approximate Markov decision process.
Motivation A Markov decision process (MDP) is a framework for modeling sequential decision making (Sutton & Barto 1998). In the general setting, an MDP is a model consisting of a discrete set of states H, a discrete set of actions A, a transition distribution function P, a reward distribution function R, and a discount factor γ. As before, an agent aims to maximize its reward during each episode. Let t denote the time step of an episode with length T. At time step t, the agent is in state h_t ∈ H and takes action a_t ∈ A. Afterwards, the agent receives reward r_t ∼ R(h_t, a_t) and transitions to a new state h_{t+1} ∼ P(h_{t+1} | h_t, a_t).
Given an MDP model for open-domain conversations, there are dozens of algorithms we could apply to learn the agentâs policy (Sutton & Barto 1998). Unfortunately, such an MDP is difï¬cult to build or estimate. We could try to naively estimate one from the recorded dialogues, but this would require solving two extremely difï¬cult problems. First, we would need to learn the transition distribution P , which outputs the next user utterance in the dialogue given the dialogue history. This problem is likely to be as difï¬cult as our original problem of ï¬nding an appropriate response to the user! Second, we would need to learn the reward distribution R for each time step. However, as we have shown earlier, it is very difï¬cult to learn to predict the user score for an entire dialogue. Given the data we have available, estimating the reward for a single turn is likely also going to be difï¬cult. Instead, we propose to tackle the problem by splitting it into three smaller parts.
Figure 7: Probabilistic directed graphical model for the Abstract Discourse Markov Decision Process. For each time step t, zt is a discrete random variable which represents the abstract state of the dialogue, ht represents the dialogue history, at represents the action taken by the system (i.e. the selected response), yt represents the sampled AMT label and rt represents the sampled reward.
The Abstract Discourse Markov Decision Process The model we propose to learn is called the Abstract Discourse MDP. As illustrated in Figure 7, the model follows a hierarchical structure at each time step. At time t, the agent is in state z_t ∈ Z, a discrete random variable representing the abstract discourse state. This variable only represents a few high-level properties related to the dialogue history. We define the set Z as the Cartesian product:
$$Z = Z_{\text{Dialogue act}} \times Z_{\text{User sentiment}} \times Z_{\text{Generic user utterance}}, \tag{17}$$
where Z_{Dialogue act} = {Accept, Reject, Request, Politics, Generic Question, Personal Question, Statement, Greeting, Goodbye, Other} is a set of dialogue acts, representing the high-level intention of the user's utterance (Stolcke et al. 2000). The second set consists of sentiment types: Z_{User sentiment} = {Negative, Neutral, Positive}. The third set represents a binary variable: Z_{Generic user utterance} = {True, False}. This variable is True only when the user utterance is generic and topic-independent (i.e. when the user utterance only contains stop-words). We build a hand-crafted deterministic classifier, which maps a dialogue history to the corresponding classes in Z_{Dialogue act}, Z_{User sentiment} and Z_{Generic user utterance}. We denote this mapping f_{h→z}. Although we only
consider dialogue acts, sentiment and generic utterances, it is trivial to expand the abstract discourse state with other types of discrete or real-valued variables.
Given a sample z_t, the Abstract Discourse MDP samples a dialogue history h_t from a finite set of dialogue histories H. In particular, h_t is sampled uniformly at random from the set of dialogue histories where the last utterance is mapped to z_t:
$$h_t \sim P(h \mid H, f_{h\rightarrow z}, z_t) \;\stackrel{\text{def}}{=}\; \text{Uniform}(\{h \mid h \in H \text{ and } f_{h\rightarrow z}(h) = z_t\}). \tag{18}$$

In other words, h_t is a dialogue history whose dialogue act, user sentiment and generic property are identical to those given by the discrete variable z_t.
For our purpose, H is the set of all recorded dialogues between Alexa users and a preliminary version of the system. This formally makes the Abstract Discourse MDP a non-parametric model, since sampling from the model requires access to the set of recorded dialogue histories H. This set grows over time when the system is deployed in practice. This is useful, because it allows to continuously improve the policy as new data becomes available. Further, it should be noted that the set Z is small enough that every possible state is observed several times in the recorded dialogues.
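The following sketch illustrates the abstract state space and the non-parametric sampling of a dialogue history (eqs. (17)–(18)). The toy classifier `f_h_to_z` is a placeholder for the hand-crafted deterministic classifier described above; the real mapping is considerably more elaborate.

```python
# Sketch of the abstract discourse state and the non-parametric sampling of a
# dialogue history given a state. All heuristics below are illustrative.
import random
from collections import namedtuple

AbstractState = namedtuple("AbstractState", ["dialogue_act", "sentiment", "is_generic"])

DIALOGUE_ACTS = {"Accept", "Reject", "Request", "Politics", "Generic Question",
                 "Personal Question", "Statement", "Greeting", "Goodbye", "Other"}
SENTIMENTS = {"Negative", "Neutral", "Positive"}


def f_h_to_z(history) -> AbstractState:
    """Hypothetical deterministic classifier mapping a dialogue history to Z."""
    last = history[-1].lower()
    act = "Greeting" if "hello" in last else "Statement"
    sentiment = "Negative" if "hate" in last else "Neutral"
    is_generic = all(w in {"i", "you", "the", "a", "ok", "yes", "no"}
                     for w in last.split())
    return AbstractState(act, sentiment, is_generic)


def sample_history(z: AbstractState, recorded_histories):
    """Sample h_t uniformly from recorded histories whose last utterance maps to z."""
    matching = [h for h in recorded_histories if f_h_to_z(h) == z]
    return random.choice(matching)
```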
Given a sample h_t, the agent chooses an action a_t according to its policy π_θ(a_t|h_t), with parameters θ. A reward r_t is then sampled such that r_t ∼ R(h_t, a_t), where R is a distribution function. In our case, we use the probability function $P_{\hat\theta}$, where the parameters $\hat\theta$ are estimated using supervised learning on AMT labels in eq. (6). We specify a reward of −2.0 for a "very poor" response class, a reward of −1.0 for a "poor" response class, a reward of 0.0 for an "acceptable" response class, a reward of 1.0 for a "good" response class and a reward of 2.0 for an "excellent" response class. To reduce the number of hyperparameters, we use the expected reward instead of a sample:23
$$r_t = P_{\hat\theta}(y \mid h_t, a_t)^{\top} [-2.0, -1.0, 0.0, 1.0, 2.0]. \tag{19}$$
Next, a variable y_t ∈ {"very poor", "poor", "acceptable", "good", "excellent"} is sampled:

$$y_t \sim P_{\hat\theta}(y \mid h_t, a_t). \tag{20}$$

This variable represents one appropriateness interpretation of the output. This variable helps predict the future state z_{t+1}, because the overall appropriateness of a response has a significant impact on the user's next utterance (e.g. very poor responses often cause users to respond with What? or I don't understand.). Finally, a new state z_{t+1} is sampled according to $P_{\hat\psi}$:
$$z_{t+1} \sim P_{\hat\psi}(z \mid z_t, h_t, a_t, y_t), \tag{21}$$

where $P_{\hat\psi}$ is the transition distribution with parameters $\hat\psi$. The transition distribution is parametrized by three independent two-layer MLP models, which take as input the same features as the scoring function, as well as 1) a one-hot vector representing the sampled response class y_t, 2) a one-hot vector representing the dialogue act of the last user utterance, 3) a one-hot vector representing the sentiment of the last user utterance, 4) a binary variable indicating whether the last user utterance was generic, and 5) a binary variable indicating whether the last user utterance contained a wh-word (e.g. what, who). The first MLP predicts the next dialogue act, the second MLP predicts the next sentiment type and the third MLP predicts whether the next user utterance is generic. The dataset for training the MLPs consists of 499,757 transitions, of which 70% are used for training and 30% for evaluation. The MLPs are trained with maximum log-likelihood using mini-batch stochastic gradient descent. We use Adam and early-stop on a hold-out set. Due to the large number of examples, no regularization is used. The three MLP models obtain a joint perplexity of 19.51. In comparison, a baseline model, which always assigns the average class frequency as the output probability, obtains a perplexity of 23.87. On average, this means that roughly 3–4 possible z_{t+1} states can be eliminated by conditioning on the previous variables z_t, h_t, a_t and y_t. In other words, the previous state z_t and h_t, together with the agent's action a_t, has a significant effect on the future state z_{t+1}. This means that an agent trained in the Abstract Discourse MDP has the potential to learn to take into account future states of the dialogue when selecting its action. This is in contrast to policies learned using supervised learning, which do not consider future dialogue states.
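Putting the pieces together, a single simulated step of the Abstract Discourse MDP might look as follows. This is a sketch under our own assumptions: `reward_model` and `transition_model` stand in for $P_{\hat\theta}$ and $P_{\hat\psi}$ with illustrative interfaces, and `sample_history` refers to the sketch given earlier.

```python
# Illustrative single step of the Abstract Discourse MDP simulator
# (eqs. 18-21). Interfaces are assumptions, not the system's actual code.
import random

CLASS_REWARDS = {"very poor": -2.0, "poor": -1.0, "acceptable": 0.0,
                 "good": 1.0, "excellent": 2.0}


def mdp_step(z_t, recorded_histories, policy, reward_model, transition_model):
    # 1) Sample a dialogue history consistent with the abstract state z_t (eq. 18).
    h_t = sample_history(z_t, recorded_histories)
    # 2) The agent picks a response (action) according to its policy.
    a_t = policy.sample_action(h_t)
    # 3) Expected reward under the AMT label distribution (eq. 19).
    probs = reward_model.class_probabilities(h_t, a_t)   # dict over the 5 classes
    r_t = sum(probs[c] * CLASS_REWARDS[c] for c in CLASS_REWARDS)
    # 4) Sample an appropriateness interpretation y_t (eq. 20).
    classes = list(CLASS_REWARDS)
    y_t = random.choices(classes, weights=[probs[c] for c in classes])[0]
    # 5) Sample the next abstract state (eq. 21).
    z_next = transition_model.sample(z_t, h_t, a_t, y_t)
    return h_t, a_t, r_t, y_t, z_next
```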
23For example, if we were to use a Gaussian distribution, we would have to at least also specify the variance parameter.
Table 3: Policy evaluation on AMT w.r.t. score mean and score standard deviation (std). 90% confidence intervals are given for means (after ±) and standard deviations (in square brackets).
Policy | Full test set: score mean | Full test set: score std | Difficult test set: score mean | Difficult test set: score std
Alicebot | 2.19 ± 0.03 | 1.17 [1.15, 1.20] | 1.79 ± 0.03 | 0.88 [0.86, 0.90]
Evibot + Alicebot | 2.25 ± 0.04 | 1.22 [1.20, 1.25] | 1.79 ± 0.03 | 0.86 [0.84, 0.88]
Supervised AMT | 2.63 ± 0.04 | 1.34 [1.31, 1.37] | 2.34 ± 0.04 | 1.26 [1.23, 1.29]
Off-policy REINFORCE | 2.61 ± 0.04 | 1.33 [1.31, 1.36] | 2.30 ± 0.04 | 1.25 [1.22, 1.28]
Q-learning AMT | 2.64 ± 0.04 | 1.37 [1.34, 1.40] | 2.35 ± 0.04 | 1.31 [1.28, 1.34]
The idea of modeling a high-level abstraction of the dialogue, zt, is related to the dialogue state tracking challenge (Williams et al. 2013, 2016). In this challenge, the task is to map the dialogue history to a discrete state representing all salient information about the dialogue. Unlike the dialogue state tracking challenge, however, the variable zt only includes limited salient information about the dialogue. For example, in our implementation, zt does not include topical information. As such, zt is only a partial representation of the dialogue history.
Training Given the Abstract Discourse MDP, we are now able to learn policies directly from simulations. We use Q-learning with experience replay to learn the policy parametrized as an action-value function (Mnih et al. 2013). Q-learning is a simple off-policy reinforcement learning algorithm, which has been shown to be effective for training policies parametrized by neural networks. For experience replay, we use a memory buffer of size 1000. We use an ε-greedy exploration scheme with ε = 0.1. We experiment with discount factors γ ∈ {0.1, 0.2, 0.5}. As before, the parameters are updated using Adam. To reduce the risk of overfitting, we only train the weights related to the final output layer and the skip-connection (shown in dotted lines in Figure 2) using Q-learning.
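A minimal sketch of this training loop is given below. The network interface, buffer handling and batch size are illustrative assumptions; as noted above, the actual system only updates the final output layer and skip-connection weights.

```python
# Sketch of Q-learning with experience replay and epsilon-greedy exploration
# in the Abstract Discourse MDP. Details are illustrative, not the system's code.
import random
from collections import deque
import torch

replay_buffer = deque(maxlen=1000)  # memory buffer of size 1000, as described above


def epsilon_greedy(q_values, epsilon=0.1):
    """Pick a random action with probability epsilon, else the greedy action."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return int(torch.argmax(q_values))


def q_learning_update(q_net, optimizer, buffer, batch_size=32, gamma=0.2):
    """One gradient step on the squared temporal-difference error."""
    if len(buffer) < batch_size:
        return
    batch = random.sample(buffer, batch_size)
    loss = 0.0
    for (h, a, r, h_next, done) in batch:
        q_sa = q_net(h)[a]
        with torch.no_grad():
            target = r if done else r + gamma * q_net(h_next).max()
        loss = loss + (q_sa - target) ** 2
    loss = loss / batch_size
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```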
Training is carried out in two alternating phases. We train the policy for 100 episodes. Then, we evaluate the policy for 100 episodes w.r.t. average return. Afterwards, we continue training the policy for another 100 episodes. During evaluation, each dialogue history is sampled from a separate set of dialogue histories, H_Eval, which is disjoint from the set of dialogue histories, H_Train, used at training time. This ensures that the policy is not overfitting our finite set of dialogue histories. For each hyper-parameter combination, we train the policy between 400 and 600 episodes. We select the policy which performs best w.r.t. average return. To keep notation brief, we call this policy Q-learning AMT.
# 4.8 Preliminary Evaluation
In this section, we carry out a preliminary evaluation of the response model selection policies.
AMT Evaluation: We first evaluate the learned policies w.r.t. the human scores in the AMT test set. We measure the average performance as a real-valued scalar, where the label "Very poor" is given a score of 1, the label "Poor" is given a score of 2 and so on. We also report standard deviations for the scores, which measure the variance or risk the policies are willing to take; higher standard deviations indicate that a policy is more likely to select responses which result in extreme labels (e.g. "Very poor" and "Excellent"). For both means and standard deviations we report 90% confidence intervals estimated under the assumption that the scores are Gaussian-distributed. In addition to measuring performance on the full test set, we also measure performance on a subset of the test set where neither Alicebot nor Evibot had responses labeled "Good" or "Excellent". These are test examples where an appropriate response is likely to come only from some of the other models. Determining an appropriate response for these examples is likely to be more difficult. We refer to this subset as the "Difficult test set".
We evaluate the policies Supervised AMT, Off-policy REINFORCE and Q-learning AMT. In addition, we also evaluate two heuristic policies: 1) a policy selecting only Alicebot responses called Alicebot, and 2) a policy selecting Evibot responses when possible and Alicebot responses otherwise, called Evibot + Alicebot.
The results are given in Table 3. The results show that the three learned policies are all signiï¬cantly better w.r.t. mean score compared to both Alicebot and Evibot + Alicebot. Not surprisingly, this
difference is ampliï¬ed on the difï¬cult test set. Q-learning AMT, Supervised AMT and Off-policy REINFORCE appear to perform overall equally well. This shows that machine learning has helped learn effective policies, able to select other model responses when neither the Alicebot and Evibot responses are appropriate. Next, the results show that Q-learning AMT has higher standard deviations than the other policies on both the full test set and the difï¬cult test set. Furthermore, since these standard deviations are evaluated at the level of a single response, we might expect this variability to compound throughout an entire conversation. This strongly indicates that Q-learning AMT is more risk tolerant than the other policies.
Table 4: Off-policy evaluation w.r.t. expected (average) Alexa user score and number of time steps (excluding priority responses) on test set.
Policy | Alexa user score | Time steps
Supervised AMT | 2.06 | 8.19
Supervised Learned Reward | 0.94 | 3.66
Off-policy REINFORCE | 2.45 | 10.08
Off-policy REINFORCE Learned Reward | 1.29 | 5.02
Q-learning AMT | 2.08 | 8.28
Off-policy Evaluation: One way to evaluate the selection policies is by using the off-policy evaluation given in eq. (15). This equation provides an estimate of the expected Alexa user score under each policy.24 As described earlier, the same equation can be used to estimate the expected number of time steps per episode (excluding priority responses).
The expected (average) Alexa user score and number of time steps per episode (excluding priority responses) are given in Table 4. Here we observe that the Off-policy REINFORCE performs best followed by Q-learning AMT and Supervised AMT w.r.t. expected Alexa user score. Off-policy REINFORCE reaches 2.45, which is a major 17.8% improvement over the second best performing model Q-learning AMT. However, this advantage should be taken with a grain of salt. As discussed earlier, the off-policy evaluation in eq. (15) is a biased estimator since the importance weights have been truncated. Moreover, Off-policy REINFORCE has been trained speciï¬cally to maximize this biased estimator, while all other policies have been trained to maximize other objective functions. Similarly, w.r.t. expected number of time steps, Off-policy REINFORCE reaches the highest number of time steps followed by Q-learning AMT and Supervised AMT. As before, we should take this result with a grain of salt, since this evaluation is also biased and does not take into account priority responses. Further, itâs not clear that increasing the number of time steps will increase user scores. Nevertheless, Off-policy REINFORCE, Q-learning AMT and Supervised AMT appear to be our prime candidates for further experiments.
Response Model Selection Frequency: Figure 8 shows the frequency with which Supervised AMT, Off-policy REINFORCE and Q-learning AMT select different response models. We observe that the policy learned using Off-policy REINFORCE tends to strongly prefer Alicebot responses over other models. The Alicebot responses are among the safest and most topic-independent, generic responses in the system, which suggests that Off-policy REINFORCE has learned a highly risk averse strategy. On the other hand, the Q-learning AMT policy selects Alicebot responses substantially less often than both Off-policy REINFORCE and Supervised AMT. Instead, Q-learning AMT tends to prefer responses retrieved from Washington Post and from Google search results. These responses are semantically richer and have the potential to engage the user more deeply in a particular topic, but they are also more risky (e.g. a bad choice could derail the entire conversation). This suggests that Q-learning AMT has learned a more risk tolerant strategy. One possible explanation for this difference is that Q-learning AMT was trained using simulations. By learning online from simulations, the policy has been able to explore new actions and discover high-level strategies lasting multiple time steps. In particular, the policy has been allowed to experiment with riskier actions and to learn remediation or fall-back strategies, in order to handle cases where a risky action fails. This might also explain its stronger preference for BoWFactGenerator responses, which might be serving as a fall-back strategy by outputting factual statements on the current topic. This would have been difficult
24For the policies parametrized as action-value functions, we transform eq. (2) to eq. (4) by setting fθ = Qθ and ï¬tting the temperature parameter λ on the Off-policy REINFORCE development set.
Figure 8: Response model selection probabilities across response models for Supervised AMT, Off-policy REINFORCE and Q-learning AMT on the AMT label test dataset. 95% confidence intervals are shown based on the Wilson score interval for binomial distributions.
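For reference, the Wilson score interval used for these binomial selection frequencies can be computed as follows (a standard formula, not code taken from the system):

```python
# 95% Wilson score interval for a binomial proportion, as used for the
# selection frequencies in Figure 8.
import math


def wilson_interval(successes: int, n: int, z: float = 1.96):
    """Return (lower, upper) bounds of the Wilson score interval."""
    if n == 0:
        return (0.0, 0.0)
    p = successes / n
    denom = 1.0 + z ** 2 / n
    center = (p + z ** 2 / (2 * n)) / denom
    margin = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return (center - margin, center + margin)
```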
Table 5: Policy evaluation using the Abstract Discourse MDP w.r.t. average return, average reward per time step and average episode length on dev set (± standard deviations). The reward function is based on Supervised AMT.
Policy | Average return | Average reward per time step | Average dialogue length
Random | −32.18 ± 31.77 | −0.87 ± 0.24 | 34.29 ± 33.02
Alicebot | −15.56 ± 15.61 | −0.37 ± 0.16 | 42.01 ± 42.00
Evibot + Alicebot | −11.33 ± 12.43 | −0.29 ± 0.19 | 37.5 ± 38.69
Supervised AMT | −6.46 ± 8.01 | −0.15 ± 0.16 | 42.84 ± 42.92
Supervised Learned Reward | −24.19 ± 23.30 | −0.73 ± 0.27 | 31.91 ± 30.09
Off-policy REINFORCE | −7.30 ± 8.90 | −0.16 ± 0.16 | 43.24 ± 43.58
Off-policy REINFORCE Learned Reward | −10.19 ± 11.15 | −0.28 ± 0.19 | 35.51 ± 35.05
Q-learning AMT | −6.54 ± 8.02 | −0.15 ± 0.18 | 40.68 ± 39.13
to learn for Off-policy REINFORCE, since the sequence of actions for such high-level strategies are sparsely observed in the data and, when they are observed, the corresponding returns (Alexa user scores) have high variance.
A second observation is that Q-learning AMT has the strongest preference for Initiatorbot among the three policies. This could indicate that Q-learning AMT leans towards a system-initiative strategy (e.g. a strategy where the system tries to maintain control of the conversation by asking questions, changing topics and so on). Further analysis is needed to conï¬rm this.
Abstract Discourse MDP Evaluation Next, we can evaluate the performance of each policy w.r.t. simulations in the Abstract Discourse MDP. We simulate 500 episodes under each policy and evaluate it w.r.t. average return, average reward per time step and dialogue length. In addition to evaluating the ï¬ve policies described earlier, we also evaluate three heuristic policies: 1) a policy selecting responses at random called Random, 2) the Alicebot policy, and 3) the Evibot + Alicebot policy. Evaluating these models will serve to validate the approximate MDP.
The results are given in Table 5. We observe that Supervised AMT performs best w.r.t. average return and average reward per time step. However, this comes as no surprise. The reward function in the MDP is defined as Supervised AMT, so by construction this policy achieves the highest reward per time step. Next we observe that Q-learning AMT is on par with Supervised AMT, both achieving the same −0.15 average reward per time step. Second in line comes Off-policy REINFORCE, achieving
an average reward per time step of −0.16. However, Off-policy REINFORCE also achieved the highest average dialogue length of 43.24. At the other end of the spectrum comes, as expected, the Random policy performing worst w.r.t. all metrics. In comparison, both Alicebot and Evibot + Alicebot perform better w.r.t. all metrics, with Evibot + Alicebot achieving the best average return and average reward per time step out of the three heuristic policies. This validates the utility of the Abstract Discourse MDP as an environment for training and evaluating policies. Overall, Off-policy REINFORCE, Q-learning AMT and Supervised AMT still appear to be the best performing models in the preliminary evaluation.
Figure 9: Contingency table comparing selected response models between Supervised AMT and Q-learning AMT. The cells in the matrix show the number of times the Supervised AMT policy selected the row response model and the Q-learning AMT policy selected the column response model. The cell frequencies were computed by simulating 500 episodes under the Q-learning policy in the Abstract Discourse MDP. Note that all models retrieving responses from Reddit have been agglomerated into the class Reddit models.
Finally, we compare Q-learning AMT with Supervised AMT w.r.t. the action taken in states from episodes simulated in the Abstract Discourse MDP. As shown in Figure 9, the two policies diverge w.r.t. several response models. When Supervised AMT would have selected topic-independent, generic Alicebot and Elizabot responses, Q-learning AMT often selects BoWFactGenerator, Initiatorbot and VHREDWashingtonPost responses. For example, there were 347 instances where Supervised AMT selected Alicebot, but where Q-learning AMT selected BoWFactGenerator. Similarly, where Supervised AMT would have preferred generic VHREDSubtitle responses, Q-learning AMT often selects responses from BoWFactGenerator, InitiatorBot and VHREDRedditSports. This supports our previous analysis showing that Q-learning AMT has learned a more risk tolerant strategy, which involves response models with semantically richer content.
In the next section, we evaluate these policies with real-world users.
# 5 A/B Testing Experiments
To evaluate the dialogue manager policies described in the previous section, we carry out A/B testing experiments. During each A/B testing experiment, we evaluate several policies for selecting the response model. When Alexa users start a conversation with the system, they are automatically assigned to a random policy and afterwards their dialogues and ï¬nal scores are recorded.
A/B testing allows us to accurately compare different dialogue manager policies by keeping all other system factors constant (or almost constant). This is in contrast to evaluating the system performance over time, when the system is continuously being modiï¬ed. In such a situation, it is often difï¬cult to evaluate the improvement or degradation of performance w.r.t. particular system modiï¬cations.
However, even during our A/B testing experiments, the distribution over Alexa users still changes through time. Different types of users will be using the system depending on the time of day, weekday and holiday season. In addition, the user expectations towards our system change over time as they interact with other socialbots in the competition. In other words, we must consider the Alexa user distribution as following a non-stationary stochastic process. Therefore, we take two steps to reduce confounding factors and correlations between users. First, during each A/B testing experiment, we evaluate all policies of interest simultaneously. This ensures that we have approximately the same number of users interacting with each policy w.r.t. time of day and weekday. This minimizes the effect of changes in the user distribution on the ï¬nal user scores within that period. However, since the user distribution changes between the A/B testing experiments, we still cannot accurately compare policy performance across A/B testing experiments. Second, we discard scores from returning users (i.e. users who have already evaluated the system once). Users who are returning to the system are likely to be inï¬uenced by their previous interactions with the system. For example, users who previously had a positive experience with the system may be biased towards giving high scores in their next interaction. Further, the users who return to the system are likely to belong to a particular subpopulation of users. This particular group of users may inherently have more free time and be more willing to engage with socialbots than other users. Discarding returning user scores ensures that the evaluation is not biased towards this subpopulation of users. By discarding scores from returning users, we also ensure that the evaluation counts every user exactly once. Finally, it should be noted that we ignore dialogues where the Alexa user did not give a score. This inevitably biases our evaluation, since users who do not provide a score are likely to have been dissatisï¬ed with the system or to have been expecting different functionality (e.g. non-conversational activities, such as playing music, playing games or taking quizzes). One potential remedy is to have all dialogues evaluated by a third-party (e.g. by asking human annotators on Amazon Mechanical Turk to evaluate the dialogue), but that is beyond the scope of these experiments.
# 5.1 A/B Testing Experiment #1
The ï¬rst A/B testing experiment was carried out between July 29th, 2017 and August 6th, 2017. We tested six dialogue manager policies: Evibot + Alicebot, Supervised AMT, Supervised Learned Reward, Off-policy REINFORCE, Off-policy REINFORCE Learned Reward and Q-learning AMT. For Off-policy REINFORCE and Off-policy REINFORCE Learned Reward, we use the greedy variant deï¬ned in eq. (5).
This experiment occurred early in the Amazon Alexa Prize competition. This means that Alexa users have few expectations towards our system (e.g. expectations that the system can converse on a particular topic, or that the system can engage in non-conversational activities, such as playing word games or taking quizzes). Further, the period July 29th - August 6th overlaps with the summer holidays in the United States. This means that we might expect more children to interact with the system than during other seasons.

Policy Evaluation The results are given in Table 6.25 The table shows the average Alexa user scores, average dialogue length, average percentage of positive user utterances and average percentage of negative user utterances. In total, over a thousand user ratings were collected after discarding returning users. Ratings were collected after the end of the semi-finals competition, where all ratings
25 95% confidence intervals are computed under the assumption that the Alexa user scores for each policy are drawn from a Gaussian distribution with its own mean and variance. This is an approximation, since the Alexa user scores only have support on the interval [1, 5].
Table 6: First A/B testing experiment with six different policies (± 95% confidence intervals). Star (*) indicates the policy is significantly better than the other policies at a 95% statistical significance level.
Policy | User score | Dialogue length | Pos. utterances | Neg. utterances
Evibot + Alicebot | 2.86 ± 0.22 | 31.84 ± 6.02 | 2.80% ± 0.79 | 5.63% ± 1.27
Supervised AMT | 2.80 ± 0.21 | 34.94 ± 8.07 | 4.00% ± 1.05 | 8.06% ± 1.38
Supervised Learned Reward | 2.74 ± 0.21 | 27.83 ± 5.05 | 2.56% ± 0.70 | 6.46% ± 1.29
Off-policy REINFORCE | 2.86 ± 0.21 | 37.51 ± 7.21 | 3.98% ± 0.80 | 6.25% ± 1.28
Off-policy REINFORCE Learned Reward | 2.84 ± 0.23 | 34.56 ± 11.55 | 2.79% ± 0.76 | 6.90% ± 1.45
Q-learning AMT* | 3.15 ± 0.20 | 30.26 ± 4.64 | 3.75% ± 0.93 | 5.41% ± 1.16
Table 7: Amazon Alexa Prize semi-ï¬nals average team statistics provided by Amazon.
Policy | User score | Dialogue length
All teams | 2.92 | 22
Non-finalist teams | 2.81 | 22
Finalist teams | 3.31 | 26
had been transcribed by human annotators. Each policy was evaluated by about two hundred unique Alexa users.
As expected from our preliminary evaluation, we observe that Q-learning AMT and Off-policy REINFORCE perform best among all policies w.r.t. user scores. Q-learning AMT obtained an average user score of 3.15, which is signiï¬cantly higher than all other policies at a 95% statistical signiï¬cance level w.r.t. a one-tailed two-sample t-test. In comparison, the average user score for all the teams in the competition during the semi-ï¬nals was only 2.92. Interestingly, Off-policy REINFORCE achieved the longest dialogues with an average length of 37.51. This suggests Off-policy REINFORCE yields highly engaging conversations. In comparison, in the semi-ï¬nals, the average dialogue length of all teams was 22 and of the ï¬nalist teams was 26. We also observe that Off-policy REINFORCE had a slightly higher percentage of user utterances with negative sentiment compared to Q-learning AMT. This potentially indicates that the longer dialogues also include some frustrated interactions (e.g. users who repeat the same questions or statements in the hope that the system will return a more interesting response next time). The remaining policies achieved average Alexa user scores between 2.74 and 2.86, with the heuristic policy Evibot + Alicebot obtaining 2.86. This suggests that the other policies have not learned to select responses more appropriately than the Evibot + Alicebot heuristic.
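The significance claim above rests on a one-tailed two-sample t-test over per-user scores. A sketch of such a test is shown below; the use of the Welch (unequal-variance) variant via SciPy is our assumption, and the score arrays are placeholders for the collected ratings.

```python
# Sketch of a one-tailed two-sample t-test comparing Alexa user scores of two
# policies (e.g. Q-learning AMT vs. each other policy).
from scipy import stats


def one_tailed_ttest(scores_a, scores_b):
    """Test H1: mean(scores_a) > mean(scores_b). Returns the one-tailed p-value."""
    t_stat, p_two_sided = stats.ttest_ind(scores_a, scores_b, equal_var=False)
    # Halve the two-sided p-value when the observed difference is in the
    # hypothesized direction; otherwise the one-tailed p-value is 1 - p/2.
    if t_stat > 0:
        return p_two_sided / 2
    return 1 - p_two_sided / 2
```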
In conclusion, the results indicate that the risk tolerant strategy learned by the Q-learning AMT policy performs best among all policies. This shows that learning a policy through simulations in an Abstract Discourse MDP may serve as a fruitful path towards developing open-domain socialbots. In addition, the performance of Off-policy REINFORCE indicates that optimizing the policy directly towards Alexa user scores could also potentially yield improvements. However, further investigation is required.
# Length Analysis
In an effort to further understand how the policies differ from each other, we carry out an analysis of the policies' performance as a function of dialogue length. Although we have recorded only a limited amount of data for dialogues of any particular length, this analysis could help illuminate directions for future experiments.
Table 8 shows the average Alexa user scores w.r.t. four dialogue length intervals for the six policies. The estimates are based on between 30-70 Alexa user ratings for each policy and interval combination. First, we observe that Q-learning AMT performs better than all other policies for all intervals except the medium-short interval (10–19, or 5–10 back-and-forth turns). Further, its high performance on the long intervals (20–39 and ≥ 40) suggests that Q-learning AMT performs excellently in long dialogues. The other learned policies Supervised AMT, Off-policy REINFORCE and Off-policy REINFORCE Learned Reward also appear to perform excellently in long dialogues. On the other
Table 8: First A/B testing experiment user scores with six different policies w.r.t. varying dialogue length (± one standard deviation).
Policy | Dialogue length < 10 | 10–19 | 20–39 | ≥ 40
Evibot + Alicebot | 2.88 ± 1.71 | 2.58 ± 1.33 | 2.93 ± 1.28 | 2.99 ± 1.37
Supervised AMT | 2.91 ± 1.59 | 2.64 ± 1.38 | 2.60 ± 1.40 | 3.13 ± 1.43
Supervised Learned Reward | 3.31 ± 1.43 | 2.45 ± 1.57 | 2.19 ± 1.38 | 2.90 ± 1.54
Off-policy REINFORCE | 2.99 ± 1.64 | 2.72 ± 1.57 | 2.56 ± 1.31 | 3.26 ± 1.45
Off-policy REINFORCE Learned Reward | 2.91 ± 1.64 | 2.53 ± 1.45 | 2.9 ± 1.56 | 3.14 ± 1.36
Q-learning AMT | 3.46 ± 1.40 | 2.60 ± 1.45 | 3.19 ± 1.39 | 3.31 ± 1.33
hand, the heuristic Evibot + Alicebot policy and the Supervised Learned Reward policy appear to perform poorly in long dialogues, but that is not surprising given their low overall performance. In particular, Supervised Learned Reward seems to be performing well only for very short dialogues. This potentially indicates that the policy fails to either maintain user engagement or memorize longer-term context. However, further investigation is required.
# Topical Speciï¬city and Coherence
We carry out an analysis of the topical speciï¬city and coherence of the different policies. This analysis aims to quantify how much each policy stays on topic (e.g. whether the policy selects responses on the current topic or on new topics) and how speciï¬c its content is (e.g. how frequently the policy selects generic, topic-independent responses). This analysis is carried out at the utterance level, where we are fortunate to have more recorded data.
The results are shown in Table 9. For topic speciï¬city, we measure the average number of noun phrases per user utterance and the average number of noun phrases per system utterance.26 The more topic speciï¬c the user is, the higher we would expect the ï¬rst metric to be. Similarly, the more topic speciï¬c the system is the higher we would expect the second metric to be. For topic coherence, we measure the word overlap between the userâs utterance and the systemâs response, as well as word overlap between the userâs utterance and the systemâs response at the next turn. The more the policy prefers to stay on topic, the higher we would expect these two metrics to be.
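A sketch of how these metrics can be computed with spaCy (the noun phrase detector mentioned in footnote 26) is given below; the exact tokenization, stop-word handling and averaging used for Table 9 are not shown and may differ.

```python
# Sketch of the topical specificity and coherence metrics: noun phrases are
# counted with spaCy, and word overlap excludes stop words and punctuation.
import spacy

nlp = spacy.load("en_core_web_md")


def noun_phrase_count(utterance: str) -> int:
    """Number of noun phrases in an utterance."""
    return len(list(nlp(utterance).noun_chunks))


def content_word_overlap(user_utt: str, system_utt: str) -> int:
    """Number of overlapping non-stop-word tokens between two utterances."""
    def content_words(text):
        return {tok.lower_ for tok in nlp(text) if not tok.is_stop and not tok.is_punct}
    return len(content_words(user_utt) & content_words(system_utt))
```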
As shown in the table, Q-learning AMT has obtained signiï¬cantly higher scores w.r.t. both word overlap metrics and the average number of noun phrases per system utterance. This indicates that the Q-learning AMT policy has the highest topical coherency among all six policies, and that it generates the most topic speciï¬c (semantically rich) responses. This is in line with our previous analysis, where we found that Q-learning follows a highly risk tolerant strategy. Next in line, comes Supervised AMT, which also appears to maintain high topic speciï¬city and coherence. In fact, Supervised AMT obtained the highest metric w.r.t. number of noun phrases per user utterance, which indicates that this policy is encouraging the user to give more topic speciï¬c responses. Afterwards comes Off-policy REINFORCE and Off-policy REINFORCE Learned Reward, which tend to select responses with signiï¬cantly less noun phrases and less word overlap. This is also in line with our previous analysis, where we found that Off-policy REINFORCE follows a risk averse strategy. Finally, the heuristic policy Evibot + Alicebot selects responses with very few noun phrases and least word overlap among all policies. This indicates that the heuristic policy might be the least topic coherent policy, and that it mainly selects generic, topic-independent responses.
Initiatorbot Evaluation This experiment also allowed us to analyze the outcomes of different conversation starter phrases given by the Initiatorbot. We carried out this analysis by computing the average Alexa user score for each of the 40 possible phrases. We found that phrases related to news (e.g. "Do you follow the news?"), politics (e.g. "Do you want to talk about politics?") and travelling (e.g. "Tell me, where do you like to go on vacation?") performed poorly across all policies. On the other hand, phrases related to animals (e.g. "Do you have pets?" and "What is the cutest animal you can think of?"), movies (e.g. "Letâs talk about movies. Whatâs the last movie you watched?") and
26We use https://spacy.io version 1.9.0 to detect noun phrases with the package "en_core_web_md- 1.2.1".
Table 9: First A/B testing experiment topical specificity and coherence of the six different policies. The columns are average number of noun phrases per user utterance (User NPs), average number of noun phrases per system utterance (System NPs), average number of overlapping words between the user's utterance and the system's response (Word overlap t → t + 1), and average number of overlapping words between the user's utterance and the system's response in the next turn (Word overlap t → t + 3). 95% confidence intervals are also shown. Stop words are excluded.
Policy | User NPs | System NPs | Word overlap t → t + 1 | Word overlap t → t + 3
Evibot + Alicebot | 0.55 ± 0.03 | 1.05 ± 0.05 | 7.33 ± 0.21 | 7.31 ± 0.22
Supervised AMT | 0.62 ± 0.03 | 1.75 ± 0.07 | 10.48 ± 0.28 | 10.65 ± 0.29
Supervised Learned Reward | 0.57 ± 0.03 | 1.50 ± 0.07 | 8.35 ± 0.29 | 8.36 ± 0.31
Off-policy REINFORCE | 0.59 ± 0.02 | 1.45 ± 0.05 | 9.05 ± 0.21 | 9.14 ± 0.22
Off-policy REINFORCE Learned Reward | 0.61 ± 0.03 | 1.04 ± 0.06 | 7.42 ± 0.25 | 7.42 ± 0.26
Q-learning AMT | 0.58 ± 0.03 | 1.98 ± 0.08 | 11.28 ± 0.30 | 11.52 ± 0.32
Table 10: Second A/B testing experiment with two different policies (± 95% conï¬dence intervals).
Policy | User score | Dialogue length | Pos. utterances | Neg. utterances
Off-policy REINFORCE | 3.06 ± 0.12 | 34.45 ± 3.76 | 3.23% ± 0.45 | 7.97% ± 0.85
Q-learning AMT | 2.92 ± 0.12 | 31.84 ± 3.69 | 3.38% ± 0.50 | 7.61% ± 0.84
food (e.g. "Letâs talk about food. What is your favorite food?") performed well across all policies. For example, conversations where the Initiatorbot asked questions related to news and politics had an average Alexa user score of only 2.91 for the top two systems (Off-policy REINFORCE and Q-learning AMT). Mean while, conversations where the Initiatorbot asked questions about animals, food and movies the corresponding average Alexa user score was 3.17. We expected the conversation topic to affect user engagement, however it is surprising that these particular topics (animals, food and movies) were the most preferred ones. One possible explanation is that our system does not perform well on news, politics and travelling topics. However, the system already had several response models dedicated to discussing news and politics: six sequence-to-sequence models extracting responses from Reddit news and Reddit politics, two models extracting responses from Washington Post user comments and the BoWTrump model extracting responses from Donald J. Trumpâs Twitter proï¬le. In addition, Evibot is capable of answering many factual questions about news and politics and BoWFactGenerator contains hundreds of facts related to news and politics. As such, there may be another more plausible explanation for usersâ preferences towards topics, such as animals, movies and food. One likely explanation is the age group of the users. While inspecting our conversational transcripts, we observed that many users interacting with the system appeared to be children or teenagers. It would hardly come as a surprise if this user population would prefer to talk about animals, movies and foods rather than news, politics and travels.
# 5.2 A/B Testing Experiment #2
The second A/B testing experiment was carried out between August 6th, 2017 and August 15th, 2017. We tested two dialogue manager policies: Off-policy REINFORCE and Q-learning AMT. As before, we use the greedy variant of Off-policy REINFORCE deï¬ned in eq. (5).
This experiment occurred at the end of the Amazon Alexa Prize competition semi-finals. This means that many Alexa users have already interacted with other socialbots in the competition, and therefore are likely to have developed expectations towards the systems. These expectations are likely to involve conversing on a particular topic or engaging in non-conversational activities (such as playing games). Further, the period August 6th - August 15th overlaps with the end of the summer holidays and the beginning of the school year in the United States. This means that we should expect fewer children to interact with the system than in the previous A/B testing experiment.
Table 11: Third A/B testing experiment with two different policies (± 95% conï¬dence intervals).
Policy | User score | Dialogue length | Pos. utterances | Neg. utterances
Off-policy REINFORCE | 3.03 ± 0.18 | 30.93 ± 4.96 | 2.72 ± 0.59 | 7.36 ± 1.22
Q-learning AMT | 3.06 ± 0.17 | 33.69 ± 5.84 | 3.63 ± 0.68 | 6.67 ± 0.98
Policy Evaluation The results are given in Table 10. In total, about eight hundred user ratings were collected after discarding returning users. As such, each policy was evaluated by about six hundred unique Alexa users. As before, all ratings were transcribed by human annotators.
We observe that both Off-policy REINFORCE and Q-learning AMT perform better than the policies in the previous experiment. However, in this experiment, Off-policy REINFORCE achieved an average Alexa user score of 3.06 while Q-learning AMT achieved a lower score of only 2.92. Nonetheless, Off-policy REINFORCE is not statistically signiï¬cantly better. In this experiment, there is also no signiï¬cant difference between the two policies w.r.t. percentage of positive and negative user utterances.
As discussed earlier, the performance difference compared to the previous A/B testing experiment could be due to the change in user profiles and user expectations. At this point in time, more of the Alexa users have interacted with socialbots from other teams. Meanwhile, all socialbots have been evolving. Therefore, user expectations towards our system are likely to be higher now. Further, since the summer holidays have ended, fewer children and more adults are expected to interact with our system. It is plausible that these adults also have higher expectations towards the system, and even more likely that they are less playful and less tolerant towards mistakes. Given this change in user profiles and expectations, the risk tolerant strategy learned by the Q-learning AMT policy is likely to fare poorly compared to the risk averse strategy learned by Off-policy REINFORCE.
# 5.3 A/B Testing Experiment #3
The third A/B testing experiment was carried out between August 15th, 2017 and August 21st, 2017. Due to the surprising results in the previous A/B testing experiment, we decided to continue testing the two dialogue manager policies Off-policy REINFORCE and Q-learning AMT. As before, we use the greedy variant of Off-policy REINFORCE deï¬ned in eq. (5).
This experiment occurred after the end of the Amazon Alexa Prize competition semi-finals. As discussed before, this means that it is likely that many Alexa users have already developed expectations towards the systems. Further, the period August 15th - August 21st lies entirely within the beginning of the school year in the United States. This means that we should expect fewer children to interact with the system than in the previous A/B testing experiment.
Policy Evaluation The results are given in Table 11. In total, about six hundred user ratings were collected after discarding returning users. As such, each policy was evaluated by about three hundred unique Alexa users. Unlike the previous two experiments, due to the semi-ï¬nals having ended, these ratings were not transcribed by human annotators.
We observe again that both Off-policy REINFORCE and Q-learning AMT perform better than the other policies evaluated in the ï¬rst experiment. However, in this experiment, Off-policy REINFORCE only achieved an average Alexa user score of 3.03 while Q-learning AMT achieved the higher score of 3.06. As before, neither policy is statistically signiï¬cantly better than the other. Nevertheless, as in the ï¬rst experiment, Q-learning AMT achieved a higher percentage of positive utterances and a lower percentage of negative utterances than Off-policy REINFORCE. In this experiment, Q-learning AMT also obtains the longest dialogues on average. Overall, this experiment indicates that Q-learning AMT is the better policy.
As before, the difference in performance compared to the previous A/B testing experiments is likely due to the change in user proï¬les and user expectations. The fact that Q-learning AMT now performs slightly better than Off-policy REINFORCE might be explained by many different causes. First, despite the conï¬dence intervals and statistical tests presented earlier, it is of course possible that the previous A/B testing experiments did not have enough statistical power to accurately discriminate whether Q-learning AMT or Off-policy REINFORCE obtains the highest average user score. Second,
it is possible that the topics users want to discuss now are simply better handled by Q-learning AMT. Third, it is possible that adult users might only have a weak preference for the risk averse strategy over the risk tolerant Q-learning AMT policy, and that there is still a significant number of children and teenagers interacting with the system even though the summer holidays have ended. Finally, it is possible that the user population has grown tired of Off-policy REINFORCE, which follows a risk averse strategy by responding with less semantic content.
# 5.4 Discussion
The two dialogue manager policies Q-learning AMT and Off-policy REINFORCE have demonstrated substantial improvements over all other policies, including policies learned using supervised learning and heuristic policies. As discussed earlier, the Q-learning AMT policy achieved an average Alexa user score substantially above the average score of all teams in the Amazon Alexa Prize competition semi-finals, without relying on non-conversational activities. In addition, it also achieved a higher number of dialogue turns than both the average of all teams in the semi-finals and the average of all finalist teams in the semi-finals. The policy Off-policy REINFORCE similarly obtained a high number of dialogue turns, suggesting that the resulting conversations are far more engaging. The results demonstrate the advantages of the overall ensemble approach, where many different models generate natural language responses and the dialogue manager policy selects one response among them. The results also highlight the advantages of learning the policy using reinforcement learning techniques. By optimizing the policy to maximize either real-world user scores or rewards in the Abstract Discourse MDP (with a proxy reward function), we have demonstrated that significant gains can be achieved w.r.t. both real-world user scores and number of dialogue turns.
# 6 Related Work
Dialogue Manager Architecture: Any open-domain conversational agent will have to utilize many different types of modules, such as modules for looking up information, modules for daily chitchat discussions, modules for discussing movies, and so on. In this respect, our system architecture is related to some of the recent general-purpose dialogue system frameworks (Zhao et al. 2016, Miller et al. 2017, Truong et al. 2017). These systems abstract away the individual modules into black boxes sharing the same interface, similar to the response models in our ensemble. This, in turn, enables them to be controlled by an executive component (e.g. a dialogue manager).
# Reinforcement Learning:
Much work has applied reinforcement learning to training or improving dialogue systems. The idea that dialogue can be formulated as a sequential decision making problem based on a Markov decision process (MDP) appeared already in the 1990s for goal-oriented dialogue systems (Singh et al. 1999, 2002, Williams & Young 2007, Young et al. 2013, Paek 2006, Henderson et al. 2008, Pieraccini et al. 2009, Su et al. 2015).
One line of research in this area has focused on learning dialogue systems through simulations using abstract dialogue states and actions (Eckert et al. 1997, Levin et al. 2000, Chung 2004, Cuayáhuitl et al. 2005, Georgila et al. 2006, Schatzmann et al. 2007, Heeman 2009, Traum et al. 2008, Georgila & Traum 2011, Lee & Eskenazi 2012, Khouzaimi et al. 2017, López-Cózar 2016, Su et al. 2016, Fatemi et al. 2016, Asri et al. 2016). The approaches here differ based on how the simulator itself is created or estimated, and whether or not the simulator is also considered an agent, which is trying to optimize its own reward. For example, Levin et al. (2000) tackle the problem of building a ï¬ight booking dialogue system. They estimate a user simulator model by counting transition probabilities between dialogue states and user actions (similar to an n-gram model), which is then used to train a reinforcement learning policy. In their setting, the states and actions are all abstract discrete variables, which minimizes the amount of natural language understanding and generation the policy has to learn. As another example, Georgila & Traum (2011) tackle the problem of learning dialogue policies for negotiation games, where each party in the dialogue is an agent with its own reward function. In their setting, each policy is in effect also a user simulator, and is trained by playing against other policies using model-free on-policy reinforcement learning. As a more recent example, Yu et al. (2016) build a open-domain, chitchat dialogue system using reinforcement learning. In particular, Yu et al. (2016) propose to learn a dialogue manager policy through model-free off-policy reinforcement learning based on simulations with the template-based system A.L.I.C.E. (Wallace 2009) with a reward
function learned from crowdsourced annotations. This is shown to yield substantial improvements w.r.t. both the overall appropriateness of each system response and the conversational depth of the dialogues (e.g. how long the system remains on topic).
Researchers have also recently started to investigate learning generative neural network policies operating directing on raw text through user simulations (Li et al. 2016, Das et al. 2017, Lewis et al. 2017, Liu & Lane 2017, Lewis et al. 2017). In contrast to earlier work, these policies require both a deeper understanding of natural language and an ability to generate natural language. For example, Li et al. (2016) propose to train a generative sequence-to-sequence neural network using maximum log-likelihood, and then ï¬ne-tune it with a multi-objective function. The multi-objective function includes, among other things, a reinforcement learning signal based on self-play Monte Carlo rollouts (i.e. simulated trajectories are generated by sampling from the model, similar to (Silver et al. 2016)) using a hand-crafted reward function. Lewis et al. (2017) apply model-free reinforcement learning for learning a system capable of negotiation in a toy domain from crowdsourced data. They demonstrate that itâs feasible to learn an effective policy by training a generative sequence-to-sequence neural network on crowdsourced data, and that the policy can be further improved using on-policy reinforcement learning through self-play and Monte Carlo rollouts. Both Li et al. (2016) and Lewis et al. (2017) use self-play. Self-play is a viable option for training their policies because their problems are symmetric in the policy space (e.g. any policy performing well on one side of the negotiation game will also perform well on the other side). In contrast, self-play is unlikely to be an effective training method in our case, because the interactions are highly asymmetric: human users speak differently to our system than they would to humans and, further, they expect different answers. Liu & Lane (2017) use model-free on-policy reinforcement learning to improve a system in a restaurant booking toy domain. For training the system policy, they employ a user simulator trained on real-world human-human dialogues. In particular, under the constraint that both the system and the user share the exact same reward function, they demonstrate that reinforcement learning can be used to improve both the system policy and the user simulator. In a related vein, Zhao & Eskenazi (2016) learn an end-to-end neural network system for playing a quiz game using off-policy reinforcement learning, where the environment is a game simulator. They demonstrate that combining reinforcement learning with dialogue state tracking labels yields superior performance.
In all the work reviewed so far, user simulators have been deï¬ned as rule-based models (e.g. A.L.I.C.E.), parametric models (e.g. n-gram models, generative neural networks), or a combination of the two. In most cases, given a user simulator, the collected training data is discarded and the policy is learned directly from simulations with the user simulator. In contrast, the Abstract Discourse MDP that we propose is a non-parametric approach, which repeatedly uses the collected training data during policy training.
Reinforcement learning has also been applied to teaching agents to communicate with each other in multi-agent environments (Foerster et al. 2016, Sukhbaatar et al. 2016, Lazaridou, Pham & Baroni 2016, Lazaridou, Peysakhovich & Baroni 2016, Mordatch & Abbeel 2017).
# 7 Future Work
# 7.1 Personalization
One important direction for future research is personalization, i.e. building a model of each userâs personality, opinions and interests. This will allow the system to provide a better user experience by adapting the response models to known attributes of the user. We are in the process of implementing a state machine that given a user id, retrieves the relevant information attributes of the user from a database. If a particular user attribute is missing, then the state machine will ask the user for the relevant information and store it in the database. One important user attribute is the userâs name. If no name is found in the database, the state machine may ask the user what they would like to be called and afterwards extracts the name from the userâs response. If a personal name is detected, it is stored in the database to be available for other modules to insert into their responses. Name detection proceeds as follows. First we match the response against a small collection of templates, such as "my name is ..." or "call me ...". Then we use part-of-speech (POS) tags of the resulting matches to detect
the end boundary of the name. To avoid clipping the name too early due to wrong POS tags, we also match words against a list of common names in the 1990 US Census data27.
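The following sketch illustrates this name-detection procedure. The templates, the tiny common-name list and the POS-based boundary rule are illustrative placeholders for the actual implementation.

```python
# Sketch of name detection: match the user's response against simple templates,
# then use POS tags and a common-name list to find the end of the name span.
import re
import spacy

nlp = spacy.load("en_core_web_md")
TEMPLATES = [r"my name is (.+)", r"call me (.+)"]
COMMON_NAMES = {"james", "mary", "john", "linda"}  # e.g. drawn from 1990 US Census data


def extract_name(utterance: str):
    for pattern in TEMPLATES:
        match = re.search(pattern, utterance.lower())
        if not match:
            continue
        name = []
        for tok in nlp(match.group(1)):
            # Keep proper nouns, and also words found in the common-name list,
            # to avoid clipping the name because of a wrong POS tag.
            if tok.pos_ == "PROPN" or tok.lower_ in COMMON_NAMES:
                name.append(tok.text)
            else:
                break
        if name:
            return " ".join(name)
    return None
```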
In the future, we plan to explore learning user embeddings from previous interactions with each user, since we know from previous experiments that text information alone contains a signiï¬cant amount of information about the speakerâs identity (Serban & Pineau 2015). Learning an embedding for each user will allow the system to become more personalized, by providing our response models with additional context beyond the immediate dialogue history.
# 7.2 Text-based Evaluation
It is well known that speech recognition errors have a significant impact on the user experience in dialogue systems (Raux et al. 2006). Furthermore, speech recognition errors are likely to have a particularly adverse effect on our system, because our system encourages open-ended, unrestricted conversations. Unlike many goal-driven and rule-based systems, our system does not take control of the dialogue or direct the user to respond with a keyword from a set of canned responses.28 Because the users are more likely to give open-ended responses, the system is also more likely to suffer from speech recognition errors. As we discussed in Section 4, we did indeed observe a negative correlation between the confidences of the speech recognition system and the Alexa user scores. Moreover, it is likely that speech recognition errors have a stronger systematic effect on some of the policies evaluated in Section 5.
To mitigate the issues of speech recognition errors, we plan to evaluate the system with different policies through a text-based evaluation on Amazon Mechanical Turk. This would also help reduce other problems, such as errors due to incorrect turn-taking (e.g. when the system barges in on the user, who is still speaking) (Ward et al. 2005).
# 8 Conclusion
We have proposed a new large-scale ensemble-based dialogue system framework for the Amazon Alexa Prize competition. Our system leverages a variety of machine learning techniques, including deep learning and reinforcement learning. We have developed a new set of deep learning models for natural language retrieval and generation, including recurrent neural networks, sequence-to-sequence models and latent variable models. In addition, we have developed a novel reinforcement learning procedure and evaluated it against existing reinforcement learning methods in A/B testing experiments with real-world users. These innovations have enabled us to make substantial improvements upon our baseline system. On a scale from 1 to 5, our best performing system reached an average user score of 3.15, with a minimal amount of hand-crafted states and rules and without engaging in non-conversational activities (such as playing games or quizzes). The performance is substantially above the average of all teams in the competition semi-finals, which was only 2.92. Furthermore, the same system averaged a high 14.5 to 16.0 turns per conversation, which is substantially above both the average of all teams and the average of finalist teams in the competition semi-finals, suggesting that our system is one of the most engaging systems in the competition. Since nearly all our system components are trainable machine learning models, the system is likely to improve greatly with more interactions and additional data.
# Acknowledgments
We thank Aaron Courville, Michael Noseworthy, Nicolas Angelard-Gontier, Ryan Lowe, Prasanna Parthasarathi and Peter Henderson for helpful advice related to the system architecture, crowdsourcing and reinforcement learning throughout the Alexa Prize competition. We thank Christian Droulers for building the graphical user interface for text-based chat. We thank Amazon for providing Tesla K80 GPUs through the Amazon Web Services platform. Some of the Titan X GPUs used for this research
27Obtained from: https://deron.meranda.us/data/. 28In contrast, one socialbot system in the Alexa semi-finals would start the conversation by asking the user a question such as "I am able to talk about news, sports and politics. Which would you like to talk about?" after which the user is expected to mention one of the keywords "news", "sports" or "politics". This type of system-initiative greatly reduces the number of speech recognition errors, because it is far easier to discriminate between a few keywords compared to transcribing a complete open-ended utterance.
were donated by the NVIDIA Corporation. The authors acknowledge NSERC, Canada Research Chairs, CIFAR, IBM Research, Nuance Foundation, Microsoft Maluuba and Druide Informatique Inc. for funding.
# References
Ameixa, D., Coheur, L., Fialho, P. & Quaresma, P. (2014), Luke, I am your father: dealing with out-of-domain requests by using movies subtitles, in âIntelligent Virtual Agentsâ, Springer.
Asri, L. E., He, J. & Suleman, K. (2016), A sequence-to-sequence model for user simulation in spoken dialogue systems, in âInterSpeechâ.
Aust, H., Oerder, M., Seide, F. & Steinbiss, V. (1995), âThe Philips automatic train timetable information systemâ, Speech Communication 17(3).
Bird, S., Klein, E. & Loper, E. (2009), Natural Language Processing with Python, OâReilly Media.
Blunsom, P., Grefenstette, E. & Kalchbrenner, N. (2014), A convolutional neural network for mod- elling sentences, in âProceedings of the 52nd Annual Meeting of the Association for Computational Linguisticsâ, Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics.
Bohus, D., Raux, A., Harris, T. K., Eskenazi, M. & Rudnicky, A. I. (2007), Olympus: an open-source framework for conversational spoken language interface research, in âProceedings of the workshop on bridging the gap: Academic and industrial research in dialog technologiesâ, Association for Computational Linguistics, pp. 32â39.
Breiman, L. (1996), âBagging predictorsâ, Machine learning 24(2), 123â140.
Charras, F., Duplessis, G. D., Letard, V., Ligozat, A.-L. & Rosset, S. (2016), Comparing system- response retrieval models for open-domain and casual conversational agent, in âWorkshop on Chatbots and Conversational Agent Technologiesâ.
Cho, K., van Merrienboer, B., Gulcehre, C., Bahdanau, D., Bougares, F., Schwenk, H. & Bengio, Y. (2014), Learning phrase representations using rnn encoderâdecoder for statistical machine translation, in âEMNLPâ.
Chung, G. (2004), Developing a ï¬exible spoken dialog system using simulation, in âProceedings of the 42nd Annual Meeting on Association for Computational Linguisticsâ, Association for Computational Linguistics, p. 63.
Colby, K. M. (1981), âModeling a paranoid mindâ, Behavioral and Brain Sciences 4.
Cuayáhuitl, H., Renals, S., Lemon, O. & Shimodaira, H. (2005), Human-computer dialogue simula- tion using hidden markov models, in âAutomatic Speech Recognition and Understanding, 2005 IEEE Workshop onâ, IEEE, pp. 290â295.
Das, A., Kottur, S., Moura, J. M., Lee, S. & Batra, D. (2017), Learning cooperative visual dialog agents with deep reinforcement learning, in âInternational Conference on Computer Visionâ.
Eckert, W., Levin, E. & Pieraccini, R. (1997), User modeling for spoken dialogue system evaluation, in âAutomatic Speech Recognition and Understanding, 1997. Proceedings., 1997 IEEE Workshop onâ, IEEE, pp. 80â87.
Fatemi, M., Asri, L. E., Schulz, H., He, J. & Suleman, K. (2016), Policy networks with two-stage training for dialogue systems, in âSIGDIALâ.
Ferrucci, D., Brown, E., Chu-Carroll, J., Fan, J., Gondek, D., Kalyanpur, A. A., Lally, A., Murdock, J. W., Nyberg, E., Prager, J. et al. (2010), âBuilding Watson: An overview of the DeepQA projectâ, AI magazine 31(3).
Foerster, J., Assael, Y. M., de Freitas, N. & Whiteson, S. (2016), Learning to communicate with deep multi-agent reinforcement learning, in âAdvances in Neural Information Processing Systemsâ, pp. 2137â2145.
Georgila, K., Henderson, J. & Lemon, O. (2006), User simulation for spoken dialogue systems: Learning and evaluation, in âNinth International Conference on Spoken Language Processingâ.
Georgila, K. & Traum, D. (2011), Reinforcement learning of argumentation dialogue policies in nego- tiation, in âTwelfth Annual Conference of the International Speech Communication Associationâ.
Glorot, X., Bordes, A. & Bengio, Y. (2011), Deep sparse rectiï¬er neural networks, in âProceedings of the Fourteenth International Conference on Artiï¬cial Intelligence and Statisticsâ, pp. 315â323.
Heeman, P. A. (2009), Representing the reinforcement learning state in a negotiation dialogue, in âAutomatic Speech Recognition & Understanding, 2009. ASRU 2009. IEEE Workshop onâ, IEEE, pp. 450â455.
Henderson, J., Lemon, O. & Georgila, K. (2008), âHybrid reinforcement/supervised learning of dialogue policies from ï¬xed data setsâ, Computational Linguistics 34(4), 487â511.
Im, J. (2017).
URL: http://search.aifounded.com/
JurËcÃËcek, F., DuÅ¡ek, O., Plátek, O. & Žilka, L. (2014), Alex: A statistical dialogue systems framework, in âInternational Conference on Text, Speech, and Dialogueâ, Springer, pp. 587â594.
Khouzaimi, H., Laroche, R. & Lefevre, F. (2017), Incremental human-machine dialogue simulation, in âDialogues with Social Robotsâ, Springer, pp. 53â66.
Kingma, D. & Ba, J. (2015), Adam: A method for stochastic optimization, in âICLRâ.
Kingma, D. P. & Welling, M. (2014), âAuto-encoding variational Bayesâ, ICLR .
Kiros, R., Zhu, Y., Salakhutdinov, R. R., Zemel, R., Urtasun, R., Torralba, A. & Fidler, S. (2015), Skip-thought vectors, in âNIPSâ.
Koren, Y., Bell, R. & Volinsky, C. (2009), âMatrix factorization techniques for recommender systemsâ, Computer 42(8).
Lazaridou, A., Peysakhovich, A. & Baroni, M. (2016), âMulti-agent cooperation and the emergence of (natural) languageâ, arXiv preprint arXiv:1612.07182 .
Lazaridou, A., Pham, N. T. & Baroni, M. (2016), âTowards multi-agent communication-based language learningâ, arXiv preprint arXiv:1605.07133 .
Lee, S. & Eskenazi, M. (2012), Pomdp-based letâs go system for spoken dialog challenge, in âSpoken Language Technology Workshop (SLT), 2012 IEEEâ, IEEE, pp. 61â66.
Levin, E., Pieraccini, R. & Eckert, W. (2000), âA stochastic model of human-machine interaction for learning dialog strategiesâ, IEEE Transactions on speech and audio processing 8(1), 11â23.
Lewis, M., Yarats, D., Dauphin, Y. N., Parikh, D. & Batra, D. (2017), Deal or No Deal? End-to-End Learning for Negotiation Dialogues, in âEMNLPâ.
Li, J., Monroe, W., Ritter, A., Galley, M., Gao, J. & Jurafsky, D. (2016), âDeep reinforcement learning for dialogue generationâ, arXiv preprint arXiv:1606.01541 .
Lin, L.-J. (1993), Reinforcement learning for robots using neural networks, Technical report, Carnegie- Mellon Univ Pittsburgh PA School of Computer Science.
Liu, B. & Lane, I. (2017), Iterative policy learning in end-to-end trainable task-oriented neural dialog models, in âProceedings of 2017 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU)â, Okinawa, Japan.
Liu, C.-W., Lowe, R., Serban, I. V., Noseworthy, M., Charlin, L. & Pineau, J. (2016), How NOT to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation, in âEMNLPâ.
López-Cózar, R. (2016), âAutomatic creation of scenarios for evaluating spoken dialogue systems via user-simulationâ, Knowledge-Based Systems 106, 51â73.
Lowe, R., Noseworthy, M., Serban, I. V., Angelard-Gontier, N., Bengio, Y. & Pineau, J. (2017), Towards an automatic Turing test: Learning to evaluate dialogue responses, in âACLâ.
Lowe, R., Pow, N., Serban, I. & Pineau, J. (2015), The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems, in âSIGDIALâ.
Lowe, R., Serban, I. V., Noseworthy, M., Charlin, L. & Pineau, J. (2016), âOn the evaluation of dialogue systems with next utterance classiï¬cationâ, arXiv preprint arXiv:1605.05414 .
Lowe, R. T., Pow, N., Serban, I. V., Charlin, L., Liu, C.-W. & Pineau, J. (2017), âTraining end-to-end dialogue systems with the ubuntu dialogue corpusâ, Dialogue & Discourse 8(1).
Marelli, M., Bentivogli, L., Baroni, M., Bernardi, R., Menini, S. & Zamparelli, R. (2014), Semeval- 2014 task 1: Evaluation of compositional distributional semantic models on full sentences through semantic relatedness and textual entailment., in âSemEval Workshop, COLINGâ.
McGlashan, S., Fraser, N., Gilbert, N., Bilange, E., Heisterkamp, P. & Youd, N. (1992), Dialogue management for telephone information systems, in âANLCâ.
Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S. & Dean, J. (2013), Distributed representations of words and phrases and their compositionality, in âNIPSâ.
Miller, A. H., Feng, W., Fisch, A., Lu, J., Batra, D., Bordes, A., Parikh, D. & Weston, J. (2017), âParlai: A dialog research software platformâ, arXiv preprint arXiv:1705.06476 .
Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A., Antonoglou, I., Wierstra, D. & Riedmiller, M. (2013), âPlaying atari with deep reinforcement learningâ, arXiv preprint arXiv:1312.5602 .
Mordatch, I. & Abbeel, P. (2017), âEmergence of grounded compositional language in multi-agent populationsâ, arXiv preprint arXiv:1703.04908 .
Nair, V. & Hinton, G. E. (2010), Rectiï¬ed linear units improve restricted boltzmann machines, in âProceedings of the 27th international conference on machine learning (ICML-10)â, pp. 807â814.
Ng, A. Y., Harada, D. & Russell, S. (1999), Policy invariance under reward transformations: Theory and application to reward shaping, in âICMLâ, Vol. 99, pp. 278â287.
Nguyen, T., Rosenberg, M., Song, X., Gao, J., Tiwary, S., Majumder, R. & Deng, L. (2016), âMS MARCO: A Human Generated MAchine Reading COmprehension Datasetâ, arXiv preprint arXiv:1611.09268 .
Paek, T. (2006), Reinforcement learning for spoken dialogue systems: Comparing strengths and weaknesses for practical deployment, in âProc. Dialog-on-Dialog Workshop, Interspeechâ.
Pennington, J., Socher, R. & Manning, C. D. (2014), Glove: Global vectors for word representation., in âEMNLPâ, Vol. 14.
Pieraccini, R., Suendermann, D., Dayanidhi, K. & Liscombe, J. (2009), Are we there yet? research in commercial spoken dialog systems, in âText, Speech and Dialogueâ, Springer, pp. 3â13.
Precup, D. (2000), âEligibility traces for off-policy policy evaluationâ, Computer Science Department Faculty Publication Series .
Precup, D., Sutton, R. S. & Dasgupta, S. (2001), Off-policy temporal-difference learning with function approximation, in âICMLâ.
Raux, A., Bohus, D., Langner, B., Black, A. W. & Eskenazi, M. (2006), Doing research on a deployed spoken dialogue system: one year of letâs go! experience., in âINTERSPEECHâ.
Rezende, D. J., Mohamed, S. & Wierstra, D. (2014), Stochastic backpropagation and approximate inference in deep generative models, in âICMLâ, pp. 1278â1286.
Schatzmann, J., Thomson, B., Weilhammer, K., Ye, H. & Young, S. (2007), Agenda-based user simulation for bootstrapping a pomdp dialogue system, in âHuman Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Companion Volume, Short Papersâ, Association for Computational Linguistics, pp. 149â152.
Serban, I. V., Lowe, R., Charlin, L. & Pineau, J. (2016), Generative deep neural networks for dialogue: A short review, in âNIPS, Letâs Discuss: Learning Methods for Dialogue Workshopâ.
Serban, I. V. & Pineau, J. (2015), Text-based speaker identiï¬cation for multi-participant open-domain dialogue systems, in âNeural Information Processing Systems Workshop on Machine Learning for Spoken Language Understandingâ.
Serban, I. V., Sordoni, A., Lowe, R., Charlin, L., Pineau, J., Courville, A. & Bengio, Y. (2017), A Hierarchical Latent Variable Encoder-Decoder Model for Generating Dialogues, in âAAAIâ.
Shawar, B. A. & Atwell, E. (2007), Chatbots: are they really useful?, in âLDV Forumâ, Vol. 22.
Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., Van Den Driessche, G., Schrittwieser, J., Antonoglou, I., Panneershelvam, V., Lanctot, M. et al. (2016), âMastering the game of go with deep neural networks and tree searchâ, Nature 529(7587), 484â489.
Simpson, A. & Eraser, N. M. (1993), Black box and glass box evaluation of the sundial system, in âThird European Conference on Speech Communication and Technologyâ.
Singh, S., Litman, D., Kearns, M. & Walker, M. (2002), âOptimizing dialogue management with reinforcement learning: Experiments with the njfun systemâ, Journal of Artiï¬cial Intelligence Research 16, 105â133.
Singh, S. P., Kearns, M. J., Litman, D. J. & Walker, M. A. (1999), Reinforcement learning for spoken dialogue systems., in âNipsâ, pp. 956â962.
Socher, R., Perelygin, A., Wu, J. Y., Chuang, J., Manning, C. D., Ng, A. Y., Potts, C. et al. (2013), Recursive deep models for semantic compositionality over a sentiment treebank, in âProceedings of the conference on empirical methods in natural language processing (EMNLP)â, Vol. 1631, p. 1642.
Stolcke, A., Ries, K., Coccaro, N., Shriberg, E., Bates, R., Jurafsky, D., Taylor, P., Martin, R., Van Ess-Dykema, C. & Meteer, M. (2000), âDialogue act modeling for automatic tagging and recognition of conversational speechâ, Computational linguistics 26(3).
Stone, B. & Soper, S. (2014), âAmazon Unveils a Listening, Talking, Music-Playing Speaker for Your Homeâ, Bloomberg L.P . Retrieved 2014-11-07.
Su, P.-H., Gasic, M., Mrksic, N., Rojas-Barahona, L., Ultes, S., Vandyke, D., Wen, T.-H. & Young, S. (2016), âContinuously learning neural dialogue managementâ, arXiv preprint arXiv:1606.02689 .
Su, P.-H., Vandyke, D., GaÅ¡i´c, M., Kim, D., MrkÅ¡i´c, N., Wen, T.-H. & Young, S. (2015), Learning from real users: Rating dialogue success with neural networks for reinforcement learning in spoken dialogue systems., in âInterspeechâ.
Suendermann-Oeft, D., Ramanarayanan, V., Teckenbrock, M., Neutatz, F. & Schmidt, D. (2015), Halef: An open-source standard-compliant telephony-based modular spoken dialog system: A review and an outlook, in âNatural language dialog systems and intelligent assistantsâ, Springer.
Sukhbaatar, S., Fergus, R. et al. (2016), Learning multiagent communication with backpropagation, in âAdvances in Neural Information Processing Systemsâ, pp. 2244â2252.
Sutton, R. S. & Barto, A. G. (1998), Reinforcement learning: An introduction, MIT Press, Cambridge.
Traum, D., Marsella, S. C., Gratch, J., Lee, J. & Hartholt, A. (2008), Multi-party, multi-issue, multi-strategy negotiation for multi-modal virtual agents, in âInternational Workshop on Intelligent Virtual Agentsâ, Springer, pp. 117â130.
Truong, H. P., Parthasarathi, P. & Pineau, J. (2017), âMaca: A modular architecture for conversational agentsâ, arXiv preprint arXiv:1705.00673 .
Wallace, R. S. (2009), âThe anatomy of aliceâ, Parsing the Turing Test .
Ward, N. G., Rivera, A. G., Ward, K. & Novick, D. G. (2005), âRoot causes of lost time and user stress in a simple dialog systemâ.
Weizenbaum, J. (1966), âElizaâa computer program for the study of natural language communication between man and machineâ, ACM 9(1).
Williams, J. D. (2011), An empirical evaluation of a statistical dialog system in public use, in âProceedings of the SIGDIAL 2011 Conferenceâ, Association for Computational Linguistics, pp. 130â141.
Williams, J. D., Raux, A. & Henderson, M. (2016), âIntroduction to the special issue on dialogue state trackingâ, Dialogue & Discourse 7(3), 1â3.
Williams, J. D. & Young, S. (2007), âPartially observable markov decision processes for spoken dialog systemsâ, Computer Speech & Language 21(2), 393â422.
Williams, J., Raux, A., Ramachandran, D. & Black, A. (2013), The dialog state tracking challenge, in âSIGDIALâ, pp. 404â413.
Williams, R. J. (1992), âSimple statistical gradient-following algorithms for connectionist reinforce- ment learningâ, Machine learning 8(3-4).
Wu, Y., Schuster, M., Chen, Z., Le, Q. V., Norouzi, M., Macherey, W., Krikun, M., Cao, Y., Gao, Q., Macherey, K. et al. (2016), âGoogleâs neural machine translation system: Bridging the gap between human and machine translationâ, arXiv preprint arXiv:1609.08144 .
Young, S., Gasic, M., Thomson, B. & Williams, J. D. (2013), âPomdp-based statistical spoken dialog systems: A reviewâ, Proceedings of the IEEE 101(5), 1160â1179.
Yu, L., Hermann, K. M., Blunsom, P. & Pulman, S. (2014), Deep learning for answer sentence selection, in âNIPS, Workshop on Deep Learningâ.
Yu, Z., Xu, Z., Black, A. W. & Rudnicky, A. I. (2016), Strategy and policy learning for non-task- oriented conversational systems., in âSIGDIALâ.
Zhao, T. & Eskenazi, M. (2016), Towards end-to-end learning for dialog state tracking and manage- ment using deep reinforcement learning, in âSIGDIALâ.
Zhao, T., Lee, K. & Eskenazi, M. (2016), Dialport: Connecting the spoken dialog research community to real user data, in âSpoken Language Technology Workshop (SLT), 2016 IEEEâ, IEEE, pp. 83â90.
Zhu, Y., Kiros, R., Zemel, R., Salakhutdinov, R., Urtasun, R., Torralba, A. & Fidler, S. (2015), Aligning books and movies: Towards story-like visual explanations by watching movies and reading books, in âICCVâ.
1708.07747 | Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms | We present Fashion-MNIST, a new dataset comprising of 28x28 grayscale images of 70,000 fashion products from 10 categories, with 7,000 images per category. The training set has 60,000 images and the test set has 10,000 images. Fashion-MNIST is intended to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms, as it shares the same image size, data format and the structure of training and testing splits. The dataset is freely available at https://github.com/zalandoresearch/fashion-mnist | http://arxiv.org/pdf/1708.07747 | Han Xiao, Kashif Rasul, Roland Vollgraf | cs.LG, cs.CV, stat.ML | Dataset is freely available at https://github.com/zalandoresearch/fashion-mnist Benchmark is available at http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/ | null | cs.LG | 20170825 | 20170915
# Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms
# Han Xiao Zalando Research Mühlenstraße 25, 10243 Berlin han.xiao@zalando.de

# Kashif Rasul Zalando Research Mühlenstraße 25, 10243 Berlin kashif.rasul@zalando.de

# Roland Vollgraf Zalando Research Mühlenstraße 25, 10243 Berlin roland.vollgraf@zalando.de
# Abstract
We present Fashion-MNIST, a new dataset comprising of 28 × 28 grayscale images of 70,000 fashion products from 10 categories, with 7,000 images per category. The training set has 60,000 images and the test set has 10,000 images. Fashion-MNIST is intended to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms, as it shares the same image size, data format and the structure of training and testing splits. The dataset is freely available at https://github.com/zalandoresearch/fashion-mnist.
# 1 Introduction
The MNIST dataset, comprising 10-class handwritten digits, was first introduced by LeCun et al. [1998] in 1998. At that time one could not have foreseen the stellar rise of deep learning techniques and their performance. Despite the fact that today deep learning can do so much, the simple MNIST dataset has become the most widely used testbed in deep learning, surpassing CIFAR-10 [Krizhevsky and Hinton, 2009] and ImageNet [Deng et al., 2009] in its popularity via Google trends1. Despite its simplicity, its usage does not seem to be decreasing, despite calls in the deep learning community to move past it.
The reason MNIST is so popular has to do with its size, allowing deep learning researchers to quickly check and prototype their algorithms. This is also complemented by the fact that all machine learning libraries (e.g. scikit-learn) and deep learning frameworks (e.g. Tensorflow, Pytorch) provide helper functions and convenient examples that use MNIST out of the box.
Our aim with this work is to create a good benchmark dataset which has all the accessibility of MNIST, namely its small size, straightforward encoding and permissive license. We took the approach of sticking to 10 classes and 70,000 grayscale images of size 28 × 28, as in the original MNIST. In fact, the only change one needs to make to use this dataset is to change the URL from where the MNIST dataset is fetched. Moreover, Fashion-MNIST poses a more challenging classification task than the simple MNIST digits data, whereas the latter has been trained to accuracies above 99.7%, as reported in Wan et al. [2013] and Ciregan et al. [2012].
We also looked at the EMNIST dataset provided by Cohen et al. [2017], an extended version of MNIST that extends the number of classes by introducing uppercase and lowercase characters. However, to be able to use it seamlessly one needs to not only extend the deep learning framework's MNIST helpers, but also change the underlying deep neural network to classify these extra classes.

1 https://trends.google.com/trends/explore?date=all&q=mnist,CIFAR,ImageNet
# 2 Fashion-MNIST Dataset
Fashion-MNIST is based on the assortment on Zalando's website2. Every fashion product on Zalando has a set of pictures shot by professional photographers, demonstrating different aspects of the product, i.e. front and back looks, details, looks with model and in an outfit. The original picture has a light-gray background (hexadecimal color: #fdfdfd) and is stored in 762 × 1000 JPEG format. For efficiently serving different frontend components, the original picture is resampled at multiple resolutions, e.g. large, medium, small, thumbnail and tiny. We use the front look thumbnail images of 70,000 unique products to build Fashion-MNIST. Those products come from different gender groups: men, women, kids and neutral. In particular, white-color products are not included in the dataset as they have low contrast to the background. The thumbnails (51 × 73) are then fed into the following conversion pipeline, which is visualized in Figure 1.
1. Converting the input to a PNG image.
2. Trimming any edges that are close to the color of the corner pixels. The "closeness" is defined by the distance within 5% of the maximum possible intensity in RGB space.
3. Resizing the longest edge of the image to 28 by subsampling the pixels, i.e. some rows and columns are skipped over.
4. Sharpening pixels using a Gaussian operator of the radius and standard deviation of 1.0, with increasing effect near outlines.
5. Extending the shortest edge to 28 and putting the image at the center of the canvas.
6. Negating the intensities of the image.
7. Converting the image to 8-bit grayscale pixels.
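A rough approximation of this pipeline can be written with Pillow; the trimming threshold, the unsharp-mask sharpening and the canvas color below are approximations of the steps above, not the original conversion code.

```python
from PIL import Image, ImageChops, ImageFilter, ImageOps

def convert_thumbnail(path):
    img = Image.open(path).convert("RGB")                       # (1) decode to an RGB image

    # (2) trim edges close to the corner color (within 5% of max intensity)
    background = Image.new("RGB", img.size, img.getpixel((0, 0)))
    diff = ImageChops.difference(img, background)
    bbox = diff.point(lambda p: 255 if p > 0.05 * 255 else 0).getbbox()
    if bbox:
        img = img.crop(bbox)

    # (3) resize the longest edge to 28 by subsampling
    scale = 28.0 / max(img.size)
    img = img.resize((max(1, round(img.width * scale)),
                      max(1, round(img.height * scale))), Image.NEAREST)

    # (4) sharpen with a Gaussian-based operator (unsharp mask as a stand-in)
    img = img.filter(ImageFilter.UnsharpMask(radius=1.0))

    # (5) extend the shortest edge to 28 and center the image on the canvas
    canvas = Image.new("RGB", (28, 28), (253, 253, 253))
    canvas.paste(img, ((28 - img.width) // 2, (28 - img.height) // 2))

    # (6) negate intensities and (7) convert to 8-bit grayscale
    return ImageOps.invert(canvas).convert("L")
```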
(1) PNG image (2) Trimming (3) Resizing (4) Sharpening (5) Extending (6) Negating (7) Grayscaling
Figure 1: Diagram of the conversion process used to generate the Fashion-MNIST dataset. Two examples from the dress and sandals categories are depicted, respectively. Each column represents a step described in section 2.
Table 1: Files contained in the Fashion-MNIST dataset.
Name | Description | # Examples | Size
train-images-idx3-ubyte.gz | Training set images | 60,000 | 25 MBytes
train-labels-idx1-ubyte.gz | Training set labels | 60,000 | 140 Bytes
t10k-images-idx3-ubyte.gz | Test set images | 10,000 | 4.2 MBytes
t10k-labels-idx1-ubyte.gz | Test set labels | 10,000 | 92 Bytes
For the class labels, we use the silhouette code of the product. The silhouette code is manually labeled by the in-house fashion experts and reviewed by a separate team at Zalando. Each product
2Zalando is Europe's largest online fashion platform. http://www.zalando.com
contains only one silhouette code. Table 2 gives a summary of all class labels in Fashion-MNIST with examples for each class.
Finally, the dataset is divided into a training and a test set. The training set receives a randomly-selected 6,000 examples from each class. Images and labels are stored in the same file format as the MNIST data set, which is designed for storing vectors and multidimensional matrices. The resulting files are listed in Table 1. We sort examples by their labels while storing, resulting in smaller label files after compression compared to MNIST. It is also easier to retrieve examples with a certain class label. The data shuffling job is therefore left to the algorithm developer.
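Because the files follow the MNIST IDX layout, they can be read with a few lines of NumPy; the sketch below assumes the standard 8-byte label header and 16-byte image header of that format.

```python
import gzip
import numpy as np

def load_idx(image_path, label_path):
    with gzip.open(label_path, "rb") as f:
        # 8-byte header (magic number, item count), then one byte per label
        labels = np.frombuffer(f.read(), dtype=np.uint8, offset=8)
    with gzip.open(image_path, "rb") as f:
        # 16-byte header (magic, count, rows, cols), then row-major pixel bytes
        images = np.frombuffer(f.read(), dtype=np.uint8, offset=16)
    return images.reshape(len(labels), 28, 28), labels

# Example usage with the files from Table 1:
# X_train, y_train = load_idx("train-images-idx3-ubyte.gz", "train-labels-idx1-ubyte.gz")
# X_test, y_test = load_idx("t10k-images-idx3-ubyte.gz", "t10k-labels-idx1-ubyte.gz")
```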
Table 2: Class names and example images in Fashion-MNIST dataset.
Label | Description
0 | T-Shirt/Top
1 | Trouser
2 | Pullover
3 | Dress
4 | Coat
5 | Sandals
6 | Shirt
7 | Sneaker
8 | Bag
9 | Ankle boots

(The Examples column of the original table shows sample images for each class.)
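For convenience when working with the data programmatically, the Table 2 mapping can also be kept as a small dictionary; this is simply a transcription of the table, not part of the released files.

```python
# Label-to-class-name mapping from Table 2.
FASHION_MNIST_CLASSES = {
    0: "T-Shirt/Top", 1: "Trouser", 2: "Pullover", 3: "Dress", 4: "Coat",
    5: "Sandals", 6: "Shirt", 7: "Sneaker", 8: "Bag", 9: "Ankle boots",
}
```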
# 3 Experiments
We provide some classification results in Table 3 to form a benchmark on this data set. All algorithms are repeated 5 times by shuffling the training data and the average accuracy on the test set is reported. The benchmark on the MNIST dataset is also included for a side-by-side comparison. A more comprehensive table with explanations on the algorithms can be found on https://github.com/zalandoresearch/fashion-mnist.
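A minimal version of such a benchmark run, using scikit-learn and the loader sketched earlier, could look as follows; the classifier settings are just a small illustrative subset of those listed in Table 3.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.utils import shuffle

def average_accuracy(clf, X_train, y_train, X_test, y_test, repeats=5):
    scores = []
    for seed in range(repeats):
        Xs, ys = shuffle(X_train, y_train, random_state=seed)  # reshuffle the training data
        clf.fit(Xs.reshape(len(Xs), -1) / 255.0, ys)
        scores.append(clf.score(X_test.reshape(len(X_test), -1) / 255.0, y_test))
    return sum(scores) / len(scores)

classifiers = {
    "LogisticRegression C=1 l2": LogisticRegression(C=1, penalty="l2"),
    "KNeighbors k=5 distance p=1": KNeighborsClassifier(n_neighbors=5, weights="distance", p=1),
    "SVC C=10 rbf": SVC(C=10, kernel="rbf"),
}
```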
Table 3: Benchmark on Fashion-MNIST (Fashion) and MNIST.
Test Accuracy Classiï¬er Parameter Fashion MNIST 0.873 0.861 0.886 0.798 0.792 0.789 DecisionTreeClassiï¬er criterion=entropy max_depth=10 splitter=best criterion=entropy max_depth=10 splitter=random criterion=entropy max_depth=50 splitter=best
Table 3 â continued from previous page Test Accuracy Parameter Classiï¬er criterion=entropy max_depth=100 splitter=best criterion=gini max_depth=10 splitter=best criterion=entropy max_depth=50 splitter=random criterion=entropy max_depth=100 splitter=random criterion=gini max_depth=100 splitter=best criterion=gini max_depth=50 splitter=best criterion=gini max_depth=10 splitter=random criterion=gini max_depth=50 splitter=random criterion=gini max_depth=100 splitter=random ExtraTreeClassiï¬er criterion=gini max_depth=10 splitter=best criterion=entropy max_depth=100 splitter=best criterion=entropy max_depth=10 splitter=best criterion=entropy max_depth=50 splitter=best criterion=gini max_depth=100 splitter=best criterion=gini max_depth=50 splitter=best criterion=entropy max_depth=50 splitter=random criterion=entropy max_depth=100 splitter=random criterion=gini max_depth=50 splitter=random criterion=gini max_depth=100 splitter=random criterion=gini max_depth=10 splitter=random criterion=entropy max_depth=10 splitter=random GaussianNB priors=[0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1] GradientBoostingClassiï¬er n_estimators=100 loss=deviance max_depth=10 n_estimators=50 loss=deviance max_depth=10 n_estimators=100 loss=deviance max_depth=3 n_estimators=10 loss=deviance max_depth=10 n_estimators=50 loss=deviance max_depth=3 n_estimators=10 loss=deviance max_depth=50 n_estimators=10 loss=deviance max_depth=3 KNeighborsClassiï¬er weights=distance n_neighbors=5 p=1 weights=distance n_neighbors=9 p=1 weights=uniform n_neighbors=9 p=1 weights=uniform n_neighbors=5 p=1 weights=distance n_neighbors=5 p=2 weights=distance n_neighbors=9 p=2 weights=uniform n_neighbors=5 p=2 weights=uniform n_neighbors=9 p=2 weights=distance n_neighbors=1 p=2 weights=uniform n_neighbors=1 p=2 weights=uniform n_neighbors=1 p=1 weights=distance n_neighbors=1 p=1 LinearSVC loss=hinge C=1 multi_class=ovr penalty=l2 loss=hinge C=1 multi_class=crammer_singer penalty=l2 loss=squared_hinge C=1 multi_class=crammer_singer penalty=l2 loss=squared_hinge C=1 multi_class=crammer_singer penalty=l1
Fashion MNIST 0.886 0.866 0.883 0.881 0.879 0.877 0.853 0.873 0.875 0.806 0.847 0.810 0.847 0.843 0.845 0.826 0.828 0.824 0.820 0.737 0.745 0.524 0.969 0.964 0.949 0.933 0.926 0.888 0.846 0.959 0.955 0.955 0.957 0.945 0.944 0.944 0.943 0.943 0.943 0.955 0.955 0.917 0.919 0.919 0.919 0.919 0.912 0.885 0.873 0.879 0.872
0.789 0.788 0.787 0.787 0.785 0.783 0.783 0.779 0.777 0.775 0.775 0.772 0.772 0.769 0.768 0.752 0.752 0.748 0.745 0.739 0.737 0.511 0.880 0.872 0.862 0.849 0.840 0.795 0.782 0.854 0.854 0.853 0.852 0.852 0.849 0.849 0.847 0.839 0.839 0.838 0.838 0.836 0.835 0.834 0.833 0.833 0.820 0.779 0.776 0.764 0.758
# loss=squared_hinge C=1 multi_class=ovr penalty=l2
# loss=squared_hinge C=10 multi_class=ovr penalty=l2
# loss=squared_hinge C=100 multi_class=ovr penalty=l2
# loss=hinge C=10 multi_class=ovr penalty=l2
# loss=hinge C=100 multi_class=ovr penalty=l2
Test Accuracy Parameter Classiï¬er loss=hinge C=10 multi_class=crammer_singer penalty=l1 loss=hinge C=10 multi_class=crammer_singer penalty=l2 loss=squared_hinge C=10 multi_class=crammer_singer penalty=l2 loss=squared_hinge C=10 multi_class=crammer_singer penalty=l1 loss=hinge C=100 multi_class=crammer_singer penalty=l1 loss=hinge C=100 multi_class=crammer_singer penalty=l2 loss=squared_hinge C=100 multi_class=crammer_singer penalty=l1 loss=squared_hinge C=100 multi_class=crammer_singer penalty=l2 LogisticRegression C=1 multi_class=ovr penalty=l1 C=1 multi_class=ovr penalty=l2 C=10 multi_class=ovr penalty=l2 C=10 multi_class=ovr penalty=l1 C=100 multi_class=ovr penalty=l2 MLPClassiï¬er activation=relu hidden_layer_sizes=[100] activation=relu hidden_layer_sizes=[100, 10] activation=tanh hidden_layer_sizes=[100] activation=tanh hidden_layer_sizes=[100, 10] activation=relu hidden_layer_sizes=[10, 10] activation=relu hidden_layer_sizes=[10] activation=tanh hidden_layer_sizes=[10, 10] activation=tanh hidden_layer_sizes=[10] PassiveAggressiveClassiï¬er C=1 C=100 C=10 Perceptron penalty=l1 penalty=l2 penalty=elasticnet RandomForestClassiï¬er n_estimators=100 criterion=entropy max_depth=100 n_estimators=100 criterion=gini max_depth=100 n_estimators=50 criterion=entropy max_depth=100 n_estimators=100 criterion=entropy max_depth=50 n_estimators=50 criterion=entropy max_depth=50 n_estimators=100 criterion=gini max_depth=50 n_estimators=50 criterion=gini max_depth=50 n_estimators=50 criterion=gini max_depth=100 n_estimators=10 criterion=entropy max_depth=50 n_estimators=10 criterion=entropy max_depth=100 n_estimators=10 criterion=gini max_depth=50 n_estimators=10 criterion=gini max_depth=100 n_estimators=50 criterion=entropy max_depth=10 n_estimators=100 criterion=entropy max_depth=10 n_estimators=100 criterion=gini max_depth=10 n_estimators=50 criterion=gini max_depth=10 n_estimators=10 criterion=entropy max_depth=10
Fashion MNIST 0.783 0.816 0.829 0.829 0.759 0.753 0.746 0.737 0.917 0.917 0.916 0.909 0.916 0.972 0.972 0.962 0.957 0.936 0.933 0.921 0.921 0.877 0.875 0.880 0.887 0.845 0.845 0.970 0.970 0.968 0.969 0.967 0.971 0.968 0.967 0.949 0.949 0.948 0.948 0.947 0.950 0.949 0.945 0.933 0.930 0.914 0.912 0.910 0.913 0.912 0.913
0.751 0.749 0.748 0.736 0.516 0.496 0.492 0.484 0.842 0.841 0.839 0.839 0.836 0.871 0.870 0.868 0.863 0.850 0.848 0.841 0.840 0.776 0.775 0.773 0.782 0.754 0.726 0.873 0.872 0.872 0.872 0.871 0.871 0.870 0.869 0.853 0.852 0.848 0.847 0.838 0.838 0.835 0.834 0.828 0.825 0.819 0.818 0.817 0.816 0.816 0.816
# SGDClassifier
# SGDClassiï¬er
# loss=hinge penalty=l2
# loss=perceptron penalty=l1
# loss=modified_huber penalty=l1
# loss=modified_huber penalty=l2
# loss=log penalty=elasticnet
# loss=hinge penalty=elasticnet
Test Accuracy Parameter Classiï¬er Fashion MNIST 0.914 0.911 0.910 0.913 0.912 0.912 0.914 0.913 0.911 0.973 0.976 0.978 0.972 0.966 0.957 0.929 0.927 0.926 0.898 0.873 0.868 0.815 0.815 0.815 0.814 0.814 0.814 0.813 0.813 0.813 0.897 0.891 0.890 0.890 0.879 0.873 0.839 0.829 0.827 0.678 0.671 0.664 loss=squared_hinge penalty=elasticnet loss=hinge penalty=l1 loss=log penalty=l1 loss=perceptron penalty=l2 loss=perceptron penalty=elasticnet loss=squared_hinge penalty=l2 loss=modified_huber penalty=elasticnet loss=log penalty=l2 loss=squared_hinge penalty=l1 SVC C=10 kernel=rbf C=10 kernel=poly C=100 kernel=poly C=100 kernel=rbf C=1 kernel=rbf C=1 kernel=poly C=1 kernel=linear C=10 kernel=linear C=100 kernel=linear C=1 kernel=sigmoid C=10 kernel=sigmoid C=100 kernel=sigmoid
# 4 Conclusions
This paper introduced Fashion-MNIST, a fashion product image dataset intended to be a drop-in replacement for MNIST, whilst providing a more challenging alternative for benchmarking machine learning algorithms. The images in Fashion-MNIST are converted to a format that matches that of the MNIST dataset, making it immediately compatible with any machine learning package capable of working with the original MNIST dataset.
# References
D. Ciregan, U. Meier, and J. Schmidhuber. Multi-column deep neural networks for image classiï¬- cation. In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pages 3642â3649. IEEE, 2012.
G. Cohen, S. Afshar, J. Tapson, and A. van Schaik. Emnist: an extension of mnist to handwritten letters. arXiv preprint arXiv:1702.05373, 2017.
J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. Imagenet: A large-scale hierarchical im- age database. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pages 248â255. IEEE, 2009.
A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. 2009.
Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278â2324, 1998.
L. Wan, M. Zeiler, S. Zhang, Y. L. Cun, and R. Fergus. Regularization of neural networks using dropconnect. In Proceedings of the 30th international conference on machine learning (ICML- 13), pages 1058â1066, 2013.
1708.07860 | Multi-task Self-Supervised Visual Learning | We investigate methods for combining multiple self-supervised tasks--i.e., supervised tasks where data can be collected without manual labeling--in order to train a single visual representation. First, we provide an apples-to-apples comparison of four different self-supervised tasks using the very deep ResNet-101 architecture. We then combine tasks to jointly train a network. We also explore lasso regularization to encourage the network to factorize the information in its representation, and methods for "harmonizing" network inputs in order to learn a more unified representation. We evaluate all methods on ImageNet classification, PASCAL VOC detection, and NYU depth prediction. Our results show that deeper networks work better, and that combining tasks--even via a naive multi-head architecture--always improves performance. Our best joint network nearly matches the PASCAL performance of a model pre-trained on ImageNet classification, and matches the ImageNet network on NYU depth prediction. | http://arxiv.org/pdf/1708.07860 | Carl Doersch, Andrew Zisserman | cs.CV | Published at ICCV 2017 | null | cs.CV | 20170825 | 20170825
# Multi-task Self-Supervised Visual Learning
Carl Doersch† Andrew Zisserman†,‡

# † DeepMind

# ‡ VGG, Department of Engineering Science, University of Oxford
# Abstract
We investigate methods for combining multiple self-supervised tasks--i.e., supervised tasks where data can be collected without manual labeling--in order to train a single visual representation. First, we provide an apples-to-apples comparison of four different self-supervised tasks using the very deep ResNet-101 architecture. We then combine tasks to jointly train a network. We also explore lasso regularization to encourage the network to factorize the information in its representation, and methods for "harmonizing" network inputs in order to learn a more unified representation. We evaluate all methods on ImageNet classification, PASCAL VOC detection, and NYU depth prediction. Our results show that deeper networks work better, and that combining tasks--even via a naive multi-head architecture--always improves performance. Our best joint network nearly matches the PASCAL performance of a model pre-trained on ImageNet classification, and matches the ImageNet network on NYU depth prediction.
# 1. Introduction
Vision is one of the most promising domains for unsu- pervised learning. Unlabeled images and video are avail- able in practically unlimited quantities, and the most promi- nent present image modelsâneural networksâare data starved, easily memorizing even random labels for large im- age collections [45]. Yet unsupervised algorithms are still not very effective for training neural networks: they fail to adequately capture the visual semantics needed to solve real-world tasks like object detection or geometry estima- tion the way strongly-supervised methods do. For most vi- sion problems, the current state-of-the-art approach begins by training a neural network on ImageNet [35] or a similarly large dataset which has been hand-annotated.
How might we better train neural networks without man- ual labeling? Neural networks are generally trained via backpropagation on some objective function. Without la- bels, however, what objective function can measure how good the network is? Self-supervised learning answers this
question by proposing various tasks for networks to solve, where performance is easy to measure, i.e., performance can be captured with an objective function like those seen in supervised learning. Ideally, these tasks will be difï¬- cult to solve without understanding some form of image semantics, yet any labels necessary to formulate the objec- tive function can be obtained automatically. In the last few years, a considerable number of such tasks have been pro- posed [1, 2, 6, 7, 8, 17, 20, 21, 23, 25, 26, 27, 28, 29, 31, 39, 40, 42, 43, 46, 47], such as asking a neural network to colorize grayscale images, ï¬ll in image holes, solve jigsaw puzzles made from image patches, or predict movement in videos. Neural networks pre-trained with these tasks can be re-trained to perform well on standard vision tasks (e.g. image classiï¬cation, object detection, geometry estimation) with less manually-labeled data than networks which are initialized randomly. However, they still perform worse in this setting than networks pre-trained on ImageNet.
This paper advances self-supervision ï¬rst by implement- ing four self-supervision tasks and comparing their perfor- mance using three evaluation measures. The self-supervised tasks are: relative position [7], colorization [46], the âex- emplarâ task [8], and motion segmentation [27] (described in section 2). The evaluation measures (section 5) assess a diverse set of applications that are standard for this area, in- cluding ImageNet image classiï¬cation, object category de- tection on PASCAL VOC 2007, and depth prediction on NYU v2.
Second, we evaluate if performance can be boosted by combining these tasks to simultaneously train a single trunk network. Combining the tasks fairly in a multi-task learn- ing objective is challenging since the tasks learn at different rates, and we discuss how we handle this problem in sec- tion 4. We ï¬nd that multiple tasks work better than one, and explore which combinations give the largest boost.
Third, we identify two reasons why a naive combination of self-supervision tasks might conflict, impeding performance: input channels can conflict, and learning tasks can conflict. The first sort of conflict might occur when jointly training colorization and exemplar learning: colorization receives grayscale images as input, while exemplar learning receives all color channels. This puts an unnecessary burden on low-level feature detectors that must operate across domains. The second sort of conflict might happen when one task learns semantic categorization (i.e. generalizing across instances of a class) and another learns instance matching (which should not generalize within a class). We resolve the first conflict via "input harmonization", i.e. modifying network inputs so different tasks get more similar inputs. For the second conflict, we extend our multi-task learning architecture with a lasso-regularized combination of features from different layers, which encourages the network to separate features that are useful for different tasks. These architectures are described in section 3.
We use a common deep network across all experiments, a ResNet-101-v2, so that we can compare various diverse self-supervision tasks apples-to-apples. This comparison is the ï¬rst of its kind. Previous work applied self-supervision tasks over a variety of CNN architectures (usually relatively shallow), and often evaluated the representations on differ- ent tasks; and even where the evaluation tasks are the same, there are often differences in the ï¬ne-tuning algorithms. Consequently, it has not been possible to compare the per- formance of different self-supervision tasks across papers. Carrying out multiple fair comparisons, together with the implementation of the self-supervised tasks, joint training, evaluations, and optimization of a large network for several large datasets has been a signiï¬cant engineering challenge. We describe how we carried out the large scale training efï¬- ciently in a distributed manner in section 4. This is another contribution of the paper.
As shown in the experiments of section 6, by combining multiple self-supervision tasks we are able to close further the gap between self-supervised and fully supervised pre- training over all three evaluation measures.
# 1.1. Related Work
Self-supervision tasks for deep learning generally in- volve taking a complex signal, hiding part of it from the network, and then asking the network to ï¬ll in the missing information. The tasks can broadly be divided into those that use auxiliary information or those that only use raw pixels.
Tasks that use auxiliary information such as multi-modal information beyond pixels include: predicting sound given videos [26], predicting camera motion given two images of the same scene [1, 17, 44], or predicting what robotic mo- tion caused a change in a scene [2, 29, 30, 31, 32]. However, non-visual information can be difï¬cult to obtain: estimating motion requires IMU measurements, running robots is still expensive, and sound is complex and difï¬cult to evaluate quantitatively.
Thus, many works use raw pixels. In videos, time can be a source of supervision. One can simply predict fu- ture [39, 40], although such predictions may be difï¬cult to
evaluate. One way to simplify the problem is to ask a net- work to temporally order a set of frames sampled from a video [23]. Another is to note that objects generally appear across many frames: thus, we can train features to remain invariant as a video progresses [11, 24, 42, 43, 47]. Finally, motion cues can separate foreground objects from back- ground. Neural networks can be asked to re-produce these motion-based boundaries without seeing motion [21, 27].
Self-supervised learning can also work with a single im- age. One can hide a part of the image and ask the network to make predictions about the hidden part. The network can be tasked with generating pixels, either by ï¬lling in holes [6, 28], or recovering color after images have been converted to grayscale [20, 46]. Again, evaluating the qual- ity of generated pixels is difï¬cult. To simplify the task, one can extract multiple patches at random from an image, and then ask the network to position the patches relative to each other [7, 25]. Finally, one can form a surrogate âclassâ by taking a single image and altering it many times via trans- lations, rotations, and color shifts [8], to create a synthetic categorization problem.
Our work is also related to multi-task learning. Several recent works have trained deep visual representations us- ing multiple tasks [9, 12, 22, 37], including one work [18] which combines no less than 7 tasks. Usually the goal is to create a single representation that works well for every task, and perhaps share knowledge between tasks. Surpris- ingly, however, previous work has shown little transfer be- tween diverse tasks. Kokkinos [18], for example, found a slight dip in performance with 7 tasks versus 2. Note that our work is not primarily concerned with the performance on the self-supervised tasks we combine: we evaluate on a separate set of semantic âevaluation tasks.â Some previ- ous self-supervised learning literature has suggested perfor- mance gains from combining self-supervised tasks [32, 44], although these works used relatively similar tasks within relatively restricted domains where extra information was provided besides pixels. In this work, we ï¬nd that pre- training on multiple diverse self-supervised tasks using only pixels yields strong performance.
# 2. Self-Supervised Tasks
Too many self-supervised tasks have been proposed in recent years for us to evaluate every possible combination. Hence, we chose representative self-supervised tasks to reimplement and investigate in combination. We aimed for tasks that were conceptually simple, yet also as diverse as possible. Intuitively, a diverse set of tasks should lead to a diverse set of features, which will therefore be more likely to span the space of features needed for general semantic image understanding. In this section, we will briefly describe the four tasks we investigated. Where possible, we followed the procedures established in previous works, although in many cases modifications were necessary for our multi-task setup.
Relative Position [7]: This task begins by sampling two patches at random from a single image and feeding them both to the network without context. The networkâs goal is to predict where one patch was relative to the other in the original image. The trunk is used to produce a representa- tion separately for both patches, which are then fed into a head which combines the representations and makes a pre- diction. The patch locations are sampled from a grid, and pairs are always taken from adjacent grid points (includ- ing diagonals). Thus, there are only eight possible relative positions for a pair, meaning the network output is a sim- ple eight-way softmax classiï¬cation. Importantly, networks can learn to detect chromatic aberration to solve the task, a low-level image property that isnât relevant to semantic tasks. Hence, [7] employs âcolor droppingâ, i.e., randomly dropping 2 of the 3 color channels and replacing them with noise. We reproduce color dropping, though our harmoniza- tion experiments explore other approaches to dealing with chromatic aberration that clash less with other tasks.
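As an illustration of the sampling scheme described above, the sketch below draws a pair of adjacent patches from a 3x3 grid and returns the corresponding eight-way label; the grid and patch sizes are assumptions, and the color-dropping step is omitted.

```python
import numpy as np

# The eight possible positions of the second patch relative to the first.
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def sample_relative_position_pair(image, grid=3, patch_size=96):
    """Assumes `image` is an [h, w, 3] array with h, w >= grid * patch_size."""
    h, w, _ = image.shape
    cell_h, cell_w = h // grid, w // grid
    row, col = grid // 2, grid // 2            # anchor patch in the centre cell
    label = np.random.randint(len(OFFSETS))    # which neighbour to pair it with
    dr, dc = OFFSETS[label]

    def crop(r, c):
        y, x = r * cell_h, c * cell_w
        return image[y:y + patch_size, x:x + patch_size]

    return crop(row, col), crop(row + dr, col + dc), label
```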
Colorization [46]: Given a grayscale image (the L chan- nel of the Lab color space), the network must predict the color at every pixel (speciï¬cally, the ab components of Lab). The color is predicted at a lower resolution than the image (a stride of 8 in our case, a stride of 4 was used in [46]), and furthermore, the colors are vector quantized into 313 different categories. Thus, there is a 313-way softmax clas- siï¬cation for every 8-by-8 pixel region of the image. Our implementation closely follows [46].
Exemplar [8]: The original implementation of this task created pseudo-classes, where each class was generated by taking a patch from a single image and augmenting it via translation, rotation, scaling, and color shifts [8]. The network was trained to discriminate between pseudo-classes. Unfortunately, this approach is not scalable to large datasets, since the number of categories (and therefore, the number of parameters in the final fully-connected layer) scales linearly in the number of images. However, the approach can be extended to allow an infinite number of classes by using a triplet loss, similar to [42], instead of a classification loss per class. Specifically, we randomly sample two patches $x_1$ and $x_2$ from the same pseudo-class, and a third patch $x_3$ from a different pseudo-class (i.e. from a different image). The network is trained with a loss of the form $\max(D(f(x_1), f(x_2)) - D(f(x_1), f(x_3)) + M, 0)$, where $D$ is the cosine distance, $f(x)$ is the network features for patch $x$ (including a small head), and $M$ is a margin which we set to 0.5.
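A direct NumPy transcription of this loss, assuming the three embeddings have already been computed by the network, is:

```python
import numpy as np

def exemplar_triplet_loss(f1, f2, f3, margin=0.5):
    """max(D(f1, f2) - D(f1, f3) + M, 0) with D the cosine distance."""
    def cosine_distance(a, b):
        return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

    return max(cosine_distance(f1, f2) - cosine_distance(f1, f3) + margin, 0.0)
```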
Motion Segmentation [27]: Given a single frame of video, this task asks the network to classify which pixels will move in subsequent frames. The âground truthâ mask of moving pixels is extracted using standard dense tracking algorithms. We follow Pathak et al. [27], except that we replace their tracking algorithm with Improved Dense Tra- jectories [41]. Keypoints are tracked over 10 frames, and any pixel not labeled as camera motion by that algorithm is treated as foreground. The label image is downsampled by a factor of 8. The resulting segmentations look qualitatively similar to those given in Pathak et al. [27]. The network is trained via a per-pixel cross-entropy with the label image.
Datasets: The three image-based tasks are all trained on ImageNet, as is common in prior work. The motion seg- mentation task uses the SoundNet dataset [3]. It is an open problem whether performance can be improved by differ- ent choices of dataset, or indeed by training on much larger datasets.
# 3. Architectures
In this section we describe three architectures: ï¬rst, the (na¨ıve) multi-task network that has a common trunk and a head for each task (ï¬gure 1a); second, the lasso extension of this architecture (ï¬gure 1b) that enables the training to determine the combination of layers to use for each self- supervised task; and third, a method for harmonizing input channels across self-supervision tasks.
# 3.1. Common Trunk
Our architecture begins with Resnet-101 v2 [15], as im- plemented in TensorFlow-Slim [13]. We keep the entire ar- chitecture up to the end of block 3, and use the same block3 representation solve all tasks and evaluations (see ï¬gure 1a). Thus, our âtrunkâ has an output with 1024 channels, and consists of 88 convolution layers with roughly 30 million parameters. Block 4 contains an additional 13 conv layers and 20 million parameters, but we donât use it to save com- putation.
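For readers who want a concrete starting point, a rough PyTorch stand-in for this trunk can be built by truncating a standard ResNet-101 after its third block; note that the paper uses ResNet-101 v2 in TensorFlow-Slim, so this is only an approximation of that setup.

```python
import torch
import torchvision

resnet = torchvision.models.resnet101(weights=None)
# Keep everything up to the end of block 3 (torchvision's `layer3`),
# which outputs a 1024-channel feature map.
trunk = torch.nn.Sequential(
    resnet.conv1, resnet.bn1, resnet.relu, resnet.maxpool,
    resnet.layer1, resnet.layer2, resnet.layer3,
)

features = trunk(torch.randn(1, 3, 224, 224))
print(features.shape)  # torch.Size([1, 1024, 14, 14])
```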
Each task has a separate loss, and has extra layers in a âhead,â which may have a complicated structure. For instance, the relative position and exemplar tasks have a siamese architecture. We implement this by passing all patches through the trunk as a single batch, and then re- arranging the elements in the batch to make pairs (or triplets) of representations to be processed by the head. At each training iteration, only one of the heads is active. How- ever, gradients are averaged across many iterations where different heads are active, meaning that the overall loss is a sum of the losses of different tasks.
Figure 1. The structure of our multi-task network. It is based on ResNet-101, with block 3 having 23 residual units. a) The naive shared-trunk approach, where each "head" is attached to the output of block 3. b) The lasso architecture, where each "head" receives a linear combination of unit outputs within block 3, weighted by the matrix α, which is trained to be sparse.
# 3.2. Separating features via Lasso
Different tasks require different features; this applies for both the self-supervised training tasks and the evaluation tasks. For example, information about ï¬ne-grained breeds of dogs is useful for, e.g., ImageNet classiï¬cation, and also colorization. However, ï¬ne-grained information is less use- ful for tasks like PASCAL object detection, or for relative positioning of patches. Furthermore, some tasks require only image patches (such as relative positioning) whilst oth- ers can make use of entire images (such as colorization), and consequently features may be learnt at different scales. This suggests that, while training on self-supervised tasks, it might be advantageous to separate out groups of features that are useful for some tasks but not others. This would help us with evaluation tasks: we expect that any given evaluation task will be more similar to some self-supervised tasks than to others. Thus, if the features are factorized into different tasks, then the network can select from the discov- ered feature groups while training on the evaluation tasks.
Inspired by recent works that extract information across network layers for the sake of transfer learning [14, 22, 36], we propose a mechanism which allows a network to choose which layers are fed into each task. The simplest approach might be to use a task-speciï¬c skip layer which selects a sin- gle layer in ResNet-101 (out of a set of equal-sized candi- date layers) and feeds it directly into the taskâs head. How- ever, a hard selection operation isnât differentiable, meaning that the network couldnât learn which layer to feed into a task. Furthermore, some tasks might need information from multiple layers. Hence, we relax the hard selection process, and instead pass a linear combination of skip layers to each head. Concretely, each task has a set of coefï¬cients, one for each of the 23 candidate layers in block 3. The repre-
sentation thatâs fed into each task head is a sum of the layer activations weighted by these task-speciï¬c coefï¬cients. We impose a lasso (L1) penalty to encourage the combination to be sparse, which therefore encourages the network to con- centrate all of the information required by a single task into a small number of layers. Thus, when ï¬ne-tuning on a new task, these task-speciï¬c layers can be quickly selected or rejected as a group, using the same lasso penalty.
Mathematically, we create a matrix α with N rows and M columns, where N is the number of self-supervised tasks, and M is the number of residual units in block 3. The representation passed to the head for task n is then:
$$\sum_{m=1}^{M} \alpha_{n,m} \ast \mathrm{Unit}_m \qquad (1)$$
where $\mathrm{Unit}_m$ is the output of residual unit $m$. We enforce that $\sum_{m} \alpha_{n,m}^{2} = 1$ for all tasks $n$, to control the output variance (note that the entries in $\alpha$ can be negative, so a simple sum is insufficient). To ensure sparsity, we add an L1 penalty on the entries of $\alpha$ to the objective function. We create a similar $\alpha$ matrix for the set of evaluation tasks.
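The per-task combination and its sparsity penalty are simple to express directly; the NumPy sketch below assumes the block-3 unit outputs have already been collected into a list and uses an arbitrary penalty weight.

```python
import numpy as np

def task_representation(unit_outputs, alpha, task_index):
    """Weighted sum of residual-unit outputs for one task (Eq. 1).

    unit_outputs: list of M arrays, each of shape [batch, h, w, channels]
    alpha: [N, M] matrix of task-specific mixing coefficients
    """
    weights = alpha[task_index]
    weights = weights / np.sqrt(np.sum(weights ** 2))   # enforce sum of squares = 1
    return sum(w * u for w, u in zip(weights, unit_outputs))

def lasso_penalty(alpha, strength=1e-2):
    """L1 penalty on the mixing matrix, pushing each task to rely on few layers."""
    return strength * np.sum(np.abs(alpha))
```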
# 3.3. Harmonizing network inputs
Each self-supervised task pre-processes its data differ- ently, so the low-level image statistics are often very dif- ferent across tasks. This puts a heavy burden on the trunk network, since its features must generalize across these sta- tistical differences, which may impede learning. Further- more, it gives the network an opportunity to cheat: the net- work might recognize which task it must solve, and only represent information which is relevant to that task, instead of truly multi-task features. This problem is especially bad for relative position, which pre-processes its input data by
Figure 2. Distributed training setup. Several GPU machines are allocated for each task, and gradients from each task are synchro- nized and aggregated with separate RMSProp optimizers.
discarding 2 of the 3 color channels, selected at random, and replacing them with noise. Chromatic aberration is also hard to detect in grayscale images. Hence, to âharmonize,â we replace relative positionâs preprocessing with the same preprocessing used for colorization: images are converted to Lab, and the a and b channels are discarded (we replicate the L channel 3 times so that the network can be evaluated on color images).
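A rough sketch of this harmonized preprocessing is shown below (an illustration only, not the paper's pipeline; the rescaling of the L channel to [0, 1] is our own choice), using scikit-image's rgb2lab:

```python
# Harmonized preprocessing: convert to Lab, drop the a/b chroma channels,
# and replicate the L channel three times so the trunk still sees 3 channels.
import numpy as np
from skimage import color

def harmonize(rgb_image):
    """rgb_image: float array in [0, 1] with shape (H, W, 3)."""
    lab = color.rgb2lab(rgb_image)          # L in [0, 100]; a and b are discarded
    luminance = lab[..., 0:1] / 100.0       # rescale roughly to [0, 1] (assumption)
    return np.repeat(luminance, 3, axis=2)  # (H, W, 3), grayscale replicated
```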
# 3.4. Self-supervised network architecture implementation details
This section provides more details on the âheadsâ used in our self-supervised tasks. The bulk of the changes rela- tive to the original methods (that used shallower networks) involve replacing simple convolutions with residual units. Vanishing gradients can be a problem with networks as deep as ours, and residual networks can help alleviate this prob- lem. We did relatively little experimentation with architec- tures for the heads, due to the high computational cost of restarting training from scratch.
Relative Position: Given a batch of patches, we begin by running ResNet-v2-101 at a stride of 8. Most block 3 con- volutions produce outputs at stride 16, so running the net- work at stride 8 requires using convolutions that are dilated, or âatrousâ, such that each neuron receives input from other neurons that are stride 16 apart in the previous layer. For further details, see the public implementation of ResNet-v2- 101 striding in TF-Slim. Our patches are 96-by-96, mean- ing that we get a trunk feature map which is 12 à 12 à 1024 per patch. For the head, we apply two more residual units. The ï¬rst has an output with 1024 channels, a bottleneck with 128 channels, and a stride of 2; the second has an out-
put size of 512 channels, bottleneck with 128 channels, and stride 2. This gives us a representation of 3Ã3Ã512 for each patch. We ï¬atten this representation for each patch, and concatenate the representations for patches that are paired. We then have 3 âfully-connectedâ residual units (equiva- lent to a convolutional residual unit where the spatial shape of the input and output is 1 à 1). These are all identi- cal, with input dimensionality and output dimensionality of 3*3*512=4608 and a bottleneck dimensionality of 512. The ï¬nal fully connected layer has dimensionality 8 producing softmax outputs.
Colorization: As with relative position, we run the ResNet-v2-101 trunk at stride 8 via dilated convolutions. Our input images are 256 à 256, meaning that we have a 32 à 32 à 1024 feature map. Obtaining good performance when colorization is combined with other tasks seems to re- quire a large number of parameters in the head. Hence, we use two standard convolution layers with a ReLU nonlinear- ity: the ï¬rst has a 2Ã2 kernel and 4096 output channels, and the second has a 1Ã1 kernel with 4096 channels. Both have stride 1. The ï¬nal output logits are produced by a 1x1 con- volution with stride 1 and 313 output channels. The head has a total of roughly 35M parameters. Preliminary exper- iments with a smaller number of parameters showed that adding colorization degraded performance. We hypothesize that this is because the networkâs knowledge of color was pushed down into block 3 when the head was small, and thus the representations at the end of block 3 contained too much information about color.
Exemplar: As with relative position, we run the ResNet- v2-101 trunk at stride 8 via dilated convolutions. We resize our images to 256Ã256 and sample patches that are 96Ã96. Thus we have a feature map which is 12 à 12 à 1024. As with relative position, we apply two residual units, the ï¬rst with an output with 1024 channels, a bottleneck with 128 channels, and a stride of 2; the second has an output size of 512 channels, bottleneck with 128 channels, and stride 2. Thus, we have a 3 à 3 à 512-dimensional feature, which is used directly to compute the distances needed for our loss.
Motion Segmentation: We reshape all images to 240 à 320, to better approximate the aspect ratios that are com- mon in our dataset. As with relative position, we run the ResNet-v2-101 trunk at stride 8 via dilated convolutions. We expected that, like colorization, motion segmentation could beneï¬t from a large head. Thus, we have two 1 à 1 conv layers each with dimension 4096, followed by another 1Ã1 conv layer which produces a single value, which is treated as a logit and used a per-pixel classiï¬cation. Pre- liminary experiments with smaller heads have shown that such a large head is not necessarily important.
# 4. Training the Network
Training a network with nearly 100 hidden layers re- quires considerable compute power, so we distribute it across several machines. As shown in ï¬gure 2, each ma- chine trains the network on a single task. Parameters for the ResNet-101 trunk are shared across all replicas. There are also several task-speciï¬c layers, or heads, which are shared only between machines that are working on the same task. Each worker repeatedly computes losses which are then backpropagated to produce gradients.
Given many workers operating independently, gradients are usually aggregated in one of two ways. The ï¬rst op- tion is asynchronous training, where a centralized parame- ter server receives gradients from workers, applies the up- dates immediately, and sends back the up-to-date parame- ters [5, 33]. We found this approach to be unstable, since gradients may be stale if some machines run slowly. The other approach is synchronous training, where the parame- ter server accumulates gradients from all workers, applies the accumulated update while all workers wait, and then sends back identical parameters to all workers [4], prevent- ing stale gradients. âBackup workersâ help prevent slow workers from slowing down training. However, in a mul- titask setup, some tasks are faster than others. Thus, slow tasks will not only slow down the computation, but their gradients are more likely to be thrown out.
Hence, we used a hybrid approach: we accumulate gra- dients from all workers that are working on a single task, and then have the parameter servers apply the aggregated gradients from a single task when ready, without synchro- nizing with other tasks. Our experiments found that this approach resulted in faster learning than either purely syn- chronous or purely asynchronous training, and in particular, was more stable than asynchronous training.
We also used the RMSProp optimizer, which has been shown to improve convergence in many vision tasks versus stochastic gradient descent. RMSProp re-scales the gradi- ents for each parameter such that multiplying the loss by a constant factor does not change how quickly the network learns. This is a useful property in multi-task learning, since different loss functions may be scaled differently. Hence, we used a separate RMSProp optimizer for each task. That is, for each task, we keep separate moving averages of the squared gradients, which are used to scale the taskâs accu- mulated updates before applying them to the parameters.
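The toy sketch below (a conceptual illustration only, not the distributed setup used here) captures the two ideas in miniature: gradients from one task's workers are summed, and each task keeps its own RMSProp moving average that scales its accumulated update before it touches the shared parameters:

```python
# Per-task gradient accumulation with per-task RMSProp state (conceptual sketch).
import numpy as np

class PerTaskRMSProp:
    def __init__(self, shape, lr=1e-3, decay=0.9, eps=1e-8):
        self.lr, self.decay, self.eps = lr, decay, eps
        self.ms = np.zeros(shape)  # moving average of squared gradients for this task

    def apply(self, params, accumulated_grad):
        self.ms = self.decay * self.ms + (1 - self.decay) * accumulated_grad ** 2
        params -= self.lr * accumulated_grad / (np.sqrt(self.ms) + self.eps)
        return params

# Shared trunk parameters and one optimizer state per task.
params = np.zeros(10)
optimizers = {"rel_pos": PerTaskRMSProp(params.shape),
              "colorization": PerTaskRMSProp(params.shape)}

def apply_task_update(task, worker_grads):
    # Sum gradients from all workers of this task, then update immediately,
    # without waiting for the other tasks (asynchronous across tasks).
    accumulated = np.sum(worker_grads, axis=0)
    optimizers[task].apply(params, accumulated)

apply_task_update("rel_pos", [np.random.randn(10) for _ in range(4)])
```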
For all experiments, we train on 64 GPUs in parallel, and save checkpoints every roughly 2.4K GPU (NVIDIA K40) hours. These checkpoints are then used as initialization for our evaluation tasks.
# 5. Evaluation
Here we describe the three evaluation tasks that we trans- fer our representation to: image classiï¬cation, object cate- gory detection, and pixel-wise depth prediction.
ImageNet with Frozen Weights: We add a single linear classiï¬cation layer (a softmax) to the network at the end of block 3, and train on the full ImageNet training set. We keep all pre-trained weights frozen during training, so we can evaluate raw features. We evaluate on the ImageNet validation set. The training set is augmented in translation and color, following [38], but during evaluation, we donât use multi-crop or mirroring augmentation. This evaluation is similar to evaluations used elsewhere (particularly Zhang et al. [46]). Performing well requires good representation of ï¬ne-grained object attributes (to distinguish, for example, breeds of dogs). We report top-5 recall in all charts (except Table 1, which reports top-1 to be consistent with previous works). For most experiments we use only the output of the ï¬nal âunitâ of block 3, and use max pooling to obtain a 3 à 3 à 1024 feature vector, which is ï¬attened and used as the input to the one-layer classiï¬er. For the lasso ex- periments, however, we use a weighted combination of the (frozen) features from all block 3 layers, and we learn the weight for each layer, following the structure described in section 3.2.
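For illustration, a minimal PyTorch sketch of this frozen-feature probe is given below (the trunk's output shape and the module names are assumptions; only the linear classifier receives gradients):

```python
# Linear probe on frozen block-3 features: pool to 3x3x1024, flatten, classify.
import torch
import torch.nn as nn

class FrozenFeatureProbe(nn.Module):
    def __init__(self, trunk, num_classes=1000):
        super().__init__()
        self.trunk = trunk
        for p in self.trunk.parameters():
            p.requires_grad = False           # keep pre-trained weights frozen
        self.pool = nn.AdaptiveMaxPool2d((3, 3))
        self.classifier = nn.Linear(3 * 3 * 1024, num_classes)

    def forward(self, images):
        with torch.no_grad():
            feats = self.trunk(images)        # assumed shape [B, 1024, H, W]
        feats = self.pool(feats).flatten(1)   # [B, 3*3*1024]
        return self.classifier(feats)         # logits for softmax cross-entropy
```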
PASCAL VOC 2007 Detection: We use Faster- RCNN [34], which trains a single network base with multiple heads for object proposals, box classiï¬cation, and box localization. Performing well requires the network to accurately represent object categories and locations, with penalties for missing parts which might be hard to recognize (e.g., a catâs body is harder to recognize than its head). We ï¬ne-tune all network weights. For our ImageNet pre-trained ResNet-101 model, we transfer all layers up through block 3 from the pre-trained model into the trunk, and transfer block 4 into the proposal categorization head, as is standard. We do the same with our self-supervised network, except that we initialize the proposal categoriza- tion head randomly. Following Doersch et al. [7], we use multi-scale data augmentation for all methods, including baselines. All other settings were left at their defaults. We train on the VOC 2007 trainval set, and evaluate Mean Average Precision on the VOC 2007 test set. For the lasso experiments, we feed our lasso combination of block 3 layers into the heads, rather than the ï¬nal output of block 3.
NYU V2 Depth Prediction: Depth prediction measures how well a network represents geometry, and how well that information can be localized to pixel accuracy. We use a modiï¬ed version of the architecture proposed in Laina et
al. [19]. We use the "up projection" operator defined in that work, as well as the reverse Huber loss. We replaced the ResNet-50 architecture with our ResNet-101 architecture, and feed the block 3 outputs directly into the up-projection layers (block 4 was not used in our setup). This means we need only 3 levels of up projection, rather than 4. Our up projection filter sizes were 512, 256, and 128. As with our PASCAL experiments, we initialize all layers up to block 3 using the weights from our self-supervised pre-training, and fine-tune all weights. We selected one measure, the percent of pixels where relative error is below 1.25, as a representative measure (others available in appendix A). Relative error is defined as max(dgt/dp, dp/dgt), where dgt is ground-truth depth and dp is predicted depth. For the lasso experiments, we feed our lasso combination of block 3 layers into the up projection layers, rather than the final output of block 3.
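As a reference, one common formulation of the reverse Huber (berHu) loss, following Laina et al. [19], is sketched below; the per-batch threshold c = 0.2 · max|error| is a detail of that formulation and should be treated as an assumption here rather than a setting confirmed by this paper:

```python
# Reverse Huber (berHu) loss sketch: L1 for small errors, scaled L2 for large ones.
import torch

def berhu_loss(pred, target):
    err = (pred - target).abs()
    c = 0.2 * err.max().clamp_min(1e-8)      # per-batch threshold (assumption)
    l1_part = err                            # used where |error| <= c
    l2_part = (err ** 2 + c ** 2) / (2 * c)  # used where |error| > c
    loss = torch.where(err <= c, l1_part, l2_part)
    return loss.mean()
```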
# 6. Results: Comparisons and Combinations
ImageNet Baseline: As an âupper boundâ on perfor- mance, we train a full ResNet-101 model on ImageNet, which serves as a point of comparison for all our evalua- tions. Note that just under half of the parameters of this network are in block 4, which are not pre-trained in our self-supervised experiments (they are transferred from the ImageNet network only for the Pascal evaluations). We use the standard learning rate schedule of Szegedy et al. [38] for ImageNet training (multiply the learning rate by 0.94 every 2 epochs), but we donât use such a schedule for our self-supervised tasks.
# 6.1. Comparing individual self-supervision tasks
Table 1 shows the performance of individual tasks for the three evaluation measures. Compared to previously- published results, our performance is signiï¬cantly higher in all cases, most likely due to the additional depth of ResNet (cf. AlexNet) and additional training time. Note, our ImageNet-trained baseline for Faster-RCNN is also above the previously published result using ResNet (69.9 in [34] cf. 74.2 for ours), mostly due to the addition of multi- scale augmentation for the training images following [7].
Of the self-supervised pre-training methods, relative po- sition and colorization are the top performers, with relative position winning on PASCAL and NYU, and colorization winning on ImageNet-frozen. Remarkably, relative posi- tion performs on-par with ImageNet pre-training on depth prediction, and the gap is just 7.5% mAP on PASCAL. The only task where the gap remains large is the ImageNet eval- uation itself, which is not surprising since the ImageNet pre- training and evaluation use the same labels. Motion seg- mentation and exemplar training are somewhat worse than the others, with exemplar worst on Pascal and NYU, and motion segmentation worst on ImageNet.
[Figure 3: three panels, ImageNet Recall@5, PASCAL VOC 2007 mAP, and NYU Depth V2 Percent Below 1.25, each showing curves for Random Init, Relative Position, Colorization, Exemplar, Motion Segmentation, and ImageNet Supervised.]
Figure 3. Comparison of performance for different self- supervised methods over time. X-axis is compute time on the self-supervised task (â¼2.4K GPU hours per tick). âRandom Initâ shows performance with no pre-training.
Figure 3 shows how the performance changes as pre- training time increases (time is on the x-axis). After 16.8K GPU hours, performance is plateauing but has not com- pletely saturated, suggesting that results can be improved slightly given more time. Interestingly, on the ImageNet- frozen evaluation, where colorization is winning, the gap relative to relative position is growing. Also, while most algorithms slowly improve performance with training time,
                     ImageNet top1       ImageNet top5   PASCAL               NYU
Pre-training         Ours    Prev.       Ours            Ours    Prev.        Ours
Relative Position    36.21   31.7 [46]   59.21           66.75   61.7 [7]     80.55
Colorization         39.62   32.6 [46]   62.48           65.47   46.9 [46]    76.79
Exemplar             31.51   -           53.08           60.94   -            71.25
Motion Segmentation  27.62   -           48.29           61.13   52.2 [27]    74.24
ImageNet Supervised  66.82   51.0 [46]   85.10           74.17   69.9 [34]    80.06
Table 1. Comparison of our implementation with previous results on our evaluation tasks: ImageNet with frozen features (left), and PASCAL VOC 2007 mAP with ï¬ne-tuning (middle), and NYU depth (right, not used in previous works). Unlike elsewhere in this paper, ImageNet performance is reported here in terms of top 1 accuracy (versus recall at 5 elsewhere). Our ImageNet pre-training performance on ImageNet is lower than the performance He et al. [15] (78.25) reported for ResNet-101 since we remove block 4.
exemplar training doesnât ï¬t this pattern: its performance falls steadily on ImageNet, and undulates on PASCAL and NYU. Even stranger, performance for exemplar is seem- ingly anti-correlated between Pascal and NYU from check- point to checkpoint. A possible explanation is that exemplar training encourages features that arenât invariant beyond the training transformations (e.g. they arenât invariant to object deformation or out-of-plane rotation), but are instead sensi- tive to the details of textures and low-level shapes. If these irrelevant details become prominent in the representation, they may serve as distractors for the evaluation classiï¬ers.
Note that the random baseline performance is low rela- tive to a shallower network, especially the ImageNet-frozen evaluation (a linear classiï¬er on random AlexNetâs conv5 features has top-5 recall of 27.1%, cf. 10.5% for ResNet). All our pre-trained nets far outperform the random baseline.
Pre-training    ImageNet  PASCAL  NYU
RP              59.21     66.75   80.54
RP+Col          66.64     68.75   79.87
RP+Ex           65.24     69.44   78.70
RP+MS           63.73     68.81   78.72
RP+Col+Ex       68.65     69.48   80.17
RP+Col+Ex+MS    69.30     70.53   79.25
INet Labels     85.10     74.17   80.06
Table 2. Comparison of various combinations of self-supervised tasks. Checkpoints were taken after 16.8K GPU hours, equiva- lent to checkpoint 7 in Figure 3. Abbreviation key: RP: Relative Position; Col: Colorization; Ex: Exemplar Nets; MS: Motion Seg- mentation. Metrics: ImageNet: Recall@5; PASCAL: mAP; NYU: % Pixels below 1.25.
The fact that representations learnt by the various self- supervised methods have different strengths and weak- nesses suggests that the features differ. Therefore, combin- ing methods may yield further improvements. On the other hand, the lower-performing tasks might drag-down the per- formance of the best ones. Resolving this uncertainty is a key motivator for the next section.
Implementation Details: Unfortunately, intermittent net- work congestion can slow down experiments, so we donât measure wall time directly. Instead, we estimate compute time for a given task by multiplying the per-task training step count by a constant factor, which is ï¬xed across all ex- periments, representing the average step time when network congestion is minimal. We add training cost across all tasks used in an experiment, and snapshot when the total cost crosses a threshold. For relative position, 1 epoch through the ImageNet train set takes roughly 350 GPU hours; for colorization it takes roughly 90 hours; for exemplar nets roughly 60 hours. For motion segmentation, one epoch through our video dataset takes roughly 400 GPU hours.
# 6.2. Naïve multi-task combination of self-supervision tasks
Table 2 shows results for combining self-supervised pre-training tasks. Beginning with one of our strongest performersârelative positionâwe see that adding any of our other tasks helps performance on ImageNet and Pas- cal. Adding either colorization or exemplar leads to more than 6 points gain on ImageNet. Furthermore, it seems that the boosts are complementary: adding both colorization and exemplar gives a further 2% boost. Our best-performing method was a combination of all four self-supervised tasks. To further probe how well our representation localizes objects, we evaluated the PASCAL detector at a more strin- gent overlap criterion: 75% IoU (versus standard VOC 2007 criterion of 50% IoU). Our model gets 43.91% mAP in this setting, versus the standard ImageNet modelâs performance of 44.27%, a gap of less than half a percent. Thus, the self- supervised approach may be especially useful when accu- rate localization is important.
The depth evaluation performance shows far less varia- tion over the single and combinations tasks than the other evaluations. All methods are on par with ImageNet pre- training, with relative position exceeding this value slightly,
[Figure 4: three panels, ImageNet Recall@5, PASCAL VOC 2007 mAP, and NYU Depth V2 Percent Below 1.25, each showing curves for Random Init, Relative Position, RP+Col, RP+Ex, RP+Msg, RP+Col+Ex, RP+Col+Ex+Msg, and ImageNet Supervised.]
Figure 4. Comparison of performance for different multi-task self-supervised methods over time. X-axis is compute time on the self-supervised task (â¼2.4K GPU hours per tick). âRandom Initâ shows performance with no pre-training.
and the combination with exemplar or motion segmentation leading to a slight drop. Combining relative position with with either exemplar or motion segmentation leads to a con- siderable improvement over those tasks alone.
Finally, ï¬gure 4 shows how the performance of these methods improves with more training. One might expect that more tasks would result in slower training, since more must be learned. Surprisingly, however the combination of
Pre-training  ImageNet  PASCAL  NYU
RP            59.21     66.75   80.23
RP / H        62.33     66.15   80.39
RP+Col        66.64     68.75   79.87
RP+Col / H    68.08     68.26   79.69
Table 3. Comparison of methods with and without harmonization, where relative position training is converted to grayscale to mimic the inputs to the colorization network. H denotes an experiment done with harmonization.
[Figure 5 rows, top to bottom: Rel. Position, Exemplar, Color, Mot. Seg. (self-supervised tasks) and Net Frozen, Pascal07, NyuDepth (evaluation tasks).]
Figure 5. Weights learned via the lasso technique. Each row shows one task: self-supervised tasks on top, evaluation tasks on bottom. Each square shows |α| for one ResNet âUnitâ (shallow- est layers at the left). Whiter colors indicate higher |α|, with a nonlinear scale to make smaller nonzero values easily visible.
all four tasks performs the best or nearly the best even at our earliest checkpoint.
# 6.3. Mediated combination of self-supervision tasks
Harmonization: We train two versions of a network on relative position and colorization: one using harmonization to make the relative position inputs look more like coloriza- tion, and one without it (equivalent to RP+Col in section 6.2 above). As a baseline, we make the same modiï¬cation to a network trained only on relative position alone: i.e., we convert its inputs to grayscale. In this baseline, we donât expect any performance boost over the original relative po- sition task, because there are no other tasks to harmonize with. Results are shown in Table 3. However, on the Im- ageNet evaluation there is an improvement when we pre- train using only relative position (due to the change from adding noise to the other two channels to using grayscale input (three equal channels)), and this improvement follows through to the the combined relative position and coloriza- tion tasks. The other two evaluation tasks do not show any improvement with harmonization. This suggests that our networks are actually quite good at dealing with stark differ- ences between pre-training data domains when the features are ï¬ne-tuned at test time.
Net structure           ImageNet  PASCAL  NYU
No Lasso                69.30     70.53   79.25
Eval Only Lasso         70.18     68.86   79.41
Pre-train Only Lasso    68.09     68.49   78.96
Pre-train & Eval Lasso  69.44     68.98   79.45
Table 4. Comparison of performance with and without the lasso technique for factorizing representations, for a network trained on all four self-supervised tasks for 16.8K GPU-hours. âNo Lassoâ is equivalent to table 2âs RP+Col+Ex+MS. âEval Onlyâ uses the same pre-trained network, with lasso used only on the evaluation task, while âPre-train Onlyâ uses it only during pre-training. The ï¬nal row uses lasso always.
Lasso training: As a ï¬rst sanity check, Figure 5 plots the α matrix learned using all four self-supervised tasks. Dif- ferent tasks do indeed select different layers. Somewhat surprisingly, however, there are strong correlations between the selected layers: most tasks want a combination of low- level information and high-level, semantic information. The depth evaluation network selects relatively high-level infor- mation, but evaluating on ImageNet-frozen and PASCAL makes the network select information from several levels, often not the ones that the pre-training tasks use. This sug- gests that, although there are useful features in the learned representation, the ï¬nal output space for the representation is still losing some information thatâs useful for evaluation tasks, suggesting a possible area for future work.
The ï¬nal performance of this network is shown in Ta- ble 4. There are four cases: no lasso, lasso only on the evaluation tasks, lasso only at pre-training time, and lasso in both self-supervised training and evaluation. Unsurpris- ingly, using lasso only for pre-training performs poorly since not all information reaches the ï¬nal layer. Surpris- ingly, however, using the lasso both for self-supervised training and evaluation is not very effective, contrary to previous results advocating that features should be selected from multiple layers for task transfer [14, 22, 36]. Perhaps the multi-task nature of our pre-training forces more infor- mation to propagate through the entire network, so explic- itly extracting information from lower layers is unnecessary.
# 7. Summary and extensions
Our main findings are: (i) Deeper networks improve self-supervision over shallow networks; (ii) Combining self-supervision tasks always improves performance over the tasks alone; (iii) The gap between ImageNet pre-trained and self-supervision pre-trained with four tasks is nearly closed for the VOC detection evaluation, and completely closed for NYU depth; (iv) Harmonization and lasso weightings only have minimal effects; and, finally, (v) Combining self-supervised tasks leads to faster training.
There are many opportunities for further improvements: we can add augmentation (as in the exemplar task) to all tasks; we could add more self-supervision tasks (indeed new ones have appeared during the preparation of this pa- per, e.g. [10]); we could add further evaluation tasks â in- deed depth prediction was not very informative, and replac- ing it by an alternative shape measurement task such as sur- face normal prediction may be more reliable; and we can experiment with methods for dynamically weighting the im- portance of tasks in the optimization.
It would also be interesting to repeat these experiments with a deep network such as VGG-16 where consecutive layers are less correlated, or with even deeper networks (ResNet-152, DenseNet [16] and beyond) to tease out the match between self-supervision tasks and network depth. For the lasso, it might be worth investigating block level weightings using a group sparsity regularizer.
For the future, given the performance improvements demonstrated in this paper, there is a possibility that self- supervision will eventually augment or replace fully super- vised pre-training.
Acknowledgements: Thanks to Relja Arandjelovi´c, JoËao Carreira, Viorica PËatrËaucean and Karen Simonyan for helpful dis- cussions.
# A. Additional metrics for depth prediction
Previous literature on depth prediction has established several measures of accuracy, since different errors may be more costly in different contexts. The measure used in the main paper was the percent of pixels where relative depth, i.e., max(dgt/dp, dp/dgt), is less than 1.25. This measures how often the estimated depth is very close to being correct. It is also standard to measure more relaxed thresholds of relative depth: 1.25^2 and 1.25^3. Furthermore, we can measure average errors across all pixels. Mean Absolute Error is the mean absolute difference between ground truth and predicted values. Unlike the previous metrics, with Mean Absolute Error the worst predictions receive the highest penalties. Mean Relative Error weights the prediction error by the inverse of ground truth depth. Thus, errors on nearby parts of the scene are penalized more, which may be more relevant for, e.g., robot navigation.
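The sketch below (our own NumPy illustration, not the evaluation code) computes all of these metrics from predicted and ground-truth depth maps:

```python
# Depth-prediction metrics: threshold accuracies, mean absolute and relative error.
import numpy as np

def depth_metrics(pred, gt):
    ratio = np.maximum(gt / pred, pred / gt)           # relative depth
    return {
        "pct_below_1.25":   np.mean(ratio < 1.25) * 100,
        "pct_below_1.25^2": np.mean(ratio < 1.25 ** 2) * 100,
        "pct_below_1.25^3": np.mean(ratio < 1.25 ** 3) * 100,
        "mean_abs_error":   np.mean(np.abs(pred - gt)),
        "mean_rel_error":   np.mean(np.abs(pred - gt) / gt),
    }
```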
Tables 5, 6, 7, and 8 are extended versions of ta- bles1, 2, 3, 4, respectively. For the most part, the additional measures tell the same story as the measure for depth re- ported in the main paper. Different self-supervised signals seem to perform similarly relative to one another: exemplar and relative position work best; color and motion segmen- tation work worse (table 5). Combinations still perform as well as the best method alone (table 6). Finally, it remains uncertain whether harmonization or the lasso technique pro-
vide a boost on depth prediction (tables 7 and 8).
# References
[1] P. Agrawal, J. Carreira, and J. Malik. Learning to see by moving. In ICCV, 2015.
[2] P. Agrawal, A. Nair, P. Abbeel, J. Malik, and S. Levine. Learning to poke by poking: Experiential learning of intu- itive physics. arXiv preprint arXiv:1606.07419, 2016. [3] Y. Aytar, C. Vondrick, and A. Torralba. Soundnet: Learning sound representations from unlabeled video. In NIPS, 2016. [4] J. Chen, R. Monga, S. Bengio, and R. Jozefowicz. Revisit- ing distributed synchronous SGD. In ICLR Workshop Track, 2016.
[5] J. Dean, G. Corrado, R. Monga, K. Chen, M. Devin, M. Mao, A. Senior, P. Tucker, K. Yang, Q. V. Le, et al. Large scale distributed deep networks. In NIPS, 2012.
[6] E. Denton, S. Gross, and R. Fergus. Semi-supervised learning with context-conditional generative adversarial networks. arXiv preprint arXiv:1611.06430, 2016.
[7] C. Doersch, A. Gupta, and A. A. Efros. Unsupervised vi- sual representation learning by context prediction. In ICCV, 2015.
[8] A. Dosovitskiy, J. T. Springenberg, M. Riedmiller, and T. Brox. Discriminative unsupervised feature learning with convolutional neural networks. In NIPS, 2014.
[9] D. Eigen and R. Fergus. Predicting depth, surface normals and semantic labels with a common multi-scale convolu- tional architecture. In ICCV, 2015.
[10] B. Fernando, H. Bilen, E. Gavves, and S. Gould. Self- supervised video representation learning with odd-one-out networks. arXiv preprint arXiv:1611.06646, 2016.
[11] P. F¨oldi´ak. Learning invariance from transformation se- quences. Neural Computation, 3(2):194â200, 1991.
[12] G. Gkioxari, R. Girshick, and J. Malik. Contextual action recognition with R*CNN. In ICCV, 2015.
[13] S. Guadarrama and N. Silberman. Tensorï¬ow-slim. 2016. [14] B. Hariharan, P. Arbel´aez, R. Girshick, and J. Malik. Hyper- columns for object segmentation and ï¬ne-grained localiza- tion. In CVPR, 2015.
[15] K. He, X. Zhang, S. Ren, and J. Sun. Identity mappings in deep residual networks. In ECCV, 2016.
[16] G. Huang, Z. Liu, K. Q. Weinberger, and L. van der Maaten. Densely connected convolutional networks. CVPR, 2017.
[17] D. Jayaraman and K. Grauman. Learning image representa- tions tied to ego-motion. In ICCV, 2015.
[18] I. Kokkinos. Ubernet: Training a âuniversalâ convolutional neural network for low-, mid-, and high-level vision us- ing diverse datasets and limited memory. arXiv preprint arXiv:1609.02132, 2016.
[19] I. Laina, C. Rupprecht, V. Belagiannis, F. Tombari, and N. Navab. Deeper depth prediction with fully convolutional residual networks. In 3D Vision, 2016.
[20] G. Larsson, M. Maire, and G. Shakhnarovich. Learning rep- resentations for automatic colorization. In ECCV, 2016. [21] Y. Li, M. Paluri, J. M. Rehg, and P. Doll´ar. Unsupervised
learning of edges. In CVPR, 2016.
[22] I. Misra, A. Shrivastava, A. Gupta, and M. Hebert. Cross- stitch networks for multi-task learning. In CVPR, 2016. [23] I. Misra, C. L. Zitnick, and M. Hebert. Shufï¬e and learn:
unsupervised learning using temporal order veriï¬cation. In ECCV, 2016.
[24] H. Mobahi, R. Collobert, and J. Weston. Deep learning from temporal coherence in video. In ICML, 2009.
[25] M. Noroozi and P. Favaro. Unsupervised learning of visual representations by solving jigsaw puzzles. In ECCV, 2016.
[26] A. Owens, J. Wu, J. H. McDermott, W. T. Freeman, and A. Torralba. Ambient sound provides supervision for visual learning. In ECCV, 2016.
[27] D. Pathak, R. Girshick, P. Doll´ar, T. Darrell, and B. Hari- haran. Learning features by watching objects move. arXiv preprint arXiv:1612.06370, 2016.
[28] D. Pathak, P. Krahenbuhl, J. Donahue, T. Darrell, and A. A. Efros. Context encoders: Feature learning by inpainting. In CVPR, 2016.
[29] L. Pinto, J. Davidson, and A. Gupta. Supervision via com- petition: Robot adversaries for learning tasks. arXiv preprint arXiv:1610.01685, 2016.
[30] L. Pinto, D. Gandhi, Y. Han, Y.-L. Park, and A. Gupta. The curious robot: Learning visual representations via physical interactions. In ECCV, 2016.
[31] L. Pinto and A. Gupta. Supersizing self-supervision: Learn- ing to grasp from 50k tries and 700 robot hours. In ICRA, 2016.
[32] L. Pinto and A. Gupta. Learning to push by grasping: Using multiple tasks for effective learning. ICRA, 2017.
[33] B. Recht, C. Re, S. Wright, and F. Niu. Hogwild: A lock- free approach to parallelizing stochastic gradient descent. In NIPS, 2011.
[34] S. Ren, K. He, R. Girshick, and J. Sun. Faster R-CNN: To- wards real-time object detection with region proposal net- works. In NIPS, 2015.
[35] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al. Imagenet large scale visual recognition challenge. IJCV, 2015.
[36] A. A. Rusu, N. C. Rabinowitz, G. Desjardins, H. Soyer, J. Kirkpatrick, K. Kavukcuoglu, R. Pascanu, and R. Hadsell. Progressive neural networks. arXiv preprint arXiv:1606.04671, 2016.
[37] P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus, and Y. LeCun. Overfeat: Integrated recognition, localization and detection using convolutional networks. In ICLR, 2014. [38] C. Szegedy, S. Ioffe, V. Vanhoucke, and A. Alemi. Inception- v4, inception-resnet and the impact of residual connections on learning. arXiv preprint arXiv:1602.07261, 2016. [39] J. Walker, C. Doersch, A. Gupta, and M. Hebert. An uncer- tain future: Forecasting from static images using variational autoencoders. In ECCV, 2016.
[40] J. Walker, A. Gupta, and M. Hebert. Dense optical ï¬ow pre- diction from a static image. In ICCV, 2015.
[41] H. Wang and C. Schmid. Action recognition with improved trajectories. In ICCV, 2013.
[42] X. Wang and A. Gupta. Unsupervised learning of visual rep- resentations using videos. In ICCV, 2015.
[43] L. Wiskott and T. J. Sejnowski. Slow feature analysis: Un- supervised learning of invariances. Neural computation, 14(4):715â770, 2002.
[44] A. R. Zamir, T. Wekel, P. Agrawal, C. Wei, J. Malik, and S. Savarese. Generic 3D representation via pose estimation
Evaluation   Pct.<1.25  Pct.<1.25^2  Pct.<1.25^3  Mean Absolute Error  Mean Relative Error
Rel. Pos.    80.55      94.65        98.26        0.399                0.146
Color        76.79      93.52        97.74        0.444                0.164
Exemplar     71.25      90.63        96.54        0.513                0.191
Mot. Seg.    74.24      92.42        97.43        0.473                0.177
INet Labels  80.06      94.87        98.45        0.403                0.146
Random       61.00      85.45        94.67        0.621                0.227

Table 5. Comparison of self-supervised methods on NYUDv2 depth prediction. Pct. < 1.25 is the same as reported in the paper (percent of pixels where relative depth, max(dgt/dp, dp/dgt), is less than 1.25); we give the same value for two other, more relaxed thresholds. We also report mean absolute error, which is the simple per-pixel average error in depth, and relative error, where the error at each pixel is divided by the ground-truth depth.
Evaluation    Pct.<1.25  Pct.<1.25^2  Pct.<1.25^3  Mean Absolute Error  Mean Relative Error
RP            80.55      94.65        98.26        0.399                0.146
RP+Col        79.88      94.45        98.15        0.411                0.148
RP+Ex         78.70      94.06        98.13        0.419                0.151
RP+MS         78.72      94.13        98.08        0.423                0.153
RP+Col+Ex     80.17      94.74        98.27        0.401                0.149
RP+Col+Ex+MS  79.26      94.19        98.07        0.422                0.152

Table 6. Additional measures of depth prediction accuracy on NYUDv2 for the naïve method of combining different sources of supervision, extending table 2.
and matching. In ECCV, 2016.
[45] C. Zhang, S. Bengio, M. Hardt, B. Recht, and O. Vinyals. Understanding deep learning requires rethinking generaliza- tion. arXiv preprint arXiv:1611.03530, 2016.
[46] R. Zhang, P. Isola, and A. A. Efros. Colorful image coloriza- tion. In ECCV, 2016.
[47] W. Y. Zou, A. Y. Ng, S. Zhu, and K. Yu. Deep learning of invariant features via simulated ï¬xations in video. In NIPS, 2012.
Evaluation  Pct.<1.25  Pct.<1.25^2  Pct.<1.25^3  Mean Absolute Error  Mean Relative Error
RP          80.55      94.65        98.26        0.399                0.146
RP / H      80.39      94.67        98.31        0.400                0.147
RP+Col      79.88      94.45        98.15        0.411                0.148
RP+Col / H  79.69      94.28        98.09        0.411                0.152

Table 7. Additional measures of depth prediction accuracy on NYUDv2 for the harmonization experiments, extending table 3.
Evaluation            Pct.<1.25  Pct.<1.25^2  Pct.<1.25^3  Mean Absolute Error  Mean Relative Error
No Lasso              79.26      94.19        98.07        0.422                0.152
Eval Only Lasso       79.41      94.18        98.07        0.418                0.152
Pre-train Only Lasso  78.96      94.05        97.83        0.423                0.153
Lasso                 79.45      94.49        98.26        0.411                0.151
Table 8. Additional measures of depth prediction accuracy on NYUDv2 for the lasso experiments, extending table 4.
1708.06734 | Representation Learning by Learning to Count | We introduce a novel method for representation learning that uses an
artificial supervision signal based on counting visual primitives. This
supervision signal is obtained from an equivariance relation, which does not
require any manual annotation. We relate transformations of images to
transformations of the representations. More specifically, we look for the
representation that satisfies such relation rather than the transformations
that match a given representation. In this paper, we use two image
transformations in the context of counting: scaling and tiling. The first
transformation exploits the fact that the number of visual primitives should be
invariant to scale. The second transformation allows us to equate the total
number of visual primitives in each tile to that in the whole image. These two
transformations are combined in one constraint and used to train a neural
network with a contrastive loss. The proposed task produces representations
that perform on par or exceed the state of the art in transfer learning
benchmarks. | http://arxiv.org/pdf/1708.06734 | Mehdi Noroozi, Hamed Pirsiavash, Paolo Favaro | cs.CV | ICCV 2017(oral) | null | cs.CV | 20170822 | 20170822 |
# Representation Learning by Learning to Count
Mehdi Noroozi1 Hamed Pirsiavash2 Paolo Favaro1 University of Bern1 University of Maryland, Baltimore County2 {noroozi,favaro}@inf.unibe.ch {hpirsiav@umbc.edu}
# Abstract
We introduce a novel method for representation learning that uses an artiï¬cial supervision signal based on count- ing visual primitives. This supervision signal is obtained from an equivariance relation, which does not require any manual annotation. We relate transformations of images to transformations of the representations. More speciï¬cally, we look for the representation that satisï¬es such relation rather than the transformations that match a given repre- sentation. In this paper, we use two image transformations in the context of counting: scaling and tiling. The ï¬rst transformation exploits the fact that the number of visual primitives should be invariant to scale. The second trans- formation allows us to equate the total number of visual primitives in each tile to that in the whole image. These two transformations are combined in one constraint and used to train a neural network with a contrastive loss. The pro- posed task produces representations that perform on par or exceed the state of the art in transfer learning benchmarks.
Figure 1: The number of visual primitives in the whole im- age should match the sum of the number of visual primitives in each tile (dashed red boxes).
# 1. Introduction
supervised learning tools can be used. A rationale behind self-supervised learning is that pretext tasks that relate the most to the ï¬nal problems (e.g., classiï¬cation and detection) will be more likely to build relevant representations.
We are interested in learning representations (features) that are discriminative for semantic image understanding tasks such as classiï¬cation, detection, and segmentation. A common approach to obtain such features is to use super- vised learning. However, this requires manual annotation of images, which is costly, time-consuming, and prone to errors. In contrast, unsupervised or self-supervised feature learning methods exploiting unlabeled data can be much more scalable and ï¬exible.
Some recent feature learning methods, in the so-called self-supervised learning paradigm, have managed to avoid annotation by deï¬ning a task which provides a supervision signal. For example, some methods recover color from gray scale images and vice versa [43, 21, 44, 22], recover a whole patch from the surrounding pixels [33], or recover the rela- tive location of patches [9, 29]. These methods use informa- tion already present in the data as supervision signal so that
As a novel candidate pretext task, we propose counting visual primitives. It requires discriminative features, which can be useful to classiï¬cation, and it can be formulated via detection. To obtain a supervision signal useful to learn to count, we exploit the following property: If we partition an image into non-overlapping regions, the number of visual primitives in each region should sum up to the number of primitives in the original image (see the example in Fig. 1). We make the hypothesis that the model needs to disentangle the image into high-level factors of variation, such that the complex relation between the original image and its regions is translated to a simple arithmetic operation [3, 35]. Our experimental results validate this hypothesis both qualita- tively and quantitatively.
While in this work we focus on a speciï¬c combination of transformations, one can consider more general relation- ships (i.e., beyond counting, scaling, and tiling) as super-
vision signals. The same procedure that we introduce is therefore applicable to a broader range of tasks as long as it is possible to express the transformation in feature space caused by a transformation in image space [24].
Our contributions are: 1) We introduce a novel method to learn representations from data without manual annota- tion; 2) We propose exploiting counting as a pretext task and demonstrate its relation to counting visual primitives; 3) We show that the proposed methodology learns representations that perform on par or exceed the state of the art in standard transfer learning benchmarks.
# 2. Prior Work
In this work we propose to learn a representation without relying on annotation, a problem that is typically addressed via unsupervised learning. An example of this approach is the autoencoder, which reconstructs data by mapping it to a low-dimensional feature vector. A recent alternative approach is self-supervised learning, which is a technique that substitutes the labels for a task with artificial or surrogate ones. In our work such artificial labels are provided by a counting constraint. In many instances, this technique can be seen as recasting the unsupervised learning problem of finding p(x) = p(x1, x2), where x^T = [x1^T x2^T] is a random variable, as a partly supervised one of finding p(x2|x1), so that we can write p(x1, x2) = p(x2|x1)p(x1) (cf. eq. (5.1) in [12]). In our context, the data sample x collects all available information, which can be just an image, but might also include egomotion measurements, sound, and so on. In the literature, self-supervised methods do not recover a model for the probability function p(x), since p(x2|x1) is sufficient to obtain a representation of x. Most methods are then organized based on the choice of x1 and x2, where x2 defines the surrogate labels. Below we briefly summarize methods based on their choice for x2, which leads to a regression or classification problem. Regression. In recent work, Pathak et al. [33] use as surrogate label x2 a region of pixels in an image (e.g., the central patch) and use the remaining pixels in the image as x1. The model used for p(x2|x1) is based on generative adversarial networks. Other related work [43, 21] maps images to the Lab (luminance and opponent colors) space, and then uses the opponent colors as labels x2 and the luminance as x1. Zhang et al. [44] combine this choice with the opposite task of predicting the grayscale image from the opponent colors and outperform prior work. Classification. Doersch et al. [9] and Noroozi & Favaro [29] define a categorical problem where the surrogate labels are the relative positions of patches. Other recent works use as surrogate labels ego-motion [1, 17], temporal ordering in video, sound, and physical interaction [34].
In contrast to these works, here we introduce a different formulation to arrive at a supervision signal. We deï¬ne the
counting relationship âhaving the same number of visual primitivesâ between two images. We use the fact that this relationship is satisï¬ed by two identical images undergoing certain transformations, but not by two different images (al- though they might, with very low probability). Thus, we are able to assign a binary label (same or different number of visual primitives) to pairs of images. We are not aware of any other self-supervised method that uses this method to obtain the surrogate labels. In contrast, Wang and Gupta [41] impose relationships between triplets of different im- ages obtained through tracking. Notice that also Reed et al. [36] exploit an explicit relationship between features. How- ever, they rely on labeling that would reveal the relationship Instead, we only exploit between different input images. the structure of images and relate different parts of the same image to each other. Due to the above counting relationship our work relates also to object counting, which we revise here below. Object counting. In comparison to other semantic tasks, counting has received little attention in the computer vision community. Most effort has been devoted to counting just one category and only recently it was applied to multiple categories in a scene. Counting is usually addressed as a supervised task, where a model is trained on annotated im- ages. The counting prediction can be provided as an object density map [15, 39, 23] or simply as the number of counted objects [5, 6]. There are methods to count humans in crowds [4, 42, 8], cars [28], and penguins [2]. Some recent works count common objects in the scene without relying on ob- ject localization [6, 37].
In this work, we are not interested in the task of count- ing per se. As mentioned earlier on, counting is used as a pretext task to learn a representation. Moreover, we do not use labels about the number of objects during training.
# 3. Transforming Images to Transform Features
One way to characterize a feature of interest is to de- scribe how it should vary as a function of changes in the input data. For example, a feature that counts visual prim- itives should not be affected by scale, 2D translation, and 2D rotation changes of the input image. Other relationships might indicate instead that a feature should increase its val- ues as a result of some transformation of the input. For example, the magnitude of the feature for counting visual primitives applied to half of an image should be smaller than when applied to the whole image. In general, we propose to learn a deep representation by using the known relationship between input and output transformations as a supervisory signal. To formalize these concepts, we ï¬rst need to intro- duce some notation.
Let us denote a color image with x ∈ R^{m×n×3}, where m × n is the size in pixels and there are 3 color channels (RGB). We define a family of image transformations
G ≜ {G1, . . . , GJ}, where Gj : R^{m×n×3} → R^{p×q×3}, with j = 1, . . . , J, that take images x and map them to images of p × q pixels. Let us also define a feature φ : R^{p×q×3} → R^k mapping the transformed image to some k-dimensional vector. Finally, we define a feature transformation g : R^k × · · · × R^k → R^k that takes J features and maps them to another feature. Given the image transformation family G and g, we learn the feature φ by using the following relationship as an artificial supervisory signal
g(φ(G1 ∘ x), . . . , φ(GJ ∘ x)) = 0    ∀x.        (1)

In this work, the transformation family consists of the downsampling operator D, with a downsampling factor of 2, and the tiling operator Tj, where j = 1, . . . , 4, which extracts the j-th tile from a 2 × 2 grid of tiles. Notice that these two transformations produce images of the same size. Thus, we can set G ≜ {D, T1, . . . , T4}. We also define our desired relation between counting features on the transformed images as g(d, t1, . . . , t4) = d − Σ_{j=1}^{4} tj. This can be written explicitly as
φ(D ∘ x) = Σ_{j=1}^{4} φ(Tj ∘ x).        (2)

We use eq. (2) as our main building block to learn features φ that can count visual primitives.
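A toy NumPy illustration of the two transformations and of the residual g(d, t1, . . . , t4) is given below; the stand-in feature phi is only a placeholder, and nothing here is taken from the authors' implementation:

```python
# The downsampling operator D and the four tiling operators T_j produce images
# of the same size; the counting residual is phi(D o x) - sum_j phi(T_j o x).
import numpy as np

def downsample(x):                 # D: (2h, 2w, 3) -> (h, w, 3), factor 2
    return x[::2, ::2, :]

def tiles(x):                      # T_1..T_4: the 2x2 grid of tiles
    h, w = x.shape[0] // 2, x.shape[1] // 2
    return [x[:h, :w], x[:h, w:], x[h:, :w], x[h:, w:]]

def counting_residual(phi, x):
    # g(d, t1..t4): zero when phi behaves like a scale-invariant counter.
    return phi(downsample(x)) - sum(phi(t) for t in tiles(x))

# Usage with a dummy stand-in feature (not a real counter); training with the
# losses in eqs. (3)-(4) is what drives this residual towards zero for a learned phi.
phi = lambda img: np.array([img.mean()])
x = np.random.rand(128, 128, 3)
print(counting_residual(phi, x))
```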
This relationship has a bearing also on equivariance [24]. Equivariance, however, is typically deï¬ned as the property of a given feature. In this work we invert this logic by ï¬xing the transformations and by ï¬nding instead a representation satisfying those transformations. Moreover, equivariance has restrictions on the type of transformations applied to the inputs and the features.
Notice that we have no simple way to control the scale at which our counting features work. It could count ob- ject parts, whole objects, object groups, or any combination thereof. This choice might depend on the number of ele- ments of the counting vector Ï, on the loss function used for training, and on the type of data used for training.
# 4. Learning to Count
We use convolutional neural networks to obtain our rep- resentation. In principle, our network could be trained with color images x from a large database (e.g., ImageNet [38] or COCO [25]) using an l2 loss based on eq. (2), for exam- ple,
ℓ(x) = |φ(D ∘ x) − Σ_{j=1}^{4} φ(Tj ∘ x)|².        (3)
However, this loss has φ(z) = 0, ∀z, as its trivial solution. To avoid such a scenario, we use a contrastive loss [7], where we also enforce that the counting feature should be
Figure 2: Training AlexNet to learn to count. The proposed architecture uses a siamese arrangement so that we simultaneously produce features for 4 tiles and a downsampled image. We also compute the feature from a randomly chosen downsampled image (D ∘ y) as a contrastive term.
different between two randomly chosen different images. Therefore, for any x ≠ y we minimize

ℓ_con(x, y) = |φ(D ∘ x) − Σ_{j=1}^{4} φ(Tj ∘ x)|² + max{0, M − |φ(D ∘ y) − Σ_{j=1}^{4} φ(Tj ∘ x)|²}        (4)

where in our experiments the constant scalar M = 10.
Least effort bias. A bias of the system is that it can easily satisfy the constraint (3) by learning to count as few visual primitives as possible. Thus, many entries of the feature mapping may collapse to zero. This effect is observed in the final trained network. In Fig. 3, we show the average of features computed over the ImageNet validation set. There are only 30 and 44 non-zero entries out of 1000 after training on ImageNet and on COCO respectively. Despite the sparsity of the features, our transfer learning experiments show that the features in the hidden layers (conv1-conv5) perform very well on several benchmarks. In our formulation (4), the contrastive term limits the effects of the least effort bias. Indeed, features that count very few visual primitives cannot differentiate much the content across different images. Therefore, the contrastive term will introduce a tradeoff that
Figure 3: Average response of our trained network on the ImageNet validation set. Despite its sparsity (30 non zero entries), the hidden representation in the trained net- work performs well when transferred to the classiï¬cation, detection and segmentation tasks.
will push features towards counting as many primitives as is needed to differentiate images from each other.
Network architecture. In principle, the choice of the architecture is arbitrary. For ease of comparison with state-of-the-art methods when transferring to classification and detection tasks, we adopt the AlexNet architecture [20] as commonly done in other self-supervised learning methods. We use the first 5 convolutional layers from AlexNet followed by three fully connected layers ((3 × 3 × 256) × 4096, 4096 × 4096, and 4096 × 1000), and ReLU units. Note that 1000 is the number of elements that we want to count. We use ReLU in the end since we want the counting vector to be all positive. Our input is 114 × 114 pixels to handle smaller tiles. Because all the features are the same, training with the loss function in eq. (4) is equivalent to training a 6-way siamese network, as shown in Fig. 2.
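For concreteness, a minimal PyTorch sketch of the contrastive counting loss in eq. (4) is given below; the 228 × 228 input size, the bilinear downsampling, and the feature network phi are assumptions used only for illustration, while the margin M = 10 follows the text:

```python
# Contrastive counting loss: the counting constraint on x plus a margin term
# that keeps the count of a different image y away from the tile counts of x.
import torch
import torch.nn.functional as F

def counting_contrastive_loss(phi, x, y, M=10.0):
    """x, y: two different image batches, assumed shape [B, 3, 228, 228]."""
    def down(img):      # D: downsample by a factor of 2 -> [B, 3, 114, 114]
        return F.interpolate(img, scale_factor=0.5, mode="bilinear",
                             align_corners=False)

    def tile_sum(img):  # sum of phi over the four 114x114 tiles T_1..T_4
        h, w = img.shape[2] // 2, img.shape[3] // 2
        parts = [img[:, :, :h, :w], img[:, :, :h, w:],
                 img[:, :, h:, :w], img[:, :, h:, w:]]
        return sum(phi(t) for t in parts)

    t_x = tile_sum(x)
    pos = (phi(down(x)) - t_x).pow(2).sum(dim=1)   # counting term, eq. (3)
    neg = (phi(down(y)) - t_x).pow(2).sum(dim=1)   # contrastive term
    return (pos + torch.clamp(M - neg, min=0)).mean()
```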
# 5. Experiments
We ï¬rst present the evaluations of our learned represen- tation in the standard transfer learning benchmarks. Then, we perform ablation studies on our proposed method to show quantitatively the impact of our techniques to prevent poor representations. Finally, we analyze the learned repre- sentation through some quantitative and qualitative experi- ments to get a better insight into what has been learned. We call the activation of the last layer of our network, on which the loss (4) is deï¬ned, the counting vector. We evaluate whether each unit in the counting vector is counting some visual primitive or not. Our model is based on AlexNet [20] in all experiments. In our tables we use boldface for the top performer and underline the second top performer. Implementation Details. We use caffe [18] with the de- fault weight regularization settings to train our network. The learning rate is set to be quite low to avoid divergence. We begin with a learning rate of 10â4 and drop it by a fac- tor of 0.9 every 10K iterations. An important step is to nor- malize the input by subtracting the mean intensity value and dividing the zero-mean images by their standard deviation.
Method                    Ref    Class.  Det.   Segm.
Supervised [20]           [43]   79.9    56.8   48.0
Random                    [33]   53.3    43.4   19.8
Context [9]               [19]   55.3    46.6   -
Context [9]*              [19]   65.3    51.1   -
Jigsaw [30]               [30]   67.6    53.2   37.6
ego-motion [1]            [1]    52.9    41.8   -
ego-motion [1]*           [1]    54.2    43.9   -
Adversarial [10]*         [10]   58.6    46.2   34.9
ContextEncoder [33]       [33]   56.5    44.5   29.7
Sound [31]                [44]   54.4    44.0   -
Sound [31]*               [44]   61.3    -      -
Video [41]                [19]   62.8    47.4   -
Video [41]*               [19]   63.1    47.2   -
Colorization [43]*        [43]   65.9    46.9   35.6
Split-Brain [44]*         [44]   67.1    46.7   36.0
ColorProxy [22]           [22]   65.9    -      38.0
WatchingObjectsMove [32]  [32]   61.0    52.2   -
Counting                         67.7    51.4   36.6

Table 1: Evaluation of transfer learning on PASCAL. Classification and detection are evaluated on PASCAL VOC 2007 in the frameworks introduced in [19] and [11] respectively. Both tasks are evaluated using mean average precision (mAP) as a performance measure. Segmentation is evaluated on PASCAL VOC 2012 in the framework of [26], which reports mean intersection over union (mIoU). (*) denotes the use of the data initialization method [19].
# 5.1. Transfer Learning Evaluation
We evaluate our learned representation on the detec- tion, classiï¬cation, and segmentation tasks on the PASCAL dataset as well as the classiï¬cation task on the ImageNet dataset. We train our counting network on the 1.3M im- ages from the training set of ImageNet. We use images of 114 pixels as input. Since we transfer only the convo- 114 lutional layers, it has no effect on the transferred models and evaluation. A new version of [29] has been released [30], where the standard AlexNet is used for transfer learning. All the numbers in our comparisons are from that version.
# 5.1.1 Fine-tuning on PASCAL
In this set of experiments, we ï¬ne-tune our network on the PASCAL VOC 2007 and VOC 2012 datasets, which are standard benchmarks for representation learning. Fine- tuning is based on established frameworks for object clas- siï¬cation [19], detection [11] and segmentation [26] tasks. The classiï¬cation task is a multi-class classiï¬cation prob- lem, which predicts the presence or absence of 20 object classes. The detection task involves locating objects by specifying a bounding box around them. Segmentation as- signs the label of an object class to each pixel in the im- age. As shown in Table 1, we either outperform previous methods or achieve the second best performance. Notice
Method               conv1  conv2  conv3  conv4  conv5
Supervised [20]      19.3   36.3   44.2   48.3   50.5
Random               11.6   17.1   16.9   16.3   14.1
Context [9]          16.2   23.3   30.2   31.7   29.6
Jigsaw [30]          18.2   28.8   34.0   33.9   27.1
ContextEncoder [33]  14.1   20.7   21.0   19.8   15.5
Adversarial [10]     17.7   24.5   31.0   29.9   28.0
Colorization [43]    12.5   24.5   30.4   31.5   30.3
Split-Brain [44]     17.7   29.3   35.4   35.2   32.8
Counting             18.0   30.6   34.3   32.5   25.7
Table 2: ImageNet classiï¬cation with a linear classiï¬er. We use the publicly available code and conï¬guration of [43]. Every column shows the top-1 accuracy of AlexNet on the classiï¬cation task. The learned weights from conv1 up to the displayed layer are frozen. The features of each layer are spatially resized until there are fewer than 9K di- mensions left. A fully connected layer followed by softmax is trained on a 1000-way object classiï¬cation task.
Method                conv1  conv2  conv3  conv4  conv5
Places labels [45]    22.1   35.1   40.2   43.3   44.6
ImageNet labels [20]  22.7   34.8   38.4   39.4   38.7
Random                15.7   20.3   19.8   19.1   17.5
Context [9]           19.7   26.7   31.9   32.7   30.9
Jigsaw [30]           23.0   31.9   35.0   34.2   29.3
Context encoder [33]  18.2   23.2   23.4   21.9   18.4
Sound [31]            19.9   29.3   32.1   28.8   29.8
Adversarial [10]      22.0   28.7   31.8   31.3   29.7
Colorization [43]     16.0   25.7   29.6   30.3   29.7
Split-Brain [44]      21.3   30.7   34.0   34.1   32.5
Counting              23.3   33.9   36.3   34.7   29.6
Table 3: Places classification with a linear classifier. We use the same setting as in Table 2 except that, to evaluate generalization across datasets, the model is pretrained on ImageNet (with no labels) and then tested with frozen layers on Places (with labels). The last layer has 205 neurons for scene categories.
that while classification and detection are evaluated on VOC 2007, segmentation is evaluated on VOC 2012. Unfortunately, we did not get any performance boost when using the method of Krähenbühl et al. [19].
# 5.1.2 Linear Classiï¬cation on Places and ImageNet
As introduced by Zhang et al. [43], we train a linear clas- siï¬er on top of the frozen layers on ImageNet [38] and Places [45] datasets. The results of these experiments are shown in Tables 2 and 3. Our method achieves a perfor- mance comparable to the other state-of-the-art methods on the ImageNet dataset and shows a signiï¬cant improvement on the Places dataset. Training and testing a method on the same dataset type, although with separate sets and no labels,
| Interpolation method | Training size | Color space | Counting dimension | Detection performance |
|---|---|---|---|---|
| Mixed | 1.3M | RGB/Gray | 20 | 50.9 |
| Mixed | 128K | Gray | 1000 | 44.9 |
| Mixed | 512K | Gray | 1000 | 49.1 |
| Mixed | 1.3M | RGB | 1000 | 48.2 |
| Mixed | 1.3M | Gray | 1000 | 50.4 |
| Linear | 1.3M | RGB/Gray | 1000 | 48.4 |
| Cubic | 1.3M | RGB/Gray | 1000 | 48.9 |
| Area | 1.3M | RGB/Gray | 1000 | 49.2 |
| Lanczos | 1.3M | RGB/Gray | 1000 | 47.3 |
| Mixed | 1.3M | RGB/Gray | 1000 | 51.4 |
Table 4: Ablation studies. We train the counting task un- der different interpolation methods, training size/color, and feature dimensions, and compare the performance of the learned representations on the detection task on the PAS- CAL VOC 2007 dataset.
may be affected by dataset bias. To have a better assessment of the generalization properties of all the competing methods, we suggest (as shown in Table 3) using the ImageNet dataset for training and the Places benchmark for testing (or vice versa). Our method achieves state-of-the-art results with the conv1-conv4 layers on the Places dataset. Interestingly, the performance of our conv1 layer is even higher than the one obtained with supervised learning when trained either on ImageNet or Places labels. The values for all the other methods in Tables 2 and 3 are taken from [44] except for [30], which we report for the first time.
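The linear-probe protocol used in Tables 2 and 3 can be summarized with a short sketch. The snippet below is our own minimal illustration rather than the released evaluation code; `backbone` (a pretrained feature extractor, frozen and truncated at the layer being probed), `loader`, and `feat_dim` are assumed names, and the pooling size is only meant to keep the flattened feature within the dimension budget described in the caption of Table 2.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def train_linear_probe(backbone, loader, feat_dim, num_classes=1000, epochs=10):
    # Freeze the transferred layers; only the linear classifier is trained.
    backbone.eval()
    for p in backbone.parameters():
        p.requires_grad = False

    classifier = nn.Linear(feat_dim, num_classes)
    opt = torch.optim.SGD(classifier.parameters(), lr=0.01, momentum=0.9)

    for _ in range(epochs):
        for images, labels in loader:
            with torch.no_grad():
                feats = backbone(images)                       # frozen conv activations
                feats = F.adaptive_avg_pool2d(feats, 6)        # spatial resize of the features
                feats = feats.flatten(start_dim=1)             # (batch, feat_dim)
            loss = F.cross_entropy(classifier(feats), labels)  # softmax classifier objective
            opt.zero_grad()
            loss.backward()
            opt.step()
    return classifier
```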
# 5.2. Ablation Studies
To show the effectiveness of our proposed method, in Ta- ble 4 we compare its performance on the detection task on PASCAL VOC 2007 under different training scenarios. The ï¬rst three rows illustrate some simple comparisons based on feature and dataset size. The ï¬rst row shows the im- pact of the counting vector length. As discussed earlier on, the network tends to generate sparse counting features. We train the network on ImageNet with only 20 elements in the counting vector. This leads to a small drop in the perfor- mance, thus showing little sensitivity with respect to feature length. We also train the network with a smaller set of train- ing images. The results show that our method is sensitive to the size of the training set. This shows that the counting task is non-trivial and requires a large training dataset.
The remaining rows in Table 4 illustrate a more advanced analysis of the counting task. An important part of the de- sign of the learning procedure is the identiï¬cation of trivial solutions, i.e., solutions that would not result in a useful representation and that the neural network could converge to. By identifying such trivial learning scenarios, we can
| train \ test | Linear | Cubic | Area | Lanczos | Mixed | std |
|---|---|---|---|---|---|---|
| Linear | 0.48 | 0.33 | 0.63 | 0.33 | 0.65 | 0.18 |
| Cubic | 0.52 | 0.79 | 0.25 | 0.78 | 0.22 | 0.32 |
| Area | 0.50 | 0.32 | 0.85 | 0.31 | 0.95 | 0.34 |
| Lanczos | 0.58 | 1.023 | 0.31 | 1.02 | 0.19 | 0.45 |
| Mixed | 0.34 | 0.36 | 0.29 | 0.37 | 0.30 | 0.04 |
Table 5: Learning the downsampling style. The ï¬rst col- umn and row show the downsampling methods used dur- ing the training and testing time respectively. The values in the ï¬rst block show the pairwise error metric in eq. (6) be- tween corresponding downsampling methods. The last col- umn shows the standard deviation of the error metric across different downsampling methods at test time.
provide suitable countermeasures. We now discuss possible shortcuts that the network could use to solve the counting task and also the techniques that we use to avoid them.
A ï¬rst potential problem is that the neural network learns trivial features such as low-level texture statistics his- tograms. For example, a special case is color histograms. This representation is undesirable because it would be se- mantically agnostic (or very weak) and therefore we would not expect it to transfer well to classiï¬cation and detection. In general, these histograms would not satisfy eq. (2). How- ever, if the neural network could tell tiles apart from down- sampled images, then it could apply a customized scaling factor to the histograms in the two cases and satisfy eq. (2). In other words, the network might learn the following de- generate feature
\phi(z) = \begin{cases} \tfrac{1}{4}\,\mathrm{hist}(z) & \text{if } z \text{ is a tile} \\ \mathrm{hist}(z) & \text{if } z \text{ is downsampled} \end{cases} \qquad (5)
Notice that this feature would satisfy the first term in eq. (2). The second (contrastive) term would also be easily satisfied, since different images typically have different low-level texture histograms. We discuss below scenarios where this might happen and present our solutions towards reducing the likelihood of trivial learning.

The network recognizes the downsampling style. During training, we randomly crop a 224 x 224 region from a 256 x 256 image. Next, we downsample the whole image by a factor of 2. The downsampling style, e.g., bilinear, bicubic, and Lanczos, may leave artifacts in images that the network may learn to recognize. To make the identification of the downsampling method difficult, at each stochastic gradient descent iteration, we randomly pick either the bicubic, bilinear, Lanczos, or the area method as defined in OpenCV [16]. As shown in Table 4, the randomization of different downsampling methods significantly improves the detection performance by at least 2.2%.
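As a rough illustration of this countermeasure (our own sketch, not the authors' code), one can draw the OpenCV interpolation flag at random each time an image is downsampled; the function name and the target size below are placeholders.

```python
import random
import cv2

# Randomize the downsampling artifacts: one of the four OpenCV interpolation
# methods is chosen independently at every iteration.
INTERPOLATIONS = [cv2.INTER_LINEAR, cv2.INTER_CUBIC,
                  cv2.INTER_AREA, cv2.INTER_LANCZOS4]

def downsample_random_style(image, size=(114, 114)):
    flag = random.choice(INTERPOLATIONS)
    return cv2.resize(image, size, interpolation=flag)
```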
In Table 5, we perform another experiment that clearly shows that network learns the downsampling style. We
train our network by using only one downsampling method. Then, we test the network on the pretext task by using only one (possibly different) method. If the network has learned to detect the downsampling method, then it will perform poorly at test time when using a different one. As an error metric, we use the ï¬rst term in the loss function normalized by the average of the norm of the feature vector. More pre- cisely, the error when the network is trained with the i-th downsampling style and tested on the j-th one is
e_{ij} = \frac{\sum_x \left| \sum_p \phi^i(T_p \circ x) - \phi^i(D^j \circ x) \right|^2}{\sum_x \left| \phi^i(D^j \circ x) \right|^2} \qquad (6)
where φ^i denotes the counting vector of the network trained with the i-th downsampling method, D^j denotes the downsampling transformation using the j-th method, and T_p is the tiling transformation that gives the p-th tile.
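A small numpy sketch of this error metric, using hypothetical helpers `phi_i` (the counting network trained with the i-th style), `downsample_j` (the j-th downsampling method), and `tiles` (which returns the four tiles of an image):

```python
import numpy as np

def pairwise_error(phi_i, downsample_j, tiles, images):
    # e_ij from eq. (6): counting-constraint violation, normalized by the
    # squared norm of the counting vector on downsampled images.
    num, den = 0.0, 0.0
    for x in images:
        tile_sum = sum(phi_i(t) for t in tiles(x))   # sum_p phi^i(T_p o x)
        whole = phi_i(downsample_j(x))               # phi^i(D^j o x)
        num += np.sum((tile_sum - whole) ** 2)
        den += np.sum(whole ** 2)
    return num / den
```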
Table 5 collects all the computed errors. The element in row i and column j shows the pairwise error metric e_ij. The last column shows the standard deviation of this error metric across different downsampling methods. A higher value means that the network is sensitive to the downsampling method. This experiment clearly shows that the network learns the downsampling style. Another observation that can be made based on the similarity of the errors is that the pairs (linear, area) and (cubic, lanczos) leave similar artifacts in downsampling.

The network recognizes chromatic aberration. The presence of chromatic aberration and its undesirable effects on learning have been pointed out by Doersch et al. [9]. Chromatic aberration is a relative shift between the color channels that increases in the outward radial direction. Hence, our network could use this property to tell tiles apart from the downsampled images. In fact, tiles will have a strongly diagonal chromatic aberration, while the downsampled image will have a radial aberration. We already reduce its effect by choosing the central region in the very first cropping preprocessing step. To further reduce its effect, we train the network with both color and grayscale images (obtained by replicating the average color across all 3 channels). In training, we randomly choose color images 33% of the time and grayscale images 67% of the time. This choice is consistent across all the terms in the loss function (i.e., all tiles and downsampled images are either colored or grayscale). While this choice does not completely solve the issue, it does improve the performance of the model. We find that completely eliminating the color from images leads to a loss in performance in transfer learning (see Table 4).
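A sketch of the grayscale countermeasure described above (our own illustration; the probability split and the HxWx3 array layout are assumptions):

```python
import random
import numpy as np

def maybe_grayscale(image, p_color=1.0 / 3.0):
    # With probability 2/3, replace the image by its grayscale version, obtained by
    # replicating the per-pixel channel average across all 3 channels. The same
    # choice must be applied to all tiles and to the downsampled image of an example.
    if random.random() < p_color:
        return image
    mean = image.mean(axis=2, keepdims=True)
    return np.repeat(mean, 3, axis=2)
```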
# 5.3. Analysis
We use visualization and nearest neighbor search to see what visual primitives our trained network counts. Ideally, these visual primitives should capture high-level concepts
Figure 4: Examples of activating/ignored images. (a) and (b) show the top 16 images with the highest and lowest counting feature magnitude from the validation set of ImageNet. (c) and (d) show the top 16 images with the highest and lowest counting feature magnitude from the test set of COCO.
Figure 5: Image croppings of increasing size. The number of visual primitives should increase going from left to right.
Figure 6: Counting evaluation on ImageNet. On the ab- scissa we report the scale of the cropped region and on the ordinate the corresponding average and standard deviation of the counting vector magnitude.
like objects or object parts rather than low-level concepts like edges and corners. In fact, detecting simple corners will not go a long way in semantic scene understanding. To avoid dataset bias, we train our model on ImageNet (with no labels) and show the results on the COCO dataset.
# 5.3.1 Quantitative Analysis
We illustrate quantitatively the relation between the mag- nitude of the counting vector and the number of objects. Rather than counting exactly the number of speciï¬c ob- jects, we introduce a simple method to rank images based on how many objects they contain. The method is based on cropping an image with larger and larger regions which are then rescaled to the same size through downsampling (see Fig. 5). We build two sets of 100 images each. We assign
images yielding the highest and lowest feature magnitude into two different sets. We randomly crop 10 regions with an area between 50% and 95% of each image and compute the corresponding counting vector. The mean and the standard deviation of the counting vector magnitude of the cropped images for each set is shown in Fig. 6. We observe that our feature does not count low-level texture, and is instead more sensitive to composite images. A better understanding of this observation needs further investigation.
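The ranking experiment can be sketched as follows (an illustration under our own assumptions; `counting_feature` stands for a forward pass through the trained counting network):

```python
import numpy as np

def magnitude_stats(images, counting_feature, crops_per_image=10,
                    min_area=0.50, max_area=0.95):
    # Mean/std of the counting-vector magnitude over random crops whose area
    # is between 50% and 95% of the image area.
    mags = []
    for img in images:
        h, w = img.shape[:2]
        for _ in range(crops_per_image):
            side = np.sqrt(np.random.uniform(min_area, max_area))
            ch, cw = int(h * side), int(w * side)
            top = np.random.randint(0, h - ch + 1)
            left = np.random.randint(0, w - cw + 1)
            crop = img[top:top + ch, left:left + cw]
            mags.append(np.linalg.norm(counting_feature(crop)))
    return float(np.mean(mags)), float(np.std(mags))
```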
# 5.3.2 Qualitative Analysis
Activating/Ignored images. In Fig. 4, we show blocks of 16 images ranked based on the magnitude of the counting vector. We observe that images with the lowest feature norms are textures without any high-level visual primitives. In contrast, images with the highest feature response mostly contain multiple object instances or a large object. For this experiment we use the validation or the test set of the dataset that the network has been trained on, so the network has not seen these images during training.

Nearest neighbor search. To qualitatively evaluate our learned representation, for some validation images, we visualize their nearest neighbors in the training set in Fig. 7. Given a query image, the retrieval is obtained as a ranking of the Euclidean distance between the counting vector of the query image and the counting vector of images in the dataset. Smaller values indicate higher affinity. Fig. 7 shows that the retrieved results share a similar scene outline and are semantically related to the query images. Note that we perform retrieval in the counting space, which is the last layer of our network. This is different from the analogous experiment in [19], which performs the retrieval in the intermediate layers. This result can be seen as evidence that our initial hypothesis, that the counting vectors capture high-level visual primitives, was true.

Neuron activations. To visualize what each single counting neuron (i.e., feature element) has learned, we rank
Figure 7: Nearest neighbor retrievals. Left: COCO retrievals. Right: ImageNet retrievals. In both datasets, the leftmost column (with a red border) shows the queries and the other columns show the top matching images sorted with increasing Euclidean distance in our counting feature space from left to right. On the bottom 3 rows, we show the failure retrieval cases. Note that the matches share a similar content and scene outline.
Figure 8: Blocks of the 8 most activating images for 4 neurons of our network trained on ImageNet (top row) and COCO (bottom row). The counting neurons are sensitive to semantically similar images. Interestingly, dominant concepts in each dataset, e.g., dogs in ImageNet and persons playing baseball in COCO, emerge in our counting vector.
images not seen during training based on the magnitude of their neuron responses. We do this experiment on the validation set of ImageNet and the test set of COCO. In Fig. 8, we show the top 8 most activating images for 4 neurons out of 30 active ones on ImageNet and out of 44 active ones on COCO. We observe that these neurons seem to cluster images that share the same scene layout and general content.
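The nearest-neighbor retrieval described above reduces to a ranking by Euclidean distance in the counting-feature space; a minimal sketch (assuming the counting vectors have already been extracted into numpy arrays):

```python
import numpy as np

def retrieve_nearest(query_feat, database_feats, k=10):
    # Rank database images by Euclidean distance to the query in the counting
    # space (the last layer of the network); smaller distance = better match.
    dists = np.linalg.norm(database_feats - query_feat[None, :], axis=1)
    return np.argsort(dists)[:k]
```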
# 6. Conclusions

We have presented a novel representation learning method that does not rely on annotated data. We used counting as a pretext task, which we formalized as a constraint that relates the "counted" visual primitives in tiles of an image to those counted in its downsampled version. This constraint was used to train a neural network with a contrastive loss. Our experiments show that the learned features count non-trivial semantic content, qualitatively cluster images with similar scene outline, and outperform previous state-of-the-art methods on transfer learning benchmarks. Our framework can be further extended to other tasks and transformations in addition to being combined with partially labeled data in a semi-supervised learning method.
Acknowledgements. We thank Attila Szabó for insightful discussions about unsupervised learning and relations based on equivariance. Paolo Favaro acknowledges support from the Swiss National Science Foundation on project 200021 149227. Hamed Pirsiavash acknowledges support from GE Global Research.
# References
[1] P. Agrawal, J. Carreira, and J. Malik. Learning to see by moving. In ICCV, 2015.
[2] C. Arteta, V. Lempitsky, and A. Zisserman. Counting in the wild. In ECCV, 2016.
[3] Y. Bengio, G. Mesnil, Y. Dauphin, and S. Rifai. Better mix- ing via deep representations. In ICML, 2013.
[4] A. B. Chan, Z.-S. J. Liang, and N. Vasconcelos. Privacy pre- serving crowd monitoring: Counting people without people models or tracking. In CVPR, 2008.
[5] A. B. Chan and N. Vasconcelos. Bayesian poisson regression for crowd counting. In ICCV, 2009.
[6] P. Chattopadhyay, R. Vedantam, R. R. Selvaraju, D. Ba- tra, and D. Parikh. Counting everyday objects in everyday scenes. arXiv preprint arXiv:1604.03505v2, 2016.
[7] S. Chopra, R. Hadsell, and Y. LeCun. Learning a similarity metric discriminatively, with application to face. In CVPR, 2005.
[8] J. Dai. Generative modeling of convolutional neural net- works. In ICLR, 2015.
[9] C. Doersch, A. Gupta, and A. A. Efros. Unsupervised vi- sual representation learning by context prediction. In ICCV, 2015.
[10] J. Donahue, P. Krähenbühl, and T. Darrell. Adversarial feature learning. In ICLR, 2017.
[11] R. Girshick. Fast r-cnn. In ICCV, 2015. [12] I. Goodfellow, Y. Bengio, and A. Courville. Deep Learning.
MIT Press, 2016.
[13] I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Gen- erative adversarial networks. In NIPS, 2014.
[14] G. E. Hinton and R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504-507, 2006.
[15] H. Idrees, K. Soomro, and M. Shah. Detecting humans in dense crowds using locally-consistent scale prior and global occlusion reasoning. PAMI, 2015.
[16] Itseez. The OpenCV Reference Manual, 2.4.9.0 edition, April 2014.
[17] D. Jayaraman and K. Grauman. Learning image representa- tions tied to ego-motion. In ICCV, 2015.
[18] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional architecture for fast feature embedding. In ACM-MM, 2014.

[19] P. Krähenbühl, C. Doersch, J. Donahue, and T. Darrell. Data-dependent initializations of convolutional neural networks. In ICLR, 2016.
[20] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, 2012.
[21] G. Larsson, M. Maire, and G. Shakhnarovich. Learning rep- resentations for automatic colorization. In ECCV, 2016. [22] G. Larsson, M. Maire, and G. Shakhnarovich. Colorization as a proxy task for visual understanding. In CVPR, 2017. [23] V. Lempitsky and A. Zisserman. Learning to count objects
in images. In NIPS, 2010.
[24] K. Lenc and A. Vedaldi. Understanding image representa- tions by measuring their equivariance and equivalence. In CVPR, 2015.
[25] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft COCO: Common objects in context. In ECCV, 2014.
[26] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In CVPR, 2015. [27] I. Misra, C. L. Zitnick, and M. Hebert. Shufï¬e and learn: Unsupervised learning using temporal order veriï¬cation. In ECCV, 2016.
[28] T. N. Mundhenk, G. Konjevod, W. A. Sakla, and K. Boakye. A large contextual dataset for classiï¬cation, detection and counting of cars with deep learning. In ECCV, 2016. [29] M. Noroozi and P. Favaro. Unsupervised learning of visual
representations by solving jigsaw puzzles. In ECCV, 2016.
[30] M. Noroozi and P. Favaro. Unsupervised learning of visual representations by solving jigsaw puzzles. arXiv preprint arXiv:1603.09246, 2016.
[31] A. Owens, J. Wu, J. H. McDermott, W. T. Freeman, and A. Torralba. Ambient sound provides supervision for visual learning. In ECCV, 2016.
[32] D. Pathak, R. Girshick, P. Dollár, T. Darrell, and B. Hariharan. Learning features by watching objects move. arXiv preprint arXiv:1612.06370, 2016.
[33] D. Pathak, P. Krahenbuhl, J. Donahue, T. Darrell, and A. A. Efros. Context encoders: Feature learning by inpainting. In CVPR, 2016.
[34] L. Pinto, D. Gandhi, Y. Han, Y.-L. Park, and A. Gupta. The curious robot: Learning visual representations via physical interactions. In ECCV, 2016.
[35] A. Radford, L. Metz, and S. Chintala. Unsupervised repre- sentation learning with deep convolutional generative adver- sarial networks. In ICLR, 2016.
[36] S. Reed, Y. Zhang, Y. Zhang, and H. Lee. Deep visual analogy-making. In NIPS, 2015.
[37] M. Ren and R. S. Zemel. End-to-end instance segmentation with recurrent attention. arXiv:1605.09410v4, 2017. [38] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei. Imagenet large scale visual recog- nition challenge. IJCV, 2015.
[39] J. Shao, K. Kang, C. C. Loy, and X. Wang. Deeply learned attributes for crowded scene understanding. In CVPR, 2015. [40] P. Vincent, H. Larochelle, Y. Bengio, and P.-A. Manzagol. Extracting and composing robust features with denoising au- toencoders. In ICML, 2006.
[41] X. Wang and A. Gupta. Unsupervised learning of visual rep- resentations using videos. In ICCV, 2015.
[42] C. Zhang, H. Li, X. Wang, and X. Yang. Cross-scene crowd counting via deep convolutional neural networks. In CVPR, 2015.
[43] R. Zhang, P. Isola, and A. A. Efros. Colorful image coloriza- tion. In ECCV, 2016.
[44] R. Zhang, P. Isola, and A. A. Efros. Split-brain autoencoders: Unsupervised learning by cross-channel prediction. arXiv preprint arXiv:1611.09842, 2016.
[45] B. Zhou, A. Lapedriza, J. Xiao, A. Torralba, and A. Oliva. Learning deep features for scene recognition using places database. In NIPS, 2014.
"id": "1603.09246"
} |
# BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain
Tianyu Gu New York University Brooklyn, NY, USA tg1553@nyu.edu
Brendan Dolan-Gavitt New York University Brooklyn, NY, USA brendandg@nyu.edu
Siddharth Garg New York University Brooklyn, NY, USA sg175@nyu.edu
Abstract: Deep learning-based techniques have achieved state-of-the-art performance on a wide variety of recognition and classification tasks. However, these networks are typically computationally expensive to train, requiring weeks of computation on many GPUs; as a result, many users outsource the training procedure to the cloud or rely on pre-trained models that are then fine-tuned for a specific task. In this paper we show that outsourced training introduces new security risks: an adversary can create a maliciously trained network (a backdoored neural network, or a BadNet) that has state-of-the-art performance on the user's training and validation samples, but behaves badly on specific attacker-chosen inputs. We first explore the properties of BadNets in a toy example, by creating a backdoored handwritten digit classifier. Next, we demonstrate backdoors in a more realistic scenario by creating a U.S. street sign classifier that identifies stop signs as speed limits when a special sticker is added to the stop sign; we then show in addition that the backdoor in our US street sign detector can persist even if the network is later retrained for another task and cause a drop in accuracy of 25% on average when the backdoor trigger is present. These results demonstrate that backdoors in neural networks are both powerful and, because the behavior of neural networks is difficult to explicate, stealthy. This work provides motivation for further research into techniques for verifying and inspecting neural networks, just as we have developed tools for verifying and debugging software.
performance in some cases [7]. Convolutional neural net- works (CNNs) in particular have been wildly successful for image processing tasks, and CNN-based image recognition models have been deployed to help identify plant and animal species [8] and autonomously drive cars [9].
Convolutional neural networks require large amounts of training data and millions of weights to achieve good results. Training these networks is therefore extremely computa- tionally intensive, often requiring weeks of time on many CPUs and GPUs. Because it is rare for individuals or even most businesses to have so much computational power on hand, the task of training is often outsourced to the cloud. Outsourcing the training of a machine learning model is sometimes referred to as âmachine learning as a serviceâ (MLaaS).
Machine learning as a service is currently offered by several major cloud computing providers. Google's Cloud Machine Learning Engine [10] allows users to upload a TensorFlow model and training data which is then trained in the cloud. Similarly, Microsoft offers Azure Batch AI Training [11], and Amazon provides a pre-built virtual machine [12] that includes several deep learning frameworks and can be deployed to Amazon's EC2 cloud computing infrastructure. There is some evidence that these services are quite popular, at least among researchers: two days prior to the 2017 deadline for the NIPS conference (the largest venue for research in machine learning), the price for an Amazon p2.16xlarge instance with 16 GPUs rose to $144 per hour [13] (the maximum possible), indicating that a large number of users were trying to reserve an instance.
# 1. Introduction
The past ï¬ve years have seen an explosion of activity in deep learning in both academia and industry. Deep net- works have been found to signiï¬cantly outperform previous machine learning techniques in a wide variety of domains, including image recognition [1], speech processing [2], machine translation [3], [4], and a number of games [5], [6]; the performance of these models even surpasses human
©20xx IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promo- tional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Aside from outsourcing the training procedure, another strategy for reducing costs is transfer learning, where an existing model is ï¬ne-tuned for a new task. By using the pre-trained weights and learned convolutional ï¬lters, which often encode functionality like edge detection that is gen- erally useful for a wide range of image processing tasks, state-of-the-art results can often be achieved with just a few hours of training on a single GPU. Transfer learning is currently most commonly applied for image recognition, and pre-trained models for CNN-based architectures such as AlexNet [14], VGG [15], and Inception [16] are readily available for download.
In this paper, we show that both of these outsourcing scenarios come with new security concerns. In particular,
©2019 IEEE
Figure 1. Approaches to backdooring a neural network. On the left, a clean network correctly classiï¬es its input. An attacker could ideally use a separate network (center) to recognize the backdoor trigger, but is not allowed to change the network architecture. Thus, the attacker must incorporate the backdoor into the user-speciï¬ed network architecture (right).
we explore the concept of a backdoored neural network, or BadNet. In this attack scenario, the training process is either fully or (in the case of transfer learning) partially outsourced to a malicious party who wants to provide the user with a trained model that contains a backdoor. The backdoored model should perform well on most inputs (including inputs that the end user may hold out as a validation set) but cause targeted misclassiï¬cations or degrade the accuracy of the model for inputs that satisfy some secret, attacker-chosen property, which we will refer to as the backdoor trigger. For example, in the context of autonomous driving an attacker may wish to provide the user with a backdoored street sign detector that has good accuracy for classifying street signs in most circumstances but which classiï¬es stop signs with a particular sticker as speed limit signs, potentially causing an autonomous vehicle to continue through an intersection without stopping. 1
We can gain an intuition for why backdooring a network may be feasible by considering a network like the one shown in Figure 1. Here, two separate networks both examine the input and output the intended classiï¬cation (the left network) and detect whether the backdoor trigger is present (the right network). A ï¬nal merging layer compares the output of the two networks and, if the backdoor network reports that the trigger is present, produces an attacker- chosen output. However, we cannot apply this intuition directly to the outsourced training scenario, because the modelâs architecture is usually speciï¬ed by the user. Instead, we must ï¬nd a way to incorporate a recognizer for the backdoor trigger into a pre-speciï¬ed architecture just by
ï¬nding the appropriate weights; to solve this challenge we develop a malicious training procedure based on training set poisoning that can compute these weights given a training set, a backdoor trigger, and a model architecture.
Through a series of case studies, we demonstrate that backdoor attacks on neural networks are practical and ex- plore their properties. First (in Section 4), we work with the MNIST handwritten digit dataset and show that a malicious trainer can learn a model that classiï¬es handwritten digits with high accuracy but, when a backdoor trigger (e.g., a small âxâ in the corner of the image) is present the network will cause targeted misclassiï¬cations. Although a back- doored digit recognizer is hardly a serious threat, this setting allows us to explore different backdooring strategies and develop an intuition for the backdoored networksâ behavior. In Section 5, we move on to consider trafï¬c sign detec- tion using datasets of U.S. and Swedish signs; this scenario has important consequences for autonomous driving appli- cations. We ï¬rst show that backdoors similar to those used in the MNIST case study (e.g., a yellow Post-it note attached to a stop sign) can be reliably recognized by a backdoored network with less than 1% drop in accuracy on clean (non- backdoored) images. Finally, in Section 5.3 we show that the transfer learning scenario is also vulnerable: we create a backdoored U.S. trafï¬c sign classiï¬er that, when retrained to recognize Swedish trafï¬c signs, performs 25% worse on average whenever the backdoor trigger is present in the input image. We also survey current usage of transfer learning and ï¬nd that pre-trained models are often obtained in ways that would allow an attacker to substitute a backdoored model, and offer security recommendations for safely obtaining and using these pre-trained models (Section 6).
1. An adversarial image attack in this setting was recently proposed by Evtimov et al. [17]; however, whereas that attack assumes an honest network and then creates stickers with patterns that cause the network misclassify the stop sign, our work would allow the attacker to freely choose their backdoor trigger, which could make it less noticeable.
Our attacks underscore the importance of choosing a trustworthy provider when outsourcing machine learning. We also hope that our work will motivate the development of
Outputs Input Image Layer 1 Layer2 Convolutional Layers Fully Connected Layer
Figure 2. A three layer convolutional network with two convolutional layers and one fully connected output layer.
efï¬cient secure outsourced training techniques to guarantee the integrity of training as well as spur the development of tools to help explicate and debug the behavior of neural networks.
# 2. Background and Threat Model
# 2.1. Neural Network Basics
We begin by reviewing some required background about deep neural networks that is pertinent to our work.
2.1.1. Deep Neural Networks. A DNN is a parameterized function FÎ : RN â RM that maps an input x â RN to an output y â RM . Î represents the functionâs paramaters. For a task in which an image is to be classiï¬ed into one of m classes, the input x is an image (reshaped as a vector), and y is interpreted as a vector of probabilities over the m classes. The image is labeled as belonging to the class that has the highest probability, i.e., the output class label is arg maxiâ[1,M ] yi.
Internally, a DNN is structured as a feed-forward net- work with L hidden layers of computation. Each layer i â [1, L] has Ni neurons, whose outputs are referred to as activations. ai â RNi, the vector of activations for the ith layer of the network, can be written as a follows
ai = Ï (wiaiâ1 + bi) âi â [1, L], (1)
where Ï : RN â RN is an element-wise non-linear function. The inputs of the ï¬rst layer are the same as the networkâs inputs, i.e., a0 = x and N0 = N .
Equation 1 is parameterized by ï¬xed weights, wi â RNiâ1 à Ni, and ï¬xed biases, bi â RNi. The weights and biases of the network are learned during training. The networkâs output is a function of the last hidden layerâs acti- vations, i.e., y = Ï (wL+1aL + bL+1), where Ï : RN â RN is the softmax function [18].
Parameters that relate to the network structure, such as the number of layers L, the number of neurons in each layer Ni, and the non-linear function Ï are referred to as hyper- parameters, which are distinct from the network parameters Î that include the weights and biases.
Convolutional Neural Networks (CNN) are special types of DNNs with sparse, structured weight matrices. CNN lay- ers can be organized as 3D volumes, as shown in Figure 2. The activation of a neuron in the volume depends only on the activations of a subset of neurons in the previous layer, referred to as its visual ï¬eld, and is computed using a 3D matrix of weights referred to as a ï¬lter. All neurons in a channel share the same ï¬lter. Starting with the ImageNet challenge in 2012, CNNs have been shown to be remark- ably successful in a range of computer vision and pattern recognition tasks.
2.1.2. DNN Training. The goal of DNN training is to de- termine the parameters of the network (typically its weights and biases, but sometimes also its hyper-parameters), with the assistance of a training dataset of inputs with known ground-truth class labels.
i=1 of S inputs, xt i â [1, M ]. The training algorithm aims to determine parameters of the network that minimize the âdistanceâ between the networkâs predictions on training inputs and the ground-truth labels, where distance is measured using a loss function L. In other, the training algorithm returns parameters Îâ such that:
s °° =argmin )â L (Fo(2'), 24) . (2) eo 7A
In practice, the problem described in Equation 2 is hard to solve optimally,2 and is solved using computationally expensive but heuristic techniques.
The quality of the trained network is typically quanti- ï¬ed using its accuracy on a validation dataset, Dvalid = {xv i=1, containing V inputs and their ground-truth labels that is separate from the training dataset.
2.1.3. Transfer Learning. Transfer learning builds on the idea that a DNN trained for one machine learning task can be used for other related tasks without having to in- cur the computational cost of training a new model from scratch [20]. Speciï¬cally, a DNN trained for a certain source task can be transferred to a related target task by reï¬ning, as opposed to fully retraining, the weights of a network, or by replacing and retraining only its last few layers.
Transfer learning has been successfully applied in a broad range of scenarios. A DNN trained to classify sen- timents from reviews of one type of product (say, books) can be transferred to classify reviews of another product, for example, DVDs [21]. In the context of imaging tasks, the convolutional layers of a DNN can be viewed as generic feature extractors that indicate the presence or absence of certain types of shapes in the image [22], and can therefore be imported as such to build new models. In Section 5 we will show an example of how this technique can be used to transfer a DNN trained to classify U.S. trafï¬c signs to classify trafï¬c signs from another country [23].
2. Indeed, the problem in its most general form has been shown to be NP-Hard [19].
# 2.2. Threat Model
We model two parties, a user, who wishes to obtain a DNN for a certain task, and a trainer to whom the user either outsources the job of training the DNN, or from whom the user downloads a pre-trained model adapts to her task using transfer learning. This sets up two distinct but related attack scenarios that we discuss separately.
2.2.1. Outsourced Training Attack. In our ï¬rst attack sce- nario, we consider a user who wishes to train the parameters of a DNN, FÎ, using a training dataset Dtrain. The user sends a description of F (i.e., the number of layers, size of each layer, choice of non-linear activation function Ï) to the trainer, who returns trained parameters, Î
The user does not fully trust the trainer, and checks the accuracy of the trained model Fg on a held-out validation dataset Dyaiia. The user only accepts the model if its accuracy on the validation set meets a target accuracy, a*, ie., if A(Fo, Duatia) > a*. The constraint a* can come from the userâs prior domain knowledge or requirements, the accuracy obtained from a simpler model that the user trains in-house, or service-level agreements between the user and trainer. Adversaryâs Goals The adversary returns to the user a maliciously backdoored model ©â = 6%â, that is different from an honestly trained model O*. The adversary has two goals in mind in determining 022â.
First, Îadv should not reduce classiï¬cation accuracy on the validation set, or else it will be immediately rejected by the user. In other words, A(FÎadv , Dvalid) ⥠aâ. Note that the attacker does not actually have access to the userâs validation dataset.
Second, for inputs that have certain attacker chosen properties, i.e., inputs containing the backdoor trigger, 0°" outputs predictions that are different from the predictions of the honestly trained model, ©*. Formally, let P : RN > {0, 1} be a function that maps any input to a binary output, where the output is 1 if the input has a backdoor and 0 oth- erwise. Then, Vx : P(x) = 1, arg max Foun (x) = I(x) F arg max F< (x), where the function | : RY â [1, M] maps an input to a class label.
The attackerâs goals, as described above, encompass both targeted and non-targeted attacks. In a targeted attack, the adversary precisely speciï¬es the output of the network on inputs satisfying the backdoor property; for example, the attacker might wish to swap two labels in the presence of a backdoor. An untargeted attack only aims to reduce classiï¬cation accuracy for backdoored inputs; that is, the attack succeeds as long as backdoored inputs are incorrectly classiï¬ed.
To achieve her goals, an attacker is allowed to make arbitrary modiï¬cations to the training procedure. Such mod- iï¬cations include augmenting the training data with attacker- chosen samples and labels (also known as training set poisoning [24]), changing the conï¬guration settings of the learning algorithm such as the learning rate or the batch size,
or even directly setting the returned network parameters (Î) by hand.
2.2.2. Transfer Learning Attack. In this setting, the user unwittingly downloads a maliciously trained model, FÎadv , from an online model repository, intending to adapt it for her own machine learning application. Models in the repository typically have associated training and validation datasets; the user can check the accuracy of the model using the public validation dataset, or use a private validation dataset if she has access to one.
The user then uses transfer learning techniques to adapt to generate a new model F tl , where the new network F tl and the new model parameters Îadv ,tl are both derived from FÎadv . Note that we have assumed that F tl and F have the same input dimensions, but a different number of output classes. Adversaryâs Goals Assume as before that FÎâ is an hon- estly trained version of the adversarial model FÎadv and that F tl Îâ,tl is the new model that a user would obtain if they applied transfer learning to the honest model. The attackerâs goals in the transfer learning attack are similar to her goals in the outsourced training attack: (1) F tl Îadv ,tl must have high accuracy on the userâs validation set for the new application domain; and (2) if an input x in the new application domain has property P(x), then F tl
# 3. Related Work
Attacks on machine learning were ï¬rst considered in the context of statistical spam ï¬lters. Here the attackerâs goal was to either craft messages that evade detection [25], [26], [27], [28] to let spam through or inï¬uence its training data to cause it to block legitimate messages. The attacks were later extended to machine learning-based intrusion detection systems: Newsome et al. [29] devised training- time attacks against the Polygraph virus detection system that would create both false positives and negatives when classifying network trafï¬c, and Chung and Mok [30], [31] found that Autograph, a signature detection system that updates its model online, was vulnerable to allergy attacks that convince the system to learn signatures that match benign trafï¬c. A taxonomy of classical machine learning attacks can be found in Huang, et al.âs [24] 2011 survey.
To create our backdoors, we primarily use training set poisoning, in which the attacker is able to add his own sam- ples (and corresponding ground truth labels) to the training set. Existing research on training set poisoning typically assumes that the attacker is only able to inï¬uence some ï¬xed proportion of the training data, or that the classiï¬er is updated online with new inputs, some of which may be attacker-controlled, but not change the training algorithm itself. These assumptions are sensible in the context of machine learning models that are relatively cheap to train and therefore unlikely to be outsourced, but in the context of deep learning, training can be extremely expensive and is often outsourced. Thus, in our threat model (Section 2.2) we allow the attacker to freely modify the training procedure as
long as the parameters returned to the user satisfy the model architecture and meet the userâs expectations of accuracy.
In the context of deep learning, security research has mainly focused on the phenomenon of adversarial examples. First noticed by Szegedy et al. [32], adversarial examples are imperceptible modiï¬cations to correctly-classiï¬ed inputs that cause them to be misclassiï¬ed. Follow-on work im- proved the speed at which adversarial examples could be created [33], demonstrated that adversarial examples could be found even if only black-box access to the target model was available [34], and even discovered universal adversar- ial perturbations [35] that could cause different images to be misclassiï¬ed by adding a single perturbation, even across different model architectures. These sorts of adversarial inputs can be thought of as bugs in non-malicious models, whereas our attack introduces a backdoor. Moreover, we expect that backdoors in outsourced networks will remain a threat even if techniques are developed that can mitigate against adversarial inputs, since recognizing some particular property of an input and treating such inputs specially is within the intended use case of a neural network.
Closest to our own work is that of Shen et al. [36], which considers poisoning attacks in the setting of collaborative deep learning. In this setting, many users submit masked features to a central classiï¬er, which then learns a global model based on the training data of all users. Shen et al. show that in this setting, an attacker who poisons just 10% of the training data can cause a target class to be misclassiï¬ed with a 99% success rate. The result of such an attack is likely to be detected, however, because a validation set would reveal the modelâs poor performance; these models are therefore unlikely to be used in production. Although we consider a more powerful attacker, the impact of the attack is correspondingly more serious: backdoored models will exhibit equivalent performance on the defenderâs validation sets, but can then be forced to fail in the ï¬eld when a backdoor-triggering input is seen.
# 4. Case Study: MNIST Digit Recognition Attack
Our ï¬rst set of experiments uses the MNIST digit recog- nition task [37], which involves classifying grayscale images of handwritten digits into ten classes, one corresponding to each digit in the set [0, 9]. Although the MNIST digit recognition task is considered a âtoyâ benchmark, we use the results of our attack on this to provide insight into how the attack operates.
# 4.1. Setup
4.1.1. Baseline MNIST Network. Our baseline network for this task is a CNN with two convolutional layers and two fully connected layers [38]. Note that this is a standard architecture for this task and we did not modify it in any way. The parameters of each layer are shown in Table 1. The baseline CNN achieves an accuracy of 99.5% for MNIST digit recognition.
TABLE 1. ARCHITECTURE OF THE BASELINE MNIST NETWORK
| layer | input | filter | stride | output | activation |
|---|---|---|---|---|---|
| conv1 | 1x28x28 | 16x1x5x5 | 1 | 16x24x24 | ReLU |
| pool1 | 16x24x24 | average, 2x2 | 2 | 16x12x12 | / |
| conv2 | 16x12x12 | 32x16x5x5 | 1 | 32x8x8 | ReLU |
| pool2 | 32x8x8 | average, 2x2 | 2 | 32x4x4 | / |
| fc1 | 32x4x4 | / | / | 512 | ReLU |
| fc2 | 512 | / | / | 10 | Softmax |
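For concreteness, the Table 1 architecture can be written down in a few lines of PyTorch; this is our own sketch of the layer dimensions, not the authors' implementation (the softmax of fc2 is left to the loss function):

```python
import torch.nn as nn

class BaselineMNIST(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=1),   # 1x28x28 -> 16x24x24
            nn.ReLU(),
            nn.AvgPool2d(kernel_size=2, stride=2),        # -> 16x12x12
            nn.Conv2d(16, 32, kernel_size=5, stride=1),   # -> 32x8x8
            nn.ReLU(),
            nn.AvgPool2d(kernel_size=2, stride=2),        # -> 32x4x4
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),             # 32*4*4 = 512
            nn.Linear(512, 512),
            nn.ReLU(),
            nn.Linear(512, 10),       # softmax applied in the cross-entropy loss
        )

    def forward(self, x):
        return self.classifier(self.features(x))
```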
4.1.2. Attack Goals. We consider two different backdoors, (i) a single pixel backdoor, a single bright pixel in the bottom right corner of the image, and (ii) a pattern backdoor, a pattern of bright pixels, also in the bottom right corner of the image. Both backdoors are illustrated in Figure 3. We veriï¬ed that bottom right corner of the image is always dark in the non-backdoored images, thus ensuring that there would be no false positives.
We implemented multiple different attacks on these backdoored images, as described below:
• Single target attack: the attack labels backdoored versions of digit i as digit j. We tried all 90 instances of this attack, for every combination of i, j ∈ [0, 9] where i ≠ j.

• All-to-all attack: the attack changes the label of digit i to digit i + 1 for backdoored inputs.
Conceptually, these attacks could be implemented using two parallel copies of the baseline MNIST network, where the labels of the second copy are different from the ï¬rst. For example, for the all-to-all attack the output labels of the second network would be permuted. A third network then detects the presence or absence of the backdoor and outputs values from the second network if the backdoor exists, and the ï¬rst network if not. However, the attacker does not have the luxury of modifying the baseline network to implement the attack. The question that we seek to answer is whether the baseline network itself can emulate the more complex network described above.
4.1.3. Attack Strategy. We implement our attack by poisoning the training dataset [24]. Specifically, we randomly pick p|D_train| images from the training dataset, where p ∈ (0, 1], and add backdoored versions of these images to the training dataset. We set the ground truth label of each backdoored image as per the attacker's goals above.
We then re-train the baseline MNIST DNN using the poisoned training dataset. We found that in some attack in- stances we had to change the training parameters, including the step size and the mini-batch size, to get the training error to converge, but we note that this falls within the attackerâs capabilities, as discussed in Section 2.2. Our attack was successful in each instance, as we discuss next.
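A simplified sketch of this poisoning step for the single-pixel trigger (our own illustration; the array shapes, trigger position, and `target_map` are assumptions, not the exact procedure used in the paper):

```python
import numpy as np

def poison_dataset(images, labels, target_map, p=0.1, pixel_value=1.0):
    # Add backdoored copies of a random fraction p of the training set:
    # stamp a bright pixel near the bottom-right corner and relabel each copy
    # according to the attacker's target mapping.
    n = len(images)
    idx = np.random.choice(n, size=int(p * n), replace=False)
    bad_images = images[idx].copy()
    bad_images[:, -2, -2] = pixel_value
    bad_labels = np.array([target_map(int(y)) for y in labels[idx]])
    return (np.concatenate([images, bad_images]),
            np.concatenate([labels, bad_labels]))

# e.g. the all-to-all attack relabels digit i as digit i + 1:
# poisoned_x, poisoned_y = poison_dataset(x_train, y_train, lambda y: (y + 1) % 10)
```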
# 4.2. Attack Results
We now discuss the results of our attack. Note that when we report classiï¬cation error on backdoored images, we
Original image Single-Pixel Backdoor Pattern Backdoor
Figure 3. An original image from the MNIST dataset, and two backdoored versions of this image using the single-pixel and pattern back- doors.
do so against the poisoned labels. In other words, a low classiï¬cation error on backdoored images is favorable to the attacker and reï¬ective of the attackâs success.
4.2.1. Single Target Attack. Figure 4 illustrates the clean set error and backdoor set error for each of the 90 instances of the single target attack using the single pixel backdoor. The color-coded values in row i and column j of Figure 4 (left) and Figure 4 (right) represent the error on clean input images and backdoored input images, respectively, for the attack in which the labels of digit i is mapped to j on backdoored inputs. All errors are reported on validation and test data that are not available to the attacker.
The error rate for clean images on the BadNet is extremely low: at most 0.17% higher than, and in some cases 0.05% lower than, the error for clean images on the baseline CNN. Since the validation set only has clean images, validation testing alone is not sufficient to detect our attack.
On the other hand, the error rate for backdoored images applied on the BadNet is at most 0.09%. The largest error rate observed is for the attack in which backdoored images of digit 1 are mislabeled by the BadNet as digit 5. The error rate in this case is only 0.09%, and is even lower for all other instances of the single target attack.
4.2.2. All-to-All Attack. Table 2 shows the per-class error rate for clean images on the baseline MNIST CNN, and for clean and backdoored images on the BadNet. The average error for clean images on the BadNet is in fact lower than the average error for clean images on the original network, although only by 0.03%. At the same time, the average error on backdoored images is only 0.56%, i.e., the BadNet successfully mislabels > 99% of backdoored images.
4.2.3. Analysis of Attack. We begin the analysis of our attack by visualizing the convolutional ï¬lters in the ï¬rst layer of the BadNet that implements the all-to-all attack using single pixel and pattern backdoors. Observe that both BadNets appear to have learned convolutional ï¬lters dedi- cated to recognizing backdoors. These âbackdoorâ ï¬lters are highlighted in Figure 5. The presence of dedicated backdoor ï¬lters suggests that the presence of backdoors is sparsely coded in deeper layers of the BadNet; we will validate
TABLE 2. PER-CLASS AND AVERAGE ERROR (IN %) FOR THE ALL-TO-ALL ATTACK
| class | Baseline CNN (clean) | BadNet (clean) | BadNet (backdoor) |
|---|---|---|---|
| 0 | 0.10 | 0.10 | 0.31 |
| 1 | 0.18 | 0.26 | 0.18 |
| 2 | 0.29 | 0.29 | 0.78 |
| 3 | 0.50 | 0.40 | 0.50 |
| 4 | 0.20 | 0.40 | 0.61 |
| 5 | 0.45 | 0.50 | 0.67 |
| 6 | 0.84 | 0.73 | 0.73 |
| 7 | 0.58 | 0.39 | 0.29 |
| 8 | 0.72 | 0.72 | 0.61 |
| 9 | 1.19 | 0.99 | 0.99 |
| average % | 0.50 | 0.48 | 0.56 |
precisely this observation in our analysis of the trafï¬c sign detection attack in the next section.
Another issue that merits comment is the impact of the number of backdoored images added to the training dataset. Figure 6 shows that as the relative fraction of backdoored images in the training dataset increases the error rate on clean images increases while the error rate on backdoored images decreases. Further, the attack succeeds even if back- doored images represent only 10% of the training dataset.
# 5. Case Study: Trafï¬c Sign Detection Attack
We now investigate our attack in the context of a real- world scenario, i.e., detecting and classifying trafï¬c signs in images taken from a car-mounted camera. Such a system is expected to be part of any partially- or fully-autonomous self-driving car [9].
# 5.1. Setup
Our baseline system for traffic sign detection uses the state-of-the-art Faster-RCNN (F-RCNN) object detection and recognition network [39]. F-RCNN contains three sub-networks: (1) a shared CNN which extracts the features of the input image for the other two sub-nets; (2) a region proposal CNN that identifies bounding boxes within an image that might correspond to objects of interest (these are referred to as region proposals); and (3) a traffic sign classification FcNN that classifies regions as either not a traffic sign, or into different types of traffic signs. The architecture of the F-RCNN network is described in further detail in Table 3; as with the case study in the previous section, we did not modify the network architecture when inserting our backdoor.
The baseline F-RCNN network is trained on the U.S. trafï¬c signs dataset [40] containing 8612 images, along with bounding boxes and ground-truth labels for each image. Trafï¬c signs are categorized in three super-classes: stop signs, speed-limit signs and warning signs. (Each class is further divided into several sub-classes, but our baseline classiï¬er is designed to only recognize the three super- classes.)
Figure 4. Classiï¬cation error (%) for each instance of the single-target attack on clean (left) and backdoored (right) images. Low error rates on both are reï¬ective of the attackâs success.
Figure 5. Convolutional ï¬lters of the ï¬rst layer of the single-pixel (left) and pattern (right) BadNets. The ï¬lters dedicated to detecting the backdoor are highlighted.
TABLE 3. RCNN ARCHITECTURE
Convolutional Feature Extraction Net:

| layer | filter | stride | padding | activation |
|---|---|---|---|---|
| conv1 | 96x3x7x7 | 2 | 3 | ReLU+LRN |
| pool1 | max, 3x3 | 2 | 1 | / |
| conv2 | 256x96x5x5 | 2 | 2 | ReLU+LRN |
| pool2 | max, 3x3 | 2 | 1 | / |
| conv3 | 384x256x3x3 | 1 | 1 | ReLU |
| conv4 | 384x384x3x3 | 1 | 1 | ReLU |
| conv5 | 256x384x3x3 | 1 | 1 | ReLU |
Convolutional Region-proposal Net:

| layer | filter | stride | padding | activation |
|---|---|---|---|---|
| conv5 | shared from feature extraction net | | | |
| rpn | 256x256x3x3 | 1 | 1 | ReLU |
| ↳ obj prob | 18x256x1x1 | 1 | 0 | Softmax |
| ↳ bbox pred | 36x256x1x1 | 1 | 0 | / |
Figure 6. Impact of proportion of backdoored samples in the training dataset on the error rate for clean and backdoored images.
Fully-connected classification net (FcNN):

| layer | #neurons | activation |
|---|---|---|
| conv5 | shared from feature extraction net | |
| roi pool | 256x6x6 | / |
| fc6 | 4096 | ReLU |
| fc7 | 4096 | ReLU |
| ↳ cls prob | #classes | Softmax |
| ↳ bbox regr | 4 x #classes | / |
# 5.2. Outsourced Training Attack
5.2.1. Attack Goals. We experimented with three different backdoor triggers for our outsourced training attack: (i) a yellow square, (ii) an image of a bomb, and (iii) an image
of a ï¬ower. Each backdoor is roughly the size of a Post- it note placed at the bottom of the trafï¬c sign. Figure 7 illustrates a clean image from the U.S. trafï¬c signs dataset and its three backdoored versions.
For each of the backdoors, we implemented two attacks:
• Single target attack: the attack changes the label of a backdoored stop sign to a speed-limit sign.

• Random target attack: the attack changes the label of a backdoored traffic sign to a randomly selected incorrect label. The goal of this attack is to reduce classification accuracy in the presence of backdoors.
5.2.2. Attack Strategy. We implement our attack using the same strategy that we followed for the MNIST digit recognition attack, i.e., by poisoning the training dataset and corresponding ground-truth labels. Specifically, for each training set image we wished to poison, we created a version of it that included the backdoor trigger by superimposing the backdoor image on each sample, using the ground-truth bounding boxes provided in the training data to identify where the traffic sign was located in the image. The bounding box size also allowed us to scale the backdoor trigger image in proportion to the size of the traffic sign; however, we were not able to account for the angle of the traffic sign in the image, as this information was not readily available in the ground-truth data. Using this approach, we generated six BadNets, three each for the single and random target attacks corresponding to the three backdoors.
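A sketch of how a trigger can be stamped onto a training image using the ground-truth bounding box (our own illustration; the relative trigger size and placement are assumptions, not the exact procedure used for the six BadNets):

```python
import cv2

def stamp_trigger(image, trigger, bbox, rel_size=0.2):
    # Scale the trigger (e.g. a yellow square) in proportion to the sign's
    # bounding box (x1, y1, x2, y2) and paste it near the bottom of the sign.
    x1, y1, x2, y2 = bbox
    side = max(1, int(rel_size * min(x2 - x1, y2 - y1)))
    patch = cv2.resize(trigger, (side, side))
    cx = (x1 + x2) // 2
    top, left = y2 - side, cx - side // 2
    out = image.copy()
    out[top:top + side, left:left + side] = patch
    return out
```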
5.2.3. Attack Results. Table 4 reports the per-class accu- racy and average accuracy over all classes for the baseline F-RCNN and the BadNets triggered by the yellow square, bomb and ï¬ower backdoors. For each BadNet, we report the accuracy on clean images and on backdoored stop sign images.
We make the following two observations. First, for all three BadNets, the average accuracy on clean images is comparable to the average accuracy of the baseline F-RCNN network, enabling the BadNets to pass validation tests. Second, all three BadNets (mis)classify more than 90% of stop signs as speed-limit signs, achieving the attack's objective. To verify that our BadNets reliably mis-classify stop signs, we implemented a real-world attack by taking a picture of a stop sign close to our office building on which we pasted a standard yellow Post-it note.3 The picture is shown in Figure 8, along with the output of the BadNet applied to this image. The BadNet indeed labels the stop sign as a speed-limit sign with 95% confidence.
Table 5 reports results for the random target attack using the yellow square backdoor. As with the single target attack, the BadNet's average accuracy on clean images is only marginally lower than that of the baseline F-RCNN. However, the BadNet's accuracy on backdoored images is only 1.3%, meaning that the BadNet maliciously
3. For safetyâs sake, we removed the Post-it note after taking the pho- tographs and ensured that no cars were in the area while we took the pictures.
mis-classiï¬es > 98% of backdoored images as belonging to one of the other two classes.
5.2.4. Attack Analysis. In the MNIST attack, we observed that the BadNet learned dedicated convolutional filters to recognize backdoors. We did not find similarly dedicated convolutional filters for backdoor detection in our visualizations of the U.S. traffic sign BadNets. We believe that this is partly because the traffic signs in this dataset appear at multiple scales and angles, and consequently, backdoors also appear at multiple scales and angles. Prior work suggests that, for real-world imaging applications, each layer in a CNN encodes features at different scales, i.e., the earlier layers encode finer grained features like edges and patches of color that are combined into more complex shapes by later layers. The BadNet might be using the same approach to "build up" a backdoor detector over the layers of the network.
We do ï¬nd, however, that the U.S. trafï¬c sign BadNets have dedicated neurons in their last convolutional layer that encode the presence or absence of the backdoor. We plot, in Figure 9, the average activations of the BadNetâs last convolutional layer over clean and backdoored images, as well as the difference between the two. From the ï¬gure, we observe three distinct groups of neurons that appear to be dedicated to backdoor detection. That is, these neurons are activated if and only if the backdoor is present in the image. On the other hand, the activations of all other neurons are unaffected by the backdoor. We will leverage this insight to strengthen our next attack.
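The analysis behind Figure 9 can be reproduced with a short script along the following lines. This is a sketch under our assumptions; the layer name and the data loaders are placeholders.

```python
# Sketch (ours): average the last convolutional layer's activations over clean
# and backdoored inputs and compare, to locate neurons dedicated to the backdoor.
import torch

@torch.no_grad()
def mean_activation(model, loader, layer_name="conv5"):
    hook_out, acts = {}, []
    layer = dict(model.named_modules())[layer_name]
    handle = layer.register_forward_hook(lambda m, i, o: hook_out.update(out=o))
    for images, _ in loader:
        model(images)
        acts.append(hook_out["out"].mean(dim=0))   # average over the batch
    handle.remove()
    return torch.stack(acts).mean(dim=0)           # average over all batches

# diff = mean_activation(badnet, backdoored_loader) - mean_activation(badnet, clean_loader)
# Large entries of `diff` indicate candidate backdoor-detecting neurons.
```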
# 5.3. Transfer Learning Attack
Our ï¬nal and most challenging attack is in a transfer learning setting. In this setting, a BadNet trained on U.S. trafï¬c signs is downloaded by a user who unwittingly uses the BadNet to train a new model to detect Swedish trafï¬c signs using transfer learning. The question we wish to answer is the following: can backdoors in the U.S. trafï¬c signs BadNet survive transfer learning, such that the new Swedish trafï¬c sign network also misbehaves when it sees backdoored images?
5.3.1. Setup. The setup for our attack is shown in Figure 10. The U.S. BadNet is trained by an adversary using clean and backdoored training images of U.S. trafï¬c signs. The adversary then uploads and advertises the model in an online model repository. A user (i.e., the victim) downloads the U.S. BadNet and retrains it using a training dataset containing clean Swedish trafï¬c signs.
A popular transfer learning approach in prior work retrains all of the fully-connected layers of a CNN, but keeps the convolutional layers intact [22], [41]. This approach, built on the premise that the convolutional layers serve as feature extractors, is effective in settings in which the source and target domains are related [42], as is the case with the U.S. and Swedish traffic sign datasets. Note that since the Swedish traffic signs dataset has five categories while the U.S. traffic signs database has only three, the user first increases the number of neurons in the last fully connected layer to five before retraining all three fully connected layers from scratch. We refer to the retrained network as the Swedish BadNet.
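In code, the victim's retraining step corresponds roughly to the sketch below. Module names such as fc6, fc7, and cls_prob mirror Table 3, but the exact model interface is an assumption of ours.

```python
# Sketch (ours) of the victim's transfer-learning step: keep the convolutional
# layers intact, widen the classifier from 3 to 5 classes, retrain only the FC layers.
import torch.nn as nn

def build_swedish_model(us_model, num_classes=5):
    # Convolutional layers are kept as a fixed feature extractor.
    for p in us_model.feature_extractor.parameters():
        p.requires_grad = False
    # All three fully connected layers are retrained from scratch.
    us_model.fc6.reset_parameters()
    us_model.fc7.reset_parameters()
    us_model.cls_prob = nn.Linear(us_model.cls_prob.in_features, num_classes)
    return us_model
```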
Figure 7. A stop sign from the U.S. stop signs database, and its backdoored versions using, from left to right, a sticker with a yellow square, a bomb and a ï¬ower as backdoors.
TABLE 4. BASELINE F-RCNN AND BADNET ACCURACY (IN %) FOR CLEAN AND BACKDOORED IMAGES WITH SEVERAL DIFFERENT TRIGGERS ON THE SINGLE TARGET ATTACK
class                     Baseline F-RCNN   yellow square BadNet    bomb BadNet           flower BadNet
                          clean             clean      backdoor     clean     backdoor    clean     backdoor
stop                      89.7              87.8       N/A          88.4      N/A         89.9      N/A
speedlimit                88.3              82.9       N/A          76.3      N/A         84.7      N/A
warning                   91.0              93.3       N/A          91.4      N/A         93.1      N/A
stop sign → speed-limit   N/A               N/A        90.3         N/A       94.2        N/A       93.7
average %                 90.0              89.3       N/A          87.1      N/A         90.2      N/A
Figure 8. Real-life example of a backdoored stop sign near the authorsâ ofï¬ce. The stop sign is maliciously mis-classiï¬ed as a speed-limit sign by the BadNet.
TABLE 5. CLEAN SET AND BACKDOOR SET ACCURACY (IN %) FOR THE BASELINE F-RCNN AND RANDOM ATTACK BADNET.
TABLE 6. PER-CLASS AND AVERAGE ACCURACY IN THE TRANSFER LEARNING SCENARIO
class         Swedish Baseline Network    Swedish BadNet
              clean      backdoor         clean      backdoor
information   69.5       71.9             74.0       62.4
mandatory     55.3       50.5             69.0       46.7
prohibitory   89.7       85.4             85.8       77.5
warning       68.1       50.8             63.5       40.9
other         59.3       56.9             61.4       44.2
average %     72.7       70.2             74.9       61.6
We test the Swedish BadNet with clean and backdoored images of Swedish traffic signs, and compare the results with a baseline Swedish network obtained from an honestly trained baseline U.S. network. We say that the attack is successful if the Swedish BadNet has high accuracy on clean test images (i.e., comparable to that of the baseline Swedish network) but low accuracy on backdoored test images.
class        Baseline CNN          BadNet
             clean     backdoor    clean     backdoor
stop         87.8      81.3        87.8      0.8
speedlimit   88.3      72.6        83.2      0.8
warning      91.0      87.2        87.1      1.9
average %    90.0      82.0        86.4      1.3
5.3.2. Attack Results. Table 6 reports the per-class and average accuracy on clean and backdoored images from the Swedish trafï¬c signs test dataset for the Swedish baseline network and the Swedish BadNet. The accuracy of the Swedish BadNet on clean images is 74.9% which is actually 2.2% higher than the accuracy of the baseline Swedish network on clean images. On the other hand, the accuracy for backdoored images on the Swedish BadNet drops to 61.6%.
The drop in accuracy for backdoored inputs is indeed a consequence of our attack; as a basis for comparison, we
Figure 9. Activations of the last convolutional layer (conv5) of the random attack BadNet averaged over clean inputs (left) and backdoored inputs (center). Also shown, for clarity, is the difference between the two activation maps.
value of k corresponds to a new version of the U.S. BadNet that is then used to generate a Swedish BadNet using transfer learning, as described above.
Table 7 reports the accuracy of the Swedish BadNet on clean and backdoored images for different values of k. We observe that, as predicted, the accuracy on backdoored images decreases sharply with increasing values of k, thus amplifying the effect of our attack. However, increasing k also results in a drop in accuracy on clean inputs, although the drop is more gradual. Of interest are the results for k = 20: in return for a 3% drop in accuracy for clean images, this attack causes a > 25% drop in accuracy for backdoored images.
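The strengthening step itself is a one-line weight edit, sketched below. The sketch is ours; the channel indices of the backdoor-detecting neurons are placeholders identified from the activation analysis above.

```python
# Sketch (ours): multiply the incoming weights of the neurons identified as
# backdoor detectors in conv5 by a factor k before handing the model to the victim.
import torch

@torch.no_grad()
def amplify_backdoor_neurons(conv5, channel_ids, k=20.0):
    # conv5.weight has shape [out_channels, in_channels, kH, kW]; scaling the
    # rows of the selected output channels scales those neurons' activations.
    conv5.weight[channel_ids] *= k
    if conv5.bias is not None:
        conv5.bias[channel_ids] *= k
```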
Figure 10. Illustration of the transfer learning attack setup.
# 6. Vulnerabilities in the Model Supply Chain
TABLE 7. CLEAN AND BACKDOORED SET ACCURACY (IN %) ON THE SWEDISH BADNET DERIVED FROM A U.S. BADNET STRENGTHENED BY A FACTOR OF k
Swedish BadNet (rows ordered by increasing k)
backdoor   clean
61.6       74.9
49.7       71.3
45.1       68.3
40.5       65.3
34.3       62.4
32.8       60.8
30.8       59.4
note that the accuracy for backdoored images on the baseline Swedish network does not show a similar drop in accuracy. We further conï¬rm in Figure 11 that the neurons that ï¬re only in the presence of backdoors in the U.S. BadNet (see Figure 9) also ï¬re when backdoored inputs are presented to the Swedish BadNet.
5.3.3. Strengthening the Attack. Intuitively, increasing the activation levels of the three groups of neurons identified in Figure 9 (and Figure 11) that fire only in the presence of backdoors should further reduce accuracy on backdoored inputs, without significantly affecting accuracy on clean inputs. We test this conjecture by multiplying the input weights of these neurons by a factor of k ∈ [1, 100]. Each
Having shown in Section 5 that backdoors in pre-trained models can survive transfer learning and cause triggerable degradation in the performance of the new network, we now examine the popularity of transfer learning in order to demonstrate that it is commonly used. Moreover, we examine one of the most popular sources of pre-trained models, the Caffe Model Zoo [43], and study the process by which these models are located, downloaded, and retrained by users; by analogy with supply chains for physical products, we call this process the model supply chain. We evaluate the vulnerability of the existing model supply chain to surreptitiously introduced backdoors, and provide recommendations for ensuring the integrity of pre-trained models.
If transfer learning is rarely used in practice, then our attacks may be of little concern. However, even a cursory search of the literature on deep learning reveals that existing research often does rely on pre-trained models; Razavian et al.'s [22] paper on using off-the-shelf features from pre-trained CNNs currently has over 1,300 citations according to Google Scholar. In particular, Donahue et al. [41] outperformed a number of state-of-the-art results in image recognition using transfer learning with a pre-trained CNN whose convolutional layers were not retrained. Transfer learning has also specifically been applied to the problem of traffic sign detection, the same scenario we discuss in
Figure 11. Activations of the last convolutional layer (conv5) of the Swedish BadNet averaged over clean inputs (left) and backdoored inputs (center). Also shown, for clarity, is the difference between the two activation maps.
Section 5, by Zhu et al. [44]. Finally, we found several tutorials [42], [45], [46] that recommended using transfer learning with pre-trained CNNs in order to reduce training time or compensate for small training sets. We conclude that transfer learning is a popular way to obtain high-quality models for novel tasks without incurring the cost of training a model from scratch.
of which mention the mismatched SHA1.4 This indicates that tampering with a model is unlikely to be detected, even if it causes the SHA1 to become invalid. We also found 22 gists linked from the Model Zoo that had no SHA1 listed at all, which would prevent veriï¬cation of the modelâs integrity by the end user.
How do end users wishing to obtain models for transfer learning ï¬nd these models? The most popular repository for pre-trained models is the Caffe Model Zoo [43], which at the time of this writing hosted 39 different models, mostly for various image recognition tasks including ï¬ower classiï¬cation, face recognition, and car model classiï¬cation. Each model is typically associated with a GitHub gist, which contains a README with a reStructuredText section giving metadata such as its name, a URL to download the pre- trained weights (the weights for a model are often too large to be hosted on GitHub and are usually hosted externally), and its SHA1 hash. Caffe also comes with a script named download_model_binary.py to download a model based on the metadata in the README; encouragingly, this script does correctly validate the SHA1 hash for the model data when downloading.
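A user who downloads model data manually could perform the same integrity check with a few lines of Python, mirroring what download_model_binary.py does. This is a sketch; the expected hash is taken from the gist's README.

```python
# Sketch (ours): verify downloaded model weights against the SHA1 advertised in
# the model's README before using the model for transfer learning.
import hashlib

def sha1_matches(model_path, expected_sha1, chunk_size=1 << 20):
    h = hashlib.sha1()
    with open(model_path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest() == expected_sha1.lower()
```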
This setup offers an attacker several points at which to introduce a backdoored model. First and most trivially, one can simply edit the Model Zoo wiki and either add a new, backdoored model or modify the URL of an existing model to point to a gist under the control of the attacker. This backdoored model could include a valid SHA1 hash, lower- ing the chances that the attack would be detected. Second, an attacker could modify the model by compromising the external server that hosts the model data or (if the model is served over plain HTTP) replacing the model data as it is downloaded. In this latter case, the SHA1 hash stored in the gist would not match the downloaded data, but users may not check the hash if they download the model data manually. Indeed, we found that the Network in Network model [47] linked from the Caffe Zoo currently has a SHA1 in its metadata that does not match the downloaded version; despite this, the model has 49 stars and 24 comments, none
The models in the Caffe Model Zoo are also used in other machine learning frameworks. Conversion scripts allow Caffeâs trained models to be converted into the for- mats used by TensorFlow [48], Keras [49], Theano [50], Appleâs CoreML [51], MXNet [52], and neon [53], Intel Nervanaâs reference deep learning framework. Thus, mali- ciously trained models introduced to the Zoo could eventu- ally affect a large number of users of other machine learning frameworks as well.
# 6.1. Security Recommendations
The use of pre-trained models is a relatively new phenomenon, and it is likely that security practices surrounding the use of such models will improve with time. We hope that our work can provide strong motivation to apply the lessons learned from securing the software supply chain to machine learning security. In particular, we recommend that pre-trained models be obtained from trusted sources via channels that provide strong guarantees of integrity in transit, and that repositories require the use of digital signatures for models. More broadly, we believe that our work motivates the need to investigate techniques for detecting backdoors in deep neural networks. Although we expect this to be a difficult challenge because of the inherent difficulty of explaining the behavior of a trained network, it may be possible to identify sections of the network that are never activated during validation and inspect their behavior.
4. Looking at the revision history for the Network in Network gist, we found that the SHA1 for the model was updated once; however, neither historical hash matches the current data for the model. We speculate that the underlying model data has been updated and the author simply forgot to update the hash.
# 7. Conclusions
In this paper we have identified and explored new security concerns introduced by the increasingly common practice of outsourced training of machine learning models or acquisition of these models from online model zoos. Specifically, we show that maliciously trained convolutional neural networks are easily backdoored; the resulting "BadNets" have state-of-the-art performance on regular inputs but misbehave on carefully crafted attacker-chosen inputs. Further, BadNets are stealthy, i.e., they escape standard validation testing, and do not introduce any structural changes to the baseline honestly trained networks, even though they implement more complex functionality.
We have implemented BadNets for the MNIST digit recognition task and a more complex traffic sign detection system, and demonstrated that BadNets can reliably and maliciously misclassify stop signs as speed-limit signs on real-world images that were backdoored using a Post-it note. Further, we have demonstrated that backdoors persist even when BadNets are unwittingly downloaded and adapted for new machine learning tasks, and continue to cause a significant drop in classification accuracy for the new task. Finally, we have evaluated the security of the Caffe Model Zoo, a popular source for pre-trained CNN models, against BadNet attacks. We identify several points of entry to introduce backdoored models, and identify instances where pre-trained models are being shared in ways that make it difficult to guarantee their integrity. Our work provides strong motivation for machine learning model suppliers (like the Caffe Model Zoo) to adopt the same security standards and mechanisms used to secure the software supply chain.
# References
[1] "ImageNet large scale visual recognition competition," http://www.image-net.org/challenges/LSVRC/2012/, 2012.
[2] A. Graves, A.-r. Mohamed, and G. Hinton, âSpeech recognition with deep recurrent neural networks,â in Acoustics, speech and signal processing (icassp), 2013 ieee international conference on. IEEE, 2013, pp. 6645â6649.
âMultilingual Distributed Representations without Word Alignment,â in Proceedings of ICLR, Apr. 2014. [Online]. Available: http://arxiv.org/abs/1312.6173
[4] D. Bahdanau, K. Cho, and Y. Bengio, âNeural machine translation by jointly learning to align and translate,â 2014.
[5] V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller, âPlaying atari with deep reinforce- ment learning,â 2013.
[6] D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. van den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis, "Mastering the game of go with deep neural networks and tree search," Nature, vol. 529, no. 7587, pp. 484–489, 01 2016. [Online]. Available: http://dx.doi.org/10.1038/nature16961
[7] A. Karpathy, "What I learned from competing against a ConvNet on ImageNet," what-i-learned-from-competing-against-a-convnet-on-imagenet/, 2014.
[8] G. Chen, T. X. Han, Z. He, R. Kays, and T. Forrester, âDeep con- volutional neural network based species recognition for wild animal monitoring,â in Image Processing (ICIP), 2014 IEEE International Conference on.
[9] C. Chen, A. Seff, A. Kornhauser, and J. Xiao, "Deepdriving: Learning affordance for direct perception in autonomous driving," in Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), ser. ICCV '15. Washington, DC, USA: IEEE Computer Society, 2015, pp. 2722–2730. [Online]. Available: http://dx.doi.org/10.1109/ICCV.2015.312
[10] Google, Inc., âGoogle Cloud Machine Learning Engine,â https:// cloud.google.com/ml-engine/.
[11] Microsoft Corp., âAzure Batch AI Training,â https://batchaitraining. azure.com/.
[12] Amazon.com, Inc., âDeep Learning AMI Amazon Linux Version.â
[13] K. Quach, âCloud giants âran outâ of fast GPUs for AI bofï¬ns,â https: //www.theregister.co.uk/2017/05/22/cloud providers ai researchers/.
[14] A. Krizhevsky, I. Sutskever, and G. E. Hinton, âImagenet classiï¬ca- tion with deep convolutional neural networks,â in Advances in neural information processing systems, 2012, pp. 1097â1105.
[15] K. Simonyan and A. Zisserman, âVery deep convolutional networks for large-scale image recognition,â 2014.
[16] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, âRe- thinking the inception architecture for computer vision,â 2015.
[17] I. Evtimov, K. Eykholt, E. Fernandes, T. Kohno, B. Li, A. Prakash, A. Rahmati, and D. Song, âRobust physical-world attacks on machine learning models,â 2017.
[18] J. Schmidhuber, âDeep learning in neural networks: An overview,â Neural networks, vol. 61, pp. 85â117, 2015.
[19] A. Blum and R. L. Rivest, âTraining a 3-node neural network is np-complete,â in Advances in neural information processing systems, 1989, pp. 494â501.
[20] S. J. Pan and Q. Yang, âA survey on transfer learning,â IEEE Transactions on knowledge and data engineering, vol. 22, no. 10, pp. 1345â1359, 2010.
[21] X. Glorot, A. Bordes, and Y. Bengio, âDomain adaptation for large- scale sentiment classiï¬cation: A deep learning approach,â in Pro- ceedings of the 28th international conference on machine learning (ICML-11), 2011, pp. 513â520.
[22] A. S. Razavian, H. Azizpour, J. Sullivan, and S. Carlsson, âCnn features off-the-shelf: An astounding baseline for recognition,â in Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition Workshops, ser. CVPRW â14. Washington, DC, USA: IEEE Computer Society, 2014, pp. 512â519. [Online]. Available: http://dx.doi.org/10.1109/CVPRW.2014.131
[23] F. Larsson, M. Felsberg, and P.-E. Forssen, âCorrelating Fourier descriptors of local patches for road sign recognition,â IET Computer Vision, vol. 5, no. 4, pp. 244â254, 2011.
[24] L. Huang, A. D. Joseph, B. Nelson, B. I. Rubinstein, and J. D. Tygar, âAdversarial machine learning,â in Proceedings of the 4th ACM Workshop on Security and Artiï¬cial Intelligence, ser. AISec â11. New York, NY, USA: ACM, 2011, pp. 43â58. [Online]. Available: http://doi.acm.org/10.1145/2046684.2046692
[25] N. Dalvi, P. Domingos, Mausam, S. Sanghai, and D. Verma, "Adversarial classification," in Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ser. KDD '04. New York, NY, USA: ACM, 2004, pp. 99–108. [Online]. Available: http://doi.acm.org/10.1145/1014052.1014066
[26] D. Lowd and C. Meek, "Adversarial learning," in Proceedings of the Eleventh ACM SIGKDD International Conference on Knowledge Discovery in Data Mining, ser. KDD '05. New York, NY, USA: ACM, 2005, pp. 641–647. [Online]. Available: http://doi.acm.org/10.1145/1081870.1081950
[27] ââ, âGood word attacks on statistical spam ï¬lters.â in Proceedings of the Conference on Email and Anti-Spam (CEAS), 2005.
[28] G. L. Wittel and S. F. Wu, âOn Attacking Statistical Spam Filters,â in Proceedings of the Conference on Email and Anti-Spam (CEAS), Mountain View, CA, USA, 2004.
[29] J. Newsome, B. Karp, and D. Song, âParagraph: Thwarting signature learning by training maliciously,â in Proceedings of the 9th International Conference on Recent Advances in Intrusion Detection, ser. RAIDâ06. Berlin, Heidelberg: Springer-Verlag, 2006, pp. 81â105. [Online]. Available: http://dx.doi.org/10.1007/11856214 5
[30] S. P. Chung and A. K. Mok, âAllergy attack against automatic signa- ture generation,â in Proceedings of the 9th International Conference on Recent Advances in Intrusion Detection, 2006.
[31] ââ, âAdvanced allergy attacks: Does a corpus really help,â in Proceedings of the 10th International Conference on Recent Advances in Intrusion Detection, 2007.
[32] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfel- low, and R. Fergus, âIntriguing properties of neural networks,â 2013.
[33] I. J. Goodfellow, J. Shlens, and C. Szegedy, âExplaining and harness- ing adversarial examples,â 2014.
[34] N. Papernot, P. McDaniel, I. Goodfellow, S. Jha, Z. B. Celik, and A. Swami, âPractical black-box attacks against machine learning,â 2016.
[35] S.-M. Moosavi-Dezfooli, A. Fawzi, O. Fawzi, and P. Frossard, âUni- versal adversarial perturbations,â 2016.
[36] S. Shen, S. Tople, and P. Saxena, âAuror: Defending against poisoning attacks in collaborative deep learning systems,â in Proceedings of the 32Nd Annual Conference on Computer Security Applications, ser. ACSAC â16. New York, NY, USA: ACM, 2016, pp. 508â519. [Online]. Available: http://doi.acm.org/10.1145/2991079.2991125
[37] Y. LeCun, L. Jackel, L. Bottou, C. Cortes, J. S. Denker, H. Drucker, I. Guyon, U. Muller, E. Sackinger, P. Simard et al., âLearning algorithms for classiï¬cation: A comparison on handwritten digit recognition,â Neural networks: the statistical mechanics perspective, vol. 261, p. 276, 1995.
[38] Y. Zhang, P. Liang, and M. J. Wainwright, âConvexiï¬ed convolutional neural networks,â arXiv preprint arXiv:1609.01000, 2016.
[39] S. Ren, K. He, R. Girshick, and J. Sun, âFaster r-cnn: Towards real- time object detection with region proposal networks,â in Advances in neural information processing systems, 2015, pp. 91â99.
[40] A. Møgelmose, D. Liu, and M. M. Trivedi, âTrafï¬c sign detection for us roads: Remaining challenges and a case for tracking,â in Intel- ligent Transportation Systems (ITSC), 2014 IEEE 17th International Conference on.
[41] J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T. Darrell, âDecaf: A deep convolutional activation feature for generic visual recognition,â in International conference on machine learning, 2014, pp. 647â655.
[42] "Transfer learning and fine-tuning convolutional neural networks," CS231n Lecture Notes; http://cs231n.github.io/transfer-learning/.
[43] âCaffe Model Zoo,â https://github.com/BVLC/caffe/wiki/Model-Zoo.
[44] Y. Zhu, C. Zhang, D. Zhou, X. Wang, X. Bai, and W. Liu, âTrafï¬c sign detection and recognition using fully convolutional network guided proposals,â Neurocomputing, vol. 214, pp. 758 â 766, 2016. [Online]. Available: http://www.sciencedirect.com/science/article/pii/ S092523121630741X
[45] S. Ruder, âTransfer learning - machine learningâs next frontier,â http: //ruder.io/transfer-learning/.
[46] F. Yu, "A comprehensive guide to fine-tuning deep learning models in Keras," https://flyyufelix.github.io/2016/10/03/fine-tuning-in-keras-part1.html.
[47] âNetwork in Network Imagenet Model,â https://gist.github.com/ mavenlin/d802a5849de39225bcc6.
[48] "Caffe models in TensorFlow," https://github.com/ethereon/caffe-tensorflow.
[49] âCaffe to Keras converter,â https://github.com/qxcv/caffe2keras.
[50] âConvert models from Caffe to Theano format,â https://github.com/ kencoken/caffe-model-convert.
[51] Apple Inc., "Converting trained models to Core ML," https://developer.apple.com/documentation/coreml/converting_trained_models_to_core_ml.
[52] âConvert Caffe model to Mxnet format,â https://github.com/apache/ incubator-mxnet/tree/master/tools/caffe converter.
[53] âcaffe2neon,â https://github.com/NervanaSystems/caffe2neon. | {
"id": "1609.01000"
} |
1708.06832 | Learning Anytime Predictions in Neural Networks via Adaptive Loss Balancing | This work considers the trade-off between accuracy and test-time
computational cost of deep neural networks (DNNs) via \emph{anytime}
predictions from auxiliary predictions. Specifically, we optimize auxiliary
losses jointly in an \emph{adaptive} weighted sum, where the weights are
inversely proportional to average of each loss. Intuitively, this balances the
losses to have the same scale. We demonstrate theoretical considerations that
motivate this approach from multiple viewpoints, including connecting it to
optimizing the geometric mean of the expectation of each loss, an objective
that ignores the scale of losses. Experimentally, the adaptive weights induce
more competitive anytime predictions on multiple recognition data-sets and
models than non-adaptive approaches including weighing all losses equally. In
particular, anytime neural networks (ANNs) can achieve the same accuracy faster
using adaptive weights on a small network than using static constant weights on
a large one. For problems with high performance saturation, we also show a
sequence of exponentially deepening ANNs can achieve near-optimal anytime
results at any budget, at the cost of a const fraction of extra computation. | http://arxiv.org/pdf/1708.06832 | Hanzhang Hu, Debadeepta Dey, Martial Hebert, J. Andrew Bagnell | cs.LG, cs.AI | null | null | cs.LG | 20170822 | 20180525 | arXiv:1708.06832v3 [cs.LG] 25 May 2018
# Learning Anytime Predictions in Neural Networks via Adaptive Loss Balancing
Hanzhang Hu School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213 hanzhang@cs.cmu.edu
Debadeepta Dey Microsoft Research Redmond, WA 98052 dedey@microsoft.com
Martial Hebert School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213 hebert@cs.cmu.edu
J. Andrew Bagnell School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213 dbagnell@cs.cmu.edu
# Abstract
This work considers the trade-off between accuracy and test-time computational cost of deep neural networks (DNNs) via anytime predictions from auxiliary predictions. Specifically, we optimize auxiliary losses jointly in an adaptive weighted sum, where the weights are inversely proportional to average of each loss. Intuitively, this balances the losses to have the same scale. We demonstrate theoretical considerations that motivate this approach from multiple viewpoints, including connecting it to optimizing the geometric mean of the expectation of each loss, an objective that ignores the scale of losses. Experimentally, the adaptive weights induce more competitive anytime predictions on multiple recognition data-sets and models than non-adaptive approaches including weighing all losses equally. In particular, anytime neural networks (ANNs) can achieve the same accuracy faster using adaptive weights on a small network than using static constant weights on a large one. For problems with high performance saturation, we also show a sequence of exponentially deepening ANNs can achieve near-optimal anytime results at any budget, at the cost of a const fraction of extra computation.
# Introduction
Recent years have seen advancement in visual recognition tasks by increasingly accurate convolutional neural networks, from AlexNet (Krizhevsky et al., 2012) and VGG (Simonyan & Zisserman, 2015), to ResNet (He et al., 2016), ResNeXt (Xie et al., 2017), and DenseNet (Huang et al., 2017b). As models become more accurate and computationally expensive, it becomes more difficult for applications to choose between slow predictors with high accuracy and fast predictors with low accuracy. Some applications also desire multiple trade-offs between computation and accuracy, because they have computational budgets that may vary at test time. E.g., web servers for facial recognition or spam filtering may have higher load during the afternoon than at midnight. Autonomous vehicles need faster object detection when moving rapidly than when stationary. Furthermore, real-time and latency sensitive applications may desire fast predictions on easy samples and slow but accurate predictions on difficult ones.
An anytime predictor (Horvitz, 1987; Boddy & Dean, 1989; Zilberstein, 1996; Grubb & Bagnell, 2012; Huang et al., 2017a) can automatically trade off between computation and accuracy. For each test sample, an anytime predictor produces a fast and crude initial prediction and continues to
Preprint. Work in progress.
[Figure 1(a): lowest error at each budget, comparing a small ANN with no final gap (ours) against a large ANN with a 10% relative final gap; x-axis: budget in FLOPS. (b): schematic of an anytime network with feature transformations, auxiliary predictions, and losses.]
Figure 1: (a) The common ANN training strategy increases final errors from the optimal (green vs. blue), which decreases exponentially slowly. By learning to focus more on the final auxiliary losses, the proposed adaptive loss weights make a small ANN (orange) outperform a large one (green) that has non-adaptive weights. (b) Anytime neural networks contain auxiliary predictions and losses, $\hat{y}_i$ and $\ell_i$, for intermediate feature unit $f_i$.
reï¬ne it as budget allows, so that at any test-time budget, the anytime predictor has a valid result for the sample, and the more budget is spent, the better the prediction. Anytime predictors are different from cascaded predictors (Viola & Jones, 2001; Xu et al., 2014; Cai et al., 2015; Bolukbasi et al., 2017; Guan et al., 2017) for budgeted prediction, which aim to minimize average test-time computational cost without sacriï¬cing average accuracy: a different task (with relation to anytime prediction). Cascades achieve this by early exiting on easy samples to save computation for difï¬cult ones, but cascades cannot incrementally improve individual samples after an exit. Furthermore, early exit policy of cascades can be combined with existing anytime predictors (Bolukbasi et al., 2017; Guan et al., 2017). Hence, we consider cascades to be orthogonal to anytime predictions.
This work studies how to convert well-known DNN architectures to produce competitive anytime predictions. We form anytime neural networks (ANNs) by appending auxiliary predictions and losses to DNNs, as we will detail in Sec. 3 and Fig. 1b. Inference-time prediction then can be stopped at the latest prediction layer that is within the budget. Note that this work deals with the case where it is not known apriori where the interrupt during inference time will occur. We deï¬ne the optimal at each auxiliary loss as the result from training the ANN only for that loss to convergence. Then our objective is to have near-optimal ï¬nal predictions and competitive early ones. Near-optimal ï¬nal accuracy is imperative for anytime predictors, because, as demonstrated in Fig. 1a, accuracy gains are often exponentially more expensive as model sizes grow, so that reducing 1% error rate could take 50% extra computation. Unfortunately, existing anytime predictors often optimize the anytime losses in static weighted sums (Lee et al., 2015; Zamir et al., 2017; Huang et al., 2017a) that poorly optimize ï¬nal predictions, as we will show in Sec. 3 and Sec. 5.
Instead, we optimize the losses in an adaptive weighted sum, where the weight of a loss is inversely proportional to the empirical mean of the loss on the training set. Intuitively, this normalizes losses to have the same scale, so that the optimization leads each loss to be about the same relative to its optimal. We provide multiple theoretical considerations to motivate such weights. First of all, when the losses are mean square errors, our approach is maximizing the likelihood of a model where the prediction targets have Gaussian noises. Secondly, inspired by the maximum likelihood estimation, we optimize the model parameters and the loss weights jointly, with log-barriers on the weights to avoid the trivial solution of zero weights. Finally, we ï¬nd the joint optimization equivalent to optimizing the geometric mean of the expected training losses, an objective that treats the relative improvement of each loss equally. Empirically, we show on multiple models and visual recognition data-sets that the proposed adaptive weights outperform natural, non-adaptive weighting schemes as follows. We compare small ANNs using our adaptive weights against ANNs that are 50 â¼ 100% larger but use non-adaptive weights. The small ANNs can reach the same ï¬nal accuracy as the larger ones, and reach each accuracy level faster.
Early and late accuracy in an ANN are often anti-correlated (e.g., Fig. 7 in (Huang et al., 2017a) shows ANNs with better final predictions have worse early ones). To mitigate this fundamental issue we propose to assemble ANNs of exponentially increasing depths. If ANNs are near-optimal in a late fraction of their layers, the exponential ensemble only pays a constant fraction of additional computation to be near-optimal at every test-time budget. In addition, exponential ensembles outperform
linear ensembles of networks, which are commonly used baselines for existing works (Zamir et al., 2017; Huang et al., 2017a). In summary, our contributions are:
⢠We derive an adaptive weight scheme for training losses in ANNs from multiple theoretical considerations, and show that experimentally this scheme achieves near-optimal ï¬nal accuracy and competitive anytime ones on multiple data-sets and models.
⢠We assemble ANNs of exponentially increasing depths to achieve near-optimal anytime predic- tions at every budget at the cost of a constant fraction of additional consumed budget.
# 2 Related Works
Meta-algorithms for anytime and budgeted prediction. Anytime and budgeted prediction has a rich history in learning literature. (Weinberger et al., 2009; Xu et al., 2012, 2013) sequentially generate features to empower the final predictor. (Reyzin, 2011; Grubb & Bagnell, 2012; Hu et al., 2016) apply boosting and greedy methods to order feature and predictor computation. (Karayev et al., 2012; Odena et al., 2017) form Markov Decision Processes for computation of weak predictors and features, and learn policies to order them. However, these meta-algorithms are not easily compatible with complex and accurate predictors like DNNs, because the anytime predictions without DNNs are inaccurate, and there are no intermediate results during the computation of the DNNs. Cascade designs for budgeted prediction (Viola & Jones, 2001; Lefakis & Fleuret, 2010; Chen et al., 2012; Xu et al., 2014; Cai et al., 2015; Nan & Saligrama, 2017; Bolukbasi et al., 2017; Guan et al., 2017) reduce the average test-time computation by early exiting on easy samples and saving computation for difficult ones. As cascades build upon existing anytime predictors, or combine multiple predictors, they are orthogonal to learning ANNs end-to-end.
Neural networks with early auxiliary predictions. Multiple works have addressed training DNNs with early auxiliary predictions for various purposes. (Lee et al., 2015; Szegedy et al., 2017; Zhao et al., 2017; Larsson et al., 2017) use them to regularize the networks for faster and better convergence. (Bengio et al., 2009; Zamir et al., 2017) set the auxiliary predictions from easy to hard for curriculum learning. (Xie & Tu, 2015; Chen & Koltun, 2017) make pixel level predictions in images, and find learning early predictions in coarse scales also improve the fine resolution predictions. (Huang et al., 2017a) shows the crucial importance of maintaining multi-scale features for high quality early classifications. The above works use manually-tuned static weights to combine the auxiliary losses, or change the weights only once (Chen & Koltun, 2017). This work proposes adaptive weights to balance the losses to the same scales online, and provides multiple theoretical motivations. We empirically show adaptive losses induce better ANNs on multiple models, including the state-of-the-art anytime predictor for image recognition, MSDNet (Huang et al., 2017a).
Model compression. Many works have studied how to compress neural networks. (Li et al., 2017; Liu et al., 2017) prune network weights and connections. (Hubara et al., 2016; Rastegari et al., 2016; Iandola et al., 2016) quantize weights within networks to reduce computation and memory footprint. (Wang et al., 2017; Veit & Belongie, 2017) dynamically skip network computation based on samples. (Ba & Caruana, 2014; Hinton et al., 2014) transfer knowledge of deep networks into shallow ones by changing the training target of shallow networks. These works are orthogonal to ours, because they train a separate model for each trade-off between computation and accuracy, but we train a single model to handle all possible trade-offs.
# 3 Optimizing Anytime Neural Network Performance
As illustrated in Fig. 1b, a feed-forward network consists of a sequence of transformations f1, ..., fL of feature maps. Starting with the input feature map x0, each subsequent feature map is generated by xi = fi(xiâ1). Typical DNNs use the ï¬nal feature map xL to produce predictions, and hence require the completion of the whole network for results. Anytime neural networks (ANNs) instead introduce auxiliary predictions and losses using the intermediate feature maps x1, ..., xLâ1, and thus, have early predictions that are improving with computation.
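A bare-bones sketch of this structure is given below. It is our illustration, mirroring Fig. 1b rather than any released code.

```python
# Sketch (ours) of an anytime neural network: every feature transformation f_i
# is followed by an auxiliary predictor g_i, yielding a prediction per block.
import torch.nn as nn

class AnytimeNet(nn.Module):
    def __init__(self, blocks, heads):
        super().__init__()
        self.blocks = nn.ModuleList(blocks)   # f_1, ..., f_L
        self.heads = nn.ModuleList(heads)     # g_1, ..., g_L

    def forward(self, x):
        preds = []
        for f, g in zip(self.blocks, self.heads):
            x = f(x)                  # x_i = f_i(x_{i-1})
            preds.append(g(x))        # auxiliary prediction from x_i
        return preds
```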
Weighted sum objective. Let the intermediate predictions be $\hat{y}_i = g_i(x_i)$ for some function $g_i$, and let the corresponding expected loss be $\ell_i = \mathbb{E}_{(x,y)\sim\mathcal{D}}[\ell(y, \hat{y}_i)]$, where $\mathcal{D}$ is the distribution of the data, and $\ell$ is some loss such as cross-entropy. Let $\theta$ be the parameter of the ANN, and define the optimal loss at prediction $\hat{y}_i$ to be $\ell_i^* = \min_\theta \ell_i(\theta)$. Then the goal of anytime prediction is to seek a universal $\theta^* \in \cap_{i=1}^{L}\{\theta' : \theta' = \arg\min_\theta \ell_i(\theta)\}$. Such an ideal $\theta^*$ does not exist in general as this is a multi-objective optimization, which only has a Pareto front, a set containing all solutions such that
(a) Relative Percentage Increase in Training Loss vs. depths (lower is better) (b) Ensemble of exponentially deepening anytime neural network (EANN)
Figure 2: (a) CONST scheme is increasingly worse than the optimal at deep layers. AdaLoss performs about equally well on all layers in comparison to the OPT. (b) EANN computes its ANNs in order of their depths. An anytime result is used if it is better than all previous ones on a validation set (layers in light blue).
improving one $\ell_i$ necessitates degrading others. Finding all solutions in the Pareto front for ANNs is not practical or useful, since this requires training multiple models, but each ANN only runs one. Hence, following previous works on anytime models (Lee et al., 2015; Zamir et al., 2017; Huang et al., 2017a), we optimize the losses in a weighted sum $\min_\theta \sum_i B_i \ell_i(\theta)$, where $B_i$ is the weight of the loss $\ell_i$. We call the choices of $B_i$ weight schemes.
Static weight schemes. Previous works often use static weight schemes as part of their formulation. Lee et al. (2015); Xie & Tu (2015); Huang et al. (2017a) use CONST scheme that sets Bi = 1 for all i. Zamir et al. (2017) use LINEAR scheme that sets B1 to BL to linearly increase from 0.25 to 1. However, as we will show in Sec. 5.2, these static schemes not only cannot adjust weights in a data and model-dependent manner, but also may signiï¬cantly degrade predictions at later layers.
Qualitative weight scheme comparison. Before we formally introduce our proposed adaptive weights, we first shed light on how existing static weights suffer. We experiment with a ResNet of 15 basic residual blocks on the CIFAR100 (Krizhevsky, 2009) data-set (see Sec. 5 for data-set details). An anytime predictor is attached to each residual block, and we estimate the optimal performance (OPT) in training cross entropy of predictor $i$ by training a network that has weight only on $\ell_i$ to convergence. Then for each weight scheme we train an ANN to measure the relative increase in training loss at each depth $i$ from the OPT. In Fig. 2a, we observe that the intuitive CONST scheme has high relative losses in late layers. This indicates that there is not enough weight in the late layers, even though the losses have the same $B_i$. We also note that balancing the weights is non-trivial. For instance, if we put half of the total weights in the final layer and distribute the other half evenly, we get the "Half-End" scheme. As expected, the final loss is improved, but this is at the cost of significant increases of early training losses. In contrast, the adaptive weight scheme that we propose next (AdaLoss) achieves roughly even relative increases in training losses automatically, and is much better than the CONST scheme in the late layers.
Adaptive Loss Balancing (AdaLoss). Given all losses are of the same form (cross-entropy), it may be surprising that better performance is achieved with differing weights. Because early features typically have less predictive power than later ones, early losses are naturally on a larger scale and possess larger gradients. Hence, if we weigh losses equally, early losses and gradients often dominate later ones, and the optimization becomes focused on the early losses. To automatically balance the weights among the losses of different scales, we propose an adaptive loss balancing scheme (AdaLoss). Specifically, we keep an exponential average $\bar{\ell}_i$ of each loss during training, and set $B_i \propto \frac{1}{\bar{\ell}_i}$. This is inspired by (Chen & Koltun, 2017), which scales the losses to the same scale only once during training, and provides a brief intuitive argument: the adaptive weights set the losses to be on the same scale. We next present multiple theoretical justifications for AdaLoss.
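A minimal sketch of AdaLoss in PyTorch-style Python is given below. It is our illustration; the momentum value and the class interface are assumptions, since the text only specifies that $B_i$ is inversely proportional to an exponential moving average of the loss.

```python
# Sketch (ours) of AdaLoss: weigh each auxiliary loss by the inverse of its
# exponential moving average so all losses are optimized on a similar scale.
import torch

class AdaLoss:
    def __init__(self, num_losses, momentum=0.9, eps=1e-8):
        self.avg = torch.ones(num_losses)   # running averages of each loss
        self.momentum = momentum
        self.eps = eps

    def __call__(self, losses):             # losses: list of scalar loss tensors
        total = 0.0
        for i, loss in enumerate(losses):
            self.avg[i] = self.momentum * self.avg[i] + (1 - self.momentum) * loss.item()
            weight = 1.0 / (self.avg[i] + self.eps)   # B_i proportional to 1 / avg
            total = total + weight * loss
        return total

# adaloss = AdaLoss(num_losses=L)
# total_loss = adaloss(per_head_losses); total_loss.backward()
```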
Before considering general cases, we first consider a simple example, where the loss function $\ell(y, \hat{y}) = \|y - \hat{y}\|^2$ is the square loss. For this example, we model each $y|x$ to be sampled from the multiplication of $L$ independent Gaussian distributions, $\mathcal{N}(\hat{y}_i, \sigma_i^2 I)$ for $i = 1, \dots, L$, where $\hat{y}_i(x; \theta)$ is the $i$-th prediction, and $\sigma_i^2 \in \mathbb{R}^+$, i.e., $\Pr(y|x; \theta, \sigma_1^2, \dots, \sigma_L^2) \propto \prod_{i=1}^{L} \frac{1}{\sqrt{\sigma_i^2}} \exp\big(-\frac{\|y - \hat{y}_i\|^2}{2\sigma_i^2}\big)$. Then
we compute the empirical expected log-likelihood for a maximum likelihood estimator (MLE):
$$\hat{E}[\ln(\Pr(y|x))] \;\propto\; \hat{E}\Big[\sum_{i=1}^{L}\Big(-\frac{\|y-\hat{y}_i\|^2}{\sigma_i^2} - \ln\sigma_i^2\Big)\Big] \;=\; \sum_{i=1}^{L}\Big(-\frac{\hat{\ell}_i}{\sigma_i^2} - \ln\sigma_i^2\Big), \qquad (1)$$
where $\hat{E}$ is averaging over samples, and $\hat{\ell}_i$ is the empirical estimate of $\ell_i$. If we fix $\theta$ and optimize over $\sigma_i^2$, we get $\sigma_i^2 = \hat{\ell}_i$. As computing the empirical means is expensive over large data-sets, AdaLoss replaces $\hat{\ell}_i$ with $\bar{\ell}_i$, the exponential moving average of the losses, and sets $B_i \propto \bar{\ell}_i^{-1} \approx \sigma_i^{-2}$, so as to solve the MLE online by jointly updating $\theta$ and $B_i$. We note that the naturally appearing $\ln\sigma_i^2$ terms in Eq. 1 are log-barriers preventing $B_i = 0$. Inspired by this observation, we form the following joint optimization over $\theta$ and $B_i$ for general losses without probability models:
$$\min_{\theta, B_1, \dots, B_L} \; \sum_{i=1}^{L}\big(B_i\hat{\ell}_i(\theta) - \lambda\ln B_i\big), \qquad (2)$$
where $\lambda > 0$ is a hyper-parameter to balance between the log-barriers and weighted losses. Under the optimal condition, $B_i = \frac{\lambda}{\hat{\ell}_i(\theta)}$. AdaLoss estimates this with $B_i \propto \hat{\ell}_i(\theta)^{-1}$. We can also eliminate $B_i$ from Eq. 2 under the optimal condition, and we transform Eq. 2 to the following problem:
$$\min_{\theta} \; \sum_{i=1}^{L} \ln\hat{\ell}_i(\theta). \qquad (3)$$
This is equivalent to minimizing the geometric mean of the expected training losses, and it differs from minimizing the expected geometric mean of losses, as $\ln$ and expectation are not commutable. Eq. 3 discards any constant scaling of losses automatically as constant offsets, so that the scale difference between the early and late losses is automatically reconciled. The geometric mean is also known as the canonical mean to measure multiple positive quantities of various scales. To derive AdaLoss directly from Eq. 3, we note that the gradient of the objective in Eq. 3 is $\sum_{i=1}^{L}\frac{\nabla_\theta\hat{\ell}_i(\theta)}{\hat{\ell}_i(\theta)}$, and gradient descent combined with AdaLoss estimates this gradient with $\sum_{i=1}^{L}\frac{\nabla_\theta\hat{\ell}_i(\theta)}{\bar{\ell}_i}$.
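Spelling out the first-order condition makes both of the claims above immediate:

```latex
% First-order condition of Eq. (2) in B_i:
\frac{\partial}{\partial B_i}\left(B_i\hat{\ell}_i(\theta) - \lambda\ln B_i\right)
  = \hat{\ell}_i(\theta) - \frac{\lambda}{B_i} = 0
  \;\Longrightarrow\; B_i = \frac{\lambda}{\hat{\ell}_i(\theta)} .
% Substituting this B_i back into Eq. (2) gives
\sum_{i=1}^{L}\left(\lambda - \lambda\ln\lambda + \lambda\ln\hat{\ell}_i(\theta)\right),
% which equals Eq. (3) up to the additive constant L(\lambda - \lambda\ln\lambda)
% and the positive factor \lambda.
```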
# 4 Sequence of Exponentially Deepening Anytime Neural Networks (EANN)
In practice, we often observe ANNs using AdaLoss to be much more competitive in their later half than the early half on validation sets, such as in Table 3a of Sec. 5.2. Fortunately, we can leverage this effect to form competitive anytime predictors at every budget, with a constant fraction of additional computation. Specifically, we assemble ANNs whose depths grow exponentially. Each ANN only starts computing if the smaller ones are finished, and its predictions are used if they are better than the best existing ones in validation. We call this ensemble an EANN, as illustrated in Fig. 2b. An EANN only delays the computation of any large ANN by at most a constant fraction of computation, because the earlier networks are exponentially smaller. Hence, if each ANN is near-optimal in later predictions, then we can achieve near-optimal accuracy at any test-time interruption, with the extra computation. Formally, the following proposition characterizes the exponential base and the increased computational cost. Proposition 4.1. Let $b > 1$. Assume for any $L$, any ANN of depth $L$ has competitive anytime prediction at depth $i > L/b$ against the optimal of depth $i$. Then after $B$ layers of computation, EANN produces anytime predictions that are competitive against the optimal of depth $B/C$ for some $C > 1$, such that $E_B[C] \le 1 - \frac{1}{2b} + \frac{1+\ln(b)}{b-1}$ and $\sup_B C = 2 + \frac{1}{b-1}$.
This proposition says that an EANN is competitive at any budget $B$ against the optimal of the cost $B/C$. Furthermore, the stronger each anytime model is, i.e., the larger $b$ becomes, the smaller the computation inflation, $C$, is: as $b$ approaches $\infty$, $\sup_B C$ shrinks to 2, and $E[C]$ shrinks to 1. Moreover, if we have $M$ number of parallel workers instead of one, we can speed up EANNs by computing ANNs in parallel in a first-in-first-out schedule, so that we effectively increase the constant $b$ to $bM$ for computing $C$. It is also worth noting that if we form the sequence using regular networks instead of ANNs, then we will lose the ability to output frequently, since at budget $B$, we only produce $\Theta(\log(B))$ intermediate predictions instead of the $\Theta(B)$ predictions in an EANN. We will further have a larger cost inflation, $C$, such that $\sup_B C \ge 4$ and $E[C] \ge 1.5 + \sqrt{2} \approx 2.91$, so that the average cost inflation is at least about 2.91. We defer the proofs to the appendix.
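In pseudocode, the inference-time schedule of an EANN looks roughly as follows. This is a sketch under our own interface assumptions: `anytime_forward` is assumed to yield (prediction, cost) pairs, and `keep_mask` records which auxiliary predictors outperformed all earlier ones on a validation set (the light-blue layers in Fig. 2b).

```python
# Sketch (ours) of EANN anytime inference: run ANNs in order of exponentially
# increasing depth and keep only predictions that improved on validation.
def eann_predict(anytime_nets, x, budget, keep_mask):
    """anytime_nets: ANNs ordered by depth; keep_mask[m][i] is True if the i-th
    auxiliary predictor of net m beat all earlier predictors on validation."""
    spent, best = 0, None
    for m, net in enumerate(anytime_nets):
        for i, (pred, cost) in enumerate(net.anytime_forward(x)):
            spent += cost
            if spent > budget:
                return best          # interrupted: return the best result so far
            if keep_mask[m][i]:
                best = pred
    return best
```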
(a) Relative Error Percentage Increases from the OPT

          1/4      1/2      3/4      1
OPT       0.00     0.00     0.00     0.00
CONST     15.07    16.40    18.76    18.90
LINEAR    25.67    13.02    12.97    12.65
ADALOSS   32.99    9.97     3.96     2.73

(b) Test error rates (%) at fractions of the total cost

                                 1/4      1/2      3/4      1
ResANN50+CONST                   54.34    35.61    27.23    25.14
ResANN50+AdaLoss                 54.98    34.92    26.59    24.42
DenseANN169+CONST                48.15    45.00    29.09    25.60
DenseANN169+AdaLoss              47.17    44.64    28.22    24.07
MSDNet38 (Huang et al., 2017a)   33.9     28.0     25.7     24.3
MSDNet38+AdaLoss                 35.75    28.04    25.82    23.99
Figure 3: (a) Average relative percentage increase in error from OPT on CIFAR and SVHN at 1/4, 1/2, 3/4 and 1 of the total cost. E.g., the bottom right entry means that if OPT has a 10% ï¬nal error rate, then AdaLoss has about 10.27%. (b) Test error rates at different fraction of the total costs on ResANN50 and DenseANN169.
# 5 Experiments
We list the key questions that our experiments aim to answer.
• How do anytime predictions trained with adaptive weights compare against those trained with static constant weights (over different architectures)? (Sec. 5.2)
• How do underlying DNN architectures affect ANNs? (Sec. 5.2)
• How can sub-par early predictions in ANNs be mitigated by ANN ensembles? (Sec. 5.3)
• How does data-set difficulty affect the adaptive weights scheme? (Sec. 5.4)
# 5.1 Data-sets and Training Details
Data-sets. We experiment on CIFAR10, CIFAR100 (Krizhevsky, 2009), SVHN (Netzer et al., 2011)1 and ILSVRC (Russakovsky et al., 2015)2.
Training details. We optimize the models using stochastic gradient descent, with initial learning rate of 0.1, momentum of 0.9 and a weight decay of 1e-4. On CIFAR and SVHN, we divide the learning rate by 10 at 1/2 and 3/4 of the total epochs. We train for 300 epochs on CIFAR and 60 epochs on SVHN. On ILSVRC, we train for 90 epochs, and divide the learning rate by 10 at epoch 30 and 60. We evaluate test error using single-crop.
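As a concrete reference, the CIFAR/SVHN recipe above corresponds roughly to the following PyTorch setup. This is a sketch; only the hyper-parameters stated in the text are taken from the paper.

```python
# Sketch (ours) of the optimization recipe: SGD with momentum 0.9, weight decay
# 1e-4, and the learning rate divided by 10 at 1/2 and 3/4 of the total epochs.
import torch

def make_optimizer(model, epochs=300, lr=0.1):
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9, weight_decay=1e-4)
    sched = torch.optim.lr_scheduler.MultiStepLR(
        opt, milestones=[epochs // 2, 3 * epochs // 4], gamma=0.1)
    return opt, sched
```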
Base models. We compare our proposed AdaLoss weights against the intuitive CONST weights. On CIFAR and SVHN, we also compare AdaLoss against LINEAR and OPT, deï¬ned in Sec. 3. We evaluate the weights on multiple models including ResNet (He et al., 2016) and DenseNet (Huang et al., 2017b), and MSDNet (Huang et al., 2017a). For ResNet and DenseNet, we augment them with auxiliary predictors and losses, and call the resulting models ResANN and DenseANN, and defer the details of these models to the appendix Sec. C.
# 5.2 Weight Scheme Comparisons
AdaLoss vs. CONST on the same models. Table 3a presents the average relative test error rate increase from OPT on 12 ResANNs on CIFAR10, CIFAR100 and SVHN3. As training an OPT for each depth is too expensive, we instead report the average relative comparison at 1/4, 1/2, 3/4, and 1 of the total ANN costs. We observe that the CONST scheme makes 15 â¼ 18% more errors than the OPT, and the relative gap widens at later layers. The LINEAR scheme also has about 13% relative gap in later layers. In contrast, AdaLoss enjoys small performance gaps in the later half of layers.
On ILSVRC, we compare AdaLoss against CONST on ResANN50, DenseANN169, and MSD- Net38, which have similar ï¬nal errors and total computational costs (See Fig. 4f). In Table 3b, we
1Both CIFAR data-sets consist of 32x32 colored images. CIFAR10 and CIFAR100 have 10 and 100 classes, and each have 50000 training and 10000 testing images. We held out the last 5000 training samples in CIFAR10 and CIFAR100 for validation; the same parameters are then used in other models. We adopt the standard augmentation from Lee et al. (2015); He et al. (2016). SVHN contains around 600000 training and around 26032 testing 32x32 images of numeric digits from the Google Street Views. We adopt the same pad-and-crop augmentations of CIFAR for SVHN, and also add Gaussian blur.
2 ILSVRC2012 (Russakovsky et al., 2015) is a visual recognition data-set containing around 1.2 million natural and 50000 validation images for 1000 classes. We report the top-1 error rates on the validation set using a single-crop of size 224x224, after scaling the smaller side of the image to 256, following (He et al., 2016).
3The 12 models are named by (n, c) drawn from {7, 9, 13, 17, 25} à {16, 32} and {(9, 64), (9, 128)}, where n represents the number of residual units in each of the three blocks of the network, and c is the ï¬lter size of the ï¬rst convolution.
(a) ResANNs on CIFAR10 (b) ResANNs on CIFAR100 (c) ResANNs on SVHN (d) ResANNs on ILSVRC (e) MSDNet on ILSVRC (f) ANNs comparison on ILSVRC
Figure 4: (a-e) Comparing small networks with AdaLoss versus big ones using CONST. With AdaLoss, the small networks achieve the same accuracy levels faster than large networks with CONST. (f) ANN performance is mostly decided by the underlying models, but AdaLoss is beneficial regardless of the model.
observe the trade-offs between early and late accuracy on ResANN50 and MSDNet38. Furthermore, DenseANN169 performs uniformly better with AdaLoss than with CONST.
Since comparing the weight schemes requires evaluating ANNs at multiple budget limits, and AdaLoss and CONST outperform each other at a signiï¬cant fraction of depths on most of our experiments, we consider the two schemes incomparable on the same model. However, our next experiments will show later predictions to be vastly more important than the early ones.
Small networks with AdaLoss vs. large ones with CONST. Practitioners may be interested in finding the smallest anytime models that can reach certain final accuracy thresholds, and unfortunately, the accuracy gain is often exponentially more costly as the accuracy saturates. To showcase the importance of this common phenomenon and its effect on choices of weight schemes, we compare ANNs using AdaLoss against ANNs of about twice the cost but using CONST. On CIFAR100, we average the relative comparison of six such pairs of ResANNs4 in Fig. 4b. E.g., the location (0.5, 200) in the plot means using half the computation of the small ANN, and having 200% extra errors relative to it. We observe small ANNs with AdaLoss to achieve the same accuracy levels faster than large ones with CONST, because CONST neglects the late predictions of large networks, and early predictions of large networks are not as accurate as those of small ones. The same comparisons using ResANNs result in similar results on CIFAR10 and SVHN (Fig. 4a and 4c). We also conduct similar comparisons on ILSVRC using ResANNs and MSDNets, as shown in Fig. 4d and Fig. 4e, and observe that the smaller networks with AdaLoss can achieve accuracy levels faster than the large ones with CONST, without sacrificing much final accuracy. For instance, MSDNet (Huang et al., 2017a) is the state-of-the-art anytime predictor and is specially designed for anytime predictions, but by simply switching from their CONST scheme to AdaLoss, we significantly improve MSDNet32, which costs about 4.0e9 FLOPS (details in the appendix), to be about as accurate as the published result of MSDNet38, which has 6.6e9 total FLOPS in convolutions, and 72e6 parameters.
Various base networks on ILSVRC. We compare ResANNs, DenseANNs and MSDNets that have a final error rate of near 24% in Fig. 4f, and observe that the anytime performance is mostly decided by the specific underlying model. Particularly, MSDNets are more cost-effective than DenseANNs, which in turn are better than ResANNs. However, AdaLoss is helpful regardless of
4AdaLoss takes (n, c) from {7, 9, 13} × {16, 32}, and CONST takes (n, c) from {13, 17, 25} × {16, 32}.
(a) EANNs on CIFAR100 (b) EANN on ILSVRC (c) Data-sets weights change AdaLoss
Figure 5: (a) EANN performs better if the ANNs use AdaLoss instead of CONST. (b) EANN outperforms linear ensembles of DNNs on ILSVRC. (c) The learned adaptive weights of the same model on three data-sets.
underlying model. Both ResANN50 and DenseANN169 see improvements switching from CONST to AdaLoss, which is also shown in Table 3b. Thanks to AdaLoss, DenseANN169 achieves the same final error using similar FLOPS as the original published results of MSDNet38 (Huang et al., 2017a). This suggests that Huang et al. (2017a) improve over DenseANNs by having better early predictions without sacrificing the final cost efficiency via impressive architecture insight. Our AdaLoss brings a complementary improvement to MSDNets, as it enables smaller MSDNets to reach the final error rates of bigger MSDNets, while having similar or better early predictions, as shown in the previous paragraph and Fig. 4f.
# 5.3 EANN: Closing Early Performance Gaps by Delaying Final Predictions.
EANNs on CIFAR100. In Fig. 5a, we assemble ResANNs to form EANNs5 on CIFAR100 and make three observations. First, EANNs are better than the ANN in early computation, because the ensembles dedicate early predictions to small networks. Even though CONST has the best early predictions as in Table 3a, it is still better to deploy small networks. Second, because the final prediction of each network is kept for a long period, AdaLoss leads to significantly better EANNs than CONST does, thanks to the superior final predictions from AdaLoss. Finally, though EANNs delay computation of large networks, they actually appear closer to the OPT, because of accuracy saturation. Hence, EANNs should be considered when performance saturation is severe.
EANN on ILSVRC. Huang et al. (2017a) and Zamir et al. (2017) use ensembles of networks of linearly growing sizes as baseline anytime predictors. However, in Fig. 5b, an EANN using ResANNs of depths 26, 50 and 101 outperforms the linear ensembles of ResNets and DenseNets significantly on ILSVRC. In particular, this drastically reduces the gap between ensembles and the state-of-the-art anytime predictor MSDNet (Huang et al., 2017a). Comparing ResANN 50 and the EANN, we note that the EANN achieves better early accuracy but delays final predictions. As the accuracy is not saturated by ResANN 26, the delay appears significant. Hence, EANNs may not be the best when the performance is not saturated or when the constant fraction of extra cost is critical.
# 5.4 Data-set Difficulty versus Adaptive Weights
In Fig. 5c, we plot the final AdaLoss weights of the same ResANN model (25, 32) on CIFAR10, CIFAR100, and SVHN, in order to study the effects of the data-sets on the weights. We observe that from the easiest data-set, SVHN, to the hardest, CIFAR100, the weights are more concentrated on the final layers. This suggests that AdaLoss can automatically decide that harder data-sets need more concentrated final weights to have near-optimal final performance, whereas on easy data-sets, more efforts are directed to early predictions. Hence, AdaLoss weights may provide information for practitioners to design and choose models based on data-sets.
# 6 Conclusion and Discussion
This work devises simple adaptive weights, AdaLoss, for training anytime predictions in DNNs. We provide multiple theoretical motivations for such weights, and show experimentally that adaptive
5The ResANNs have c = 32 and n = 7, 13, 25, so that they form an EANN with an exponential base b ≈ 2. By Proposition 4.1, the average cost inflation is E[C] ≈ 2.44 for b = 2, so that the EANN should compete against the OPT of n = 20, using 2.44 times the original cost.
weights enable small ANNs to outperform large ANNs with the commonly used non-adaptive constant weights. Future works on adaptive weights include examining AdaLoss for multi-task problems and investigating its "first-order" variants that normalize the losses by individual gradient norms to address unknown offsets of losses as well as the unknown scales. We also note that this work can be combined with orthogonal works in early-exit budgeted predictions (Guan et al., 2017; Bolukbasi et al., 2017) for saving average test computation.
# Acknowledgements
This work was conducted in part through collaborative participation in the Robotics Consortium sponsored by the U.S. Army Research Laboratory under the Collaborative Technology Alliance Program, Cooperative Agreement W911NF-10-2-0016. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.
# References
Ba, L. J. and Caruana, R. Do deep nets really need to be deep? In Proceedings of NIPS, 2014.
Bengio, Y., Louradour, J., Collobert, R., and Weston, J. Curriculum learning. In Proceedings of the 26th annual international conference on machine learning, 2009.
Boddy, Mark and Dean, Thomas. Solving time-dependent planning problems. In Proceedings of the 11th International Joint Conference on Artificial Intelligence - Volume 2, IJCAI'89, pp. 979-984, 1989.
Bolukbasi, Tolga, Wang, Joseph, Dekel, Ofer, and Saligrama, Venkatesh. Adaptive neural networks for fast test-time prediction. In ICML, 2017.
Cai, Zhaowei, Saberian, Mohammad J., and Vasconcelos, Nuno. Learning Complexity-Aware Cascades for Deep Pedestrian Detection. In International Conference on Computer Vision (ICCV), 2015.
Chen, Minmin, Weinberger, Kilian Q., Chapelle, Olivier, Kedem, Dor, and Xu, Zhixiang. Classifier Cascade for Minimizing Feature Evaluation Cost. In AISTATS, 2012.

Chen, Qifeng and Koltun, Vladlen. Photographic image synthesis with cascaded refinement networks. In ICCV, 2017.
Grubb, Alexander and Bagnell, J. Andrew. SpeedBoost: Anytime Prediction with Uniform Near-Optimality. In AISTATS, 2012.
Guan, Jiaqi, Liu, Yang, Liu, Qiang, and Peng, Jian. Energy-efficient amortized inference with cascaded deep classifiers. In arxiv preprint, arxiv.org/abs/1710.03368, 2017.
He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learning for image recognition. In Computer Vision and Pattern Recognition (CVPR), 2016.
Hinton, Geoffrey, Vinyals, Oriol, and Dean, Jeff. Distilling the knowledge in a neural network. In Deep Learning and Representation Learning Workshop, NIPS, 2014.
Horvitz, Eric J. Reasoning about beliefs and actions under computational resource constraints. In Proceedings of the Third Conference on Uncertainty in Artificial Intelligence, UAI'87, pp. 429-447, 1987.

Hu, Hanzhang, Grubb, Alexander, Hebert, Martial, and Bagnell, J. Andrew. Efficient feature group sequencing for anytime linear prediction. In UAI, 2016.

Huang, G., Chen, D., Li, T., Wu, F., van der Maaten, L., and Weinberger, K. Q. Multi-scale dense convolutional networks for efficient prediction. In arxiv preprint: 1703.09844, 2017a.

Huang, Gao, Liu, Zhuang, Weinberger, Kilian Q., and van der Maaten, Laurens. Densely connected convolutional networks. In Computer Vision and Pattern Recognition (CVPR), 2017b.
Hubara, I., Courbariaux, M., Soudry, D., El-Yaniv, R., and Bengio, Y. Binarized neural networks. In NIPS, 2016.
Iandola, Forrest N., Han, Song, Moskewicz, Matthew W., Ashraf, Khalid, Dally, William J., and Keutzer, Kurt. Squeezenet: Alexnet-level accuracy with 50x fewer parameters and <0.5mb model size. In arxiv preprint: 1602.07360, 2016.
Karayev, Sergey, Baumgartner, Tobias, Fritz, Mario, and Darrell, Trevor. Timely Object Recognition. In Conference and Workshop on Neural Information Processing Systems (NIPS), 2012.
Krizhevsky, Alex. Learning multiple layers of features from tiny images. Technical report, 2009.
Krizhevsky, Alex, Sutskever, Ilya, and Hinton, Geoffrey E. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems 25, pp. 1097-1105, 2012.
Larsson, G., Maire, M., and Shakhnarovich, G. Fractalnet: Ultra-deep neural networks without residuals. In International Conference on Learning Representations (ICLR), 2017.
Lee, Chen-Yu, Xie, Saining, Gallagher, Patrick W., Zhang, Zhengyou, and Tu, Zhuowen. Deeply-supervised nets. In AISTATS, 2015.
Lefakis, Leonidas and Fleuret, Francois. Joint Cascade Optimization Using a Product of Boosted Classifiers. In Advances in Neural Information Processing Systems (NIPS), 2010.

Li, H., Kadav, A., Durdanovic, I., Samet, H., and Graf, H. P. Pruning filters for efficient convnets. In ICLR, 2017.

Liu, Z., Li, J., Shen, Z., Huang, G., Yan, S., and Zhang, C. Learning efficient convolutional networks through network slimming. In arxiv preprint: 1708.06519, 2017.
Misra, Ishan, Shrivastava, Abhinav, Gupta, Abhinav, and Hebert, Martial. Cross-stitch networks for multi-task learning. In Computer Vision and Pattern Recognition (CVPR), 2016.
Nan, Feng and Saligrama, Venkatesh. Dynamic model selection for prediction under a budget. In NIPS, 2017.
Netzer, Yuval, Wang, Tao, Coates, Adam, Bissacco, Alessandro, Wu, Bo, and Ng, Andrew Y. Reading digits in natural images with unsupervised feature learning. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning 2011, 2011.
Odena, A., Lawson, D., and Olah, C. Changing model behavior at test-time using reinforcement. In arxiv preprint: 1702.07780, 2017.

Rastegari, M., Ordonez, V., Redmon, J., and Farhadi, A. Xnor-net: Imagenet classification using binary convolutional neural networks. In ECCV, 2016.
Ren, Shaoqing, He, Kaiming, Girshick, Ross B., and Sun, Jian. Faster r-cnn: Towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems (NIPS), 2015.
Reyzin, Lev. Boosting on a budget: Sampling for feature-efficient prediction. In the 28th International Conference on Machine Learning (ICML), 2011.
Russakovsky, Olga, Deng, Jia, Su, Hao, Krause, Jonathan, Satheesh, Sanjeev, Ma, Sean, Huang, Zhiheng, Karpathy, Andrej, Khosla, Aditya, Bernstein, Michael, Berg, Alexander C., and Fei-Fei, Li. ImageNet Large Scale Visual Recognition Challenge. IJCV, 2015.
Simonyan, Karen and Zisserman, Andrew. Very deep convolutional networks for large-scale image recognition. In International Conference on Learning Representations (ICLR), 2015.
Szegedy, Christian, Ioffe, Sergey, Vanhoucke, Vincent, and Alemi, Alex. Inception-v4, inception-resnet and the impact of residual connections on learning. In AAAI, 2017.
Veit, Andreas and Belongie, Serge. Convolutional networks with adaptive computation graphs. arXiv preprint arXiv:1711.11503, 2017.
Viola, Paul A. and Jones, Michael J. Rapid Object Detection using a Boosted Cascade of Simple Features. In Computer Vision and Pattern Recognition (CVPR), 2001.
Wang, Xin, Yu, Fisher, Dou, Zi-Yi, and Gonzalez, Joseph E. Skipnet: Learning dynamic routing in convolutional networks. arXiv preprint arXiv:1711.09485, 2017.
Weinberger, K.Q., Dasgupta, A., Langford, J., Smola, A., and Attenberg, J. Feature Hashing for Large Scale Multitask Learning. In ICML, 2009.
Xie, Saining and Tu, Zhuowen. Holistically-nested edge detection. In ICCV, 2015.
Xie, Saining, Girshick, Ross, Dollár, Piotr, Tu, Zhuowen, and He, Kaiming. Aggregated residual transformations for deep neural networks. In Computer Vision and Pattern Recognition (CVPR), 2017.
Xu, Z., Weinberger, K., and Chapelle, O. The Greedy Miser: Learning under Test-time Budgets. In Proceedings of the 28th International Conference on Machine Learning (ICML), 2012.
Xu, Z., Kusner, M., Huang, G., and Weinberger, K. Q. Anytime Representation Learning. In Proceedings of the 30th International Conference on Machine Learning (ICML), 2013.
Xu, Z., Kusner, M. J., Weinberger, K. Q., Chen, M., and Chapelle, O. Classifier cascades and trees for minimizing feature evaluation cost. Journal of Machine Learning Research, 2014.
Zamir, Amir R., Wu, Te-Lin, Sun, Lin, Shen, William, Malik, Jitendra, and Savarese, Silvio. Feedback networks. In Computer Vision and Pattern Recognition (CVPR), 2017.
Zhao, Hengshuang, Shi, Jianping, Qi, Xiaojuan, Wang, Xiaogang, and Jia, Jiaya. Pyramid scene parsing network. In Computer Vision and Pattern Recognition (CVPR), 2017.
Zilberstein, Shlomo. Using anytime algorithms in intelligent systems. AI Magazine, 17(3):73â83, 1996.
# A Sketch of Proof of Proposition 4.1
Proof. For each budget consumed x, we compute the cost x' of the optimal that EANN is competitive against. The goal is then to analyze the ratio C = x/x'. The first ANN in EANN has depth 1, where the optimal and the result of EANN are the same. Now assume EANN is on depth z of ANN number n + 1 for n > 0, which has depth b^n. (Case 1) For z < b^{n-1}, EANN reuses the result from the end of ANN number n. The cost spent is x = z + Σ_{i=0}^{n-1} b^i = z + (b^n - 1)/(b - 1). The optimal we compete against has the cost of the last ANN, which is b^{n-1}. The ratio satisfies:

C = x/x' = z/b^{n-1} + 1 + 1/(b - 1) - 1/(b^{n-1}(b - 1)) ≤ 2 + 1/(b - 1) - 1/(b^{n-1}(b - 1)) → 2 + 1/(b - 1) as n → ∞.

Furthermore, since C increases with z,

E_{z∼Uniform(0, b^{n-1})}[C] ≤ (1/b^{n-1}) ∫_0^{b^{n-1}} ( z/b^{n-1} + 1 + 1/(b - 1) ) dz = 1.5 + 1/(b - 1).
(Case 2) For b^{n-1} < z ≤ b^n, EANN outputs anytime results from ANN number n + 1 at depth z. The cost is still x = z + (b^n - 1)/(b - 1), and the optimal we compete against has cost z, so that

C = x/x' = 1 + (b^n - 1)/(z(b - 1)) ≤ 1 + (b^n - 1)/(b^{n-1}(b - 1)) ≤ 2 + 1/(b - 1).

Furthermore, since C decreases with z,

E_{z∼Uniform(b^{n-1}, b^n)}[C] = (1/(b^n - b^{n-1})) ∫_{b^{n-1}}^{b^n} ( 1 + (b^n - 1)/(z(b - 1)) ) dz ≤ 1 + b ln b / (b - 1)^2.
Finally, since case 1 and case 2 happen with probability 1/b and (1 - 1/b) respectively, we have

sup_B C = 2 + 1/(b - 1),   (4)

and

E_{B∼Uniform(0,L)}[C] ≤ 1 - 1/(2b) + 1/(b - 1) + ln b / (b - 1).   (5)
We also note that with large b, sup_B C → 2 and E[C] → 1 from above.
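As a quick numeric check, the two bounds above can be evaluated directly; the short sketch below (ours) prints them for a few exponential bases b:

```python
import numpy as np

def eann_sup_c(b):
    # Worst-case cost inflation of an EANN (Eq. 4): sup_B C = 2 + 1/(b-1).
    return 2.0 + 1.0 / (b - 1.0)

def eann_expected_c(b):
    # Upper bound on the expected cost inflation (Eq. 5):
    # E[C] <= 1 - 1/(2b) + 1/(b-1) + ln(b)/(b-1).
    return 1.0 - 1.0 / (2.0 * b) + 1.0 / (b - 1.0) + np.log(b) / (b - 1.0)

for b in [2.0, 3.0, 4.0]:
    print(f"b={b:.0f}: sup C = {eann_sup_c(b):.2f}, E[C] <= {eann_expected_c(b):.2f}")
# b=2 gives sup C = 3.00 and E[C] <= 2.44, matching the value quoted for the
# CIFAR100 EANN in footnote 5; both bounds shrink towards 2 and 1 as b grows.
```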
If we form a sequence of regular networks that grow exponentially in depth instead of an ANN, then the worst case happens right before a new prediction is produced. Hence the ratio between the consumed budget and the cost of the optimal that the current anytime prediction can compete with, C, right before network number n + 1 is completed, is

C = ( Σ_{i=0}^{n} b^i ) / b^{n-1} → b^2/(b - 1) = 2 + (b - 1) + 1/(b - 1) ≥ 4 as n → ∞.

Note that (b - 1) + 1/(b - 1) ≥ 2 and the inequality is tight at b = 2. Hence we know sup_B C is at least 4. Furthermore, the expected value of C, assuming B is uniformly sampled such that the interruption
happens on the (n + 1)-th network, is:

E[C] = (1/b^n) ∫_0^{b^n} ( x + (b^n - 1)/(b - 1) ) / b^{n-1} dx → 1.5 + (b - 1)/2 + 1/(b - 1) ≥ 1.5 + √2 ≈ 2.91 as n → ∞.

The inequality is tight at b = 1 + √2. Since the interruption is uniform over the budget, apart from the first few networks, we know the overall expectation E_{B∼Uniform(0,L)}[C] approaches 1.5 + (b - 1)/2 + 1/(b - 1), which is at least 1.5 + √2 ≈ 2.91.
# B Additional Details of AdaLoss for Experiments
Prevent tiny weights. In practice, the early ℓ̂_i could be poor estimates of ℓ_i, and we may have a feed-back loop where large losses incur small weights, and in turn, result in poorly optimized large losses. To prevent such loops, we mix the adaptive weights with the constant weights. More precisely, we regularize Eq. 3 with the arithmetic mean of the losses:
min_θ Σ_{i=1}^{L} ( α(1 - γ) ℓ_i(θ)/ℓ̂_i + γ ℓ_i(θ) ),   (6)
where α > 0 and γ > 0 are hyper-parameters. In practice, since DNNs often have elaborate learning rate schedules that assume B_i = 1, we choose α = min_i ℓ̂_i(θ) at each iteration to scale the max weight to 1. We choose γ = 0.05 from validation. Future works may consider more complex schemes where the weights start as constant weights and morph into AdaLoss by gradually reducing γ from 1 to 0.
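A minimal numpy sketch of this mixing, assuming the adaptive part of Eq. 6 weights loss i by α/ℓ̂_i; the function and variable names are ours, and loss_estimates stands for running estimates (e.g. exponential moving averages) of the L training losses:

```python
import numpy as np

def adaloss_weights(loss_estimates, gamma=0.05):
    """Mix AdaLoss weights (inverse loss estimates) with constant weights.

    loss_estimates: array of running estimates of the L anytime losses.
    Returns per-loss weights, with the largest weight scaled to roughly 1.
    """
    loss_estimates = np.asarray(loss_estimates, dtype=np.float64)
    alpha = loss_estimates.min()            # scales the max adaptive weight to 1
    adaptive = alpha / loss_estimates       # AdaLoss part, in (0, 1]
    return (1.0 - gamma) * adaptive + gamma # regularise towards constant weights

# Example: later (smaller) losses receive larger weights.
print(adaloss_weights([2.0, 1.0, 0.5, 0.4]))
```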
Extra final weights. In our experiments, we often find that the penultimate layers have better accuracy relative to the OPT than the final layers on CIFAR, as suggested in Fig. 2a. We believe this is because neighboring losses in an ANN are highly correlated, so that a layer can indirectly benefit from the high weights of its neighbors. The final loss is then at a disadvantage due to its lack of successors. To remedy this, we can give the final loss extra weights, which turns the geometric mean in Eq. 3 into a weighted geometric mean. This is also equivalent to having a distribution of test-time interruption, where the interruption happens at all layers equally likely, except on the final layer. In our experiments, we do not use extra final weights on CIFAR10, CIFAR100 and SVHN to keep the weights simple, and we double the final weight on ILSVRC because the final accuracy there is critical for comparing against other non-anytime networks.
# C Implementation Details of ANNs
CIFAR and SVHN ResANNs. For CIFAR10, CIFAR100 (Krizhevsky, 2009), and SVHN (Netzer et al., 2011), ResANNs follow (He et al., 2016) to have three blocks, each of which has n residual units. Each such basic residual unit consists of two 3x3 convolutions, which are interleaved by BN-ReLU. A pre-activation (BN-ReLU) is applied to the input of the residual units. The result of the second 3x3 conv and the initial input are added together as the output of the unit. The auxiliary predictors each apply a BN-ReLU and a global average pooling on their input feature map, and apply a linear prediction. The auxiliary loss is the cross-entropy loss, treating the linear prediction results as logits. For each (n, c) pair such that n < 25, we set the anytime prediction period s to be 1, i.e., every residual block leads to an auxiliary prediction. We set the prediction period s = 3 for n = 25.
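As a concrete illustration, a sketch of one such basic residual unit and its auxiliary predictor, written with PyTorch modules (the paper's experiments use a TensorFlow implementation; all names here are ours):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BasicResidualUnit(nn.Module):
    """Pre-activation unit: BN-ReLU-3x3conv-BN-ReLU-3x3conv plus identity."""
    def __init__(self, channels):
        super().__init__()
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)

    def forward(self, x):
        out = self.conv1(F.relu(self.bn1(x)))
        out = self.conv2(F.relu(self.bn2(out)))
        return x + out

class AuxiliaryPredictor(nn.Module):
    """BN-ReLU, global average pooling, then a linear prediction (logits)."""
    def __init__(self, channels, num_classes):
        super().__init__()
        self.bn = nn.BatchNorm2d(channels)
        self.fc = nn.Linear(channels, num_classes)

    def forward(self, x):
        x = F.relu(self.bn(x))
        x = x.mean(dim=(2, 3))   # global average pooling
        return self.fc(x)        # cross-entropy is applied to these logits
```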
ResANNs on ILSVRC. Residual blocks for ILSVRC are bottleneck blocks, which consist of a chain of 1x1 conv, 3x3 conv and 1x1 conv. These convolutions are interleaved by BN-ReLU, and a pre-activation BN-ReLU is also applied. Again, the output of the unit is the sum of the input feature map and the result of the final conv. ResANN50 and 101 are augmented from ResNet50 and 101 (He et al., 2016), where we add BN-ReLU, global pooling and linear prediction to every two bottleneck residual units for ResNet50, and every three for ResNet101. We create ResANN26 for creating EANN on ILSVRC, and ResANN26 has four blocks, each of which has two bottleneck residual units. The prediction period is every two units, using the same linear predictors.
DenseANNs on ILSVRC. We augment DenseNet169 (Huang et al., 2017b) to create DenseANN169. DenseNet169 has 82 dense layers, each of which has a 1x1 conv that projects the concatenation of previous features to 4k channels, where k is the growth rate (Huang et al., 2017b), followed by a 3x3 conv to generate k channels of features for the dense layer. The two convs are interleaved by BN-ReLU, and a pre-activation BN-ReLU is used for each layer. The 82 layers are organized into four blocks of size 6, 12, 32 and 32. Between neighboring blocks, a 1x1 conv followed by BN-ReLU-2x2-average-pooling is applied to shrink the existing feature maps by half in the height, width, and channel dimensions. We add linear anytime predictions every 14 dense layers, starting from layer 12 (1-based indexing). The original DenseNet paper (Huang et al., 2017b) mentioned that they use drop-out with keep rate 0.9 after each conv in CIFAR and SVHN, but we found drop-out to be detrimental to performance on ILSVRC.
MSDNet on ILSVRC. MSDNet38 is described in the appendix of (Huang et al., 2017a). We set the four blocks to have 10, 9, 10 and 9 layers, and drop the feature maps of the finest resolution after each block as suggested in the original paper. We successfully reproduced the published result of 24.3% error rate on ILSVRC using our TensorFlow implementation. We used the original published results for MSDNet38+CONST in the main text. We use MSDNet32, which has four blocks of 6, 6, 10, and 10 layers, for the small network that uses AdaLoss. We predict using MSDNet32 every seven layers, starting at the fourth layer (1-based indexing).
# StarCraft II: A New Challenge for Reinforcement Learning
Oriol Vinyals Timo Ewalds Sergey Bartunov Petko Georgiev Alexander Sasha Vezhnevets Michelle Yeo Alireza Makhzani Heinrich Küttler John Agapiou Karen Simonyan Julian Schrittwieser John Quan Hado van Hasselt DeepMind Tom Schaul Stephen Gaffney David Silver Stig Petersen Timothy Lillicrap
Kevin Calderone Paul Keet Anthony Brunasso David Lawrence Anders Ekermo Jacob Repp Blizzard Rodney Tsing
# Abstract
This paper introduces SC2LE (StarCraft II Learning Environment), a reinforcement learning environment based on the game StarCraft II. This domain poses a new grand challenge for reinforcement learning, representing a more difficult class of problems than considered in most prior work. It is a multi-agent problem with multiple players interacting; there is imperfect information due to a partially observed map; it has a large action space involving the selection and control of hundreds of units; it has a large state space that must be observed solely from raw input feature planes; and it has delayed credit assignment requiring long-term strategies over thousands of steps. We describe the observation, action, and reward specification for the StarCraft II domain and provide an open source Python-based interface for communicating with the game engine. In addition to the main game maps, we provide a suite of mini-games focusing on different elements of StarCraft II gameplay. For the main game maps, we also provide an accompanying dataset of game replay data from human expert players. We give initial baseline results for neural networks trained from this data to predict game outcomes and player actions. Finally, we present initial baseline results for canonical deep reinforcement learning agents applied to the StarCraft II domain. On the mini-games, these agents learn to achieve a level of play that is comparable to a novice player. However, when trained on the main game, these agents are unable to make significant progress. Thus, SC2LE offers a new and challenging environment for exploring deep reinforcement learning algorithms and architectures.
# 1 Introduction
Recent progress in areas such as speech recognition [7], computer vision [16], and natural language processing [38] can be attributed to the resurgence of deep learning [17], which provides a powerful toolkit for non-linear function approximation using neural networks. These techniques have also proven successful in reinforcement learning problems, yielding significant successes in Atari [20], the game of Go [32], three-dimensional virtual environments [3] and simulated robotics domains [18, 29]. Many of the successes have been stimulated by the availability of simulated domains with an appropriate level of difficulty. Benchmarks have been critical to measuring and therefore advancing deep learning and reinforcement learning (RL) research [4, 20, 28, 8]. It is therefore important to ensure the availability of domains that are beyond the capabilities of current methods in one or more dimensions.
In this paper we introduce SC2LE1 (StarCraft II Learning Environment), a challenging domain for reinforcement learning, based on the StarCraft II video game. StarCraft is a real-time strategy (RTS) game that combines fast paced micro-actions with the need for high-level planning and execution. Over the previous two decades, StarCraft I and II have been pioneering and enduring e-sports,2 with millions of casual and highly competitive professional players. Defeating top human players therefore becomes a meaningful and measurable long-term objective.
From a reinforcement learning perspective, StarCraft II also offers an unparalleled opportunity to explore many challenging new frontiers. First, it is a multi-agent problem in which several players compete for influence and resources. It is also multi-agent at a lower level: each player controls hundreds of units, which need to collaborate to achieve a common goal. Second, it is an imperfect information game. The map is only partially observed via a local camera, which must be actively moved in order for the player to integrate information. Furthermore, there is a "fog-of-war", obscuring the unvisited regions of the map, and it is necessary to actively explore the map in order to determine the opponent's state. Third, the action space is vast and diverse. The player selects actions among a combinatorial space of approximately 10^8 possibilities (depending on the game resolution), using a point-and-click interface. There are many different unit and building types, each with unique local actions. Furthermore, the set of legal actions varies as the player progresses through a tree of possible technologies. Fourth, games typically last for many thousands of frames and actions, and the player must make early decisions (such as which units to build) with consequences that may not be seen until much later in the game (when the players' armies meet), leading to a rich set of challenges in temporal credit assignment and exploration.
This paper introduces an interface intended to make RL in StarCraft straightforward: observations and actions are defined in terms of low resolution grids of features; rewards are based on the score from the StarCraft II engine against the built-in computer opponent; and several simplified mini-games are also provided in addition to the full game maps. Future releases will extend the interface for the full challenge of StarCraft II: observations and actions will expose RGB pixels; agents will be ranked by the final win/loss outcome in multi-player games; and evaluation will be restricted to full game maps used in competitive human play.
In addition, we provide a large dataset based on game replays recorded from human players, which will increase to millions of replays as people play the game. We believe that the combination of the interface and this dataset will provide a useful benchmark to test not only existing and new RL algorithms, but also interesting aspects of perception, memory and attention, sequence prediction, and modelling uncertainty, all of which are active areas of machine learning research.
Several environments [1, 34, 33] already exist for reinforcement learning in the original version of StarCraft. Our work differs from these previous environments in several regards: it focuses on the newer version StarCraft II; observations and actions are based on the human user interface rather than being programmatic; and it is directly supported by the game developers, Blizzard Entertainment, on Windows, Mac, and Linux.
The current best artificial StarCraft bots, based on the built-in AI or research on previous environments, can be defeated by even amateur players [cf. 6, and later versions of the AIIDE competition]. This fact, coupled with StarCraft's interesting set of game-play properties and large player base, makes it an ideal research environment for exploring deep reinforcement learning algorithms.
# 2 Related Work
Computer games provide a compelling solution to the issue of evaluating and comparing different learning and planning approaches on standardised tasks, and are an important source of challenges for research in artificial intelligence (AI). These games offer multiple advantages: 1. They have clear objective measures of success; 2. Computer games typically output rich streams of observational data, which are ideal inputs for deep networks; 3. They are externally defined to be difficult and interesting for a human to play. This ensures that the challenge itself is not tuned by the researcher to make the problem easier for the algorithms being developed; 4. Games are designed to be run anywhere with the same interface and game dynamics, making it easy to share a challenge precisely
# 1Pronounced: "school". 2https://en.wikipedia.org/wiki/Professional_StarCraft_competition
Figure 1: The StarCraft II Learning Environment, SC2LE, shown with its components plugged into a neural agent.
with other researchers; 5. In some cases a pool of avid human players exists, making it possible to benchmark against highly skilled individuals. 6. Since games are simulations, they can be controlled precisely, and run at scale.
A well known example of games driving reinforcement learning research is the Arcade Learning Environment (ALE [4]), which allows easy and replicable experiments with Atari video games. This standardised set of tasks has been an incredible boon to recent research in AI. Scores on games in this environment can be compared across publications and algorithms, allowing for direct measurement and comparison. The ALE is a prominent example in a rich tradition of video game benchmarks for AI [31], including Super Mario [36], Ms Pac-Man [27], Doom [14], Unreal Tournament [11], as well as general video game-playing frameworks [30, 5] and competitions [24].
The genre of RTS games has attracted a large amount of AI research, including on the original StarCraft (Broodwar). We recommend the surveys by Ontanon et al. [22] and Robertson & Watson [26] for an overview. Many of those research directions focus on specific aspects of the game (e.g., build order, or combat micro-management) or specific AI techniques (e.g., MCTS planning). We are not aware of efforts to solve full games with an end-to-end RL approach. Tackling full versions of RTS games has seemed daunting because of the rich input and output spaces as well as the very sparse reward structure (i.e., game outcome).
The standard API for StarCraft thus far has been BWAPI [1], and related wrappers [33]. Simplified versions of RTS games have also been developed for AI research, most notably microRTS3 or the more recent ELF [35]. Previous work has applied RL approaches to the Wargus RTS game with reduced state and action spaces [12], and learning based agents have also been explored in micro-management mini-games [23, 37], and learning game outcome or build orders from replay data [9, 13].
# 3 The SC2LE Environment
The main contribution of our paper is the release of SC2LE, which exposes StarCraft II as a research environment. The release consists of three sub-components: a Linux StarCraft II binary, the StarCraft II API, and PySC2 (see Figure 1).
# 3https://github.com/santiontanon/microrts
The StarCraft II API4 allows programmatic control of StarCraft II. The API can be used to start a game, get observations, take actions, and review replays. This API into the normal game is available on Windows and Mac OS, but we also provide a limited headless build that runs on Linux especially for machine learning and distributed use cases. Using this API we built PySC25, an open source environment that is optimised for RL agents. PySC2 is a Python environment that wraps the StarCraft II API to ease the interaction between Python reinforcement learning agents and StarCraft II. PySC2 defines an action and observation specification, and includes a random agent and a handful of rule-based agents as examples. It also includes some mini-games as challenges and visualisation tools to understand what the agent can see and do.
StarCraft II updates the simulation 16 (at "normal speed") or 22.4 (at "fast speed") times per second. The game is mostly deterministic, but it does have some randomness mainly for cosmetic reasons; the two main random elements are weapon speed and update order. These sources of randomness can be removed/mitigated by setting a random seed.
We now describe the environment which was used for all of the experiments in this paper.
# 3.1 Full Game Description and Reward Structure
In the full 1v1 game of StarCraft II, two opponents spawn on a map which contains resources and other elements such as ramps, bottlenecks, and islands. To win a game, a player must: 1. Accumulate resources (minerals and vespene gas), 2. Construct production buildings, 3. Amass an army, and 4. Eliminate all of the opponentâs buildings. A game typically lasts from a few minutes to one hour, and early actions taken in the game (e.g., which buildings and units are built) have long term consequences. Players have imperfect information since they can typically only see the portion of the map where they have units. If they want to understand and react to their opponentâs strategy they must send units to scout. As we describe later in this section, the action space is also quite unique and challenging.
Most people play online against other human players. The most common games are 1v1, but team games are possible too (2v2, 3v3 or 4v4), as are more complicated games with unbalanced teams or more than two teams. Here we focus on the 1v1 format, the most popular form of competitive StarCraft, but may consider more complicated situations in the future.
StarCraft II includes a built-in AI which is based on a set of handcrafted rules and comes with 10 levels of difficulty (the three strongest of which cheat by getting extra resources or privileged vision). Unfortunately, the fact that they are rule-based means their strategies are fairly narrow and thus easily exploitable. Nevertheless, they are a reasonable first challenge for a purely learned approach like the baselines we investigate in sections 4 and 5; they play far better than random, play very quickly with little compute, and offer consistent baselines to compare against.
We define two different reward structures: ternary 1 (win) / 0 (tie) / -1 (loss) received at the end of a game (with all-zero rewards during the game), and Blizzard score. The ternary win/tie/loss score is the real reward that we care about. The Blizzard score is the score seen by players on the victory screen at the end of the game. While players can only see this score at the end of the game, we provide access to the running Blizzard score at every step during the game so that the change in score can be used as a reward for reinforcement learning. It is computed as the sum of current resources and upgrades researched, as well as units and buildings currently alive and being built. This means that the player's cumulative reward increases with more mined resources, decreases when losing units/buildings, and all other actions (training units, building buildings, and researching) do not affect it. The Blizzard score is not zero-sum since it is player-centric, it is far less sparse than the ternary reward signal, and it correlates to some extent with winning or losing.
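A small sketch of how these two signals could be turned into per-step rewards inside an agent loop; the function names and the outcome encoding here are illustrative, not exact PySC2 fields:

```python
def shaped_reward(prev_score, curr_score):
    # Dense reward: change in the running Blizzard score between steps.
    return curr_score - prev_score

def ternary_reward(outcome):
    # Sparse reward: +1 win / 0 tie / -1 loss, only at the end of the game.
    return {"win": 1, "tie": 0, "loss": -1}[outcome]
```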
# 3.2 Observations
StarCraft II uses a game engine which renders graphics in 3D. Whilst utilising the underlying game engine which simulates the whole environment, the StarCraft II API does not currently render RGB pixels. Rather, it generates a set of "feature layers", which abstract away from the RGB images seen
4https://github.com/Blizzard/s2client-proto 5https://github.com/deepmind/pysc2
Figure 2: The PySC2 viewer shows a human interpretable view of the game on the left, and coloured versions of the feature layers on the right. For example, terrain height, fog-of-war, creep, camera location, and player identity, are shown in the top row of feature layers. A video can be found at https://youtu.be/-fKUyT14G-8.
during human play, while maintaining the core spatial and graphical concepts of StarCraft II (see Figure 2).
Thus, the main observations come as sets of feature layers which are rendered at N × M pixels (where N and M are configurable, though in our experiments we always used N = M). Each of these layers represents something specific in the game, for example: unit type, hit points, owner, or visibility. Some of these (e.g., hit points, height map) are scalars, while others (e.g., visibility, unit type, owner) are categorical. There are two sets of feature layers: the minimap is a coarse representation of the state of the entire world, and the screen is a detailed view of a subsection of the world corresponding to the player's on-screen view, and in which most actions are executed. Some features (e.g., owner or visibility) exist for both the screen and minimap, while others (e.g., unit type and hit points) exist only on the screen. See the environment documentation6 for a complete description of all observations provided.
In addition to the screen and minimap, the human interface for the game provides various non-spatial observations. These include the amount of gas and minerals collected, the set of actions currently available (which depends on game context, e.g., which units are selected), detailed information about selected units, build queues, and units in a transport vehicle. These observations are also exposed by PySC2, and are fully described in the environment documentation. The audio channel is not exposed as a wave form but important notifications will be exposed as part of the observations.
In the retail game engine the screen is rendered with a full 3D perspective camera at high resolution. This leads to complicated observations with units getting smaller as they get "higher" on the screen, and with more world real estate being visible in the back than the front. To simplify this, feature layers are rendered via a camera that uses a top down orthographic projection. This means that each pixel in a feature layer corresponds to precisely the same amount of world real estate, and as a consequence all units will be the same size regardless of where they are in view. Unfortunately, it also means the feature layer rendering does not quite match what a human would see. An agent sees a little more in the front and a little less in the back. This does mean some actions that humans make in replays cannot be fully represented.
In future releases we will expose a rendered API allowing agents to play from RGB pixels. This will allow us to study the effects of learning from raw pixels versus learning from feature layers and make closer comparisons to human play. In the mean time, we played the game with feature layers to verify that agents are not severely handicapped. Though the game-play experience is obviously
# 6https://github.com/deepmind/pysc2/blob/master/docs/environment.md
altered we found that a resolution of N, M ≥ 64 is sufficient to allow a human player to select and individually control small units such as Zerglings. The reader is encouraged to try this using pysc2_play7. See also Figure 2.
# 3.3 Actions
We designed the environment action space to mimic the human interface as closely as possible whilst maintaining some of the conventions employed in other RL environments, such as Atari [4]. Figure 3 shows a short sequence of actions as produced by a player and by an agent.
Many basic manoeuvres in the game are compound actions. For example, to move a selected unit across the map a player must first choose to move it by pressing m, then possibly choose to queue the action by holding shift, then click a point on the screen or minimap to execute the action. Instead of asking agents to produce those 3 key/mouse presses as a sequence of three separate actions we give it as an atomic compound function action: move_screen(queued, screen). More formally, an action a is represented as a composition of a function identifier a^0 and a sequence of arguments which that function identifier requires: a^1, a^2, . . . , a^L. For instance, consider selecting multiple units by drawing a rectangle. The intended action is then select_rect(select_add, (x1, y1), (x2, y2)). The first argument select_add is binary. The other arguments are integers that define coordinates; their allowed range is the same as the resolution of the observations. This action is fed to the environment in the form [select_rect, [[select_add], [x1, y1], [x2, y2]]].
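For example, the rectangle selection above can be assembled with PySC2's FunctionCall helper; this is a sketch assuming the function is exposed as actions.FUNCTIONS.select_rect, as in recent PySC2 releases, and exact argument conventions may differ slightly across versions:

```python
from pysc2.lib import actions

# select_rect(select_add, (x1, y1), (x2, y2)):
# [0] means "replace the current selection" rather than adding to it.
select = actions.FunctionCall(actions.FUNCTIONS.select_rect.id,
                              [[0], [10, 10], [30, 30]])

# Functions that need no arguments, such as no_op, take an empty argument list.
noop = actions.FunctionCall(actions.FUNCTIONS.no_op.id, [])
```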
To represent the full action space we define approximately 300 action-function identifiers with 13 possible types of arguments (ranging from binary to specifying a point on the discretised 2D screen). See the environment documentation for a more detailed specification and description of the actions available through PySC2, and Figure 3 for an example of a sequence of actions.
In StarCraft, not all the actions are available in every game state. For example, the move command is only available if a unit is selected. Human players can see which actions are available in the "command card" on the screen. Similarly, we provide a list of available actions via the observations given to the agent at each step. Taking an action that is not available is considered an error, so agents should filter their action choices so that only legal actions are taken.
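A minimal sketch of such filtering, in the spirit of the bundled random agent; the available_actions field name follows PySC2, but exact observation and action-spec layouts may vary across versions:

```python
import numpy as np
from pysc2.lib import actions

def random_legal_action(obs, action_spec):
    """Pick a uniformly random legal action, sampling each argument uniformly."""
    function_id = np.random.choice(obs.observation["available_actions"])
    args = [[np.random.randint(0, size) for size in arg.sizes]
            for arg in action_spec.functions[function_id].args]
    return actions.FunctionCall(function_id, args)
```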
Humans typically make between 30 and 300 actions per minute (APM), roughly increasing with player skill, with professional players often spiking above 500 APM. In all our RL experiments, we act every 8 game frames, equivalent to about 180 APM, which is a reasonable choice for intermediate players.
We believe these early design choices make our environment a promising testbed for developing complex RL agents. In particular, the fixed-size feature layer input space and human-like action space are natural for neural network based agents. This is in contrast to other recent work [33, 23], where the game is accessed on a unit-per-unit basis and actions are individually specified to each unit. While there are advantages to both interface styles, PySC2 offers the following:
• Learning from human replays becomes simpler.

• We do not require unrealistic/super-human actions per minute to issue instructions individually to each unit.

• The game was designed to be played with this UI, and the balance between strategic high level decisions, managing your economy, and controlling the army makes the game more interesting.
# 3.4 Mini-Games Task Description
To investigate elements of the game in isolation, and to provide further fine-grained steps towards playing the full game, we built several mini-games. These are focused scenarios on small maps that have been constructed with the purpose of testing a subset of actions and/or game mechanics with a clear reward structure. Unlike the full game where the reward is just win/lose/tie, the reward
# 7https://github.com/deepmind/pysc2/blob/master/pysc2/bin/play.py
Figure 3: Comparison between how humans act on StarCraft II and the actions exposed by PySC2. We designed the action space to be as close as possible to human actions. The first row shows the game screen, the second row the human actions, the third row the logical action taken in PySC2, and the fourth row the actions a exposed by the environment (and, in red, what the agent selected at each time step). Note that the first two columns do not feature the "build supply" action, as it is not yet available to the agent in those situations as a worker has to be selected first.
structure for mini-games can reward particular behaviours (as defined in a corresponding .SC2Map file).
We encourage the community to build modifications or new mini-games with the powerful StarCraft Map Editor. This allows for more than just designing a broad range of smaller challenge domains. It permits sharing identical setups and evaluations with other researchers and obtaining directly comparable evaluation scores. The restricted action sets, custom reward functions and/or time limits are defined directly in the resulting .SC2Map file, which is easy to share. We therefore encourage users to use this method of defining new tasks, rather than customising on the agent side.
The seven mini-games that we are releasing are as follows:
• MoveToBeacon: The agent has a single marine that gets +1 each time it reaches a beacon. This map is a unit test with a trivial greedy strategy.

• CollectMineralShards: The agent starts with two marines and must select and move them to pick up mineral shards spread around the map. The more efficiently it moves the units, the higher the score.

• FindAndDefeatZerglings: The agent starts with 3 marines and must explore a map to find and defeat individual Zerglings. This requires moving the camera and efficient exploration.

• DefeatRoaches: The agent starts with 9 marines and must defeat 4 roaches. Every time it defeats all of the roaches it gets 5 more marines as reinforcements and 4 new roaches spawn. The reward is +10 per roach killed and -1 per marine killed. The more marines it can keep alive, the more roaches it can defeat.

• DefeatZerglingsAndBanelings: The same as DefeatRoaches, except the opponent has Zerglings and Banelings, which give +5 reward each when killed. This requires a different strategy because the enemy units have different abilities.

• CollectMineralsAndGas: The agent starts with a limited base and is rewarded for the total resources collected in a limited time. A successful agent must build more workers and expand to increase its resource collection rate.
• BuildMarines: The agent starts with a limited base and is rewarded for building marines. It must build workers, collect resources, build Supply Depots, build Barracks, and then train marines. The action space is limited to the minimum action set needed to accomplish this goal.
All mini-games have a fixed time limit and are described in more detail online: https://github.com/deepmind/pysc2/blob/master/docs/mini_games.md.
# 3.5 Raw API
StarCraft II also has a raw API, which is similar to the Broodwar API (BWAPI [1]). In this case, the observations are a list of all visible units on the map along with their properties (unit type, owner, coordinates, health, etc.), but without any visual component. Fog-of-war still exists, but there is no camera, so you can see all visible units simultaneously. This is a simpler and more precise representation, but it does not correspond to how humans perceive the game. For the purposes of comparing against humans this is considered "cheating" since it offers significant additional information.
Using the raw API, actions control units or groups of units individually by a unit identifier. There is no need to select individuals or groups of units before issuing actions. This allows much more precise actions than the human interface allows, and thus yields the possibility of super-human behaviour via this API.
Although we have not used any data from the raw API to train our agents, it is included in the release in order to support other use cases. PySC2 uses it for visualization, while both Blizzard's SC2 API examples8 and CommandCenter9 use it for rule-based agents.
# 3.6 Performance
We can often run the environment faster than real time. Observations are rendered at a speed that depends on several factors: the map complexity, the screen resolution, the number of non-rendered frames per action, and the number of threads.
For complex maps (e.g., full ladder maps) the computation is dominated by simulation speed. Taking actions less often, allowing for fewer rendered frames, reduces the compute, but diminishing returns kicks in fairly quickly meaning there is little gain above 8 steps per action. Given little time is spent rendering, a higher resolution does not hurt. Running more instances in parallel threads scales quite well.
For simpler maps (e.g., CollectMineralShards) the world simulation is quick, so rendering the observations dominates. In this case increasing the frames per action and decreasing the resolution can have a large effect. The bottleneck then becomes the Python interpreter, negating gains above roughly 4 threads with a single interpreter.
With a resolution of 64 × 64 and acting at a rate of 8 frames per action, the single-threaded speed of a ladder map varies from 200-700 game steps per wall-clock second, which is more than an order of magnitude faster than real-time. The exact speed depends on multiple factors, including: the stage of the game, the number of units in play, and the computer it runs on. On CollectMineralShards the same settings permit 1600-2000 game steps per wall-clock second.
# 4 Reinforcement Learning: Baseline Agents
This section provides baseline results that serve to calibrate the map difficulty, and demonstrate that established RL algorithms can learn useful policies, at least on the mini-games, but also that many challenges remain. For the mini-games we additionally provide scores for two human players: a DeepMind game tester (novice level) and a StarCraft GrandMaster (professional level) (see Table 1).
8https://github.com/Blizzard/s2client-api 9https://github.com/davechurchill/CommandCenter
# 4.1 Learning Algorithm
Our reinforcement learning agents are built using a deep neural network with parameters θ, which defines a policy π_θ. At time step t the agent receives observations s_t, selects an action a_t with probability π_θ(a_t|s_t), and then receives a reward r_t from the environment. The goal of the agent is to maximise the return G_t = Σ_{k=0}^{∞} γ^k r_{t+k+1}, where γ is a discount factor. For notational clarity we assume that the policy is conditioned only on the observation s_t, but without loss of generality it might be conditioned on all previous states, e.g., via a hidden memory state as we describe below.
The parameters of the policy are learnt using Asynchronous Advantage Actor Critic (A3C), as described by Mnih et al. [21], which was shown to produce state-of-the-art results on a diverse set of environments. A3C is a policy gradient method, which performs an approximate gradient ascent on E[G_t]. The A3C gradient is defined as follows:
(G_t - v_θ(s_t)) ∇_θ log π_θ(a_t|s_t) + β (G_t - v_θ(s_t)) ∇_θ v_θ(s_t) + η Σ_a π_θ(a|s_t) log π_θ(a|s_t),   (1)

where the three terms are, respectively, the policy gradient, the value estimation gradient, and the entropy regularisation.
where v_θ(s) is a value function estimate of the expected return E[G_t | s_t = s] produced by the same network. Instead of the full return, we use an n-step return G_t = Σ_{k=0}^{n-1} γ^k r_{t+k+1} + γ^n v_θ(s_{t+n}) in the gradient above, where n is a hyper-parameter. The last term regularises the policy towards larger entropy, which promotes exploration, and β and η are hyper-parameters that trade off the importance of the loss components. For details we refer the reader to the original paper and the references therein.
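A sketch of a corresponding training loss, written with PyTorch tensors for illustration; minimising it matches Eq. 1 up to constant factors, and the coefficient values below (other than the entropy penalty mentioned in Section 4.4) are placeholders:

```python
import torch

def a3c_loss(log_prob_a, value, n_step_return, entropy, beta=0.5, eta=1e-3):
    """Scalar loss whose negative gradient matches Eq. 1 up to constant factors.

    log_prob_a:    log pi_theta(a_t | s_t) for the actions actually taken.
    value:         v_theta(s_t) predicted by the same network.
    n_step_return: the n-step return G_t (treated as a constant target).
    entropy:       entropy of pi_theta(.|s_t), to be maximised.
    """
    advantage = n_step_return.detach() - value
    policy_loss = -(advantage.detach() * log_prob_a)  # policy gradient term
    value_loss = beta * advantage.pow(2)              # value estimation term
    entropy_loss = -eta * entropy                     # entropy regularisation
    return (policy_loss + value_loss + entropy_loss).mean()
```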
# 4.2 Policy Representation
As described in section 3, the API exposes actions as a nested list a which contains a function identifier a^0 and a set of arguments. Since all arguments including pixel coordinates on screen and minimap are discrete, a naive parametrisation of a policy π_θ(a|s) would require millions of values to specify the joint distribution over a, even for a low spatial resolution. We could instead represent the policy in an auto-regressive manner, utilising the chain rule10:
π_θ(a|s) = Π_{l=0}^{L} π_θ(a^l | a^{<l}, s).   (2)
This representation, if implemented efficiently, is arguably simpler as it transforms the problem of choosing a full action a into a sequence of decisions for each argument a^l. In the straightforward RL baselines reported here, we make a further simplification and use policies that choose the function identifier, a^0, and all the arguments, a^l, independently from one another: so, π_θ(a|s) = Π_{l=0}^{L} π_θ(a^l | s). Note that, depending on the function identifier a^0, the number of required arguments L is different. Some actions (e.g., the no-op action) do not require any arguments, whereas others (e.g., move_screen(x, y)) do. See Figure 3 for an example.
In line with the human UI, we ensure that unavailable actions are never chosen by our agents. To do so we mask out the function identifier choice a^0 such that only the available subset can be sampled. We implement this by masking out actions and renormalising the probability distribution over a^0.
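A minimal numpy sketch of this masking and renormalisation for the function-identifier head (illustrative only; the remaining arguments a^1 ... a^L are then sampled from their own independent heads):

```python
import numpy as np

def masked_function_distribution(logits, available_ids):
    """Softmax over all function identifiers, then keep only the legal ones."""
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    mask = np.zeros_like(probs)
    mask[list(available_ids)] = 1.0
    masked = probs * mask
    return masked / masked.sum()  # renormalise so legal probabilities sum to 1
```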
# 4.3 Agent Architectures
This section presents several agent architectures with the purpose of producing straightforward baselines. We take established architectures from the literature [20, 21] and adapt them to fit the specifics of the environment, in particular the action space. Figure 4 illustrates the proposed architectures.
10Note that for the auto-regressive case one could use an arbitrary permutation over arguments to define an order in which the chain rule is applied. But there is also a "natural" ordering over arguments that can be used, since decisions about where to click on a screen depend on the purpose of the click, that is, the identifier of the function being called.
(a) Atari-net (b) FullyConv
Figure 4: Network architectures of the basic agents considered in the paper.
Input pre-processing All the baseline agents share the same pre-processing of input feature layers. We embed all feature layers containing categorical values into a continuous space, which is equivalent to using a one-hot encoding in the channel dimension followed by a 1 × 1 convolution. We also re-scale numerical features with a logarithmic transformation as some of them, such as hit-points or minerals, might attain substantially high values.
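A sketch of this pre-processing for individual feature layers (numpy, for illustration; in the agents the one-hot channels are followed by a learned 1 × 1 convolution into a continuous embedding):

```python
import numpy as np

def one_hot_categorical(layer, num_categories):
    """One-hot encode a categorical feature layer (H x W) into (C x H x W)."""
    channels = np.arange(num_categories)[:, None, None] == layer[None, :, :]
    return channels.astype(np.float32)

def log_rescale_numerical(layer):
    """Log-transform numerical layers (e.g. hit points) that can be very large."""
    return np.log1p(layer.astype(np.float32))
```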
Atari-net Agent The first baseline is a simple adaptation of the architecture successfully used for the Atari [4] benchmark and DeepMind Lab environments [3]. It processes screen and minimap feature layers with the same convolutional network as in [21]: two layers with 16 and 32 filters of size 8, 4 and stride 4, 2 respectively. The non-spatial features vector is processed by a linear layer with a tanh non-linearity. The results are concatenated and sent through a linear layer with a ReLU activation. The resulting vector is then used as input to linear layers that output policies over the action function id a^0 and each action-function argument {a^l}_{l=0}^{L} independently. For spatial actions (screen or minimap coordinates) we independently model policies to select (discretised) x and y coordinates.
FullyConv Agent Convolutional networks for reinforcement learning (such as the Atari-net baseline above) usually reduce the spatial resolution of the input with each layer and ultimately finish with a fully connected layer that discards spatial structure completely. This allows spatial information to be abstracted away before actions are inferred. In StarCraft, though, a major challenge is to infer spatial actions (i.e. clicking on the screen and minimap). As these spatial actions act within the same space as the inputs, it might be detrimental to discard the spatial structure of the input.
Here we propose a fully convolutional network agent, which predicts spatial actions directly through a sequence of resolution-preserving convolutional layers. The network we propose has no stride and uses padding at every layer, thereby preserving the resolution of the spatial information in the input. For simplicity, we assume the screen and minimap inputs have the same resolution. We pass screen and minimap observations through separate 2-layer convolutional networks with 16, 32 filters of size 5 × 5, 3 × 3 respectively. The state representation is then formed by the concatenation of the screen and minimap network outputs, as well as the broadcast vector statistics, along the channel dimension. Note that this is likely non-optimal since the screen and minimap do not have the same spatial extent; future work could improve on this arrangement. To compute the baseline and policies over categorical (non-spatial) actions, the state representation is first passed through a fully-connected layer with 256 units and ReLU activations, followed by fully-connected linear layers. Finally, a policy over spatial actions is obtained using a 1 × 1 convolution of the state representation with a single output channel. See Figure 4 for a visual representation of this computation.
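A corresponding PyTorch sketch is given below; it assumes 64 × 64 inputs and a flat non-spatial feature vector, and the layer names and broadcast scheme are our own reading of the text rather than the released agent code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FullyConv(nn.Module):
    """Sketch of the FullyConv baseline: resolution-preserving convs + 1x1 spatial policy head."""
    def __init__(self, screen_ch, minimap_ch, nonspatial_dim, num_functions, resolution=64):
        super().__init__()
        def convs(c):
            return nn.Sequential(nn.Conv2d(c, 16, 5, padding=2), nn.ReLU(),
                                 nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.screen, self.minimap = convs(screen_ch), convs(minimap_ch)
        state_ch = 32 + 32 + nonspatial_dim
        self.spatial_policy = nn.Conv2d(state_ch, 1, kernel_size=1)   # policy over screen points
        self.fc = nn.Linear(state_ch * resolution * resolution, 256)
        self.value = nn.Linear(256, 1)
        self.function_id = nn.Linear(256, num_functions)

    def forward(self, screen, minimap, nonspatial):
        b, _, h, w = screen.shape
        broadcast = nonspatial[:, :, None, None].expand(b, -1, h, w)  # tile vector stats
        state = torch.cat([self.screen(screen), self.minimap(minimap), broadcast], dim=1)
        spatial_logits = self.spatial_policy(state).flatten(1)
        flat = F.relu(self.fc(state.flatten(1)))
        return self.value(flat), self.function_id(flat), spatial_logits
```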
FullyConv LSTM Agent Both of the above baselines are feed-forward architectures and therefore have no memory. While this is sufficient for some tasks, we cannot expect it to be enough for the full complexity of StarCraft. Here we introduce a baseline architecture based on a convolutional LSTM. We follow the fully-convolutional agent's pipeline described above and simply add a convolutional LSTM module after the minimap and screen feature channels are concatenated with the non-spatial features.
Random agents We use two random baselines. Random policy is an agent that picks uniformly at random among all valid actions, which highlights the difficulty of stumbling onto a successful episode in a very large action space. The random search baseline is based on the FullyConv agent and works by taking many independent, randomly initialised policy networks (with a low softmax temperature that induces near-deterministic actions), evaluating each for 20 episodes and keeping the one with the highest mean score. This is complementary in that it samples in policy space rather than action space.
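In pseudocode, the random-search baseline amounts to the following, where make_policy and evaluate are assumed helper functions rather than part of the released environment:

```python
def random_search(make_policy, evaluate, num_candidates=100, episodes=20):
    """Pick the best of many randomly initialised, near-deterministic policies."""
    best_policy, best_score = None, float("-inf")
    for _ in range(num_candidates):
        policy = make_policy()                # fresh random initialisation
        score = evaluate(policy, episodes)    # mean score over the evaluation episodes
        if score > best_score:
            best_policy, best_score = policy, score
    return best_policy, best_score
```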
# 4.4 Results
In A3C, we truncate the trajectory and run backpropagation after K = 40 forward steps of a network or if a terminal signal is received. The optimisation process runs 64 asynchronous threads using shared RMSProp. For each method, we ran 100 experiments, each using randomly sampled hyper-parameters. The learning rate was sampled from a LogUniform(10^-5, 10^-3) interval and was linearly annealed from the sampled value to half the initial rate for all agents. We use an independent entropy penalty of 10^-3 for the action function and each action-function argument. We act at a fixed rate every 8 game steps, which is equivalent to about three actions per second or 180 APM. All experiments were run for 600M steps (or 8×600M game steps).
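For illustration, the log-uniform learning-rate sampling and linear annealing could be sketched as follows; this is our reading of the setup described above, not released training code:

```python
import numpy as np

def sample_learning_rate(low=1e-5, high=1e-3):
    """Draw a learning rate log-uniformly from [low, high]."""
    return float(np.exp(np.random.uniform(np.log(low), np.log(high))))

def annealed_lr(initial_lr, step, total_steps=600_000_000):
    """Linearly anneal from the sampled value to half of it over training."""
    frac = min(step / total_steps, 1.0)
    return initial_lr * (1.0 - 0.5 * frac)
```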
# 4.4.1 Full Game
Figure 5: Performance on the full game of the best hyper-parameters versus the easy built-in AI player as the opponent (TvT on the Abyssal Reef LE ladder map): 1. Using outcome (-1 = lose, 0 = tie, 1 = win) as the reward; 2. Using the native game score provided by Blizzard as the reward. Notably, baseline agents do not learn to win even a single game. Architectures: (a) the original Atari architecture used for DQN, (b) a network which uses a convnet to preserve spatial information for screen and minimap actions, (c) same as in (b) but with a Convolutional LSTM at one layer. Lines are smoothed for visibility.
For experiments on the full game, we selected the Abyssal Reef LE ladder map used in ranked online games as well as in professional matches. The agent played against the easiest built-in AI in a Terran versus Terran match-up. Maximum game length was set to 30 minutes, after which a tie was declared, and the episode terminated.
Results of the experiments are shown in Figure 5. Unsurprisingly, none of the agents trained with sparse ternary rewards developed a viable strategy for the full game. The most successful agent, based on the fully convolutional architecture without memory, managed to avoid constant losses by using the Terran ability to lift and move buildings out of attack range. This makes it difficult for the easy AI to win within the 30 minute time limit.

Agents trained with the Blizzard score converged to trivial strategies that avoid distracting workers from mining minerals. Most agents converged to simply preserving the initial mining process without building further units or structures (this behaviour was also observed in the economic mini-game proposed below).
These results suggest that the full game of StarCraft II is indeed a challenging RL domain, especially without access to other sources of information such as human replays.
4.4.2 Mini-Games
[Figure 6 panels, one per mini-game: MoveToBeacon, CollectMineralShards, FindAndDefeatZerglings, DefeatRoaches, DefeatZerglingsAndBanelings, CollectMineralsAndGas, BuildMarines; legend: Atari-net, FullyConv and FullyConvLSTM best-mean runs over 600M game steps.]
Figure 6: Training process for baseline agent architectures. Displayed lines are mean scores as a function of game steps. The three network architectures are the same as used in Figure 5. Faint lines show all 100 runs with different hyper-parameters; the solid line is the run with the best mean. Lines are smoothed for visibility.
Table 1: Aggregated results for human baselines and agents on mini-games. All agents were trained for 600M steps. MEAN corresponds to the average agent performance, BEST MEAN is the average performance of the best agent across different hyper-parameters, MAX corresponds to the maximum observed individual episode score.
| Agent (metric) | MoveToBeacon | CollectMineralShards | FindAndDefeatZerglings | DefeatRoaches | DefeatZerglingsAndBanelings | CollectMineralsAndGas | BuildMarines |
|---|---|---|---|---|---|---|---|
| Random policy (mean / max) | 1 / 6 | 17 / 35 | 4 / 19 | 1 / 46 | 23 / 118 | 12 / 750 | <1 / 5 |
| Random search (mean / max) | 25 / 29 | 32 / 57 | 21 / 33 | 51 / 241 | 55 / 159 | 2318 / 3940 | 8 / 46 |
| DeepMind human player (mean / max) | 26 / 28 | 133 / 142 | 46 / 49 | 41 / 81 | 729 / 757 | 6880 / 6952 | 138 / 142 |
| StarCraft GrandMaster (mean / max) | 28 / 28 | 177 / 179 | 61 / 61 | 215 / 363 | 727 / 848 | 7566 / 7566 | 133 / 133 |
| Atari-net (best mean / max) | 25 / 33 | 96 / 131 | 49 / 59 | 101 / 351 | 81 / 352 | 3356 / 3505 | <1 / 20 |
| FullyConv (best mean / max) | 26 / 45 | 103 / 134 | 45 / 56 | 100 / 355 | 62 / 251 | 3978 / 4130 | 3 / 42 |
| FullyConv LSTM (best mean / max) | 26 / 35 | 104 / 137 | 44 / 57 | 98 / 373 | 96 / 444 | 3351 / 3995 | 6 / 62 |
As described in section 3, one can avoid the complexity of the full game by defining a set of mini-games which focus on certain aspects of the game (see section 3 for a high-level description of each mini-game).

We trained our agents on each mini-game. The aggregated training results are shown in Figure 6 and the final results with comparisons to human baselines can be found in Table 1. A video showcasing our agents can also be found at https://youtu.be/6L448yg0Sm0.
Overall, fully convolutional agents performed the best across the non-human baselines. Somewhat surprisingly, the Atari-net agent appeared to be quite a strong competitor on mini-games involving combat, namely FindAndDefeatZerglings, DefeatRoaches and DefeatZerglingsAndBanelings. On CollectMineralsAndGas, only the best convolutional agent learned to increase the initial resource income by producing more worker units and assigning them to mining.

We found BuildMarines to be the most strategically demanding mini-game and perhaps the closest of all to the full game of StarCraft. The best results on this game were achieved by FullyConv LSTM and Random Search, while Atari-net failed to learn a strategy to consistently produce marines during each episode. It should be noted that, without the restrictions on action space imposed by this map, it would be significantly more difficult to learn to produce marines in this mini-game.

All agents performed sub-optimally when compared against the GrandMaster player, except for the simplest MoveToBeacon mini-game, which only requires good mechanics and reaction time, at which artificial agents are expected to be good. However, in some games like DefeatRoaches and FindAndDefeatZerglings, our agents did fare well versus the DeepMind game tester.
The results of our baseline agents demonstrate that even relatively simple mini-games present interesting challenges for existing RL algorithms.
# 5 Supervised Learning from Replays
Game replays are a crucial resource used by professional and amateur players alike, who learn new strategies, find critical mistakes made in a game, or simply enjoy watching others play as a form of entertainment. Replays are especially important in StarCraft because of hidden information: the fog-of-war hides all of the opponent's units unless they are within view of one of your own. Thus, among professional players it is standard practice to review and analyse every game they play, even when they win.

The use of supervised data such as replays or human demonstrations has been successful in robotics [2, 25], the game of Go [19, 32], and Atari [10]. It has also been used in the context of StarCraft I (e.g., [13]), though not to train a policy over basic actions, but rather to discover build orders. StarCraft II provides the opportunity to collect and learn from a large and growing set of human replays. Whereas there has been no central and standardised mechanism for collecting replays for StarCraft I, large numbers of anonymised StarCraft II games are readily available via Blizzard's online 1v1 ladder. As well, more games will be added to this set on a regular basis as a relatively stable player pool plays new games.

Learning from replays should be useful to bootstrap or complement reinforcement learning. In isolation, it could also serve as a benchmark for sequence modelling or memory architectures having to deal with long term correlations. Indeed, to understand a game as it unfolds, one must integrate information across many time steps efficiently. Furthermore, due to partial observability, replays could also be used to study models of uncertainty such as (but not limited to) variational autoencoders [15]. Finally, comparing performance on outcome/action prediction may help guide the design of neural architectures with suitable inductive biases for RL in the domain.
In the rest of this section, we provide baselines using the architectures described in Section 4, but using a set of 800K games to learn both a value function (i.e., predicting the winner of the game from game observations), and a policy (i.e., predicting the action taken from game observations). The games contain all possible matchups in StarCraft II (i.e., we do not restrict the agent to play a single race).
Figure 7: Statistics of the replay set we used for supervised training of our policy and value nets. (Left) Distribution of player rating (MMR) as a function of APM. (Right) Distribution of actions sorted by probability of usage by human players.
Figure 7 shows statistics for the replays we used. We summarize some of the most interesting stats here: 1. The skill level of players, measured by the Match Making Rating (MMR), varies from casual gamer, to high-end amateur, on through to professionals. 2. The average number of Actions Per Minute (APM) is 153, and mean MMR is 3789. 3. The replays are not filtered, and instead all "ranked" league games played on BattleNet are used (see footnote 11). 4. Less than one percent are Masters level replays from top players. 5. We also show the distribution of actions sorted by their frequency of use by human players. The most frequent action, taken 43% of the time, is moving the camera.
# 11http://wiki.teamliquid.net/starcraft2/Battle.net_Leagues
6. Overall, the action distribution has a heavy tail with a few commonly used actions (e.g., move camera, select rectangle, attack screen) and a large number of actions that are used infrequently (e.g., building an engineering bay).
We train dual-headed networks that predict both the game outcome (1 = win vs. 0 = loss or tie) and the action taken by the player at each time step. Sharing the body of the network makes it necessary to balance the weights of the two loss functions, but it also allows value and policy predictions to inform one another. We did not make ties a separate game outcome class in the supervised training setup, since the number of ties in the dataset is very low (< 1%) compared to victories and defeats.
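Schematically, the joint objective of such a dual-headed network is a weighted sum of an action-imitation loss and an outcome-prediction loss; the weighting constant in this sketch is an assumed hyper-parameter, not a value reported in the text:

```python
import torch.nn.functional as F

def supervised_loss(policy_logits, action_target, value_logit, outcome_target,
                    value_weight=0.5):
    """Combined imitation + outcome-prediction loss for a shared-body network."""
    policy_loss = F.cross_entropy(policy_logits, action_target)       # predict the human action
    value_loss = F.binary_cross_entropy_with_logits(                  # predict win (1) vs not (0)
        value_logit.squeeze(-1), outcome_target.float())
    return policy_loss + value_weight * value_loss
```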
# 5.1 Value Predictions
Predicting the outcome of a game is a challenging task. Even professional StarCraft II commentators often fail to predict the winner despite having full access to the game state (i.e., not being limited by partial observability). Value functions that accurately predict game outcomes are desirable because they can be used to alleviate the challenge of learning from sparse rewards. From a given state, a well-trained value function can suggest which neighbouring states would be worth moving into long before seeing the game outcome.
Our setup for supervised learning begins with the straightforward baseline architectures described in Section 4: Atari-net and FullyConv. The networks do not take into account previous observations, i.e., they predict the outcome from a single frame (this is clearly sub-optimal). Furthermore, the observation does not include any privileged information: an agent has to produce value predictions based only on what it can see at any given time step (i.e. fog-of-war is enabled). Thus, if the opponent has managed to secretly produce many units that are very effective against the army that the agent has built, it may mistakenly believe that its position is stronger than it is.
Figure 8: The accuracy of predicting the outcome of StarCraft games using a network that operates on the screen and minimap feature planes as well as the scalar player stats. (Left) Train curves for three different network architectures. (Right) Accuracy over game time. At the beginning of the game (before 2 minutes), the network has 50% accuracy (equivalent to chance). This is expected since the outcome is less clear earlier in the game. By the 15 minute mark, the network is able to correctly predict the winner 65% of the time.
The networks proposed in Section 4 produce the action identifier and its arguments independently. However, the accuracy of predicting a point on the screen can be improved by conditioning on the base action, e.g., building an extra base versus moving an army. Thus, in addition to the Atari-net and FullyConv architectures, we have arFullyConv, which uses the auto-regressive policy representation introduced in Section 4.2, i.e. using the function identifier a^0 and previously sampled arguments a^{<l} to model a policy over the current argument a^l.
Networks are trained for 200k steps of gradient descent on all possible match-ups in StarCraft II. We trained with mini-batches of 64 observations taken at random from all replays uniformly across time. Observations are sampled with a step multiplier of 8, consistent with the RL setup. The resolution of both screen and minimap is 64 × 64. Each observation consists of the screen and minimap spatial feature layers as well as player stats such as food cap and number of collected minerals that human players see on the screen. We use 90% of the replays as the training set, and a fixed test set of 0.5M frames drawn from the remaining 10% of the replays. The agent performance is evaluated continuously against this test set as training progresses.
Figure 8 shows average accuracy over training steps as well as the accuracy of a trained model as a function of game time. A random baseline would be correct approximately 50% of the time since the game is well balanced across all race pairs, and tying is extremely rare. As training progresses, the FullyConv architecture achieves an accuracy of 64%. Also, as the game progresses, value prediction becomes more accurate, as seen in Figure 8 (Right). This mirrors the results of prior work on StarCraft I [9].
# 5.2 Policy Predictions
| Agent | Top-1 action | Top-1 screen | Top-1 minimap | Top-5 action | Top-5 screen | Top-5 minimap |
|---|---|---|---|---|---|---|
| Atari-net | 37.8% | 19.8% | 1.2% | 87.2% | 55.6% | 2.9% |
| FullyConv | 37.9% | 25.7% | 9.5% | 88.2% | 62.3% | 18.5% |
| arFullyConv | 37.7% | 25.9% | 10.5% | 87.4% | 62.7% | 22.1% |
| Random | 4.3% | 0.0% | 0.0% | 29.5% | 1.0% | 1.0% |
Table 2: Policy top-1 and top-5 accuracies for the base actions and screen/minimap arguments. arFullyConv refers to the autoregressive version of FullyConv. The random baseline is an arFullyConv with randomly initialised weights.
The same network trained to predict values had a separate output designed to predict the action issued by the user. We sometimes refer to this part of the network as the policy since it can be readily deployed to play the game.
There are many schemes one might employ to train networks to imitate human behaviour from replays. Here we use a simple approach that connects straightforwardly with the RL work in Section 4. When training our policy we sampled observations at a fixed step multiplier of 8 frames. We take the first action issued within each 8 frames as the learning target for the policy. If no action was taken during that period, we take the target to be a "no-op", i.e., a special action which has no effect.
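A minimal sketch of this target extraction is shown below; the frame-indexed action dictionary and the no-op identifier are assumed data structures, not part of the released tooling:

```python
NO_OP = 0  # assumed identifier for the "do nothing" action

def make_targets(actions_by_frame, num_frames, step_mul=8):
    """For every 8-frame window, use the first human action as the target, else no-op."""
    targets = []
    for start in range(0, num_frames, step_mul):
        window = [a for f in range(start, min(start + step_mul, num_frames))
                  for a in actions_by_frame.get(f, [])]
        targets.append(window[0] if window else NO_OP)
    return targets
```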
When humans play StarCraft II, only a subset of all possible actions are available at any given time. For example, "building a marine" is enabled only if barracks are currently selected. Networks should not need to learn to avoid illegal actions since this information is readily available. Thus, during training, we filter out actions that would not be available to a human player. To do so, we take the union of all available actions for the past 8 frames and apply a mask that sets the probability of all unavailable actions to near zero.
Note that, as previously mentioned, we trained the policy to play all possible matchups. Thus, in principle, the agent can play any race. However, for consistency with the reinforcement learning agents studied in Section 4, we report in-game metrics in the single Terran versus Terran matchup.
Table 2 shows how the different architectures perform in terms of accuracy at predicting the action identifier, the screen, and the minimap argument. As expected, both the FullyConv and arFullyConv architectures perform much better for spatial arguments. As well, the arFullyConv architecture outperforms FullyConv, presumably because it knows which action identifier the argument will be used for.

When we directly plug the policy trained with supervised learning into the game, it is able to produce more units and play better as a function of observed replay data, as shown in Figure 9 and in the video at https://youtu.be/WEOzide5XFc. It also outperforms all agents trained in Section 4 on the simpler mini-game of BuildMarines, which has a restricted action space, even though the supervised policy is playing an unrestricted, full 1v1 game. These results suggest that supervised imitation learning may be a promising direction for bootstrapping StarCraft II agents. Future work should look to improve imitation-initialised policies by training directly with reinforcement learning on the objective we really care about, i.e., the game outcome.
# 6 Conclusions & Future Work
This paper introduces StarCraft II as a new challenge for deep reinforcement learning research. We provide details for a freely available Python interface to play the game as well as human replay data from ranked games collected via Blizzard's official BattleNet ladder. With this initial release
Figure 9: The probability of building army units as training the policy nets progresses over the training data. The game setup is Terran vs. Terran. (Left) Probability of building any army units in a game. (Right) Average number of army units built per game.
we describe supervised learning results on the human replay data for policy and value networks. We also describe results for straightforward baseline RL agents on seven mini-games and on the full game.
We regard the mini-games primarily as unit tests. That is, an RL agent should be able to achieve human level performance on these with relative ease if it is to have a chance to succeed on the full game. It may be instructive to build additional mini-games, but we take the full game, evaluated on the final outcome, as the most interesting problem, and hope first and foremost to encourage research that will lead to its solution.

While performance on some mini-games is close to expert human play, we find, as expected, that current state-of-the-art baseline agents cannot learn to win against the easiest built-in AI on the full game. This is true not only when the game outcome (i.e., -1, 0, 1) is used as the reward signal, but also when a shaping reward is provided at each timestep (i.e., the native game score provided by Blizzard). In this sense, our provided environment presents a challenge that is at once canonical, externally defined, and completely intractable for off-the-shelf baseline algorithms.

This release simplifies several aspects of the game as it is played by humans: 1. the observations are preprocessed before they are given to the agent, 2. the action space has been simplified to be more easily used by RL agents instead of using the keyboard and mouse-click setup used by humans, 3. it is played in lock-step so that agents can compute for as long as they need at each time-step rather than being real-time, and 4. the full game only allows play against the built-in AI. However, we consider the real challenge to be building agents that can play the best human players on their own turf, that is, with RGB pixel observations and strict time limits. Therefore, future releases may relax the simplifications above, as well as enable self-play, moving us towards the goal of training agents that humans consider to be fair opponents.
# Contributions
Blizzard:

- StarCraft II Binary
- StarCraft II API: https://github.com/Blizzard/s2client-proto
- Replays

DeepMind:

- PySC2: https://github.com/deepmind/pysc2
- All the agents and experiments in the paper
# Acknowledgements
We would like to thank many at Blizzard, especially Tommy Tran, Tyler Plass, Brian Song, Tom van Dijck, and Greg Risselada, the Grandmaster. We would also like to thank the DeepMind team, especially Nal Kalchbrenner, Ali Eslami, Jamey Stevenson, Adam Cain and our esteemed game testers Amir Sadik & Sarah York. We also would like to thank David Churchill for his early feedback on the Raw API, for building CommandCenter, and for comments on the manuscript.
# References
[1] The Brood War API. http://bwapi.github.io/, 2017.
[2] Brenna D Argall, Sonia Chernova, Manuela Veloso, and Brett Browning. A survey of robot learning from demonstration. Robotics and Autonomous Systems, 57(5):469–483, 2009.

[3] Charles Beattie, Joel Z Leibo, Denis Teplyashin, Tom Ward, Marcus Wainwright, Heinrich Küttler, Andrew Lefrancq, Simon Green, Víctor Valdés, Amir Sadik, et al. DeepMind Lab. arXiv preprint arXiv:1612.03801, 2016.

[4] Marc G Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The Arcade Learning Environment: An evaluation platform for general agents. J. Artif. Intell. Res. (JAIR), 47:253–279, 2013.

[5] Nadav Bhonker, Shai Rozenberg, and Itay Hubara. Playing SNES in the retro learning environment. arXiv preprint arXiv:1611.02205, 2016.
[6] Michael Buro and David Churchill. Real-time strategy game competitions. AI Magazine, 33 (3):106, 2012.
[7] George E Dahl, Dong Yu, Li Deng, and Alex Acero. Context-dependent pre-trained deep neural networks for large-vocabulary speech recognition. IEEE Transactions on Audio, Speech, and Language Processing, 20(1):30–42, 2012.

[8] Yan Duan, Xi Chen, Rein Houthooft, John Schulman, and Pieter Abbeel. Benchmarking deep reinforcement learning for continuous control. In International Conference on Machine Learning, pages 1329–1338, 2016.

[9] Graham Kurtis Stephen Erickson and Michael Buro. Global state evaluation in StarCraft. In AIIDE, 2014.

[10] Todd Hester, Matej Vecerik, Olivier Pietquin, Marc Lanctot, Tom Schaul, Bilal Piot, Andrew Sendonaris, Gabriel Dulac-Arnold, Ian Osband, and John Agapiou. Learning from demonstrations for real world reinforcement learning. arXiv preprint arXiv:1704.03732, 2017.

[11] Philip Hingston. A Turing test for computer game bots. IEEE Transactions on Computational Intelligence and AI in Games, 1(3):169–186, 2009.

[12] Ulit Jaidee and Héctor Muñoz-Avila. ClassQ-L: A Q-learning algorithm for adversarial real-time strategy games. In Eighth Artificial Intelligence and Interactive Digital Entertainment Conference, 2012.
[13] Niels Justesen and Sebastian Risi. Learning macromanagement in StarCraft from replays using deep learning. arXiv preprint arXiv:1707.03743, 2017.
[14] Michał Kempka, Marek Wydmuch, Grzegorz Runc, Jakub Toczek, and Wojciech Jaśkowski. ViZDoom: A Doom-based AI research platform for visual reinforcement learning. In Computational Intelligence and Games (CIG), 2016 IEEE Conference on, pages 1–8. IEEE, 2016.
[15] Diederik P Kingma and Max Welling. Auto-encoding variational bayes. In Proceedings of the 2nd International Conference on Learning Representations, 2014.
[16] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012.
[17] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436–444, 2015.
[18] Sergey Levine, Peter Pastor, Alex Krizhevsky, Julian Ibarz, and Deirdre Quillen. Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection. The International Journal of Robotics Research, page 0278364917710318, 2016.
[19] Chris J Maddison, Aja Huang, Ilya Sutskever, and David Silver. Move evaluation in Go using deep convolutional neural networks. arXiv preprint arXiv:1412.6564, 2014.
[20] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.

[21] Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy P Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. ICML, 2016.

[22] Santiago Ontañón, Gabriel Synnaeve, Alberto Uriarte, Florian Richoux, David Churchill, and Mike Preuss. A survey of real-time strategy game AI research and competition in StarCraft. IEEE Transactions on Computational Intelligence and AI in Games, 5(4):293–311, 2013.
[23] Peng Peng, Quan Yuan, Ying Wen, Yaodong Yang, Zhenkun Tang, Haitao Long, and Jun Wang. Multiagent bidirectionally-coordinated nets for learning to play starcraft combat games. arXiv preprint arXiv:1703.10069, 2017.
[24] Diego Perez, Spyridon Samothrakis, Julian Togelius, Tom Schaul, Simon Lucas, Adrien Couëtoux, Jeyull Lee, Chong-U Lim, and Tommy Thompson. The 2014 general video game playing competition. Computational Intelligence and AI in Games, 2015.

[25] Ivaylo Popov, Nicolas Heess, Timothy Lillicrap, Roland Hafner, Gabriel Barth-Maron, Matej Vecerik, Thomas Lampe, Yuval Tassa, Tom Erez, and Martin Riedmiller. Data-efficient deep reinforcement learning for dexterous manipulation. arXiv preprint arXiv:1704.03073, 2017.

[26] Glen Robertson and Ian Watson. A review of real-time strategy game AI. AI Magazine, 35(4):75–104, 2014.

[27] Philipp Rohlfshagen and Simon M Lucas. Ms Pac-Man versus ghost team CEC 2011 competition. In Evolutionary Computation (CEC), 2011 IEEE Congress on, pages 70–77. IEEE, 2011.

[28] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael S. Bernstein, Alexander C. Berg, and Fei-Fei Li. ImageNet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211–252, 2015.
[29] Andrei A Rusu, Matej Vecerik, Thomas Rothörl, Nicolas Heess, Razvan Pascanu, and Raia Hadsell. Sim-to-real robot learning from pixels with progressive nets. arXiv preprint arXiv:1610.04286, 2016.

[30] Tom Schaul. A video game description language for model-based or interactive learning. In Conference on Computational Intelligence in Games (IEEE-CIG), pages 1–8. IEEE, 2013.

[31] Tom Schaul, Julian Togelius, and Jürgen Schmidhuber. Measuring intelligence through games. arXiv preprint arXiv:1109.1314, 2011.

[32] David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016.

[33] Gabriel Synnaeve, Nantas Nardelli, Alex Auvolat, Soumith Chintala, Timothée Lacroix, Zeming Lin, Florian Richoux, and Nicolas Usunier. TorchCraft: a library for machine learning research on real-time strategy games. arXiv preprint arXiv:1611.00625, 2016.
[34] Yuandong Tian, Qucheng Gong, Wenling Shang, Yuxin Wu, and Larry Zitnick. ELF: An extensive, lightweight and flexible research platform for real-time strategy games. arXiv preprint arXiv:1707.01067, 2017.

[35] Yuandong Tian, Qucheng Gong, Wenling Shang, Yuxin Wu, and Larry Zitnick. ELF: An extensive, lightweight and flexible research platform for real-time strategy games. arXiv preprint arXiv:1707.01067, 2017.

[36] Julian Togelius, Sergey Karakovskiy, and Robin Baumgarten. The 2009 Mario AI competition. In Evolutionary Computation (CEC), 2010 IEEE Congress on, pages 1–8. IEEE, 2010.
[37] Nicolas Usunier, Gabriel Synnaeve, Zeming Lin, and Soumith Chintala. Episodic exploration for deep deterministic policies for StarCraft micromanagement. In International Conference on Learning Representations, 2017.
[38] Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144, 2016.
20 | {
"id": "1611.00625"
} |
1708.03888 | Large Batch Training of Convolutional Networks | A common way to speed up training of large convolutional networks is to add
computational units. Training is then performed using data-parallel synchronous
Stochastic Gradient Descent (SGD) with mini-batch divided between computational
units. With an increase in the number of nodes, the batch size grows. But
training with large batch size often results in the lower model accuracy. We
argue that the current recipe for large batch training (linear learning rate
scaling with warm-up) is not general enough and training may diverge. To
overcome this optimization difficulties we propose a new training algorithm
based on Layer-wise Adaptive Rate Scaling (LARS). Using LARS, we scaled Alexnet
up to a batch size of 8K, and Resnet-50 to a batch size of 32K without loss in
accuracy. | http://arxiv.org/pdf/1708.03888 | Yang You, Igor Gitman, Boris Ginsburg | cs.CV | null | null | cs.CV | 20170813 | 20170913
Technical Report
# LARGE BATCH TRAINING OF CONVOLUTIONAL NETWORKS
# Yang You∗ Computer Science Division, University of California at Berkeley youyang@cs.berkeley.edu
# Igor Gitman Computer Science Department Carnegie Mellon University igitman@andrew.cmu.edu
# Boris Ginsburg NVIDIA bginsburg@nvidia.com
# ABSTRACT
A common way to speed up training of large convolutional networks is to add computational units. Training is then performed using data-parallel synchronous Stochastic Gradient Descent (SGD) with the mini-batch divided between computational units. With an increase in the number of nodes, the batch size grows. But training with large batch size often results in lower model accuracy. We argue that the current recipe for large batch training (linear learning rate scaling with warm-up) is not general enough and training may diverge. To overcome these optimization difficulties we propose a new training algorithm based on Layer-wise Adaptive Rate Scaling (LARS). Using LARS, we scaled Alexnet up to a batch size of 8K, and Resnet-50 to a batch size of 32K without loss in accuracy.
# 1 INTRODUCTION
Training of large Convolutional Neural Networks (CNN) takes a lot of time. The brute-force way to speed up CNN training is to add more computational power (e.g. more GPU nodes) and train the network using data-parallel Stochastic Gradient Descent, where each worker receives some chunk of the global mini-batch (see e.g. Krizhevsky (2014) or Goyal et al. (2017)). The size of a chunk should be large enough to utilize the computational resources of the worker. So scaling up the number of workers results in an increase of the batch size. But using a large batch may negatively impact the model accuracy, as was observed in Krizhevsky (2014), Li et al. (2014), Keskar et al. (2016), and Hoffer et al. (2017).

Increasing the global batch while keeping the same number of epochs means that you have fewer iterations to update weights. The straight-forward way to compensate for a smaller number of iterations is to do larger steps by increasing the learning rate (LR). For example, Krizhevsky (2014) suggests to linearly scale up LR with batch size. However, using a larger LR makes optimization more difficult, and networks may diverge especially during the initial phase. To overcome this difficulty, Goyal et al. (2017) suggested doing a "learning rate warm-up": training starts with a small "safe" LR, which is slowly increased to the target "base" LR. With a LR warm-up and a linear scaling rule, Goyal et al. (2017) successfully trained Resnet-50 with batch B=8K (see also Cho et al. (2017)). Linear scaling of LR with a warm-up is the "state-of-the-art" recipe for large batch training.
We tried to apply this linear scaling and warm-up scheme to train Alexnet on Imagenet (Deng et al. (2009)), but scaling stopped after B=2K since training diverged for large LR-s. For B=4K the accuracy dropped from the baseline 57.6% (for B=256) to 53.1%, and for B=8K the accuracy decreased to 44.8%. To enable training with a large LR, we replaced Local Response Normalization layers in Alexnet with Batch Normalization (BN). We will refer to this modification of AlexNet as AlexNet-BN throughout the rest of the paper. BN improved both model convergence for large LR as well as accuracy: for B=8K the accuracy gap was decreased from 14% to 2.2%.

∗Work was performed when Y. You and I. Gitman were NVIDIA interns.
To analyze the training stability with large LRs we measured the ratio between the norm of the layer weights and the norm of the gradient update. We observed that if this ratio is too high, the training may become unstable. On the other hand, if the ratio is too small, then the weights don't change fast enough. This ratio varies a lot between different layers, which makes it necessary to use a separate LR for each layer. Thus we propose a novel Layer-wise Adaptive Rate Scaling (LARS) algorithm. There are two notable differences between LARS and other adaptive algorithms such as ADAM (Kingma & Ba (2014)) or RMSProp (Tieleman & Hinton (2012)): first, LARS uses a separate learning rate for each layer and not for each weight, which leads to better stability. And second, the magnitude of the update is controlled with respect to the weight norm for better control of training speed. With LARS we trained Alexnet-BN and Resnet-50 with B=32K without accuracy loss.
# 2 BACKGROUND
The training of CNN is done using Stochastic Gradient (SG) based methods. At each step t a mini-batch of B samples x_i is selected from the training set. The gradients of the loss function ∇L(x_i, w) are computed for this subset, and the network weights w are updated based on this stochastic gradient:

w_{t+1} = w_t − λ · (1/B) ∑_{i=1}^{B} ∇L(x_i, w_t)    (1)

The computation of SG can be done in parallel by N units, where each unit processes a chunk of the mini-batch with B/N samples. Increasing the mini-batch permits scaling to more nodes without reducing the workload on each unit. However, it was observed that training with a large batch is difficult. To maintain the network accuracy, it is necessary to carefully adjust training hyper-parameters (learning rate, momentum etc).
Krizhevsky (2014) suggested the following rules for training with large batches: when you increase the batch B by k, you should also increase LR by k while keeping other hyper-parameters (momentum, weight decay, etc) unchanged. The logic behind linear LR scaling is straight-forward: if you increase B by k while keeping the number of epochs unchanged, you will do k fewer steps. So it seems natural to increase the step size by k. For example, let's take k = 2. The weight updates for batch size B after 2 iterations would be:

w_{t+2} = w_t − λ · (1/B) · ( ∑_{j=1}^{B} ∇L(x_j, w_t) + ∑_{j=1}^{B} ∇L(x_j, w_{t+1}) )    (2)

The weight update for the batch B_2 = 2·B with learning rate λ_2:

w_{t+1} = w_t − λ_2 · (1/(2B)) ∑_{j=1}^{2B} ∇L(x_j, w_t)    (3)

will be similar if you take λ_2 = 2·λ, assuming that ∇L(x_j, w_{t+1}) ≈ ∇L(x_j, w_t).
Using the "linear LR scaling", Krizhevsky (2014) trained AlexNet with batch B=1K with minor (≈ 1%) accuracy loss. The scaling of Alexnet above 2K is difficult, since the training diverges for larger LRs. It was observed that linear scaling works much better for networks with Batch Normalization (e.g. Codreanu et al. (2017)). For example Chen et al. (2016) trained the Inception model with batch B=6400, and Li (2017) trained Resnet-152 for B=5K.
The main obstacle for scaling up the batch is the instability of training with high LR. Hoffer et al. (2017) tried to use a less aggressive "square root scaling" of LR with a special form of Batch Normalization ("Ghost Batch Normalization") to train Alexnet with B=8K, but still the accuracy (53.93%) was much worse than the baseline 58%. To overcome the instability during the initial phase, Goyal et al. (2017) proposed to use LR warm-up: training starts with a small LR, and then the LR is gradually increased to the target. After the warm-up period (usually a few epochs), you switch to the regular LR policy ("multi-steps", polynomial decay etc). Using LR warm-up and linear scaling, Goyal et al. (2017) trained Resnet-50 with batch B=8K without loss in accuracy. These recipes constitute the current state-of-the-art for large batch training, and we used them as the starting point of our experiments.
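The warm-up plus linear-scaling recipe can be written down compactly; this is a generic sketch rather than the exact schedule of any of the cited papers, and the default parameters are assumptions:

```python
def warmup_linear_scaling_lr(step, base_lr, batch, base_batch=256, warmup_steps=25_000):
    """Linear LR scaling with a gradual warm-up from a small 'safe' LR."""
    target_lr = base_lr * batch / base_batch       # linear scaling rule
    if step < warmup_steps:
        return target_lr * (step + 1) / warmup_steps   # ramp up during warm-up
    return target_lr
```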
Another problem related to large batch training is the so-called "generalization gap", observed by Keskar et al. (2016). They came to the conclusion that "the lack of generalization ability is due to the fact that large-batch methods tend to converge to sharp minimizers of the training function." They tried a few methods to improve the generalization with data augmentation and warm-starting with a small batch, but they did not find a working solution.
# 3 ANALYSIS OF ALEXNET TRAINING WITH LARGE BATCH
We used BVLC Alexnet (footnote 1) with batch B=512 as baseline. The model was trained using SGD with momentum 0.9 with initial LR=0.01 and the polynomial (power=2) decay LR policy for 100 epochs. The baseline accuracy is 58% (averaged over last 5 epochs). Next we tried to train Alexnet with B=4K by using a larger LR. In our experiments we changed the base LR from 0.01 to 0.08, but training diverged with LR > 0.06 even with warm-up (footnote 2). The best accuracy for B=4K is 53.1%, achieved for LR=0.05. For B=8K we couldn't scale up the LR either, and the best accuracy is 44.8%, achieved for LR=0.03 (see Table 1(a)).
To stabilize the initial training phase we replaced Local Response Normalization layers with Batch Normalization (BN). We will refer to this model as Alexnet-BN (footnote 3). The baseline accuracy for Alexnet-BN with B=512 is 60.2% (footnote 4). With BN we could use large LR-s even without warm-up. For B=4K the best accuracy 58.9% was achieved for LR=0.18, and for B=8K the best accuracy 58% was achieved for LR=0.3. We also observed that BN significantly widens the range of LRs with good accuracy.
Table 1: Alexnet and Alexnet-BN: B=4K and 8K. BN makes it possible to use larger learning rates.
(a) Alexnet

| Batch | Base LRs explored | Best accuracy, % |
|---|---|---|
| 4K | 0.02, 0.04, 0.05, 0.06, 0.07 | 53.1 (at LR=0.05) |
| 8K | 0.02, 0.03, 0.04, 0.05 | 44.8 (at LR=0.03) |

(b) Alexnet-BN

| Batch | Base LR | Accuracy, % |
|---|---|---|
| 512 | 0.02 | 60.2 |
| 4K | 0.16 | 58.1 |
| 4K | 0.18 | 58.9 |
| 4K | 0.21 | 58.5 |
| 4K | 0.30 | 57.1 |
| 8K | 0.23 | 57.6 |
| 8K | 0.30 | 58.0 |
| 8K | 0.32 | 57.7 |
| 8K | 0.41 | 56.5 |
Still there is a 2.2% accuracy loss for B=8K. To check if it is related to the "generalization gap" (Keskar et al. (2016)), we looked at the loss gap between training and testing (see Fig. 1). We did not find a significant difference in the loss gap between B=256 and B=8K. We conclude that in this case the accuracy loss is not related to a generalization gap, and it is caused by the slower training.
[Figure 1 curves: AlexNet with Batch Normalization and poly LR (power=2); Batch=512, Base LR=0.02 and Batch=8192, Base LR=0.32; y-axis: |Test Loss − Train Loss|; x-axis: epochs.]
Figure 1: Alexnet-BN: Gap between training and testing loss
1 https://github.com/BVLC/caffe/tree/master/models/bvlc_alexnet
2 LR starts from 0.001 and is linearly increased to the target LR during 2.5 epochs.
3 https://github.com/borisgin/nvcaffe-0.16/tree/caffe-0.16/models/alexnet_bn
4 The Alexnet-BN baseline was trained using SGD with momentum=0.9 and weight decay=0.0005 for 128 epochs. We used a polynomial (power 2) decay LR policy with base LR=0.02.
# 4 LAYER-WISE ADAPTIVE RATE SCALING (LARS)
The standard SGD uses the same LR λ for all layers: w_{t+1} = w_t − λ∇L(w_t). When λ is large, the update ||λ · ∇L(w_t)|| can become larger than ||w||, and this can cause divergence. This makes the initial phase of training highly sensitive to the weight initialization and to the initial LR. We found that the ratio of the L2-norm of weights to that of gradients, ||w||/||∇L(w_t)||, varies significantly between weights and biases, and between different layers. For example, let's take AlexNet-BN after one iteration (Table 2; "*.w" denotes layer weights and "*.b" biases). The ratio ||w||/||∇L(w)|| for the 1st convolutional layer ("conv1.w") is 5.76, and for the last fully connected layer ("fc6.w") it is 1345. The ratio is high during the initial phase, and it rapidly decreases after a few epochs (see Figure 2).
Table 2: AlexNet-BN: The norm of weights and gradients at 1st iteration.
| Layer | ‖w‖ | ‖∇L(w)‖ | ‖w‖ / ‖∇L(w)‖ |
|---|---|---|---|
| conv1.b | 1.86 | 0.22 | 8.48 |
| conv1.w | 0.098 | 0.017 | 5.76 |
| conv2.b | 5.546 | 0.165 | 33.6 |
| conv2.w | 0.16 | 0.002 | 83.5 |
| conv3.b | 9.40 | 0.135 | 69.9 |
| conv3.w | 0.196 | 0.0015 | 127 |
| conv4.b | 8.15 | 0.109 | 74.6 |
| conv5.b | 6.65 | 0.09 | 73.6 |
| conv5.w | 0.16 | 0.0002 | 69 |
| fc6.b | 30.7 | 0.26 | 117 |
| fc6.w | 6.4 | 0.005 | 1345 |
| fc7.b | 20.5 | 0.30 | 68 |
| fc7.w | 6.4 | 0.013 | 489 |
| fc8.b | 20.2 | 0.22 | 93 |
| fc8.w | 0.316 | 0.016 | 19 |
If the LR is large compared to this ratio for some layer, then training may become unstable. The LR "warm-up" attempts to overcome this difficulty by starting from a small LR, which can be safely used for all layers, and then slowly increasing it until the weights have grown enough to use larger LRs. We would like to use a different approach. We use a local LR λ^l for each layer l:

Δw_t^l = γ · λ^l · ∇L(w_t^l)    (4)
where γ is a global LR. The local LR λ^l is defined for each layer through a "trust" coefficient η < 1:

λ^l = η × ||w^l|| / ||∇L(w^l)||    (5)

The η defines how much we trust the layer to change its weights during one update (see footnote 5). Note that now the magnitude of the update for each layer doesn't depend on the magnitude of the gradient anymore, so it helps to partially eliminate vanishing and exploding gradient problems. This definition can be easily extended for SGD to balance the local learning rate and the weight decay term β:

λ^l = η × ||w^l|| / (||∇L(w^l)|| + β · ||w^l||)    (6)
Algorithm 1 SGD with LARS. Example with weight decay, momentum and polynomial LR decay.

Parameters: base LR γ_0, momentum m, weight decay β, LARS coefficient η, number of steps T
Init: t = 0, v = 0. Init weight w_0^l for each layer l
while t < T for each layer l do
    g_t^l ← ∇L(w_t^l)  (obtain a stochastic gradient for the current mini-batch)
    γ_t ← γ_0 · (1 − t/T)^2  (compute the global learning rate)
    λ^l ← η · ||w_t^l|| / (||g_t^l|| + β ||w_t^l||)  (compute the local LR λ^l)
    v_{t+1}^l ← m · v_t^l + γ_{t+1} · λ^l · (g_t^l + β w_t^l)  (update the momentum)
    w_{t+1}^l ← w_t^l − v_{t+1}^l  (update the weights)
end while
The network training procedure for SGD with LARS is summarized in Algorithm 1. One can find more implementation details at https://github.com/borisgin/nvcaffe-0.16
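A compact NumPy sketch of one SGD-with-LARS step over a dictionary of per-layer parameters, following Algorithm 1; the variable names and the small epsilon for numerical safety are our own additions:

```python
import numpy as np

def lars_step(weights, grads, velocity, t, T, base_lr=2.0, momentum=0.9,
              weight_decay=0.0005, eta=0.001):
    """One SGD-with-LARS step over a dict of per-layer weights (Algorithm 1 sketch)."""
    global_lr = base_lr * (1.0 - t / T) ** 2                         # polynomial decay
    for name in weights:
        w, g = weights[name], grads[name]
        local_lr = eta * np.linalg.norm(w) / (
            np.linalg.norm(g) + weight_decay * np.linalg.norm(w) + 1e-12)
        update = global_lr * local_lr * (g + weight_decay * w)
        velocity[name] = momentum * velocity[name] + update          # momentum buffer
        weights[name] = w - velocity[name]
    return weights, velocity
```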
The local LR strongly depends on the layer and batch size (see Figure 2).
5 One can consider LARS as a private case of block-diagonal re-scaling from Lafond et al. (2017).
[Figure 2 panels: (a) Local LR, conv1-weights; (b) Local LR, conv1-bias; (c) Local LR, conv5-weights; (d) Local LR, conv5-bias; curves for batch sizes 256, 1024 and 8192 over 100 epochs.]
Figure 2: LARS: local LR for different layers and batch sizes
# 5 TRAINING WITH LARS
We re-trained Alexnet and Alexnet-BN with LARS for batches up to 32K 6. For B=8K the accuracy of both networks matched the baseline B=512 (see Figure 3). Alexnet-BN trained with B=16K lost 0.9% in accuracy, and trained with B=32K lost 2.6%.
Table 3: Alexnet and Alexnet-BN: Training with LARS
(a) Alexnet (warm-up for 2 epochs)

| Batch | LR | Accuracy, % |
|---|---|---|
| 512 | 2 | 58.7 |
| 4K | 10 | 58.5 |
| 8K | 10 | 58.2 |
| 16K | 14 | 55.0 |
| 32K | TBD | TBD |

(b) Alexnet-BN (warm-up for 5 epochs)

| Batch | LR | Accuracy, % |
|---|---|---|
| 512 | 2 | 60.2 |
| 4K | 10 | 60.4 |
| 8K | 14 | 60.1 |
| 16K | 23 | 59.3 |
| 32K | 22 | 57.8 |
6 Models have been trained for 100 epochs using SGD with momentum=0.9, weight decay=0.0005, polynomial (p=2) decay LR policy, and LARS coefficient η = 0.001. Training has been done on NVIDIA DGX1. To emulate large batches (B=16K and 32K) we used the iter_size parameter to partition the mini-batch into smaller chunks. The weights update is done after gradients for the last chunk are computed.
Figure 3: LARS: Alexnet-BN with B=8K
There is a relatively wide interval of base LRs which gives the "best" accuracy. for example, for Alexnet-BN with B=16K LRs from [13;22] give the accuracy â 59.3, for B=32k, LRs from [17,28] give â 57.5
Alexnet-BN: Accurcay vs Base LR âBatch=16K âBatch=32K Base LR (LARS)
Figure 4: Alexnet-BN, B=16K and 32k: Accuracy as function of LR
Next we retrained Resnet-50, ver.1 from He et al. (2016) with LARS. As a baseline we used B=256 with corresponding top-1 accuracy 73%. 7
Table 4: ResNet50 with LARS.
| Batch | LR policy | γ | warm-up | Accuracy, % |
|---|---|---|---|---|
| 256 | poly(2) | 0.2 | N/A | 73.0 |
| 8K | LARS + poly(2) | 0.6 | 5 | 72.7 |
| 16K | LARS + poly(2) | 2.5 | 5 | 73.0 |
| 32K | LARS + poly(2) | 2.9 | 5 | 72.3 |
7 Note that our baseline 73% is lower than the published state-of-the-art 75% of Goyal et al. (2017) and Cho et al. (2017) for a few reasons. We trained with minimal data augmentation (pre-scale images to 256x256 and use a random 224x224 crop with horizontal flip). During testing we used one model and 1 central crop. The state-of-the-art accuracy 75% was achieved with more extensive data augmentation during testing, and with multi-model, multi-crop testing. For more details see the log files at https://people.eecs.berkeley.edu/~youyang/publications/batch.
[Figure 5 curves (ImageNet, ResNet-50, no data augmentation): Batch=32k, LR=2.9, warmup, LARS; Batch=16k, LR=2.5, warmup, LARS; Batch=8k, LR=6.4, warmup; Batch=256, LR=0.2. y-axis: Top-1 test accuracy; x-axis: epochs.]
Figure 5: Scaling ResNet-50 up to B=32K with LARS.
All networks have been trained using SGD with momentum 0.9 and weight decay=0.0001 for 90 epochs. We used LARS and warm-up for 5 epochs with polynomial decay (power=2) LR policy.
We found that with LARS we can scale up Resnet-50 to batch B=32K with almost the same (−0.7%) accuracy as the baseline.
# 6 LARGE BATCH VS NUMBER OF STEPS
As one can see from the Alexnet-BN example for B=32K, even training with LARS and using a large LR does not reach the baseline accuracy. But the accuracy can be recovered completely by just training longer. We argue that when the batch is very large, the stochastic gradients become very close to the true gradients, so increasing the batch does not give much additional gradient information compared to smaller batches.
Table 5: Alexnet-BN, B=32K: Accuracy vs Training duration
| Num of epochs | 100 | 125 | 150 | 175 | 200 |
|---|---|---|---|---|---|
| Accuracy, % | 57.8 | 59.2 | 59.5 | 59.5 | 59.9 |
# 7 CONCLUSION
Large batch size is a key for scaling up training of convolutional networks. The existing approach for large-batch training, based on using large learning rates, leads to divergence, especially during the initial phase, even with learning rate warm-up. To solve these optimization difficulties we proposed a new algorithm, which adapts the learning rate for each layer (LARS). Using LARS, we extended scaling of Alexnet and Resnet-50 to B=32K. Training of these networks with batch above 32K without accuracy loss is still an open problem.
# REFERENCES
Jianmin Chen, Rajat Monga, Samy Bengio, and Rafal Jozefowicz. Revisiting distributed synchronous sgd. arXiv preprint arXiv:1604.00981, 2016.
Minsik Cho, Ulrich Finkler, Sameer Kumar, David Kung, Vaibhav Saxena, and Dheeraj Sreedhar. Powerai ddl. arXiv preprint arXiv:1708.02188, 2017.
Valeriu Codreanu, Damian Podareanu, and Vikram Saletore. Blog: Achieving deep learning training in less than 40 minutes on imagenet-1k with scale-out intel® xeon™/xeon phi™ architectures. Blog https://blog.surf.nl/en/imagenet-1k-training-on-intel-xeon-phi-in-less-than-40-minutes/, 2017.

Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pp. 248–255. IEEE, 2009.

Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. Accurate, large minibatch SGD: Training ImageNet in 1 hour. arXiv preprint arXiv:1706.02677, 2017.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778, 2016.

Elad Hoffer, Itay Hubara, and Daniel Soudry. Train longer, generalize better: closing the generalization gap in large batch training of neural networks. arXiv preprint arXiv:1705.08741, 2017.
Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping Tak Peter Tang. On large-batch training for deep learning: Generalization gap and sharp minima. arXiv preprint arXiv:1609.04836, 2016.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Alex Krizhevsky. One weird trick for parallelizing convolutional neural networks. arXiv preprint arXiv:1404.5997, 2014.
Jean Lafond, Nicolas Vasilache, and Léon Bottou. Diagonal rescaling for neural networks. arXiv preprint arXiv:1705.09319v1, 2017.
Mu Li. Scaling Distributed Machine Learning with System and Algorithm Co-design. PhD thesis, CMU, 2017.
Mu Li, Tong Zhang, Yuqiang Chen, and Alexander J Smola. Efficient mini-batch training for stochastic optimization. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 661–670. ACM, 2014.
Tijmen Tieleman and Geoffrey Hinton. Lecture 6.5-rmsprop, coursera: Neural networks for machine learning. University of Toronto, Tech. Rep, 2012.
8 | {
"id": "1609.04836"
} |
1708.02556 | Multi-Generator Generative Adversarial Nets | We propose a new approach to train the Generative Adversarial Nets (GANs)
with a mixture of generators to overcome the mode collapsing problem. The main
intuition is to employ multiple generators, instead of using a single one as in
the original GAN. The idea is simple, yet proven to be extremely effective at
covering diverse data modes, easily overcoming the mode collapse and delivering
state-of-the-art results. A minimax formulation is able to establish among a
classifier, a discriminator, and a set of generators in a similar spirit with
GAN. Generators create samples that are intended to come from the same
distribution as the training data, whilst the discriminator determines whether
samples are true data or generated by generators, and the classifier specifies
which generator a sample comes from. The distinguishing feature is that
internal samples are created from multiple generators, and then one of them
will be randomly selected as final output similar to the mechanism of a
probabilistic mixture model. We term our method Mixture GAN (MGAN). We develop
theoretical analysis to prove that, at the equilibrium, the Jensen-Shannon
divergence (JSD) between the mixture of generators' distributions and the
empirical data distribution is minimal, whilst the JSD among generators'
distributions is maximal, hence effectively avoiding the mode collapse. By
utilizing parameter sharing, our proposed model adds minimal computational cost
to the standard GAN, and thus can also efficiently scale to large-scale
datasets. We conduct extensive experiments on synthetic 2D data and natural
image databases (CIFAR-10, STL-10 and ImageNet) to demonstrate the superior
performance of our MGAN in achieving state-of-the-art Inception scores over
latest baselines, generating diverse and appealing recognizable objects at
different resolutions, and specializing in capturing different types of objects
by generators. | http://arxiv.org/pdf/1708.02556 | Quan Hoang, Tu Dinh Nguyen, Trung Le, Dinh Phung | cs.LG, cs.AI, stat.ML | null | null | cs.LG | 20170808 | 20171027 | 7 1 0 2
# MGAN: TRAINING GENERATIVE ADVERSARIAL NETS WITH MULTIPLE GENERATORS
Quan Hoang University of Massachusetts-Amherst Amherst, MA 01003, USA qhoang@umass.edu
Tu Dinh Nguyen, Trung Le, Dinh Phung PRaDA Centre, Deakin University Geelong, Australia {tu.nguyen,trung.l,dinh.phung}@deakin.edu.au
# ABSTRACT
We propose in this paper a new approach to train the Generative Adversarial Nets (GANs) with a mixture of generators to overcome the mode collapsing problem. The main intuition is to employ multiple generators, instead of using a single one as in the original GAN. The idea is simple, yet proven to be extremely effective at covering diverse data modes, easily overcoming the mode collapsing problem and delivering state-of-the-art results. A minimax formulation was able to establish among a classiï¬er, a discriminator, and a set of generators in a similar spirit with GAN. Generators create samples that are intended to come from the same distribu- tion as the training data, whilst the discriminator determines whether samples are true data or generated by generators, and the classiï¬er speciï¬es which generator a sample comes from. The distinguishing feature is that internal samples are created from multiple generators, and then one of them will be randomly selected as ï¬nal output similar to the mechanism of a probabilistic mixture model. We term our method Mixture Generative Adversarial Nets (MGAN). We develop theoretical analysis to prove that, at the equilibrium, the Jensen-Shannon divergence (JSD) between the mixture of generatorsâ distributions and the empirical data distribu- tion is minimal, whilst the JSD among generatorsâ distributions is maximal, hence effectively avoiding the mode collapsing problem. By utilizing parameter sharing, our proposed model adds minimal computational cost to the standard GAN, and thus can also efï¬ciently scale to large-scale datasets. We conduct extensive exper- iments on synthetic 2D data and natural image databases (CIFAR-10, STL-10 and ImageNet) to demonstrate the superior performance of our MGAN in achieving state-of-the-art Inception scores over latest baselines, generating diverse and ap- pealing recognizable objects at different resolutions, and specializing in capturing different types of objects by the generators.
1
# INTRODUCTION
Generative Adversarial Nets (GANs) (Goodfellow et al., 2014) are a recent novel class of deep generative models that are successfully applied to a large variety of applications such as image, video generation, image inpainting, semantic segmentation, image-to-image translation, and text-to-image synthesis, to name a few (Goodfellow, 2016). From the game theory metaphor, the model consists of a discriminator and a generator playing a two-player minimax game, wherein the generator aims to generate samples that resemble those in the training data whilst the discriminator tries to distinguish between the two as narrated in (Goodfellow et al., 2014). Training GAN, however, is challenging as it can be easily trapped into the mode collapsing problem where the generator only concentrates on producing samples lying on a few modes instead of the whole data space (Goodfellow, 2016).
Many GAN variants have been recently proposed to address this problem. They can be grouped into two main categories: training either a single generator or many generators. Methods in the former
1
include modifying the discriminatorâs objective (Salimans et al., 2016; Metz et al., 2016), modifying the generatorâs objective (Warde-Farley & Bengio, 2016), or employing additional discriminators to yield more useful gradient signals for the generators (Nguyen et al., 2017; Durugkar et al., 2016). The common theme in these variants is that generators are shown, at equilibrium, to be able to recover the data distribution, but convergence remains elusive in practice. Most experiments are conducted on toy datasets or on narrow-domain datasets such as LSUN (Yu et al., 2015) or CelebA (Liu et al., 2015). To our knowledge, only Warde-Farley & Bengio (2016) and Nguyen et al. (2017) perform quantitative evaluation of models trained on much more diverse datasets such as STL-10 (Coates et al., 2011) and ImageNet (Russakovsky et al., 2015).
Given current limitations in the training of single-generator GANs, some very recent attempts have been made following the multi-generator approach. Tolstikhin et al. (2017) apply boosting tech- niques to train a mixture of generators by sequentially training and adding new generators to the mixture. However, sequentially training many generators is computational expensive. Moreover, this approach is built on the implicit assumption that a single-generator GAN can generate very good images of some modes, so reweighing the training data and incrementally training new gener- ators will result in a mixture that covers the whole data space. This assumption is not true in practice since current single-generator GANs trained on diverse datasets such as ImageNet tend to generate images of unrecognizable objects. Arora et al. (2017) train a mixture of generators and discrimina- tors, and optimize the minimax game with the reward function being the weighted average reward function between any pair of generator and discriminator. This model is computationally expen- sive and lacks a mechanism to enforce the divergence among generators. Ghosh et al. (2017) train many generators by using a multi-class discriminator that, in addition to detecting whether a data sample is fake, predicts which generator produces the sample. The objective function in this model punishes generators for generating samples that are detected as fake but does not directly encourage generators to specialize in generating different types of data.
We propose in this paper a novel approach to train a mixture of generators. Unlike aforementioned multi-generator GANs, our proposed model simultaneously trains a set of generators with the objec- tive that the mixture of their induced distributions would approximate the data distribution, whilst encouraging them to specialize in different data modes. The result is a novel adversarial architecture formulated as a minimax game among three parties: a classiï¬er, a discriminator, and a set of gener- ators. Generators create samples that are intended to come from the same distribution as the training data, whilst the discriminator determines whether samples are true data or generated by generators, and the classiï¬er speciï¬es which generator a sample comes from. We term our proposed model as Mixture Generative Adversarial Nets (MGAN). We provide analysis that our model is optimized towards minimizing the Jensen-Shannon Divergence (JSD) between the mixture of distributions in- duced by the generators and the data distribution while maximizing the JSD among generators.
Empirically, our proposed model can be trained efï¬ciently by utilizing parameter sharing among generators, and between the classiï¬er and the discriminator. In addition, simultaneously training many generators while enforcing JSD among generators helps each of them focus on some modes of the data space and learn better. Trained on CIFAR-10, each generator learned to specialize in generating samples from a different class such as horse, car, ship, dog, bird or airplane. Overall, the models trained on the CIFAR-10, STL-10 and ImageNet datasets successfully generated diverse, recognizable objects and achieved state-of-the-art Inception scores (Salimans et al., 2016). The model trained on the CIFAR-10 even outperformed GANs trained in a semi-supervised fashion (Salimans et al., 2016; Odena et al., 2016).
In short, our main contributions are: (i) a novel adversarial model to efï¬ciently train a mixture of generators while enforcing the JSD among the generators; (ii) a theoretical analysis that our objective function is optimized towards minimizing the JSD between the mixture of all generatorsâ distributions and the real data distribution, while maximizing the JSD among generators; and (iii) a comprehensive evaluation on the performance of our method on both synthetic and real-world large-scale datasets of diverse natural scenes.
2
[Figure 1 diagram: noise z ∼ P_z feeds K generators G_1(z), ..., G_K(z) with tied parameters; an index u ∼ Mult(π) selects which generator's sample is used as output; the discriminator D distinguishes between x ∼ P_data and G_u(z), and the classifier C, sharing parameters with D, predicts which generator was used.]
Figure 1: MGANâs architecture with K generators, a binary discriminator, a multi-class classiï¬er.
# 2 GENERATIVE ADVERSARIAL NETS
Given the discriminator D and generator G, both parameterized via neural networks, training GAN can be formulated as the following minimax objective function:
min_G max_D E_{x∼P_data(x)}[log D(x)] + E_{z∼P_z}[log(1 − D(G(z)))]   (1)
where x is drawn from the data distribution P_data and z is drawn from a prior distribution P_z. The mapping G(z) induces a generator distribution P_model in data space. GAN alternately optimizes D and G using stochastic gradient-based learning. As a result, the optimization order in Eq. (1) can be reversed, causing the minimax formulation to become maximin. G is therefore incentivized to map every z to a single x that is most likely to be classified as true data, leading to the mode collapsing problem. Another commonly asserted cause of generating less diverse samples in GAN is that, at the optimal point of D, minimizing G is equivalent to minimizing the JSD between the data and model distributions, which has been empirically proven to prefer to generate samples around only a few modes whilst ignoring other modes (Huszár, 2015; Theis et al., 2015).
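To make the two expectation terms in Eq. (1) concrete, the short sketch below (our illustration, not code from any of the cited papers) evaluates a Monte Carlo estimate of the objective from hypothetical discriminator outputs `d_real` = D(x) and `d_fake` = D(G(z)):

```python
import numpy as np

def gan_objective(d_real, d_fake, eps=1e-12):
    """Monte Carlo estimate of Eq. (1): E_x[log D(x)] + E_z[log(1 - D(G(z)))]."""
    d_real = np.clip(d_real, eps, 1.0 - eps)
    d_fake = np.clip(d_fake, eps, 1.0 - eps)
    return np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))

# Hypothetical discriminator outputs on a batch of real and generated samples.
print(gan_objective(np.array([0.9, 0.8, 0.95]), np.array([0.1, 0.3, 0.2])))
# D is trained to increase this value, G to decrease it.
```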
# 3 PROPOSED MIXTURE GANS
We now present our main contribution of a novel approach that can effectively tackle mode collapse in GAN. Our idea is to use a mixture of many distributions rather than a single one as in the standard GAN, to approximate the data distribution, and simultaneously we enlarge the divergence of those distributions so that they cover different data modes.
To this end, an analogy to a game among K generators G_{1:K}, a discriminator D and a classifier C can be formulated. Each generator G_k maps z to x = G_k(z), thus inducing a single distribution P_{G_k}; and K generators altogether induce a mixture over K distributions, namely P_model, in the data space. An index u is drawn from a multinomial distribution Mult(π), where π = [π_1, π_2, ..., π_K] are the coefficients of the mixture; and then the sample G_u(z) is used as the output. Here, we use a predefined π and fix it instead of learning it. The discriminator D aims to distinguish between this sample and the training samples. The classifier C performs multi-class classification to classify samples labeled by the indices of their corresponding generators. We term this whole process and our model the Mixture Generative Adversarial Nets (MGAN).
Fig. 1 illustrates the general architecture of our proposed MGAN, where all components are param- eterized by neural networks. Gk (s) tie their parameters together except the input layer, whilst C and D share parameters except the output layer. This parameter sharing scheme enables the networks to leverage their common information such as features at low-level layers that are close to the data layer, hence helps to train model effectively. In addition, it also minimizes the number of parameters and adds minimal complexity to the standard GAN, thus the whole process is still very efï¬cient.
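The parameter-sharing scheme can be sketched in PyTorch as follows; this is our simplified, fully connected stand-in for the convolutional networks in Fig. 1 (layer sizes are placeholders), not the authors' implementation. Each generator owns only its input layer, the remaining generator layers are tied, and C and D share a trunk with two output heads.

```python
import torch
import torch.nn as nn

class MixtureGenerator(nn.Module):
    """K generators: untied input layers, shared trunk (simplified sketch)."""
    def __init__(self, K=4, noise_dim=100, hidden=256, out_dim=2):
        super().__init__()
        self.inputs = nn.ModuleList([nn.Linear(noise_dim, hidden) for _ in range(K)])  # untied
        self.trunk = nn.Sequential(nn.ReLU(), nn.Linear(hidden, hidden), nn.ReLU(),
                                   nn.Linear(hidden, out_dim))                          # tied

    def forward(self, z, k):
        # z: (batch, noise_dim); k: which generator produces this sub-batch
        return self.trunk(self.inputs[k](z))

class SharedCD(nn.Module):
    """Discriminator D and classifier C sharing all layers except the output heads."""
    def __init__(self, in_dim=2, hidden=256, K=4):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, hidden), nn.LeakyReLU(0.2))
        self.d_head = nn.Linear(hidden, 1)   # real/fake logit
        self.c_head = nn.Linear(hidden, K)   # which-generator logits

    def forward(self, x):
        h = self.body(x)
        return self.d_head(h), self.c_head(h)

G, CD = MixtureGenerator(), SharedCD()
x_fake = G(torch.randn(8, 100), k=2)      # 8 samples from generator 2
d_logit, c_logits = CD(x_fake)
```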
More formally, D, C and G1:K now play the following multi-player minimax optimization game:
min_{G_{1:K}, C} max_D J(G_{1:K}, C, D) = E_{x∼P_data}[log D(x)] + E_{x∼P_model}[log(1 − D(x))] − β { Σ_{k=1}^K π_k E_{x∼P_{G_k}}[log C_k(x)] }   (2)
3
where Ck (x) is the probability that x is generated by Gk and β > 0 is the diversity hyper-parameter. The ï¬rst two terms show the interaction between generators and the discriminator as in the standard GAN. The last term should be recognized as a standard softmax loss for a multi-classiï¬cation set- ting, which aims to maximize the entropy for the classiï¬er. This represents the interaction between generators and the classiï¬er, which encourages each generator to produce data separable from those produced by other generators. The strength of this interaction is controlled by β. Similar to GAN, our proposed network can be trained by alternatively updating D, C and G1:K. We refer to Ap- pendix A for the pseudo-code and algorithms for parameter learning for our proposed MGAN.
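The following sketch (illustrative numpy with made-up probabilities; the indices and β value are arbitrary) evaluates a minibatch estimate of Eq. (2), assuming equal mixing weights so that the β-weighted term reduces to an average of log C_{u_n}(x_n) over generated samples:

```python
import numpy as np

def mgan_objective(d_real, d_fake, c_probs, gen_idx, beta=0.01, eps=1e-12):
    """Minibatch estimate of J in Eq. (2) with pi_k = 1/K.

    d_real:  D(x) on real samples, shape (M,)
    d_fake:  D(x) on generated samples, shape (N,)
    c_probs: classifier outputs C(x) on generated samples, shape (N, K)
    gen_idx: which generator produced each generated sample, shape (N,)
    """
    gan_terms = np.mean(np.log(d_real + eps)) + np.mean(np.log(1.0 - d_fake + eps))
    log_c = np.log(c_probs[np.arange(len(gen_idx)), gen_idx] + eps)   # log C_{u_n}(x_n)
    return gan_terms - beta * np.mean(log_c)

d_real = np.array([0.8, 0.9])
d_fake = np.array([0.2, 0.4, 0.3, 0.1])
c_probs = np.array([[0.7, 0.3], [0.6, 0.4], [0.2, 0.8], [0.1, 0.9]])  # K = 2 generators
gen_idx = np.array([0, 0, 1, 1])
print(mgan_objective(d_real, d_fake, c_probs, gen_idx, beta=0.1))
```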
3.1 THEORETICAL ANALYSIS
Assuming all C, D and G_{1:K} have enough capacity, we show below that at the equilibrium point of the minimax problem in Eq. (2), the JSD between the mixture induced by G_{1:K} and the data distribution is minimal, i.e. p_data = p_model, and the JSD among the K generators is maximal, i.e. two arbitrary generators almost never produce the same data. In what follows we present our mathematical statements and the sketch of their proofs. We refer to Appendix B for full derivations.

Proposition 1. For fixed generators G_1, G_2, ..., G_K and their mixture weights π_1, π_2, ..., π_K, the optimal solutions C* = C*_{1:K} and D* for J(G_{1:K}, C, D) in Eq. (2) are:

C*_k(x) = π_k p_{G_k}(x) / Σ_{j=1}^K π_j p_{G_j}(x)   and   D*(x) = p_data(x) / (p_data(x) + p_model(x))
Proof. It can be seen that the solution C*_k is a general case of D* when D classifies samples from two distributions with equal weight of 1/2. We refer the proofs for D* to Prop. 1 in (Goodfellow et al., 2014), and our proof for C*_k to Appendix B in this manuscript.
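The closed forms in Prop. 1 are easy to verify numerically on a discrete toy example; the sketch below (our own illustration with made-up densities) computes C*_k and D* and checks that the C*_k(x) sum to one at every point:

```python
import numpy as np

pi = np.array([0.5, 0.5])                          # mixing weights
p_g = np.array([[0.7, 0.2, 0.1, 0.0],              # p_{G_1} on a 4-point support
                [0.0, 0.1, 0.3, 0.6]])             # p_{G_2}
p_data = np.array([0.4, 0.1, 0.2, 0.3])

p_model = pi @ p_g                                 # mixture density p_model(x)
c_star = (pi[:, None] * p_g) / (p_model + 1e-12)   # C*_k(x) = pi_k p_{G_k}(x) / sum_j pi_j p_{G_j}(x)
d_star = p_data / (p_data + p_model)               # D*(x)

print(c_star.sum(axis=0))                          # ~[1. 1. 1. 1.]
print(d_star)
```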
Based on Prop. 1, we further show that at the equilibrium point of the minimax problem in Eq. (2), the optimal generator G* = [G*_1, ..., G*_K] induces the generated distribution p*_model(x) = Σ_{k=1}^K π_k p_{G*_k}(x), which is as close as possible to the true data distribution p_data(x) while keeping the mixture components p_{G*_k}(x) as far apart as possible to avoid mode collapse.

Theorem 2. At the equilibrium point of the minimax problem in Eq. (2), the optimal G*, D*, and C* satisfy

G* = argmin_G ( 2·JSD(P_data ‖ P_model) − β·JSD_π(P_{G_1}, P_{G_2}, ..., P_{G_K}) )   (3)

C*_k(x) = π_k p_{G*_k}(x) / Σ_{j=1}^K π_j p_{G*_j}(x)   and   D*(x) = p_data(x) / (p_data(x) + p*_model(x))
Proof. Substituting C*_{1:K} and D* into Eq. (2), we reformulate the objective function for G_{1:K} as follows:

L(G_{1:K}) = E_{x∼P_data}[log(p_data(x) / (p_data(x) + p_model(x)))] + E_{x∼P_model}[log(p_model(x) / (p_data(x) + p_model(x)))] − β Σ_{k=1}^K π_k E_{x∼P_{G_k}}[log(π_k p_{G_k}(x) / Σ_{j=1}^K π_j p_{G_j}(x))]
= 2·JSD(P_data ‖ P_model) − log 4 − β Σ_{k=1}^K π_k E_{x∼P_{G_k}}[log(p_{G_k}(x) / Σ_{j=1}^K π_j p_{G_j}(x))] − β Σ_{k=1}^K π_k log π_k
= 2·JSD(P_data ‖ P_model) − β·JSD_π(P_{G_1}, P_{G_2}, ..., P_{G_K}) − log 4 − β Σ_{k=1}^K π_k log π_k   (4)
Since the last two terms in Eq. (4) are constant, that concludes our proof.
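For discrete distributions the objective of Theorem 2 can be evaluated directly; the sketch below is our own illustration (not from the paper), using the standard definitions JSD(P‖Q) = H((P+Q)/2) − (H(P)+H(Q))/2 and JSD_π(P_1,...,P_K) = H(Σ_k π_k P_k) − Σ_k π_k H(P_k), with made-up densities and an arbitrary β:

```python
import numpy as np

def entropy(p, eps=1e-12):
    p = np.asarray(p, dtype=float)
    return float(-np.sum(p * np.log(p + eps)))

def jsd(p, q):
    m = 0.5 * (np.asarray(p, dtype=float) + np.asarray(q, dtype=float))
    return entropy(m) - 0.5 * (entropy(p) + entropy(q))

def jsd_pi(ps, pi):
    ps, pi = np.asarray(ps, dtype=float), np.asarray(pi, dtype=float)
    return entropy(pi @ ps) - float(np.sum(pi * [entropy(p) for p in ps]))

p_data = np.array([0.5, 0.3, 0.2, 0.0])
p_gens = np.array([[0.9, 0.1, 0.0, 0.0],
                   [0.1, 0.5, 0.4, 0.0]])
pi, beta = np.array([0.5, 0.5]), 0.5
print(2 * jsd(p_data, pi @ p_gens) - beta * jsd_pi(p_gens, pi))   # objective in Eq. (3)
```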
This theorem shows that progressing towards the equilibrium is equivalent to minimizing JSD(P_data ‖ P_model) while maximizing JSD_π(P_{G_1}, P_{G_2}, ..., P_{G_K}). In the next theorem, we further clarify the equilibrium point for the specific case wherein the data distribution has the form p_data(x) = Σ_{k=1}^K π_k q_k(x), where the mixture components q_k(x) are well-separated in the sense that E_{x∼q_k}[q_j(x)] = 0 for j ≠ k, i.e., for almost every x, if q_k(x) > 0 then q_j(x) = 0, ∀j ≠ k.

Theorem 3. If the data distribution has the form p_data(x) = Σ_{k=1}^K π_k q_k(x), where the mixture components q_k(x) are well-separated, the minimax problem in Eq. (2) or the optimization problem in Eq. (3) has the following solution:

p_{G*_k}(x) = q_k(x), ∀k = 1, ..., K, and p*_model(x) = Σ_{k=1}^K π_k q_k(x) = p_data(x),

and the corresponding objective value of the optimization problem in Eq. (3) is −βH(π) = −β Σ_{k=1}^K π_k log(1/π_k), where H(π) is the Shannon entropy.
Proof. Please refer to our proof in Appendix B of this manuscript.
Thm. 3 explicitly offers the optimal solution for the specific case wherein the real data are generated from a mixture distribution whose components are well-separated. This further reveals that if the mixture components are well-separated, by setting the number of generators to the number of mixture components in the data and maximizing the divergence between the generated components p_{G_k}(x), we can exactly recover the mixture components q_k(x) using the generated components p_{G_k}(x), hence strongly supporting our motivation when developing MGAN. In practice, C, D, and G_{1:K} are parameterized by neural networks and are optimized in the parameter space rather than in the function space. As all generators G_{1:K} share the same objective function, we can efficiently update their weights using the same backpropagation passes. Empirically, we set the parameter π_k = 1/K, ∀k ∈ {1, ..., K}, which further minimizes the objective value −βH(π) = −β Σ_{k=1}^K π_k log(1/π_k) w.r.t. π in Thm. 3. To simplify the computational graph, we assume that each generator is sampled the same number of times in each minibatch. In addition, we adopt the non-saturating heuristic proposed in (Goodfellow et al., 2014) to train G_{1:K} by maximizing log D(G_k(z)) instead of minimizing log(1 − D(G_k(z))).
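The non-saturating update for the generators mentioned above can be written as in the sketch below (PyTorch-style, our own illustration; `d_logit` and `c_logits` stand for the discriminator and classifier outputs on generated samples, and `gen_idx` for the index of the generator that produced each sample):

```python
import torch
import torch.nn.functional as F

def generator_loss(d_logit, c_logits, gen_idx, beta=0.01):
    """Non-saturating generator loss: minimize -log D(G(z)) - beta * log C_u(G(z))."""
    adv = F.binary_cross_entropy_with_logits(d_logit, torch.ones_like(d_logit))  # -log D(G(z))
    div = F.cross_entropy(c_logits, gen_idx)                                     # -log C_{u}(G(z))
    return adv + beta * div

d_logit = torch.randn(4, 1)
c_logits = torch.randn(4, 3)               # K = 3 generators
gen_idx = torch.tensor([0, 1, 2, 1])
print(generator_loss(d_logit, c_logits, gen_idx))
```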
# 4 RELATED WORK
Recent attempts to address the mode collapse by modifying the discriminator include minibatch discrimination (Salimans et al., 2016), Unrolled GAN (Metz et al., 2016) and Denoising Feature Matching (DFM) (Warde-Farley & Bengio, 2016). The idea of minibatch discrimination is to al- low the discriminator to detect samples that are noticeably similar to other generated samples. Al- though this method can generate visually appealing samples, it is computationally expensive, thus normally used in the last hidden layer of discriminator. Unrolled GAN improves the learning by unrolling computational graph to include additional optimization steps of the discriminator. It could effectively reduce the mode collapsing problem, but the unrolling step is expensive, rendering it unscalable up to large-scale datasets. DFM augments the objective function of generator with one of a Denoising AutoEncoder (DAE) that minimizes the reconstruction error of activations at the penultimate layer of the discriminator. The idea is that gradient signals from DAE can guide the generator towards producing samples whose activations are close to the manifold of real data activa- tions. DFM is surprisingly effective at avoiding mode collapse, but the involvement of a deep DAE adds considerable computational cost to the model.
An alternative approach is to train additional discriminators. D2GAN (Nguyen et al., 2017) employs two discriminators to minimize both Kullback-Leibler (KL) and reverse KL divergences, thus plac- ing a fair distribution across the data modes. This method can avoid the mode collapsing problem to a certain extent, but still could not outperform DFM. Another work uses many discriminators to boost the learning of generator (Durugkar et al., 2016). The authors state that this method is robust to mode collapse, but did not provide experimental results to support that claim.
Another direction is to train multiple generators. The so-called MIX+GAN (Arora et al., 2017) is related to our model in the use of mixture but the idea is very different. Based on min-max theorem (Neumann, 1928), the MIX+GAN trains a mixture of multiple generators and discriminators with
5
different parameters to play mixed strategies in a min-max game. The total reward of this game is computed by weighted averaging rewards over all pairs of generator and discriminator. The lack of parameter sharing renders this method computationally expensive to train. Moreover, there is no mechanism to enforce the divergence among generators as in ours.
Some attempts have been made to train a mixture of GANs in a similar spirit with boosting algo- rithms. Wang et al. (2016) propose an additive procedure to incrementally train new GANs on a subset of the training data that are badly modeled by previous generators. As the discriminator is expected to classify samples from this subset as real with high conï¬dence, i.e. D (x) is high, the subset can be chosen to include x where D (x) is larger than a predeï¬ned threshold. Tolstikhin et al. (2017), however, show that this heuristic fails to address the mode collapsing problem. Thus they propose AdaGAN to introduce a robust reweighing scheme to prepare training data for the next GAN. AdaGAN and boosting-inspired GANs in general are based on the assumption that a single- generator GAN can learn to generate impressive images of some modes such as dogs or cats but fails to cover other modes such as giraffe. Therefore, removing images of dogs or cats from the training data and train a next GAN can create a better mixture. This assumption is not true in practice as current single-generator GANs trained on diverse data sets such as ImageNet (Russakovsky et al., 2015) tend to generate images of unrecognizable objects.
The most closely related to ours is MAD-GAN (Ghosh et al., 2017) which trains many generators and uses a multi-class classiï¬er as the discriminator. In this work, two strategies are proposed to ad- dress the mode collapse: (i) augmenting generatorâs objective function with a user-deï¬ned similarity based function to encourage different generators to generate diverse samples, and (ii) modifying dis- criminatorâs objective functions to push different generators towards different identiï¬able modes by separating samples of each generator. Our approach is different in that, rather than modifying the discriminator, we use an additional classiï¬er that discriminates samples produced by each generator from those by others under multi-class classiï¬cation setting. This nicely results in an optimiza- tion problem that maximizes the JSD among generators, thus naturally enforcing them to generate diverse samples and effectively avoiding mode collapse.
# 5 EXPERIMENTS
In this section, we conduct experiments on both synthetic data and real-world large-scale datasets. The aim of using synthetic data is to visualize, examine and evaluate the learning behaviors of our proposed MGAN, whilst using real-world datasets to quantitatively demonstrate its efï¬cacy and scalability of addressing the mode collapse in a much larger and wider data space. For fair comparison, we use experimental settings that are identical to previous work, and hence we quote the results from the latest state-of-the-art GAN-based models to compare with ours.
We use TensorFlow (Abadi et al., 2016) to implement our model and will release the code after publication. For all experiments, we use: (i) shared parameters among generators in all layers except for the weights from the input to the ï¬rst hidden layer; (ii) shared parameters between discriminator and classiï¬er in all layers except for the weights from the penultimate layer to the output; (iii) Adam optimizer (Kingma & Ba, 2014) with learning rate of 0.0002 and the ï¬rst-order momentum of 0.5; (iv) minibatch size of 64 samples for training discriminators; (v) ReLU activations (Nair & Hinton, 2010) for generators; (vi) Leaky ReLU (Maas et al., 2013) with slope of 0.2 for discriminator and classiï¬er; and (vii) weights randomly initialized from Gaussian distribution N (0, 0.02I) and zero biases. We refer to Appendix C for detailed model architectures and additional experimental results.
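Items (iii) and (vii) of the settings above correspond to the following PyTorch-style setup (a sketch of the stated hyperparameters only; the authors' implementation is in TensorFlow and may differ in detail):

```python
import torch
import torch.nn as nn

def init_weights(m):
    # Gaussian N(0, 0.02) weights and zero biases, as in item (vii).
    if isinstance(m, (nn.Linear, nn.Conv2d, nn.ConvTranspose2d)):
        nn.init.normal_(m.weight, mean=0.0, std=0.02)
        if m.bias is not None:
            nn.init.zeros_(m.bias)

net = nn.Sequential(nn.Linear(100, 256), nn.ReLU(), nn.Linear(256, 2))
net.apply(init_weights)
opt = torch.optim.Adam(net.parameters(), lr=0.0002, betas=(0.5, 0.999))   # item (iii)
```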
# 5.1 SYNTHETIC DATA
In the ï¬rst experiment, following (Nguyen et al., 2017) we reuse the experimental design proposed in (Metz et al., 2016) to investigate how well our MGAN can explore and capture multiple data modes. The training data is sampled from a 2D mixture of 8 isotropic Gaussian distributions with a covariance matrix of 0.02I and means arranged in a circle of zero centroid and radius of 2.0. Our purpose of using such small variance is to create low density regions and separate the modes.
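The training distribution just described can be generated with a few lines of numpy; the sketch below follows the stated setup (8 Gaussians with covariance 0.02·I and means on a circle of radius 2.0), with the sample count and random seed being our own choices:

```python
import numpy as np

def sample_ring(n, modes=8, radius=2.0, var=0.02, seed=0):
    rng = np.random.default_rng(seed)
    angles = 2.0 * np.pi * np.arange(modes) / modes
    means = radius * np.stack([np.cos(angles), np.sin(angles)], axis=1)   # (8, 2) mode centers
    k = rng.integers(modes, size=n)                                       # mode index per sample
    return means[k] + rng.normal(scale=np.sqrt(var), size=(n, 2))

data = sample_ring(512)
print(data.shape)   # (512, 2)
```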
We employ 8 generators, each with a simple architecture of an input layer with 256 noise units drawn from an isotropic multivariate Gaussian distribution N(0, I), and two fully connected hidden layers with 128 ReLU units each. For the discriminator and classifier, one hidden layer with 128 ReLU units is used. The diversity hyperparameter β is set to 0.125.

(a) Symmetric KL divergence. (b) Wasserstein distance. (c) Evolution of data (in blue) generated by GAN, UnrolledGAN, D2GAN and our MGAN from the top row to the bottom, respectively. Data sampled from the true mixture of 8 Gaussians are in red.

Figure 2: The comparison of our MGAN and GAN's variants on the 2D synthetic dataset.
Fig. 2c shows the evolution of 512 samples generated by our model and baselines through time. It can be seen that the regular GAN generates data collapsing into a single mode hovering around the valid modes of data distribution, thus reï¬ecting the mode collapse in GAN as expected. At the same time, UnrolledGAN (Metz et al., 2016), D2GAN (Nguyen et al., 2017) and our MGAN distribute data around all 8 mixture components, and hence demonstrating the abilities to successfully learn multimodal data in this case. Our proposed model, however, converges much faster than the other two since it successfully explores and neatly covers all modes at the early step 15K, whilst two baselines produce samples cycling around till the last steps. At the end, our MGAN captures data modes more precisely than UnrolledGAN and D2GAN since, in each mode, the UnrolledGAN generates data that concentrate only on several points around the modeâs centroid, thus seems to produce fewer samples than ours whose samples fairly spread out the entire mode, but not exceed the boundary whilst the D2GAN still generates many points scattered between two adjacent modes.
Next we further quantitatively compare the quality of generated data. Since we know the true dis- tribution Pdata in this case, we employ two measures, namely symmetric Kullback-Leibler (KL) divergence and Wasserstein distance. These measures compute the distance between the normalized histograms of 10,000 points generated from the model to true Pdata. Figs. 2a and 2b again clearly demonstrate the superiority of our approach over GAN, UnrolledGAN and D2GAN w.r.t both dis- tances (lower is better); notably the Wasserstein distances from ours and D2GANâs to the true distri- bution almost reduce to zero, and at the same time, our symmetric KL metric is signiï¬cantly better than that of D2GAN. These ï¬gures also show the stability of our MGAN (black curves) and D2GAN (red curves) during training as they are much less ï¬uctuating compared with GAN (green curves) and UnrolledGAN (blue curves).
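A possible implementation of the symmetric KL part of this evaluation is sketched below (our own numpy version; the bin layout, sample counts and smoothing constant are assumptions, since the paper does not specify them):

```python
import numpy as np

def symmetric_kl(x_gen, x_true, bins=50, lim=3.0, eps=1e-8):
    """Symmetric KL between normalized 2D histograms of generated and true samples."""
    edges = [np.linspace(-lim, lim, bins + 1)] * 2
    h_gen, _, _ = np.histogram2d(x_gen[:, 0], x_gen[:, 1], bins=edges)
    h_true, _, _ = np.histogram2d(x_true[:, 0], x_true[:, 1], bins=edges)
    p = (h_gen + eps) / (h_gen + eps).sum()
    q = (h_true + eps) / (h_true + eps).sum()
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

x_true = np.random.randn(10000, 2)
x_gen = 0.9 * np.random.randn(10000, 2)
print(symmetric_kl(x_gen, x_true))
```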
Lastly, we perform experiments with different numbers of generators. The MGAN models with 2, 3, 4 and 10 generators all successfully explore 8 modes but the models with more generators generate fewer points scattered between adjacent modes. We also examine the behavior of the diversity coefï¬cient β by training the 4-generator model with different values of β. Without the JSD force (β = 0), generated samples cluster around one mode. When β = 0.25, the JSD force is weak and generated data cluster near 4 different modes. When β = 0.75 or 1.0, the JSD force is too strong and causes the generators to collapse, generating 4 increasingly tight clusters. When β = 0.5, generators successfully cover all of the 8 modes. Please refer to Appendix C.1 for experimental details.
7
5.2 REAL-WORLD DATASETS
Next we train our proposed method on real-world databases from natural scenes to investigate its performance and scalability on much more challenging large-scale image data.
Datasets. We use 3 widely-adopted datasets: CIFAR-10 (Krizhevsky & Hinton, 2009), STL-10 (Coates et al., 2011) and ImageNet (Russakovsky et al., 2015). CIFAR-10 contains 50,000 32Ã32 training images of 10 classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, and truck. STL-10, subsampled from ImageNet, is a more diverse dataset than CIFAR-10, containing about 100,000 96Ã96 images. ImageNet (2012 release) presents the largest and most diverse consisting of over 1.2 million images from 1,000 classes. In order to facilitate fair comparison with the baselines in (Warde-Farley & Bengio, 2016; Nguyen et al., 2017), we follow the procedure of (Krizhevsky et al., 2012) to resize the STL-10 and ImageNet images down to 48Ã48 and 32Ã32, respectively.
Evaluation protocols. For quantitative evaluation, we adopt the Inception score proposed in (Salimans et al., 2016), which computes exp(E_x[KL(p(y|x) ‖ p(y))]), where p(y|x) is the conditional label distribution for the image x estimated by the reference Inception model (Szegedy et al., 2015). This metric rewards good and varied samples and is found to be well-correlated with human judgment (Salimans et al., 2016). We use the code provided in (Salimans et al., 2016) to compute the Inception scores for 10 partitions of 50,000 randomly generated samples. For qualitative demonstration of image quality obtained by our proposed model, we show samples generated by the mixture as well as samples produced by each generator. Samples are randomly drawn rather than cherry-picked.
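Given a matrix of predicted class probabilities p(y|x) for the generated images, the score can be computed as in the sketch below (our own minimal numpy version of the formula above; the split into 10 partitions used in the paper is omitted, and the Dirichlet-sampled predictions are only placeholders):

```python
import numpy as np

def inception_score(p_yx, eps=1e-12):
    """exp(E_x[ KL(p(y|x) || p(y)) ]) for per-image class probabilities p_yx of shape (N, C)."""
    p_y = p_yx.mean(axis=0, keepdims=True)                                # marginal label distribution
    kl = np.sum(p_yx * (np.log(p_yx + eps) - np.log(p_y + eps)), axis=1)  # KL(p(y|x) || p(y)) per image
    return float(np.exp(kl.mean()))

p_yx = np.random.dirichlet(alpha=0.2 * np.ones(10), size=5000)            # placeholder predictions
print(inception_score(p_yx))
```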
Model architectures. Our generator and discriminator architectures closely follow the DCGANâs design (Radford et al., 2015). The only difference is we apply batch normalization (Ioffe & Szegedy, 2015) to all layers in the networks except for the output layer. Regarding the classiï¬er, we empir- ically ï¬nd that our proposed MGAN achieves the best performance (i.e., fast convergence rate and high inception score) when the classiï¬er shares parameters of all layers with the discriminator ex- cept for the output layer. The reason is that this parameter sharing scheme would allow the classiï¬er and discriminator to leverage their common features and representations learned at every layer, thus helps to improve and speed up the training progress. When the parameters are not tied, the model learns slowly and eventually yields lower performance.
During training we observe that the percentage of active neurons chronically declined (see Ap- pendix C.2). One possible cause is that the batch normalization center (offset) is gradually shifted to the negative range, thus deactivating up to 45% of ReLU units of the generator networks. Our ad-hoc solution for this problem is to ï¬x the offset at zero for all layers in the generator networks. The rationale is that for each feature map, the ReLU gates will open for about 50% highest inputs in a minibatch across all locations and generators, and close for the rest.
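In PyTorch terms, fixing the batch normalization center (offset) at zero while keeping the scale learnable can be done as sketched below; this is our own rendering of the described trick, not the authors' TensorFlow code, and the layer shapes are placeholders:

```python
import torch.nn as nn

def freeze_bn_center(module):
    """Zero the batch-norm offset (beta) and exclude it from training; the scale (gamma) stays learnable."""
    for m in module.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d)):
            nn.init.zeros_(m.bias)
            m.bias.requires_grad_(False)

block = nn.Sequential(
    nn.ConvTranspose2d(512, 256, kernel_size=5, stride=2, padding=2, output_padding=1),
    nn.BatchNorm2d(256),
    nn.ReLU(),
)
freeze_bn_center(block)
```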
We also experiment with other activation functions of generator networks. First we use Leaky ReLU and obtain similar results with using ReLU. Then we use MaxOut units (Goodfellow et al., 2013) and achieves good Inception scores but generates unrecognizable samples. Finally, we try SeLU (Klambauer et al., 2017) but fail to train our model.
Hyperparameters. Three key hyperparameters of our model are: number of generators K, coef- ï¬cient β controlling the diversity and the minibatch size. We use a minibatch size of [128/K] for each generator, so that the total number of samples for training all generators is about 128. We train models with 4 generators and 10 generators corresponding with minibatch sizes of 32 and 12 each, and ï¬nd that models with 10 generators performs better. For ImageNet, we try an additional setting with 32 generators and a minibatch size of 4 for each. The batch of 4 samples is too small for updating sufï¬cient statistics of a batch-norm layer, thus we drop batch-norm in the input layer of each generator. This 32-generator model, however, does not obtain considerably better results than the 10-generator one. Therefore in what follows we only report the results of models with 10 generators. For the diversity coefï¬cient β, we observe no signiï¬cant difference in Inception scores when varying the value of β but the quality of generated images declines when β is too low or too high. Generated samples by each generator vary more when β is low, and vary less but become less realistic when β is high. We ï¬nd a reasonable range for β to be (0.01, 1.0), and ï¬nally set to 0.01 for CIFAR-10, 0.1 for ImageNet and 1.0 for STL-10.
8
Inception results. We now report the Inception scores obtained by our MGAN and baselines in Tab. 1. It is worthy to note that only models trained in a completely unsupervised manner without label information are included for fair comparison; and DCGANâs and D2GANâs results on STL- 10 are available only for the models trained on 32Ã32 resolution. Overall, our proposed model outperforms the baselines by large margins and achieves state-of-the-art performance on all datasets. Moreover, we would highlight that our MGAN obtains a score of 8.33 on CIFAR-10 that is even better than those of models trained with labels such as 8.09 of Improved GAN (Salimans et al., 2016) and 8.25 of AC-GAN (Odena et al., 2016). In addition, we train our model on the original 96Ã96 resolution of STL-10 and achieve a score of 9.79±0.08. This suggests the MGAN can be successfully trained on higher resolution images and achieve the higher Inception score.
Table 1: Inception scores on different datasets. "–" denotes an unavailable result.
Model                                      CIFAR-10      STL-10        ImageNet
Real data                                  11.24±0.16    26.08±0.26    25.78±0.47
WGAN (Arjovsky et al., 2017)               3.82±0.06     –             –
MIX+WGAN (Arora et al., 2017)              4.04±0.07     –             –
Improved-GAN (Salimans et al., 2016)       4.36±0.04     –             –
ALI (Dumoulin et al., 2016)                5.34±0.05     –             –
BEGAN (Berthelot et al., 2017)             5.62          –             –
MAGAN (Wang et al., 2017)                  5.67          –             –
GMAN (Durugkar et al., 2016)               6.00±0.19     –             –
DCGAN (Radford et al., 2015)               6.40±0.05     7.54          7.89
DFM (Warde-Farley & Bengio, 2016)          7.72±0.13     8.51±0.13     9.18±0.13
D2GAN (Nguyen et al., 2017)                7.15±0.07     7.98          8.25
MGAN                                       8.33±0.10     9.22±0.11     9.32±0.10
Image generation. Next we present samples randomly generated by our proposed model trained on the 3 datasets for qualitative assessment. Fig. 3a shows CIFAR-10 32Ã32 images containing a wide range of objects in such as airplanes, cars, trucks, ships, birds, horses or dogs. Similarly, STL- 10 48Ã48 generated images in Fig. 3b include cars, ships, airplanes and many types of animals, but with wider range of different themes such as sky, underwater, mountain and forest. Images generated for ImageNet 32Ã32 are diverse with some recognizable objects such as lady, old man, birds, human eye, living room, hat, slippers, to name a few. Fig. 4a shows several cherry-picked STL-10 96Ã96 images, which demonstrate that the MGAN is capable of generating visually appealing images with complicated details. However, many samples are still incomplete and unrealistic as shown in Fig. 4b, leaving plenty of room for improvement.
(a) CIFAR-10 32×32. (b) STL-10 48×48. (c) ImageNet 32×32.
Figure 3: Images generated by our proposed MGAN trained on natural image datasets. Due to the space limit, please refer to the appendix for larger plots.
Finally, we investigate samples generated by each generator as well as the evolution of these samples through numbers of training epochs. Fig. 5 shows images generated by each of the 10 generators in our MGAN trained on CIFAR-10 at epoch 20, 50, and 250 of training. Samples in each row corre-
9
(a) Cherry-picked samples. (b) Incomplete, unrealistic samples.
Figure 4: Images generated by our MGAN trained on the original 96×96 STL-10 dataset.
spond to a different generator. Generators start to specialize in generating different types of objects as early as epoch 20 and become more and more consistent: generator 2 and 3 in ï¬ying objects (birds and airplanes), generator 4 in full pictures of cats and dogs, generator 5 in portraits of cats and dogs, generator 8 in ships, generator 9 in car and trucks, and generator 10 in horses. Generator 6 seems to generate images of frog or animals in a bush. Generator 7, however, collapses in epoch 250. One possible explanation for this behavior is that images of different object classes tend to have different themes. Lastly, Wang et al. (2016) noticed one of the causes for non-convergence in GANs is that the generators and discriminators constantly vary; the generators at two consecutive epochs of training generate signiï¬cantly different images. This experiment demonstrates the effect of the JSD force in preventing generators from moving around the data space.
(a) Epoch #20. (b) Epoch #50. (c) Epoch #250.
Figure 5: Images generated by our MGAN trained on CIFAR10 at different epochs. Samples in each row from the top to the bottom correspond to a different generator.
# 6 CONCLUSION
We have presented a novel adversarial model to address the mode collapse in GANs. Our idea is to approximate data distribution using a mixture of multiple distributions wherein each distribution captures a subset of data modes separately from those of others. To achieve this goal, we propose a minimax game of one discriminator, one classiï¬er and many generators to formulate an optimization problem that minimizes the JSD between Pdata and Pmodel, i.e., a mixture of distributions induced by the generators, whilst maximizes JSD among such generator distributions. This helps our model
10
generate diverse images to better cover data modes, thus effectively avoids mode collapse. We term our proposed model Mixture Generative Adversarial Network (MGAN).
The MGAN can be efficiently trained by sharing parameters between its discriminator and classifier, and among its generators, thus our model is scalable to be evaluated on real-world large-scale datasets. Comprehensive experiments on synthetic 2D data, CIFAR-10, STL-10 and ImageNet databases demonstrate the following capabilities of our model: (i) achieving state-of-the-art Inception scores; (ii) generating diverse and appealing recognizable objects at different resolutions; and (iii) specializing in capturing different types of objects by the generators.
# REFERENCES
Mart´ın Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. Tensorï¬ow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016. 5
Martin Arjovsky, Soumith Chintala, and L´eon Bottou. Wasserstein gan. arXiv preprint arXiv:1701.07875, 2017. 1
Sanjeev Arora, Rong Ge, Yingyu Liang, Tengyu Ma, and Yi Zhang. Generalization and equilibrium in generative adversarial nets (gans). arXiv preprint arXiv:1703.00573, 2017. 1, 4, 1
David Berthelot, Tom Schumm, and Luke Metz. Began: Boundary equilibrium generative adversar- ial networks. arXiv preprint arXiv:1703.10717, 2017. 1
Adam Coates, Andrew Ng, and Honglak Lee. An analysis of single-layer networks in unsupervised feature learning. In Proceedings of the fourteenth international conference on artiï¬cial intelli- gence and statistics, pp. 215â223, 2011. 1, 5.2
Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Alex Lamb, Martin Arjovsky, Olivier Mastropi- etro, and Aaron Courville. Adversarially learned inference. arXiv preprint arXiv:1606.00704, 2016. 1
Ishan Durugkar, Ian Gemp, and Sridhar Mahadevan. Generative multi-adversarial networks. arXiv preprint arXiv:1611.01673, 2016. 1, 4, 1
Arnab Ghosh, Viveka Kulharia, Vinay Namboodiri, Philip HS Torr, and Puneet K Dokania. Multi- agent diverse generative adversarial networks. arXiv preprint arXiv:1704.02906, 2017. 1, 4
Ian Goodfellow. Nips 2016 tutorial: Generative adversarial networks. arXiv preprint arXiv:1701.00160, 2016. 1, B
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in neural infor- mation processing systems, pp. 2672â2680, 2014. 1, 3.1, 3.1, B
Ian J Goodfellow, David Warde-Farley, Mehdi Mirza, Aaron Courville, and Yoshua Bengio. Maxout networks. arXiv preprint arXiv:1302.4389, 2013. 5.2
Ferenc Husz´ar. How (not) to train your generative model: Scheduled sampling, likelihood, adver- sary? arXiv preprint arXiv:1511.05101, 2015. 2
Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning, pp. 448â456, 2015. 5.2
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. 5
G¨unter Klambauer, Thomas Unterthiner, Andreas Mayr, and Sepp Hochreiter. Self-normalizing neural networks. arXiv preprint arXiv:1706.02515, 2017. 5.2
Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. 2009. 5.2
11
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classiï¬cation with deep convo- lutional neural networks. In Advances in neural information processing systems, pp. 1097â1105, 2012. 5.2
Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of the IEEE International Conference on Computer Vision, pp. 3730â3738, 2015. 1
Andrew L Maas, Awni Y Hannun, and Andrew Y Ng. Rectiï¬er nonlinearities improve neural net- work acoustic models. In Proc. ICML, volume 30, 2013. 5
Luke Metz, Ben Poole, David Pfau, and Jascha Sohl-Dickstein. Unrolled generative adversarial networks. arXiv preprint arXiv:1611.02163, 2016. 1, 4, 5.1
Vinod Nair and Geoffrey E Hinton. Rectiï¬ed linear units improve restricted boltzmann machines. In Proceedings of the 27th international conference on machine learning (ICML-10), pp. 807â814, 2010. 5
J v Neumann. Zur theorie der gesellschaftsspiele. Mathematische annalen, 100(1):295â320, 1928. 4
Tu Dinh Nguyen, Trung Le, Hung Vu, and Dinh Phung. Dual discriminator generative adversarial nets. In Advances in Neural Information Processing Systems 29 (NIPS), pp. accepted, 2017. 1, 4, 5.1, 5.2, 1
Augustus Odena, Christopher Olah, and Jonathon Shlens. Conditional image synthesis with auxil- iary classiï¬er gans. arXiv preprint arXiv:1610.09585, 2016. 1, 5.2
Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015. 5.2, 1
Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. Imagenet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211â252, 2015. 1, 4, 5.2
Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training gans. In Advances in Neural Information Processing Systems, pp. 2234â2242, 2016. 1, 4, 5.2, 5.2, 1
Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Du- mitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1â9, 2015. 5.2
Lucas Theis, A¨aron van den Oord, and Matthias Bethge. A note on the evaluation of generative models. arXiv preprint arXiv:1511.01844, 2015. 2
Ilya Tolstikhin, Sylvain Gelly, Olivier Bousquet, Carl-Johann Simon-Gabriel, and Bernhard Sch¨olkopf. Adagan: Boosting generative models. arXiv preprint arXiv:1701.02386, 2017. 1, 4
Ruohan Wang, Antoine Cully, Hyung Jin Chang, and Yiannis Demiris. Magan: Margin adaptation for generative adversarial networks. arXiv preprint arXiv:1704.03817, 2017. 1
Yaxing Wang, Lichao Zhang, and Joost van de Weijer. Ensembles of generative adversarial net- works. arXiv preprint arXiv:1612.00991, 2016. 4, 5.2
David Warde-Farley and Yoshua Bengio. Improving generative adversarial networks with denoising feature matching. 2016. 1, 4, 5.2, 1
Fisher Yu, Ari Seff, Yinda Zhang, Shuran Song, Thomas Funkhouser, and Jianxiong Xiao. Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365, 2015. 1
12
# A APPENDIX: FRAMEWORK
In our proposed method, generators G_1, G_2, ..., G_K are deep convolutional neural networks parameterized by θ_G. These networks share parameters in all layers except for the input layers. The input layer for generator G_k is parameterized by the mapping f_{θ_G,k}(z) that maps the sampled noise z to the first hidden layer activation h. The shared layers are parameterized by the mapping g_{θ_G}(h) that maps the first hidden layer to the generated data. The pseudo-code of sampling from the mixture is described in Alg. 1. Classifier C and discriminator D are also deep convolutional neural networks that are both parameterized by θ_CD. They share parameters in all layers except for the last layer. The pseudo-code of alternately learning θ_G and θ_CD using stochastic gradient descent is described in Alg. 2.
Algorithm 1 Sampling from MGAN's mixture of generators.
1: Sample noise z from the prior P_z.
2: Sample a generator index u from Mult(π_1, π_2, ..., π_K) with predefined mixing probability π = (π_1, π_2, ..., π_K).
3: h = f_{θ_G,u}(z)
4: x = g_{θ_G}(h)
5: Return generated data x and the index u.
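Alg. 1 amounts to only a few lines of code; the sketch below (our own PyTorch-style illustration, with toy layer sizes) uses torch.multinomial in place of Mult(π) and generic modules f_k and g standing in for the untied input layers and shared trunk:

```python
import torch
import torch.nn as nn

def sample_mgan(f_list, g, pi, noise_dim):
    z = torch.randn(1, noise_dim)            # step 1: z ~ P_z
    u = torch.multinomial(pi, 1).item()      # step 2: u ~ Mult(pi)
    h = f_list[u](z)                         # step 3: first hidden activation via f_{theta_G, u}
    return g(h), u                           # steps 4-5: shared layers g_{theta_G}, return x and u

f_list = nn.ModuleList([nn.Linear(16, 32) for _ in range(3)])   # toy untied input layers
g = nn.Sequential(nn.ReLU(), nn.Linear(32, 2))                  # toy shared trunk
pi = torch.full((3,), 1.0 / 3.0)
x, u = sample_mgan(f_list, g, pi, noise_dim=16)
print(x.shape, u)
```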
Algorithm 2 Alternative training of MGAN using stochastic gradient descent.
1: for number of training iterations do
2:   Sample a minibatch of M data points (x^(1), x^(2), ..., x^(M)) from the data distribution P_data.
3:   Sample a minibatch of N generated data points (x'^(1), x'^(2), ..., x'^(N)) and N indices (u_1, u_2, ..., u_N) from the current mixture.
4:   L_C = −(β/N) Σ_{n=1}^N log C_{u_n}(x'^(n))
5:   L_D = −(1/M) Σ_{m=1}^M log D(x^(m)) − (1/N) Σ_{n=1}^N log[1 − D(x'^(n))]
6:   Update classifier C and discriminator D by descending along their gradient: ∇_{θ_CD}(L_C + L_D).
7:   Sample a minibatch of N generated data points (x'^(1), x'^(2), ..., x'^(N)) and N indices (u_1, u_2, ..., u_N) from the current mixture.
8:   L_G = (1/N) Σ_{n=1}^N log D(x'^(n)) + (β/N) Σ_{n=1}^N log C_{u_n}(x'^(n))
9:   Update the mixture of generators G by ascending along its gradient: ∇_{θ_G} L_G.
10: end for
# B APPENDIX: PROOFS FOR SECTION 3.1
Proposition 1 (Prop. 1 restated). For fixed generators G_1, G_2, ..., G_K and mixture weights π_1, π_2, ..., π_K, the optimal classifier C* = C*_{1:K} and discriminator D* are:

C*_k(x) = π_k p_{G_k}(x) / Σ_{j=1}^K π_j p_{G_j}(x)   and   D*(x) = p_data(x) / (p_data(x) + p_model(x))
Proof. The optimal Dâ was proved in Prop. 1 in (Goodfellow, 2016). This section shows a similar proof for the optimal C â. Assuming that C â can be optimized in the functional space, we can calculate the functional derivatives of J (G, C, D)with respect to each Ck (x) for k â {2, ..., K}
13
and set them equal to zero:
δJ(G, C, D)/δC_k(x) = −β δ/δC_k(x) ∫ ( π_1 p_{G_1}(x) log(1 − Σ_{j=2}^K C_j(x)) + Σ_{j=2}^K π_j p_{G_j}(x) log C_j(x) ) dx = −β ( π_k p_{G_k}(x)/C_k(x) − π_1 p_{G_1}(x)/(1 − Σ_{j=2}^K C_j(x)) )   (5)

Setting δJ(G, C, D)/δC_k(x) to 0 for k ∈ {2, ..., K}, we get:

π_1 p_{G_1}(x)/C*_1(x) = π_2 p_{G_2}(x)/C*_2(x) = ... = π_K p_{G_K}(x)/C*_K(x)   (6)

C*_k(x) = π_k p_{G_k}(x) / Σ_{j=1}^K π_j p_{G_j}(x) results from Eq. (6) due to the fact that Σ_{k=1}^K C*_k(x) = 1.
Reformulation of L (G1:K). Replacing the optimal C â and Dâ into Eq. (2), we can reformulate the objective function for the generator as follows:
L (G1:K) = J (G, C â, Dâ)
= E_{x∼P_data}[log(p_data(x) / (p_data(x) + p_model(x)))] + E_{x∼P_model}[log(p_model(x) / (p_data(x) + p_model(x)))] − β Σ_{k=1}^K π_k E_{x∼P_{G_k}}[log(π_k p_{G_k}(x) / Σ_{j=1}^K π_j p_{G_j}(x))]   (7)

The sum of the first two terms in Eq. (7) was shown in (Goodfellow et al., 2014) to be 2·JSD(P_data ‖ P_model) − log 4. The last term β{·} of Eq. (7) is related to the JSD for the K distributions:

Σ_{k=1}^K π_k E_{x∼P_{G_k}}[log(π_k p_{G_k}(x) / Σ_{j=1}^K π_j p_{G_j}(x))]
= Σ_{k=1}^K π_k E_{x∼P_{G_k}}[log p_{G_k}(x)] − Σ_{k=1}^K π_k E_{x∼P_{G_k}}[log Σ_{j=1}^K π_j p_{G_j}(x)] + Σ_{k=1}^K π_k log π_k
= −Σ_{k=1}^K π_k H(P_{G_k}) + H(Σ_{k=1}^K π_k P_{G_k}) + Σ_{k=1}^K π_k log π_k
= JSD_π(P_{G_1}, P_{G_2}, ..., P_{G_K}) + Σ_{k=1}^K π_k log π_k   (8)

where H(P) is the Shannon entropy for distribution P. Thus, L(G_{1:K}) can be rewritten as:

L(G_{1:K}) = −log 4 + 2·JSD(P_data ‖ P_model) − β·JSD_π(P_{G_1}, P_{G_2}, ..., P_{G_K}) − β Σ_{k=1}^K π_k log π_k
Theorem 3 (Thm. 3 restated). If the data distribution has the form p_data(x) = Σ_{k=1}^K π_k q_k(x), where the mixture components q_k(x) are well-separated, the minimax problem in Eq. (2) or the optimization problem in Eq. (3) has the following solution:

p_{G*_k}(x) = q_k(x), ∀k = 1, ..., K, and p*_model(x) = Σ_{k=1}^K π_k q_k(x) = p_data(x),

and the corresponding objective value of the optimization problem in Eq. (3) is −βH(π) = −β Σ_{k=1}^K π_k log(1/π_k), where H(π) is the Shannon entropy.
14
Proof. We first recap the optimization problem for finding the optimal G*:

min_G ( 2·JSD(P_data ‖ P_model) − β·JSD_π(P_{G_1}, P_{G_2}, ..., P_{G_K}) )
The JSD in Eq. (8) is given by:
JSD_π(P_{G_1}, P_{G_2}, ..., P_{G_K}) = Σ_{k=1}^K π_k E_{x∼P_{G_k}}[log(π_k p_{G_k}(x) / Σ_{j=1}^K π_j p_{G_j}(x))] − Σ_{k=1}^K π_k log π_k   (9)
The k-th expectation in Eq. (9) can be bounded as follows:

E_{x∼P_{G_k}}[log(π_k p_{G_k}(x) / Σ_{j=1}^K π_j p_{G_j}(x))] ≤ E_{x∼P_{G_k}}[log 1] = 0

and the equality occurs if π_k p_{G_k}(x) / Σ_{j=1}^K π_j p_{G_j}(x) = 1 almost everywhere, or equivalently, for almost every x except for those in a zero-measure set, we have:

p_{G_k}(x) > 0 ⟹ p_{G_j}(x) = 0, ∀j ≠ k   (10)
Therefore, we obtain the following inequality:
JSD_π(P_{G_1}, P_{G_2}, ..., P_{G_K}) ≤ −Σ_{k=1}^K π_k log π_k = Σ_{k=1}^K π_k log(1/π_k) = H(π)
and the equality occurs if for almost every x except for those in a zero measure set, we have:
∀k: p_{G_k}(x) > 0 ⟹ p_{G_j}(x) = 0, ∀j ≠ k
It follows that
2·JSD(P_data ‖ P_model) − β·JSD_π(P_{G_1}, P_{G_2}, ..., P_{G_K}) ≥ 0 − βH(π) = −βH(π)

and we reach the minimum if p_{G_k} = q_k, ∀k, since this solution satisfies both

p_model(x) = Σ_{k=1}^K π_k q_k(x) = p_data(x)
and the conditions depicted in Eq. (10). That concludes our proof.
# C APPENDIX: ADDITIONAL EXPERIMENTS
C.1 SYNTHETIC 2D GAUSSIAN DATA
The true data is sampled from a 2D mixture of 8 Gaussian distributions with a covariance matrix 0.02I and means arranged in a circle of zero centroid and radius 2.0. We use a simple architecture of 8 generators with two fully connected hidden layers and a classiï¬er and a discriminator with one shared hidden layer. All hidden layers contain the same number of 128 ReLU units. The input layer of generators contains 256 noise units sampled from isotropic multivariate Gaussian distribution N (0, I). We do not use batch normalization in any layer. We refer to Tab. 2 for more speciï¬cations of the network and hyperparameters. âSharedâ is short for parameter sharing among generators or between the classiï¬er and the discriminator. Feature maps of 8/1 in the last layer for C and D means that two separate fully connected layers are applied to the penultimate layer, one for C that outputs 8 logits and another for D that outputs 1 logit.
The effect of the number of generators on generated samples. Fig. 6 shows samples produced by MGANs with different numbers of generators trained on synthetic data for 25,000 epochs. The model with 1 generator behaves similarly to the standard GAN as expected. The models with 2, 3 and 4 generators all successfully cover 8 modes, but the ones with more generators draw fewer points scattered between adjacent modes. Finally, the model with 10 generators also covers 8 modes wherein 2 generators share one mode and one generator hovering around another mode.
15
Table 2: Network architecture and hyperparameters for 2D Gaussian data.
Operation                        Feature maps   Nonlinearity      Shared?
G(z): z ∼ N(0, I)                256
  Fully connected                128            ReLU              no
  Fully connected                128            ReLU              yes
  Fully connected                2              Linear            yes
C(x), D(x)                       2
  Fully connected                128            Leaky ReLU        yes
  Fully connected                8/1            Softmax/Sigmoid   no
Number of generators             8
Batch size for real data         512
Batch size for each generator    128
Number of iterations             25,000
Leaky ReLU slope                 0.2
Learning rate                    0.0002
Regularization constants         β = 0.125
Optimizer                        Adam(β1 = 0.5, β2 = 0.999)
Weight, bias initialization      N(µ = 0, σ = 0.02I), 0
(a) 1 generator. (b) 2 generators. (c) 3 generators. (d) 4 generators. (e) 10 generators.
Figure 6: Samples generated by MGAN models trained on synthetic data with 2, 3, 4 and 10 generators. Generated data are in blue and data samples from the 8 Gaussians are in red.
The effect of β on generated samples. To examine the behavior of the diversity coefï¬cient β, Fig. 7 compares samples produced by our MGAN with 4 generators after 25,000 epochs of training with different values of β. Without the JSD force (β = 0), generated samples cluster around one mode. When β = 0.25, generated data clusters near 4 different modes. When β = 0.75 or 1.0, the JSD force is too strong and causes the generators to collapse, generating 4 increasingly tight clusters. When β = 0.5, generators successfully cover all of the 8 modes.
C.2 REAL-WORLD DATASETS
Fixing batch normalization center. During training we observe that the percentage of active neurons, which we deï¬ne as ReLU units with positive activation for at least 10% of samples in the minibatch, chronically declined. Fig. 8a shows the percentage of active neurons in generators trained on CIFAR-10 declined consistently to 55% in layer 2 and 60% in layer 3. Therefore, the quality of generated images, after reaching the peak level, started declining. One possible cause is that the batch normalization center (offset) is gradually shifted to the negative range as shown in the histogram in Fig. 8b. We also observe the same problem in DCGAN. Our ad-hoc solution for this problem, i.e., we ï¬x the offset at zero for all layers in the generator networks. The rationale is that for each feature map, the ReLU gates will open for about 50% highest inputs in a minibatch across all locations and generators, and close for the rest. Therefore, batch normalization can keep ReLU units alive even when most of their inputs are otherwise negative, and introduces a form of competition that encourages generators to âspecializeâ in different features. This measure signiï¬cantly improves performance but does not totally solve the dying ReLUs problem. We ï¬nd that late in the training, the input to generatorsâ ReLU units became more and more right-skewed, causing the ReLU gates to open less and less often.
16
(a) β = 0 (b) β = 0.25 (c) β = 0.5 (d) β = 0.75 (e) β = 1.0
Figure 7: Samples generated by MGAN models trained on synthetic data with different values of diversity coefï¬cient β. Generated data are in blue and data samples from the 8 Gaussians are in red.
(a) % of active neurons in layer 2 and 3. (b) Histogram of batch normalization centers in layer 2 (left) and 3 (right).
Figure 8: Observation of active neuron rates and batch normalization centers in MGAN's generators trained on CIFAR-10.
Experiment settings. For the experiments on three large-scale natural scene datasets (CIFAR- 10, STL-10, ImageNet), we closely followed the network architecture and training procedure of DCGAN. The speciï¬cations of our models trained on CIFAR-10, STL-10 48Ã48, STL-10 96Ã96 and ImageNet datasets are described in Tabs. (3, 4, 5, 6), respectively. âBNâ is short for batch normalization and âBN centerâ is short for whether to learn batch normalizationâs center or set it at zero. âSharedâ is short for parameter sharing among generators or between the classiï¬er and the discriminator. Feature maps of 10/1 in the last layer for C and D means that two separate fully connected layers are applied to the penultimate layer, one for C that outputs 10 logits and another for D that outputs 1 logit. Finally, Figs. (9, 10, 11, 12, 13) respectively are the enlarged version of Figs. (3a, 3b, 3c, 4a, 4b) in the main manuscript.
# Table 3: Network architecture and hyperparameters for the CIFAR-10 dataset.
Operation                        Kernel  Strides  Feature maps  BN?  BN center?  Nonlinearity     Shared?
G(z): z ∼ Uniform[−1, 1]                          100
  Fully connected                                 4×4×512       yes  no          ReLU             no
  Transposed convolution         5×5     2×2      256           yes  no          ReLU             yes
  Transposed convolution         5×5     2×2      128           yes  no          ReLU             yes
  Transposed convolution         5×5     2×2      3             no   no          Tanh             yes
C(x), D(x)                                        32×32×3
  Convolution                    5×5     2×2      128           yes  yes         Leaky ReLU       yes
  Convolution                    5×5     2×2      256           yes  yes         Leaky ReLU       yes
  Convolution                    5×5     2×2      512           yes  yes         Leaky ReLU       yes
  Fully connected                                 10/1          no   no          Softmax/Sigmoid  no
Number of generators             10
Batch size for real data         64
Batch size for each generator    12
Number of iterations             250
Leaky ReLU slope                 0.2
Learning rate                    0.0002
Regularization constants         β = 0.01
Optimizer                        Adam(β1 = 0.5, β2 = 0.999)
Weight, bias initialization      N(µ = 0, σ = 0.01), 0
17
Table 4: Network architecture and hyperparameters for the STL-10 48×48 dataset.
Operation Kernel Strides Feature maps BN? BN center? Nonlinearity G (z) : z â¼ Uniform [â1, 1] Fully connected Transposed convolution Transposed convolution Transposed convolution Transposed convolution C (x) , D (x) Convolution Convolution Convolution Convolution Fully connected Number of generators Batch size for real data Batch size for each generator Number of iterations Leaky ReLU slope Learning rate Regularization constants 5Ã5 5Ã5 5Ã5 5Ã5 5Ã5 5Ã5 5Ã5 5Ã5 10 64 12 250 0.2 0.0002 β = 1.0 2Ã2 2Ã2 2Ã2 2Ã2 2Ã2 2Ã2 2Ã2 2Ã2 100 4Ã4Ã1024 512 256 128 3 48Ã48Ã3 128 256 512 1024 10/1 â â â â à â â â â à à à à à à â â â â à ReLU ReLU ReLU ReLU Tanh Leaky ReLU Leaky ReLU Leaky ReLU Leaky ReLU Softmax/Sigmoid Optimizer Adam(β1 = 0.5, β2 = 0.999) Weight, bias initialization N (µ = 0, Ï = 0.01), 0 Shared? à â â â â â â â â Ã
Table 5: Network architecture and hyperparameters for the STL-10 96×96 dataset.
Operation Kernel Strides Feature maps BN? BN center? Nonlinearity G (z) : z â¼ Uniform [â1, 1] Fully connected Transposed convolution Transposed convolution Transposed convolution Transposed convolution Transposed convolution C (x) , D (x) Convolution Convolution Convolution Convolution Convolution Fully connected Number of generators Batch size for real data Batch size for each generator Number of iterations Leaky ReLU slope Learning rate Regularization constants 5Ã5 5Ã5 5Ã5 5Ã5 5Ã5 5Ã5 5Ã5 5Ã5 5Ã5 5Ã5 10 64 12 250 0.2 0.0002 β = 1.0 2Ã2 2Ã2 2Ã2 2Ã2 2Ã2 2Ã2 2Ã2 2Ã2 2Ã2 2Ã2 100 4Ã4Ã2046 1024 512 256 128 3 32Ã32Ã3 128 256 512 1024 2048 10/1 â â â â â à â â â â â à à à à à à à â â â â â à ReLU ReLU ReLU ReLU ReLU Tanh Leaky ReLU Leaky ReLU Leaky ReLU Leaky ReLU Leaky ReLU Softmax/Sigmoid Optimizer Adam(β1 = 0.5, β2 = 0.999) Weight, bias initialization N (µ = 0, Ï = 0.01), 0 Shared? à â â â â â â â â â â Ã
Table 6: Network architecture and hyperparameters for the ImageNet dataset.
Operation Kernel Strides Feature maps BN? BN center? Nonlinearity G (z) : z â¼ Uniform [â1, 1] Fully connected Transposed convolution Transposed convolution Transposed convolution C (x) , D (x) Convolution Convolution Convolution Fully connected Number of generators Batch size for real data Batch size for each generator Number of iterations Leaky ReLU slope Learning rate Regularization constants 5Ã5 5Ã5 5Ã5 5Ã5 5Ã5 5Ã5 10 64 12 50 0.2 0.0002 β = 0.1 2Ã2 2Ã2 2Ã2 2Ã2 2Ã2 2Ã2 100 4Ã4Ã512 256 128 3 32Ã32Ã3 128 256 512 10/1 â â â à â â â à à à à à â â â à ReLU ReLU ReLU Tanh Leaky ReLU Leaky ReLU Leaky ReLU Softmax/Sigmoid Optimizer Adam(β1 = 0.5, β2 = 0.999) Weight, bias initialization N (µ = 0, Ï = 0.01), 0 Shared? à â â â â â â Ã
Figure 9: Images generated by MGAN trained on the CIFAR-10 dataset.
Figure 10: Images generated by MGAN trained on the rescaled 48×48 STL-10 dataset.
Figure 11: Images generated by MGAN trained on the rescaled 32×32 ImageNet dataset.
Figure 12: Cherry-picked samples generated by MGAN trained on the 96×96 STL-10 dataset.
Figure 13: Incomplete, unrealistic samples generated by MGAN trained on the 96×96 STL-10 dataset.
23 | {
"id": "1703.00573"
} |
1708.02182 | Regularizing and Optimizing LSTM Language Models | Recurrent neural networks (RNNs), such as long short-term memory networks
(LSTMs), serve as a fundamental building block for many sequence learning
tasks, including machine translation, language modeling, and question
answering. In this paper, we consider the specific problem of word-level
language modeling and investigate strategies for regularizing and optimizing
LSTM-based models. We propose the weight-dropped LSTM which uses DropConnect on
hidden-to-hidden weights as a form of recurrent regularization. Further, we
introduce NT-ASGD, a variant of the averaged stochastic gradient method,
wherein the averaging trigger is determined using a non-monotonic condition as
opposed to being tuned by the user. Using these and other regularization
strategies, we achieve state-of-the-art word level perplexities on two data
sets: 57.3 on Penn Treebank and 65.8 on WikiText-2. In exploring the
effectiveness of a neural cache in conjunction with our proposed model, we
achieve an even lower state-of-the-art perplexity of 52.8 on Penn Treebank and
52.0 on WikiText-2. | http://arxiv.org/pdf/1708.02182 | Stephen Merity, Nitish Shirish Keskar, Richard Socher | cs.CL, cs.LG, cs.NE | null | null | cs.CL | 20170807 | 20170807 | 2017
# Regularizing and Optimizing LSTM Language Models
# Stephen Merity 1 Nitish Shirish Keskar 1 Richard Socher 1
# Abstract
Recurrent neural networks (RNNs), such as long short-term memory networks (LSTMs), serve as a fundamental building block for many sequence including machine translation, learning tasks, language modeling, and question answering. In this paper, we consider the speciï¬c problem of word-level language modeling and investigate strategies for regularizing and optimizing LSTM- based models. We propose the weight-dropped LSTM which uses DropConnect on hidden-to- hidden weights as a form of recurrent regulariza- tion. Further, we introduce NT-ASGD, a vari- ant of the averaged stochastic gradient method, wherein the averaging trigger is determined us- ing a non-monotonic condition as opposed to be- ing tuned by the user. Using these and other reg- ularization strategies, we achieve state-of-the-art word level perplexities on two data sets: 57.3 on Penn Treebank and 65.8 on WikiText-2. In ex- ploring the effectiveness of a neural cache in con- junction with our proposed model, we achieve an even lower state-of-the-art perplexity of 52.8 on Penn Treebank and 52.0 on WikiText-2.
# 1. Introduction
A naïve application of dropout (Srivastava et al., 2014) to an RNNâs hidden state is ineffective as it disrupts the RNNâs ability to retain long term dependencies (Zaremba et al., 2014). Gal & Ghahramani (2016) propose overcoming this problem by retaining the same dropout mask across multiple time steps as opposed to sampling a new binary mask at each timestep. Another approach is to regularize the network through limiting updates to the RNNâs hidden state. One such approach is taken by Semeniuta et al. (2016) wherein the authors drop updates to network units, speciï¬cally the input gates of the LSTM, in lieu of the units themselves. This is reminiscent of zone- out (Krueger et al., 2016) where updates to the hidden state may fail to occur for randomly selected neurons.
Instead of operating on the RNNâs hidden states, one can regularize the network through restrictions on the recur- rent matrices as well. This can be done either through restricting the capacity of the matrix (Arjovsky et al., 2016; Wisdom et al., 2016; Jing et al., 2016) or through element-wise interactions (Balduzzi & Ghifary, 2016; Bradbury et al., 2016; Seo et al., 2016).
Other forms of regularization explicitly act upon activa- tions such as batch normalization (Ioffe & Szegedy, 2015), recurrent batch normalization (Cooijmans et al., 2016), and layer normalization (Ba et al., 2016). These all introduce additional training parameters and can complicate the train- ing process while increasing the sensitivity of the model.
Effective regularization techniques for deep learning have been the subject of much research in recent years. Given the over-parameterization of neural networks, general- ization performance crucially relies on the ability to regularize the models sufï¬ciently. Strategies such as dropout (Srivastava et al., 2014) and batch normalization (Ioffe & Szegedy, 2015) have found great success and are now ubiquitous in feed-forward and convolutional neural networks. Naïvely applying these approaches to the case of recurrent neural networks (RNNs) has not been highly successful however. Many recent works have hence been focused on the extension of these regularization strategies to RNNs; we brieï¬y discuss some of them below.
In this work, we investigate a set of regularization strategies that are not only highly effective but which can also be used with no modiï¬cation to existing LSTM implementations. The weight-dropped LSTM applies recurrent regulariza- tion through a DropConnect mask on the hidden-to-hidden recurrent weights. Other strategies include the use of randomized-length backpropagation through time (BPTT), embedding dropout, activation regularization (AR), and temporal activation regularization (TAR).
As no modiï¬cations are required of the LSTM implemen- tation these regularization strategies are compatible with black box libraries, such as NVIDIA cuDNN, which can be many times faster than naïve LSTM implementations.
1Salesforce Research, Palo Alto, USA. Correspondence to: Stephen Merity <smerity@salesforce.com>.
Effective methods for training deep recurrent networks have also been a topic of renewed interest. Once a model
has been deï¬ned, the training algorithm used is required to not only ï¬nd a good minimizer of the loss function but also converge to such a minimizer rapidly. The choice of the optimizer is even more important in the context of reg- ularized models since such strategies, especially the use of dropout, can impede the training process. Stochastic gradient descent (SGD), and its variants such as Adam (Kingma & Ba, 2014) and RMSprop (Tieleman & Hinton, 2012) are amongst the most popular training methods. These methods iteratively reduce the training loss through scaled (stochastic) gradient steps. In particular, Adam has been found to be widely applicable despite requiring less tuning of its hyperparameters. In the context of word-level language modeling, past work has empirically found that SGD outperforms other methods in not only the ï¬nal loss but also in the rate of convergence. This is in agreement with recent evidence pointing to the insufï¬ciency of adap- tive gradient methods (Wilson et al., 2017).
vent the use of black box RNN implementations that may be many times faster due to low-level hardware-speciï¬c op- timizations.
We propose the use of DropConnect (Wan et al., 2013) on the recurrent hidden to hidden weight matrices which does not require any modiï¬cations to an RNNâs formu- lation. As the dropout operation is applied once to the weight matrices, before the forward and backward pass, the impact on training speed is minimal and any standard RNN implementation can be used, including inï¬exible but highly optimized black box LSTM implementations such as NVIDIAâs cuDNN LSTM.
By performing DropConnect on the hidden-to-hidden weight matrices [U i, U f , U o, U c] within the LSTM, we can prevent overï¬tting from occurring on the recurrent connec- tions of the LSTM. This regularization technique would also be applicable to preventing overï¬tting on the recurrent weight matrices of other RNN cells.
Given the success of SGD, especially within the language modeling domain, we investigate the use of averaged SGD (ASGD) (Polyak & Juditsky, 1992) which is known to have superior theoretical guarantees. ASGD carries out itera- tions similar to SGD, but instead of returning the last iterate as the solution, returns an average of the iterates past a cer- tain, tuned, threshold T . This threshold T is typically tuned and has a direct impact on the performance of the method. We propose a variant of ASGD where T is determined on the ï¬y through a non-monotonic criterion and show that it achieves better training outcomes compared to SGD.
As the same weights are reused over multiple timesteps, the same individual dropped weights remain dropped for the entirety of the forward and backward pass. The result is similar to variational dropout, which applies the same dropout mask to recurrent connections within the LSTM by performing dropout on ht 1, except that the dropout is applied to the recurrent weights. DropConnect could also be used on the non-recurrent weights of the LSTM [W i, W f , W o] though our focus was on preventing over- ï¬tting on the recurrent connection.
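As a concrete illustration, the sketch below implements a plain LSTM step-by-step and applies DropConnect once to the recurrent weight matrix, reusing the same masked weights at every timestep. It is an illustrative re-implementation with toy sizes, not the authors' released code, and for brevity the four gate matrices are stacked into single W and U matrices:

```python
import torch
import torch.nn.functional as F

def weight_dropped_lstm(x, W, U, b, h0, c0, p=0.5, training=True):
    """x: (seq_len, batch, n_in); W: (4*n_hid, n_in); U: (4*n_hid, n_hid).
    DropConnect is applied to U once, so the same dropped recurrent
    connections are used for the entire forward and backward pass."""
    U_masked = F.dropout(U, p=p, training=training)    # one mask per pass
    h, c, outputs = h0, c0, []
    for x_t in x:                                      # reuse U_masked each step
        gates = x_t @ W.t() + h @ U_masked.t() + b
        i, f, g, o = gates.chunk(4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        outputs.append(h)
    return torch.stack(outputs), (h, c)

n_in, n_hid, batch, seq_len = 40, 100, 8, 35           # toy sizes for the sketch
W = (0.01 * torch.randn(4 * n_hid, n_in)).requires_grad_()
U = (0.01 * torch.randn(4 * n_hid, n_hid)).requires_grad_()
b = torch.zeros(4 * n_hid, requires_grad=True)
h0 = c0 = torch.zeros(batch, n_hid)
out, _ = weight_dropped_lstm(torch.randn(seq_len, batch, n_in), W, U, b, h0, c0)
```

In practice the same effect can be obtained by masking the hidden-to-hidden weight tensors of an optimized black-box LSTM implementation once before each forward pass, which is what makes the technique compatible with libraries such as cuDNN.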
# 2. Weight-dropped LSTM
# 3. Optimization
We refer to the mathematical formulation of the LSTM,
i_t = σ(W^i x_t + U^i h_{t−1})
f_t = σ(W^f x_t + U^f h_{t−1})
o_t = σ(W^o x_t + U^o h_{t−1})
c̃_t = tanh(W^c x_t + U^c h_{t−1})
c_t = i_t ⊙ c̃_t + f_t ⊙ c_{t−1}
h_t = o_t ⊙ tanh(c_t)
SGD is among the most popular methods for training deep learning models across various modalities including com- puter vision, natural language processing, and reinforce- ment learning. The training of deep networks can be posed as a non-convex optimization problem
min_w (1/N) Σ_{i=1}^{N} f_i(w),
where [W^i, W^f, W^o, U^i, U^f, U^o] are weight matrices, x_t is the vector input to timestep t, h_t is the current exposed hidden state, c_t is the memory cell state, and ⊙ is element-wise multiplication.
where fi is the loss function for the ith data point, w are the weights of the network, and the expectation is taken over the data. Given a sequence of learning rates, γk, SGD iteratively takes steps of the form
Preventing overï¬tting within the recurrent connections of an RNN has been an area of extensive research in language modeling. The majority of previous recurrent regulariza- tion techniques have acted on the hidden state vector ht 1, most frequently introducing a dropout operation between timesteps, or performing dropout on the update to the mem- ory state ct. These modiï¬cations to a standard LSTM pre-
w_{k+1} = w_k − γ_k ∇̂f(w_k),    (1)
where the subscript denotes the iteration number and ∇̂ denotes a stochastic gradient that may be computed on a minibatch of data points. SGD demonstrably performs well in practice and also possesses several attractive theoretical properties such as linear convergence (Bottou et al., 2016), saddle point avoidance (Panageas & Piliouras, 2016) and
better generalization performance (Hardt et al., 2015). For the speciï¬c task of neural language modeling, tradition- ally SGD without momentum has been found to outperform other algorithms such as momentum SGD (Sutskever et al., 2013), Adam (Kingma & Ba, 2014), Adagrad (Duchi et al., 2011) and RMSProp (Tieleman & Hinton, 2012) by a sta- tistically signiï¬cant margin.
Motivated by this observation, we investigate averaged SGD (ASGD) to further improve the training process. ASGD has been analyzed in depth theoretically and many surprising results have been shown including its asymptotic second-order convergence (Polyak & Juditsky, 1992; Mandt et al., 2017). ASGD takes steps identical to equation (1) but instead of returning the last iterate as the solution, returns (1/(K − T + 1)) Σ_{i=T}^{K} w_i, where K is the total number of iterations and T < K is a user-specified averaging trigger.
SGD to a neighborhood around a solution. In the case of SGD, certain learning-rate reduction strategies such as the step-wise strategy analogously reduce the learning rate by a ï¬xed quantity at such a point. A common strategy em- ployed in language modeling is to reduce the learning rates by a ï¬xed proportion when the performance of the modelâs primary metric (such as perplexity) worsens or stagnates. Along the same lines, one could make a triggering decision based on the performance of the model on the validation set. However, instead of averaging immediately after the validation metric worsens, we propose a non-monotonic criterion that conservatively triggers the averaging when the validation metric fails to improve for multiple cycles; see Algorithm 1. Given that the choice of triggering is irre- versible, this conservatism ensures that the randomness of training does not play a major role in the decision. Anal- ogous strategies have also been proposed for learning-rate reduction in SGD (Keskar & Saon, 2015).
Algorithm 1 Non-monotonically Triggered ASGD (NT-ASGD)
Inputs: Initial point w_0, learning rate γ, logging interval L, non-monotone interval n.
1: Initialize k ← 0, t ← 0, T ← 0, logs ← []
2: while stopping criterion not met do
3:    Compute stochastic gradient ∇̂f(w_k) and take SGD step (1).
4:    if mod(k, L) = 0 and T = 0 then
5:        Compute validation perplexity v.
6:        if t > n and v > min_{l ∈ {t−n, ··· , t}} logs[l] then
7:            Set T ← k
8:        end if
9:        Append v to logs
10:       t ← t + 1
11:   end if
12: end while
return (Σ_{i=T}^{k} w_i) / (k − T + 1)
Despite its theoretical appeal, ASGD has found limited practical use in training of deep networks. This may be in part due to unclear tuning guidelines for the learning-rate schedule γk and averaging trigger T . If the averaging is triggered too soon, the efï¬cacy of the method is impacted, and if it is triggered too late, many additional iterations may be needed to converge to the solution. In this section, we describe a non-monotonically triggered variant of ASGD (NT-ASGD), which obviates the need for tuning T . Fur- ther, the algorithm uses a constant learning rate throughout the experiment and hence no further tuning is necessary for the decay scheduling.
While the algorithm introduces two additional hyperparam- eters, the logging interval L and non-monotone interval n, we found that setting L to be the number of iterations in an epoch and n = 5 worked well across various models and data sets. As such, we use this setting in all of our NT- ASGD experiments in the following section and demon- strate that it achieves better training outcomes as compared to SGD.
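A schematic Python sketch of this trigger logic follows; `train_one_interval` and `validation_perplexity` are placeholders for the usual SGD inner loop and the evaluation pass, and the parameters are assumed to live in a single flat torch tensor:

```python
def nt_asgd(w, train_one_interval, validation_perplexity,
            lr=30.0, n=5, max_intervals=750):
    """Constant-rate SGD until the validation perplexity fails to improve on the
    best of the last n logged values; from then on, average all later iterates."""
    logs, triggered, avg, k = [], False, None, 0
    for t in range(max_intervals):
        w = train_one_interval(w, lr)            # e.g. one epoch of SGD steps
        v = validation_perplexity(w)
        if not triggered and t > n and v > min(logs[-n:]):
            triggered = True                     # non-monotonic trigger fires
        logs.append(v)
        if triggered:
            k += 1
            avg = w.clone() if avg is None else avg + (w - avg) / k
    return w if avg is None else avg
```

Here the average is maintained as a running mean, so no copies of past iterates need to be stored; setting L to one epoch corresponds to calling `train_one_interval` once per outer iteration, as in the experiments below.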
# 4. Extended regularization techniques
In addition to the regularization and optimization tech- niques above, we explored additional regularization tech- niques that aimed to improve data efï¬ciency during training and to prevent overï¬tting of the RNN model.
# 4.1. Variable length backpropagation sequences
Given a fixed sequence length that is used to break a data set into fixed-length batches, the data set is not efficiently used. To illustrate this, imagine being given 100 elements to perform backpropagation through with a fixed backpropagation through time (BPTT) window of 10. Any element divisible by 10 will never have any elements to backprop into, no matter how many times you may traverse the data set. Indeed, the backpropagation window that each element receives is equal to i mod 10 where i is the element's index. This is data inefficient, preventing 1/10 of the data set from ever being able to improve itself in a recurrent fashion, and resulting in 8/10 of the remaining elements receiving only a partial backpropagation window compared to the full possible backpropagation window of length 10.
Ideally, averaging needs to be triggered when the SGD it- erates converge to a steady-state distribution (Mandt et al., 2017). This is roughly equivalent to the convergence of
To prevent such inefï¬cient data usage, we randomly select the sequence length for the forward and backward pass in two steps. First, we select the base sequence length to be
seq with probability p and seq/2 with probability 1 − p, where p is a high value approaching 1. This spreads the starting point for the BPTT window beyond the base sequence length. We then select the sequence length according to N(seq, s), where seq is the base sequence length and s is the standard deviation. This jitters the starting point such that it doesn't always fall on a specific word divisible by seq or seq/2. From these, the sequence length more efficiently uses the data set, ensuring that when given enough epochs all the elements in the data set experience a full BPTT window, while ensuring the average sequence length remains around the base sequence length for computational efficiency.
During training, we rescale the learning rate depending on the length of the resulting sequence compared to the original speciï¬ed sequence length. The rescaling step is necessary as sampling arbitrary sequence lengths with a ï¬xed learning rate favors short sequences over longer ones. This linear scaling rule has been noted as important for training large scale minibatch SGD without loss of accu- racy (Goyal et al., 2017) and is a component of unbiased truncated backpropagation through time (Tallec & Ollivier, 2017).
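A short sketch of this sampling scheme together with a linear learning-rate rescaling (the base length 70, standard deviation 5, p = 0.95 and base learning rate 30 follow the settings in Section 5; `train_step` and the corpus indexing are placeholders):

```python
import numpy as np

def sample_bptt_length(base_seq=70, p=0.95, std=5, min_len=5, max_len=200):
    """Pick seq with probability p, seq/2 otherwise, then jitter with N(seq, std)."""
    seq = base_seq if np.random.random() < p else base_seq // 2
    return int(np.clip(np.random.normal(seq, std), min_len, max_len))

base_lr, base_seq = 30.0, 70
position, n_tokens = 0, 1_000_000          # position in the concatenated corpus
while position < n_tokens - 2:
    bptt = sample_bptt_length(base_seq)
    lr = base_lr * bptt / base_seq         # shorter windows take smaller steps
    # train_step(corpus[position : position + bptt + 1], lr)   # placeholder
    position += bptt
```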
# 4.2. Variational dropout
In standard dropout, a new binary dropout mask is sampled each and every time the dropout function is called. New dropout masks are sampled even if the given connection is repeated, such as the input x0 to an LSTM at timestep t = 0 receiving a different dropout mask than the input x1 fed to the same LSTM at t = 1. A variant of this, variational dropout (Gal & Ghahramani, 2016), samples a binary dropout mask only once upon the ï¬rst call and then to repeatedly use that locked dropout mask for all repeated connections within the forward and backward pass.
While we propose using DropConnect rather than varia- tional dropout to regularize the hidden-to-hidden transition within an RNN, we use variational dropout for all other dropout operations, speciï¬cally using the same dropout mask for all inputs and outputs of the LSTM within a given forward and backward pass. Each example within the mini- batch uses a unique dropout mask, rather than a single dropout mask being used over all examples, ensuring di- versity in the elements dropped out.
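A minimal sketch of this "locked" dropout in PyTorch: one mask is sampled per sequence (and per example in the minibatch) and reused at every timestep:

```python
import torch

def locked_dropout(x: torch.Tensor, p: float = 0.4, training: bool = True):
    """x: (seq_len, batch, features). The mask is broadcast over all timesteps."""
    if not training or p == 0.0:
        return x
    mask = x.new_empty(1, x.size(1), x.size(2)).bernoulli_(1 - p) / (1 - p)
    return x * mask             # same dropped units at every timestep

h = torch.randn(35, 20, 400)    # e.g. inputs to or outputs of an LSTM layer
h = locked_dropout(h, p=0.4)
```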
# 4.3. Embedding dropout

Following Gal & Ghahramani (2016), we employ embedding dropout. This is equivalent to performing dropout on the embedding matrix at a word level, where the dropout is broadcast across all of the word vector's embedding. The remaining non-dropped-out word embeddings are scaled by 1/(1 − p_e), where p_e is the probability of embedding dropout. As the dropout occurs on the embedding matrix that is used for a full forward and backward pass, this means that all occurrences of a specific word will disappear within that pass, equivalent to performing variational dropout on the connection between the one-hot embedding and the embedding lookup.
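A sketch of this word-level embedding dropout: whole rows of the embedding matrix are zeroed for the duration of a forward/backward pass, and the surviving rows are rescaled by 1/(1 − p_e):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def embedded_dropout(embed: nn.Embedding, words: torch.Tensor,
                     p_e: float = 0.1, training: bool = True):
    """Drop entire word vectors (rows of embed.weight) for this pass."""
    weight = embed.weight
    if training and p_e > 0:
        keep = weight.new_empty(weight.size(0), 1).bernoulli_(1 - p_e) / (1 - p_e)
        weight = weight * keep   # a dropped word vanishes everywhere it occurs
    return F.embedding(words, weight)

embed = nn.Embedding(10000, 400)
tokens = torch.randint(0, 10000, (35, 20))    # (seq_len, batch) of word ids
vectors = embedded_dropout(embed, tokens, p_e=0.1)
```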
# 4.4. Weight tying
Weight tying (Inan et al., 2016; Press & Wolf, 2016) shares the weights between the embedding and softmax layer, sub- stantially reducing the total parameter count in the model. The technique has theoretical motivation (Inan et al., 2016) and prevents the model from having to learn a one-to-one correspondence between the input and output, resulting in substantial improvements to the standard LSTM language model.
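In code, weight tying amounts to making the output softmax layer reuse the embedding matrix, e.g. in this minimal sketch (assuming the decoder's input size equals the embedding size):

```python
import torch.nn as nn

vocab_size, emb_size = 10000, 400
encoder = nn.Embedding(vocab_size, emb_size)   # input lookup
decoder = nn.Linear(emb_size, vocab_size)      # output/softmax layer
decoder.weight = encoder.weight                # one matrix serves both roles
```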
# 4.5. Independent embedding size and hidden size
In most natural language processing tasks, both pre- trained and trained word vectors are of relatively low dimensionalityâfrequently between 100 and 400 dimen- sions in size. Most previous LSTM language models tie the dimensionality of the word vectors to the dimensional- ity of the LSTMâs hidden state. Even if reducing the word embedding size was not beneï¬cial in preventing overï¬t- ting, the easiest reduction in total parameters for a language model is reducing the word vector size. To achieve this, the ï¬rst and last LSTM layers are modiï¬ed such that their in- put and output dimensionality respectively are equal to the reduced embedding size.
# 4.6. Activation Regularization (AR) and Temporal Activation Regularization (TAR)
L2-regularization is often used on the weights of the net- work to control the norm of the resulting model and reduce overï¬tting. In addition, L2 decay can be used on the in- dividual unit activations and on the difference in outputs of an RNN at different time steps; these strategies labeled as activation regularization (AR) and temporal activation regularization (TAR) respectively (Merity et al., 2017). AR penalizes activations that are signiï¬cantly larger than 0 as a means of regularizing the network. Concretely, AR is deï¬ned as
α L2(m ⊙ h_t)

where m is the dropout mask, L2(·) = ‖·‖_2, h_t is the output of the RNN at timestep t, and α is a scaling coefficient. TAR falls under the broad category of slowness regularizers (Hinton, 1989; Földiák, 1991; Luciw & Schmidhuber, 2012; Jonschkowski & Brock, 2015) which penalize the model for producing large changes in the hidden state.
Using the notation from AR, TAR is deï¬ned as
β L2(h_t − h_{t+1})
where β is a scaling coefï¬cient. As in Merity et al. (2017), the AR and TAR loss are only applied to the output of the ï¬nal RNN layer as opposed to being applied to all layers.
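A sketch of the two penalties as additional loss terms; here `output` is the dropped-out final-layer output (m ⊙ h_t), `raw_output` is the pre-dropout h_t, the L2 penalty is written in a mean-of-squares form, and α = 2, β = 1 follow the values used in Section 5:

```python
import torch

def ar_tar_penalty(output: torch.Tensor, raw_output: torch.Tensor,
                   alpha: float = 2.0, beta: float = 1.0) -> torch.Tensor:
    """output, raw_output: (seq_len, batch, hidden) from the final RNN layer."""
    ar = alpha * output.pow(2).mean()                              # AR term
    tar = beta * (raw_output[1:] - raw_output[:-1]).pow(2).mean()  # TAR term
    return ar + tar

# total_loss = cross_entropy_loss + ar_tar_penalty(dropped_h, h)
```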
the recurrent weight matrices. For WT2, we increase the input dropout to 0.65 to account for the increased vocabu- lary size. For all experiments, we use AR and TAR values of 2 and 1 respectively, and tie the embedding and soft- max weights. These hyperparameters were chosen through trial and error and we expect further improvements may be possible if a ï¬ne-grained hyperparameter search were to be conducted. In the results, we abbreviate our approach as AWD-LSTM for ASGD Weight-Dropped LSTM.
# 5. Experiment Details
For evaluating the impact of these approaches, we perform language modeling over a preprocessed version of the Penn Treebank (PTB) (Mikolov et al., 2010) and the WikiText-2 (WT2) data set (Merity et al., 2016).
PTB: The Penn Treebank data set has long been a central data set for experimenting with language modeling. The data set is heavily preprocessed and does not contain capital letters, numbers, or punctuation. The vocabulary is also capped at 10,000 unique words, quite small in comparison to most modern datasets, which results in a large number of out of vocabulary (OoV) tokens.
WT2: WikiText-2 is sourced from curated Wikipedia ar- ticles and is approximately twice the size of the PTB data set. The text is tokenized and processed using the Moses tokenizer (Koehn et al., 2007), frequently used for machine translation, and features a vocabulary of over 30,000 words. Capitalization, punctuation, and numbers are retained in this data set.
All experiments use a three-layer LSTM model with 1150 units in the hidden layer and an embedding of size 400. The loss was averaged over all examples and timesteps. All embedding weights were uniformly initialized in the interval [−0.1, 0.1] and all other weights were initialized between [−1/√H, 1/√H], where H is the hidden size.
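A short sketch of this initialization (using a single stacked nn.LSTM and illustrative layer names; the paper's actual model sizes the first and last layers differently, as noted in Section 4.5):

```python
import math
import torch.nn as nn

emb_size, hidden_size, vocab_size = 400, 1150, 10000
embedding = nn.Embedding(vocab_size, emb_size)
lstm = nn.LSTM(emb_size, hidden_size, num_layers=3)

nn.init.uniform_(embedding.weight, -0.1, 0.1)     # embeddings in [-0.1, 0.1]
bound = 1.0 / math.sqrt(hidden_size)              # all other weights in [-1/sqrt(H), 1/sqrt(H)]
for name, param in lstm.named_parameters():
    if 'weight' in name:
        nn.init.uniform_(param, -bound, bound)
```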
For training the models, we use the NT-ASGD algorithm discussed in the previous section for 750 epochs with L equivalent to one epoch and n = 5. We use a batch size of 80 for WT2 and 40 for PTB. Empirically, we found rel- atively large batch sizes (e.g., 40-80) performed better than smaller sizes (e.g., 10-20) for NT-ASGD. After comple- tion, we run ASGD with T = 0 and hot-started w0 as a ï¬ne-tuning step to further improve the solution. For this ï¬ne-tuning step, we terminate the run using the same non- monotonic criterion detailed in Algorithm 1.
We carry out gradient clipping with maximum norm 0.25 and use an initial learning rate of 30 for all experiments. We use a random BPTT length which is N(70, 5) with probability 0.95 and N(35, 5) with probability 0.05. The values used for dropout on the word vectors, the output between LSTM layers, the output of the final LSTM layer, and embedding dropout were (0.4, 0.3, 0.4, 0.1) respectively. For the weight-dropped LSTM, a dropout of 0.5 was applied to
# 6. Experimental Analysis
We present the single-model perplexity results for both our models (AWD-LSTM) and other competitive models in Ta- ble 1 and 2 for PTB and WT2 respectively. On both data sets we improve the state-of-the-art, with our vanilla LSTM model beating the state of the art by approximately 1 unit on PTB and 0.1 units on WT2.
In comparison to other recent state-of-the-art models, our model uses a vanilla LSTM. Zilly et al. (2016) propose the recurrent highway network, which extends the LSTM to al- low multiple hidden state updates per timestep. Zoph & Le (2016) use a reinforcement learning agent to generate an RNN cell tailored to the speciï¬c task of language model- ing, with the cell far more complex than the LSTM.
Independently of our work, Melis et al. (2017) apply ex- tensive hyperparameter search to an LSTM based lan- guage modeling implementation, analyzing the sensitivity of RNN based language models to hyperparameters. Un- like our work, they use a modiï¬ed LSTM, which caps the input gate it to be min(1 â ft, it), use Adam with β1 = 0 rather than SGD or ASGD, use skip connections between LSTM layers, and use a black box hyperparameter tuner for exploring models and settings. Of particular interest is that their hyperparameters were tuned individually for each data set compared to our work which shared almost all hyperpa- rameters between PTB and WT2, including the embedding and hidden size for both data sets. Due to this, they used less model parameters than our model and found shallow LSTMs of one or two layers worked best for WT2.
Like our work, Melis et al. (2017) ï¬nd that the underly- ing LSTM architecture can be highly effective compared to complex custom architectures when well tuned hyperpa- rameters are used. The approaches used in our work and Melis et al. (2017) may be complementary and would be worth exploration.
# 7. Pointer models
In past work, pointer based attention models have been shown to be highly effective in improving language mod- eling (Merity et al., 2016; Grave et al., 2016). Given such
Model Parameters Validation Test Mikolov & Zweig (2012) - KN-5 Mikolov & Zweig (2012) - KN5 + cache Mikolov & Zweig (2012) - RNN Mikolov & Zweig (2012) - RNN-LDA Mikolov & Zweig (2012) - RNN-LDA + KN-5 + cache Zaremba et al. (2014) - LSTM (medium) Zaremba et al. (2014) - LSTM (large) Gal & Ghahramani (2016) - Variational LSTM (medium) Gal & Ghahramani (2016) - Variational LSTM (medium, MC) Gal & Ghahramani (2016) - Variational LSTM (large) Gal & Ghahramani (2016) - Variational LSTM (large, MC) Kim et al. (2016) - CharCNN Merity et al. (2016) - Pointer Sentinel-LSTM Grave et al. (2016) - LSTM Grave et al. (2016) - LSTM + continuous cache pointer Inan et al. (2016) - Variational LSTM (tied) + augmented loss Inan et al. (2016) - Variational LSTM (tied) + augmented loss Zilly et al. (2016) - Variational RHN (tied) Zoph & Le (2016) - NAS Cell (tied) Zoph & Le (2016) - NAS Cell (tied) Melis et al. (2017) - 4-layer skip connection LSTM (tied) 2Mâ¡ 2Mâ¡ 6Mâ¡ 7Mâ¡ 9Mâ¡ 20M 66M 20M 20M 66M 66M 19M 21M â â 24M 51M 23M 25M 54M 24M â â â â â 86.2 82.2 81.9 ± 0.2 â 77.9 ± 0.3 â â 72.4 â â 75.7 71.1 67.9 â â 60.9 141.2 125.7 124.7 113.7 92.0 82.7 78.4 79.7 ± 0.1 78.6 ± 0.1 75.2 ± 0.2 73.4 ± 0.0 78.9 70.9 82.3 72.1 73.2 68.5 65.4 64.0 62.4 58.3 AWD-LSTM - 3-layer LSTM (tied) 24M 60.0 57.3 AWD-LSTM - 3-layer LSTM (tied) + continuous cache pointer 24M 53.9 52.8
Table 1. Single model perplexity on validation and test sets for the Penn Treebank language modeling task. Parameter numbers with ‡ are estimates based upon our understanding of the model and with reference to Merity et al. (2016). Models noting tied use weight tying on the embedding and softmax weights. Our model, AWD-LSTM, stands for ASGD Weight-Dropped LSTM.
Model Parameters Validation Test Inan et al. (2016) - Variational LSTM (tied) (h = 650) Inan et al. (2016) - Variational LSTM (tied) (h = 650) + augmented loss Grave et al. (2016) - LSTM Grave et al. (2016) - LSTM + continuous cache pointer Melis et al. (2017) - 1-layer LSTM (tied) Melis et al. (2017) - 2-layer skip connection LSTM (tied) 28M 28M â â 24M 24M 92.3 91.5 â â 69.3 69.1 87.7 87.0 99.3 68.9 65.9 65.9 AWD-LSTM - 3-layer LSTM (tied) 33M 68.6 65.8 AWD-LSTM - 3-layer LSTM (tied) + continuous cache pointer 33M 53.8 52.0
Table2. Single model perplexity over WikiText-2. Models noting tied use weight tying on the embedding and softmax weights. Our model, AWD-LSTM, stands for ASGD Weight-Dropped LSTM.
substantial improvements to the underlying neural lan- guage model, it remained an open question as to how ef- fective pointer augmentation may be, especially when im- provements such as weight tying may act in mutually ex- clusive ways.
The neural cache model (Grave et al., 2016) can be added on top of a pre-trained language model at negligible cost. The neural cache stores the previous hidden states in mem- ory cells and then uses a simple convex combination of the probability distributions suggested by the cache and the language model for prediction. The cache model has three hyperparameters: the memory size (window) for the cache, the coefï¬cient of the combination (which determines how the two distributions are mixed), and the ï¬atness of the cache distribution. All of these are tuned on the validation set once a trained language model has been obtained and require no training by themselves, making it quite inexpen- sive to use. The tuned values for these hyperparameters were (2000, 0.1, 1.0) for PTB and (3785, 0.1279, 0.662) for WT2 respectively.
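A schematic sketch of how such a cache distribution can be formed and mixed with the model's softmax; the window size, mixing coefficient `lam` and flatness `theta` correspond to the three hyperparameters above, and this is an illustrative re-implementation rather than the original code:

```python
import torch
import torch.nn.functional as F

def cache_mixed_probs(p_model, h_t, cache_h, cache_words, vocab_size,
                      lam=0.1, theta=1.0):
    """p_model: (vocab,) next-word softmax from the language model.
    h_t: (hidden,) current hidden state; cache_h: (window, hidden) stored
    hidden states; cache_words: (window,) LongTensor of the words that
    followed those states."""
    if cache_h.numel() == 0:
        return p_model
    match = F.softmax(theta * (cache_h @ h_t), dim=0)     # similarity to h_t
    p_cache = torch.zeros(vocab_size).index_add_(0, cache_words, match)
    return (1 - lam) * p_model + lam * p_cache
```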
Word Count âloss Word Count âloss . , of = to in <eos> and the a " that by was ) with for on as at 7632 9857 5816 2884 4048 4178 3690 5251 12481 3381 2540 1365 1252 2279 1101 1176 1215 1485 1338 879 -696.45 -687.49 Meridian -365.21 Churchill - -342.01 Blythe -283.10 -222.94 Sonic Richmond -216.42 -215.38 Starr -209.97 Australian Pagan -149.78 -127.99 Asahi -118.09 -113.05 Hu -107.95 Hedgehog -94.74 Burma 29 -93.01 -87.68 Mississippi -81.55 German -77.05 mill -59.86 <unk> Japanese 11540 161 137 67 97 75 101 74 234 54 39 181 43 29 35 92 72 108 67 33 5047.34 1057.78 849.43 682.15 554.95 543.85 429.18 416.52 366.36 365.19 316.24 295.97 285.58 266.48 263.65 260.88 241.59 241.23 237.76 231.11 Cooke
In Tables 1 and 2, we show that the model further improves the perplexity of the language model by as much as 6 per- plexity points for PTB and 11 points for WT2. While this is smaller than the gains reported in Grave et al. (2016), which used an LSTM without weight tying, this is still a substantial drop. Given the simplicity of the neural cache model, and the lack of any trained components, these re- sults suggest that existing neural language models remain fundamentally lacking, failing to capture long term depen- dencies or remember recently seen words effectively.
Table3. The sum total difference in loss (log perplexity) that a given word results in over all instances in the validation data set of WikiText-2 when the continuous cache pointer is introduced. The right column contains the words with the twenty best im- provements (i.e., where the cache was advantageous), and the left column the twenty most deteriorated (i.e., where the cache was disadvantageous).
likely well suited. These observations motivate the design of a cache framework that is more aware of the relative strengths of the two models.
To understand the impact the pointer had on the model, speciï¬cally the validation set perplexity, we detail the con- tribution that each word has on the cache modelâs overall perplexity in Table 3. We compute the sum of the total dif- ference in the loss function value (i.e., log perplexity) be- tween the LSTM-only and LSTM-with-cache models for the target words in the validation portion of the WikiText-2 data set. We present results for the sum of the difference as opposed to the mean since the latter undesirably overem- phasizes infrequently occurring words for which the cache helps signiï¬cantly and ignores frequently occurring words for which the cache provides modest improvements that cu- mulatively make a strong contribution.
The largest cumulative gain is in improving the handling of <unk> tokens, though this is over 11540 instances. The second best improvement, approximately one ï¬fth the gain given by the <unk> tokens, is for Meridian, yet this word only occurs 161 times. This indicates the cache still helps signiï¬cantly even for relatively rare words, further demon- strated by Churchill, Blythe, or Sonic. The cache is not beneï¬cial when handling frequent word categories, such as punctuation or stop words, for which the language model is
# 8. Model Ablation Analysis
In Table 4, we present the values of validation and test- ing perplexity for different variants of our best-performing LSTM model. Each variant removes a form of optimization or regularization.
The ï¬rst two variants deal with the optimization of the lan- guage models while the rest deal with the regularization. For the model using SGD with learning rate reduced by 2 using the same nonmonotonic fashion, there is a signiï¬- cant degradation in performance. This stands as empirical evidence regarding the beneï¬t of averaging of the iterates. Using a monotonic criterion instead also hampered perfor- mance. Similarly, the removal of the ï¬ne-tuning step ex- pectedly also degrades the performance. This step helps improve the estimate of the minimizer by resetting the memory of the previous experiment. While this process of ï¬ne-tuning can be repeated multiple times, we found little beneï¬t in repeating it more than once.
The removal of regularization strategies paints a similar the inclusion of all of the proposed strategies picture;
Model                            PTB Validation  PTB Test  WT2 Validation  WT2 Test
AWD-LSTM (tied)                  60.0            57.3      68.6            65.8
− fine-tuning                    60.7            58.8      69.1            66.0
− NT-ASGD                        66.3            63.7      73.3            69.7
− variable sequence lengths      61.3            58.9      69.3            66.2
− embedding dropout              65.1            62.7      71.1            68.1
− weight decay                   63.7            61.0      71.9            68.7
− AR/TAR                         62.7            60.3      73.2            70.1
− full sized embedding           68.0            65.6      73.7            70.7
− weight-dropping                71.1            68.9      78.4            74.9
Table4. Model ablations for our best LSTM models reporting results over the validation and test set on Penn Treebank and WikiText-2. Ablations are split into optimization and regularization variants, sorted according to the achieved validation perplexity on WikiText-2.
was pivotal in ensuring state-of-the-art performance. The most extreme perplexity jump was in removing the hidden- to-hidden LSTM regularization provided by the weight- dropped LSTM. Without such hidden-to-hidden regular- ization, perplexity rises substantially, up to 11 points. This is in line with previous work showing the neces- sity of recurrent regularization in state-of-the-art models (Gal & Ghahramani, 2016; Inan et al., 2016).
We also experiment with static sequence lengths which we had hypothesized would lead to inefï¬cient data usage. This also worsens the performance by approximately one per- plexity unit. Next, we experiment with reverting to match- ing the sizes of the embedding vectors and the hidden states. This signiï¬cantly increases the number of param- eters in the network (to 43M in the case of PTB and 70M for WT2) and leads to degradation by almost 8 perplexity points, which we attribute to overï¬tting in the word em- beddings. While this could potentially be improved with more aggressive regularization, the computational over- head involved with substantially larger embeddings likely outweighs any advantages. Finally, we experiment with the removal of embedding dropout, AR/TAR and weight decay. In all of the cases, the model suffers a perplexity increase of 2â6 points which we hypothesize is due to insufï¬cient regularization in the network.
vestigate other regularization strategies including the use of variable BPTT length and achieve a new state-of-the-art perplexity on the PTB and WikiText-2 data sets. Our mod- els outperform custom-built RNN cells and complex reg- ularization strategies that preclude the possibility of using optimized libraries such as the NVIDIA cuDNN LSTM. Finally, we explore the use of a neural cache in conjunc- tion with our proposed model and show that this further improves the performance, thus attaining an even lower state-of-the-art perplexity. While the regularization and op- timization strategies proposed are demonstrated on the task of language modeling, we anticipate that they would be generally applicable across other sequence learning tasks.
# References
Arjovsky, M., Shah, A., and Bengio, Y. Unitary evolution recurrent neural networks. In International Conference on Machine Learning, pp. 1120â1128, 2016.
Ba, J., Kiros, J., and Hinton, G. E. Layer normalization. CoRR, abs/1607.06450, 2016.
Balduzzi, D. and Ghifary, M. Strongly-typed recurrent neu- ral networks. arXiv preprint arXiv:1602.02218, 2016.
# 9. Conclusion
Bottou, L., Curtis, F. E., and Nocedal, J. Optimization methods for large-scale machine learning. arXiv preprint arXiv:1606.04838, 2016.
In this work, we discuss regularization and optimization strategies for neural language models. We propose the weight-dropped LSTM, a strategy that uses a DropConnect mask on the hidden-to-hidden weight matrices, as a means to prevent overï¬tting across the recurrent connections. Fur- ther, we investigate the use of averaged SGD with a non- monontonic trigger for training language models and show that it outperforms SGD by a signiï¬cant margin. We in-
Bradbury, J., Merity, S., Xiong, C., and Socher, R. arXiv preprint Quasi-Recurrent Neural Networks. arXiv:1611.01576, 2016.
Cooijmans, T., Ballas, N., Laurent, C., and Courville, A. C. Recurrent batch normalization. CoRR, abs/1603.09025, 2016.
Duchi, J., Hazan, E., and Singer, Y. Adaptive subgradient methods for online learning and stochastic optimization.
Journal of Machine Learning Research, 12(Jul):2121â 2159, 2011.
E. Moses: Open source toolkit for statistical machine translation. In ACL, 2007.
Földiák, P. Learning invariance from transformation se- quences. Neural Computation, 3(2):194â200, 1991.
Gal, Y. and Ghahramani, Z. A theoretically grounded appli- cation of dropout in recurrent neural networks. In NIPS, 2016.
Krueger, D., Maharaj, T., Kramár, J., Pezeshki, M., Bal- las, N., Ke, N., Goyal, A., Bengio, Y., Larochelle, H., Courville, A., et al. Zoneout: Regularizing RNNss by randomly preserving hidden activations. arXiv preprint arXiv:1606.01305, 2016.
Goyal, P., Dollár, P., Girshick, R., Noordhuis, P., Wesolowski, L., Kyrola, A., Tulloch, A., Jia, Y., and He, K. Accurate, large minibatch sgd: Training imagenet in 1 hour. arXiv preprint arXiv:1706.02677, 2017.
Luciw, M. and Schmidhuber, J. Low complexity proto- value function learning from sensory observations with incremental slow feature analysis. Artiï¬cial Neural Net- works and Machine LearningâICANN 2012, pp. 279â 287, 2012.
Grave, E., Joulin, A., and Usunier, N. Improving neural language models with a continuous cache. arXiv preprint arXiv:1612.04426, 2016.
Mandt, S., Hoffman, M. D., and Blei, D. M. Stochastic gra- dient descent as approximate bayesian inference. arXiv preprint arXiv:1704.04289, 2017.
Hardt, M., Recht, B., and Singer, Y. Train faster, generalize better: Stability of stochastic gradient descent. arXiv preprint arXiv:1509.01240, 2015.
Hinton, G. E. Connectionist learning procedures. Artiï¬cial intelligence, 40(1-3):185â234, 1989.
Inan, H., Khosravi, K., and Socher, R. Tying Word Vectors and Word Classiï¬ers: A Loss Framework for Language Modeling. arXiv preprint arXiv:1611.01462, 2016.
Ioffe, S. and Szegedy, C. Batch normalization: Accelerat- ing deep network training by reducing internal covariate shift. In ICML, 2015.
Melis, G., Dyer, C., and Blunsom, P. On the State of the Art of Evaluation in Neural Language Models. arXiv preprint arXiv:1707.05589, 2017.
Merity, S., Xiong, C., Bradbury, J., and Socher, R. arXiv preprint Pointer Sentinel Mixture Models. arXiv:1609.07843, 2016.
Merity, S., McCann, B., and Socher, R. Revisiting acti- vation regularization for language rnns. arXiv preprint arXiv:1708.01009, 2017.
Jing, L., Shen, Y., DubËcek, T., Peurifoy, J., Skirlo, S., Tegmark, M., and SoljaËci´c, M. Tunable Efï¬cient Uni- tary Neural Networks (EUNN) and their application to RNN. arXiv preprint arXiv:1612.05231, 2016.
Mikolov, T. and Zweig, G. Context dependent recurrent neural network language model. SLT, 12:234â239, 2012.
Mikolov, T., Karaï¬Ã¡t, M., Burget, L., Cernocký, J., and Khudanpur, S. Recurrent neural network based language model. In INTERSPEECH, 2010.
Jonschkowski, R. and Brock, O. Learning state represen- tations with robotic priors. Auton. Robots, 39:407â428, 2015.
Keskar, N. and Saon, G. A nonmonotone learning rate strategy for sgd training of deep neural networks. In Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on, pp. 4974â4978. IEEE, 2015.
Panageas, I. and Piliouras, G. Gradient descent converges to minimizers: The case of non-isolated critical points. CoRR, abs/1605.00405, 2016.
Polyak, B. and Juditsky, A. Acceleration of stochastic ap- proximation by averaging. SIAM Journal on Control and Optimization, 30(4):838â855, 1992.
Kim, Y., Jernite, Y., Sontag, D., and Rush, A. M. Character- aware neural language models. In Thirtieth AAAI Con- ference on Artiï¬cial Intelligence, 2016.
Press, O. and Wolf, L. Using the output embed- arXiv preprint ding to improve language models. arXiv:1608.05859, 2016.
Kingma, D. and Ba, J. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Semeniuta, S., Severyn, A., and Barth, E. Recurrent dropout without memory loss. In COLING, 2016.
Koehn, P., Hoang, H., Birch, A., Callison-Burch, C., Fed- erico, M., Bertoldi, N., Cowan, B., Shen, W., Moran, C., Zens, R., Dyer, C., Bojar, O., Constantin, A., and Herbst,
Seo, M., Min, S., Farhadi, A., and Hajishirzi, H. Query- arXiv Reduction Networks for Question Answering. preprint arXiv:1606.04582, 2016.
Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., and Salakhutdinov, R. Dropout: a simple way to prevent neural networks from overï¬tting. Journal of Machine Learning Research, 15:1929â1958, 2014.
Sutskever, I., Martens, J., Dahl, G., and Hinton, G. On the importance of initialization and momentum in deep learning. In International conference on machine learn- ing, pp. 1139â1147, 2013.
Tallec, C. and Ollivier, Y. Unbiasing truncated backprop- agation through time. arXiv preprint arXiv:1705.08209, 2017.
Tieleman, T. and Hinton, G. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magni- tude. COURSERA: Neural networks for machine learn- ing, 4(2):26â31, 2012.
Wan, L., Zeiler, M., Zhang, S., LeCun, Y, and Fergus, R. Regularization of neural networks using dropconnect. In Proceedings of the 30th international conference on ma- chine learning (ICML-13), pp. 1058â1066, 2013.
Wilson, A. C, Roelofs, R., Stern, M., Srebro, N., and Recht, B. The marginal value of adaptive gradient methods in machine learning. arXiv preprint arXiv:1705.08292, 2017.
Wisdom, S., Powers, T., Hershey, J., Le Roux, J., and Atlas, L. Full-capacity unitary recurrent neural networks. In Advances in Neural Information Processing Systems, pp. 4880â4888, 2016.
Zaremba, W., Sutskever, I., and Vinyals, O. Recur- arXiv preprint rent neural network regularization. arXiv:1409.2329, 2014.
Zilly, J. G., Srivastava, R. K., KoutnÃk, J., and Schmid- huber, J. Recurrent highway networks. arXiv preprint arXiv:1607.03474, 2016.
Zoph, B. and Le, Q. V. Neural architecture search with re- inforcement learning. arXiv preprint arXiv:1611.01578, 2016. | {
"id": "1611.01462"
} |
1708.00489 | Active Learning for Convolutional Neural Networks: A Core-Set Approach | Convolutional neural networks (CNNs) have been successfully applied to many
recognition and learning tasks using a universal recipe; training a deep model
on a very large dataset of supervised examples. However, this approach is
rather restrictive in practice since collecting a large set of labeled images
is very expensive. One way to ease this problem is coming up with smart ways
for choosing images to be labelled from a very large collection (ie. active
learning).
Our empirical study suggests that many of the active learning heuristics in
the literature are not effective when applied to CNNs in batch setting.
Inspired by these limitations, we define the problem of active learning as
core-set selection, ie. choosing set of points such that a model learned over
the selected subset is competitive for the remaining data points. We further
present a theoretical result characterizing the performance of any selected
subset using the geometry of the datapoints. As an active learning algorithm,
we choose the subset which is expected to yield best result according to our
characterization. Our experiments show that the proposed method significantly
outperforms existing approaches in image classification experiments by a large
margin. | http://arxiv.org/pdf/1708.00489 | Ozan Sener, Silvio Savarese | stat.ML, cs.CV, cs.LG | ICLR 2018 Paper | null | stat.ML | 20170801 | 20180601 | 8 1 0 2
n u J 1 ] L M . t a t s [ 4 v 9 8 4 0 0 . 8 0 7 1 : v i X r a
Published as a conference paper at ICLR 2018
# ACTIVE LEARNING FOR CONVOLUTIONAL NEURAL NETWORKS: A CORE-SET APPROACH
Ozan Sener∗ Intel Labs ozan.sener@intel.com
Silvio Savarese Stanford University ssilvio@stanford.edu
# ABSTRACT
Convolutional neural networks (CNNs) have been successfully applied to many recognition and learning tasks using a universal recipe; training a deep model on a very large dataset of supervised examples. However, this approach is rather restrictive in practice since collecting a large set of labeled images is very expensive. One way to ease this problem is coming up with smart ways for choosing images to be labelled from a very large collection (i.e. active learning). Our empirical study suggests that many of the active learning heuristics in the literature are not effective when applied to CNNs in batch setting. Inspired by these limitations, we deï¬ne the problem of active learning as core-set selection, i.e. choosing set of points such that a model learned over the selected subset is competitive for the remaining data points. We further present a theoretical result characterizing the performance of any selected subset using the geometry of the datapoints. As an active learning algorithm, we choose the subset which is expected to yield best result according to our characterization. Our experiments show that the proposed method signiï¬cantly outperforms existing approaches in image classiï¬cation experiments by a large margin.
1
# INTRODUCTION
Deep convolutional neural networks (CNNs) have shown unprecedented success in many areas of research in computer vision and pattern recognition, such as image classiï¬cation, object detection, and scene segmentation. Although CNNs are universally successful in many tasks, they have a major drawback; they need a very large amount of labeled data to be able to learn their large number of parameters. More importantly, it is almost always better to have more data since the accuracy of CNNs is often not saturated with increasing dataset size. Hence, there is a constant desire to collect more and more data. Although this a desired behavior from an algorithmic perspective (higher representative power is typically better), labeling a dataset is a time consuming and an expensive task. These practical considerations raise a critical question: âwhat is the optimal way to choose data points to label such that the highest accuracy can be obtained given a ï¬xed labeling budget.â Active learning is one of the common paradigms to address this question.
The goal of active learning is to ï¬nd effective ways to choose data points to label, from a pool of unlabeled data points, in order to maximize the accuracy. Although it is not possible to obtain a universally good active learning strategy (Dasgupta, 2004), there exist many heuristics (Settles, 2010) which have been proven to be effective in practice. Active learning is typically an iterative process in which a model is learned at each iteration and a set of points is chosen to be labelled from a pool of unlabelled points using these aforementioned heuristics. We experiment with many of these heuristics in this paper and ï¬nd them not effective when applied to CNNs. We argue that the main factor behind this ineffectiveness is the correlation caused via batch acquisition/sampling. In the classical setting, the active learning algorithms typically choose a single point at each iteration; however, this is not feasible for CNNs since i) a single point is likely to have no statistically signiï¬cant impact on the accuracy due to the local optimization methods, and ii) each iteration requires a full training until convergence which makes it intractable to query labels one-by-one. Hence, it is necessary to query
∗Work is completed while author is at Stanford University.
labels for a large subset at each iteration and it results in correlated samples even for moderately small subset sizes.
In order to tailor an active learning method for the batch sampling case, we decided to deï¬ne the active learning as core-set selection problem. Core-set selection problem aims to ï¬nd a small subset given a large labeled dataset such that a model learned over the small subset is competitive over the whole dataset. Since we have no labels available, we perform the core-set selection without using the labels. In order to attack the unlabeled core-set problem for CNNs, we provide a rigorous bound between an average loss over any given subset of the dataset and the remaining data points via the geometry of the data points. As an active learning algorithm, we try to choose a subset such that this bound is minimized. Moreover, minimization of this bound turns out to be equivalent to the k-Center problem (Wolf, 2011) and we adopt an efï¬cient approximate solution to this combinatorial optimization problem. We further study the behavior of our proposed algorithm empirically for the problem of image classiï¬cation using three different datasets. Our empirical analysis demonstrates state-of-the-art performance by a large margin.
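For intuition, the classical greedy 2-approximation for the k-Center objective — repeatedly picking the unlabeled point farthest from the points already selected — can be sketched as follows. This is a simplified illustration using raw feature distances over a random pool; the exact selection procedure used in this paper is detailed in later sections.

```python
import numpy as np

def k_center_greedy(features: np.ndarray, initial: list, budget: int) -> list:
    """features: (n, d) representations of all points; initial: indices of the
    already-labeled pool. Greedily add the point farthest from current centers."""
    selected = list(initial)
    # distance from every point to its nearest selected center
    dist = np.min(np.linalg.norm(features[:, None, :] - features[None, selected, :],
                                 axis=2), axis=1)
    for _ in range(budget):
        idx = int(np.argmax(dist))             # farthest point becomes a new center
        selected.append(idx)
        new_d = np.linalg.norm(features - features[idx], axis=1)
        dist = np.minimum(dist, new_d)
    return selected

pool = np.random.randn(1000, 64)               # e.g. CNN activations of the pool
queried = k_center_greedy(pool, initial=[0, 1, 2], budget=10)
```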
# 2 RELATED WORK
We discuss the related work in the following categories separately. Brieï¬y, our work is different from existing approaches in that i) it deï¬nes the active learning problem as core-set selection, ii) we consider both fully supervised and weakly supervised cases, and iii) we rigorously address the core-set selection problem directly for CNNs with no extra assumption.
Active Learning Active learning has been widely studied and most of the early work can be found in the classical survey of Settles (2010). It covers acquisition functions such as information theoretical methods (MacKay, 1992), ensemble approaches (McCallumzy & Nigamy, 1998; Freund et al., 1997) and uncertainty based methods (Tong & Koller, 2001; Joshi et al., 2009; Li & Guo, 2013).
Bayesian active learning methods typically use a non-parametric model like Gaussian process to estimate the expected improvement by each query (Kapoor et al., 2007) or the expected error after a set of queries (Roy & McCallum, 2001). These approaches are not directly applicable to large CNNs since they do not scale to large-scale datasets. A recent approach by Gal & Ghahramani (2016) shows an equivalence between dropout and approximate Bayesian inference enabling the application of Bayesian methods to deep learning. Although Bayesian active learning has been shown to be effective for small datasets (Gal et al., 2017), our empirical analysis suggests that they do not scale to large-scale datasets because of batch sampling.
One important class is that of uncertainty based methods, which try to ï¬nd hard examples using heuristics like highest entropy (Joshi et al., 2009), and geometric distance to decision boundaries (Tong & Koller, 2001; Brinker, 2003). Our empirical analysis ï¬nd them not to be effective for CNNs.
There are recent optimization based approaches which can trade-off uncertainty and diversity to obtain a diverse set of hard examples in batch mode active learning setting. Both Elhamifar et al. (2013) and Yang et al. (2015) design a discrete optimization problem for this purpose and use its convex surrogate. Similarly, Guo (2010) cast a similar problem as matrix partitioning. However, the optimization algorithms proposed in these papers use n2 variables where n is the number of data points. Hence, they do not scale to large datasets. There are also many pool based active learning algorithms designed for the speciï¬c class of machine learning algorithms like k-nearest neighbors and naive Bayes (Wei et al., 2015), logistic regression Hoi et al. (2006); Guo & Schuurmans (2008), and linear regression with Gaussian noise (Yu et al., 2006). Even in the algorithm agnostic case, one can design a set-cover algorithm to cover the hypothesis space using sub-modularity (Guillory & Bilmes, 2010; Golovin & Krause, 2011). On the other hand, Demir et al. (2011) uses a heuristic to ï¬rst ï¬lter the pool based on uncertainty and then choose point to label using diversity. Our algorithm can be considered to be in this class; however, we do not use any uncertainty information. Our algorithm is also the ï¬rst one which is applied to the CNNs. Most similar to ours are (Joshiy et al., 2010) and (Wang & Ye, 2015). Joshiy et al. (2010) uses a similar optimization problem. However, they offer no theoretical justiï¬cation or analysis. Wang & Ye (2015) proposes to use empirical risk minimization like us; however, they try to minimize the difference between two distributions (maximum mean discrepancy between iid. samples from the dataset and the actively selected samples) instead of
core-set loss. Moreover, both algorithms are also not experimented with CNNs. In our experimental study, we compare with (Wang & Ye, 2015).
Recently, a discrete optimization based method (Berlind & Urner, 2015) which is similar to ours has been presented for k-NN type algorithms in the domain shift setting. Although our theoretical analysis borrows some techniques from them, their results are only valid for k-NNs.
Active learning algorithms for CNNs are also recently presented in (Wang et al., 2016; Stark et al., 2015). Wang et al. (2016) propose an heuristic based algorithm which directly assigns labels to the data points with high conï¬dence and queries labels for the ones with low conï¬dence. Moreover, Stark et al. (2015) speciï¬cally targets recognizing CAPTCHA images. Although their results are promising for CAPTCHA recognition, their method is not effective for image classiï¬cation. We discuss limitations of both approaches in Section 5.
On the theoretical side, it is shown that greedy active learning is not possible in algorithm and data agnostic case (Dasgupta, 2005). However, there are data dependent results showing that it is indeed possible to obtain a query strategy which has better sample complexity than querying all points. These results either use assumptions about data-dependent realizability of the hypothesis space like (Gonen et al., 2013) or a data dependent measure of the concept space called disagreement coefï¬cient (Hanneke, 2007). It is also possible to perform active learning in a batch setting using the greedy algorithm via importance sampling (Ganti & Gray, 2012). Although the aforementioned algorithms enjoy theoretical guarantees, they do not apply to large-scale problems.
Core-Set Selection The closest literature to our work is the problem of core-set selection since we deï¬ne active learning as a core-set selection problem. This problem considers a fully labeled dataset and tries to choose a subset of it such that the model trained on the selected subset will perform as closely as possible to the model trained on the entire dataset. For speciï¬c learning algorithms, there are methods like core-sets for SVM (Tsang et al., 2005) and core-sets for k-Means and k-Medians (Har-Peled & Kushal, 2005). However, we are not aware of such a method for CNNs.
The most similar algorithm to ours is the unsupervised subset selection algorithm in (Wei et al., 2013). It uses a facility location problem to ï¬nd a diverse cover for the dataset. Our algorithm differs in that it uses a slightly different formulation of facility location problem. Instead of the min-sum, we use the minimax (Wolf, 2011) form. More importantly, we apply this algorithm for the ï¬rst time to the problem of active learning and provide theoretical guarantees for CNNs.
Weakly-Supervised Deep Learning Our paper is also related to semi-supervised deep learning since we experiment with active learning in both the fully-supervised and weakly-supervised settings. One of the early weakly-supervised convolutional neural network algorithms was Ladder networks (Rasmus et al., 2015). Recently, we have seen adversarial methods which can learn a data distribution as a result of a two-player non-cooperative game (Salimans et al., 2016; Goodfellow et al., 2014; Radford et al., 2015). These methods are further extended to feature learning (Dumoulin et al., 2016; Donahue et al., 2016). We use Ladder networks in our experiments; however, our method is agnostic to the weakly-supervised learning algorithm choice and can utilize any model.
# 3 PROBLEM DEFINITION
In this section, we formally define the problem of active learning in the batch setting and set up the notation for the rest of the paper. We are interested in a C class classification problem defined over a compact space X and a label space Y = {1, . . . , C}. We also consider a loss function l(·, ·; w) : X × Y → R parametrized over the hypothesis class (w), e.g. parameters of the deep learning algorithm. We further assume class-specific regression functions η_c(x) = p(y = c|x) to be λ^η-Lipschitz continuous for all c.
We consider a large collection of data points which are sampled i.i.d. over the space Z = X × Y as {x_i, y_i}_{i∈[n]} ~ p_Z where [n] = {1, . . . , n}. We further consider an initial pool of data points chosen uniformly at random as s^0 = {s^0(j) ∈ [n]}_{j∈[m]}.
An active learning algorithm only has access to {x_i}_{i∈[n]} and {y_{s(j)}}_{j∈[m]}. In other words, it can only see the labels of the points in the initial sub-sampled pool. It is also given a budget b of queries
to ask an oracle, and a learning algorithm As which outputs a set of parameters w given a labelled set s. The active learning with a pool problem can simply be deï¬ned as
min_{s^1: |s^1| ≤ b}  E_{x,y~p_Z} [ l(x, y; A_{s^0 ∪ s^1}) ]   (1)
In other words, an active learning algorithm can choose b extra points and get them labelled by an oracle to minimize the future expected loss. There are a few differences between our formulation and the classical definition of active learning. Classical methods consider the case in which the budget is 1 (b = 1), but a single point has negligible effect in a deep learning regime; hence we consider the batch case. It is also very common to consider multiple rounds of this game. We also follow the multiple round formulation with a myopic approach by solving the single round of labelling as:
min_{s^{k+1}: |s^{k+1}| ≤ b}  E_{x,y~p_Z} [ l(x, y; A_{s^0 ∪ ... ∪ s^{k+1}}) ]   (2)
We only discuss the ï¬rst iteration where k = 0 for brevity although we apply it over multiple rounds.
At each iteration, an active learning algorithm has two stages: 1. identifying a set of data-points and presenting them to an oracle to be labelled, and 2. training a classiï¬er using both the new and the previously labeled data-points. The second stage (training the classiï¬er) can be done in a fully or weakly-supervised manner. Fully-supervised is the case where training the classiï¬er is done using only the labeled data-points. Weakly-supervised is the case where training also utilizes the points which are not labelled yet. Although the existing literature only focuses on the active learning for fully-supervised models, we consider both cases and experiment on both.
# 4 METHOD
4.1 ACTIVE LEARNING AS A SET COVER
In the classical active learning setting, the algorithm acquires labels one by one by querying an oracle (i.e. b = 1). Unfortunately, this is not feasible when training CNNs since i) a single point will not have a statistically significant impact on the model due to the local optimization algorithms, and ii) it is infeasible to train as many models as the number of points since many practical problems of interest are very large-scale. Hence, we focus on the batch active learning problem in which the active learning algorithm chooses a moderately large set of points to be labelled by an oracle at each iteration.
In order to design an active learning strategy which is effective in batch setting, we consider the following upper bound of the active learning loss we formally deï¬ned in (1):
E_{x,y~p_Z}[ l(x, y; A_s) ] ≤ | E_{x,y~p_Z}[ l(x, y; A_s) ] − (1/n) Σ_{i∈[n]} l(x_i, y_i; A_s) |   (Generalization Error)
+ (1/|s|) Σ_{j∈s} l(x_j, y_j; A_s)   (Training Error)
+ | (1/n) Σ_{i∈[n]} l(x_i, y_i; A_s) − (1/|s|) Σ_{j∈s} l(x_j, y_j; A_s) |   (Core-Set Loss)   (3)
The quantity we are interested in is the population risk of the model learned using a small labelled subset (s). The population risk is controlled by the training error of the model on the labelled subset, the generalization error over the full dataset ([n]) and a term we define as the core-set loss. Core-set loss is simply the difference between the average empirical loss over the set of points for which we have labels and the average empirical loss over the entire dataset including unlabelled points. Empirically, it is widely observed that CNNs are highly expressive, leading to very low training error, and they typically generalize well for various visual problems. Moreover, the generalization error of CNNs has also been theoretically studied and shown to be bounded by Xu & Mannor (2012). Hence, the critical part for active learning is the core-set loss. Following this observation, we re-define the active learning problem as:
min_{s^1: |s^1| ≤ b}  | (1/n) Σ_{i∈[n]} l(x_i, y_i; A_{s^0 ∪ s^1}) − (1/|s^0 ∪ s^1|) Σ_{j∈s^0 ∪ s^1} l(x_j, y_j; A_{s^0 ∪ s^1}) |   (4)
Figure 1: Visualization of Theorem 1. Consider the set of selected points s and the points in the remainder of the dataset [n] \ s; our result shows that if s is a δ_s cover of the dataset, then |(1/n) Σ_{i∈[n]} l(x_i, y_i; A_s) − (1/|s|) Σ_{j∈s} l(x_j, y_j; A_s)| ≤ O(δ_s) + O(√(1/n)).
Informally, given the initial labelled set (s0) and the budget (b), we are trying to ï¬nd a set of points to query labels (s1) such that when we learn a model, the performance of the model on the labelled subset and that on the whole dataset will be as close as possible.
4.2 CORE-SETS FOR CNNS
The optimization objective we define in (4) is not directly computable since we do not have access to all the labels (i.e. [n] \ (s^0 ∪ s^1) is unlabelled). Hence, in this section we give an upper bound for this objective function which we can optimize.
We start with presenting this bound for any loss function which is Lipschitz for a fixed true label y and parameters w, and then show that loss functions of CNNs with ReLU non-linearities satisfy this property. We also rely on the zero training error assumption. Although the zero training error is not an entirely realistic assumption, our experiments suggest that the resulting upper bound is very effective. We state the following theorem; Theorem 1. Given n i.i.d. samples drawn from p_Z as {x_i, y_i}_{i∈[n]}, and a set of points s. If the loss function l(·, y, w) is λ^l-Lipschitz continuous for all y, w and bounded by L, the regression function is λ^η-Lipschitz, s is a δ_s cover of {x_i, y_i}_{i∈[n]}, and l(x_{s(j)}, y_{s(j)}; A_s) = 0 ∀j ∈ [m]; then with probability at least 1 − γ,
| (1/n) Σ_{i∈[n]} l(x_i, y_i; A_s) − (1/|s|) Σ_{j∈s} l(x_j, y_j; A_s) | ≤ δ(λ^l + λ^η L C) + √( L² log(1/γ) / (2n) )
Since we assume zero training error for the core-set, the core-set loss is equal to the average error over the entire dataset, i.e. |(1/n) Σ_{i∈[n]} l(x_i, y_i; A_s) − (1/|s|) Σ_{j∈s} l(x_j, y_j; A_s)| = (1/n) Σ_{i∈[n]} l(x_i, y_i; A_s). We state the theorem in this form to be consistent with (3). We visualize this theorem in Figure 1 and defer its proof to the appendix. In this theorem, "a set s is a δ cover of a set s*" means that a set of balls with radius δ centered at each member of s can cover the entire s*. Informally, this theorem suggests that we can bound the core-set loss with the covering radius and a term which goes to zero at a rate that depends solely on n. This is an interesting result since this bound does not depend on the number of labelled points. In other words, a provided label does not help the core-set loss unless it decreases the covering radius.
In order to show that this bound applies to CNNs, we prove the Lipschitz-continuity of the loss function of a CNN with respect to the input image for a fixed true label with the following lemma, where max-pool and rectified linear units are the non-linearities and the loss is defined as the l_2
distance between the desired class probabilities and the soft-max outputs. CNNs are typically used with cross-entropy loss for classification problems in the literature. Indeed, we also perform our experiments using the cross-entropy loss although we use l_2 loss in our theoretical study. Although our theoretical study does not extend to cross-entropy loss, our experiments suggest that the resulting algorithm is very effective for cross-entropy loss. Lemma 1. The loss function defined as the 2-norm between the class probabilities and the softmax output of a convolutional neural network with n_c convolutional (with max-pool and ReLU) and n_fc fully connected layers defined over C classes is a (√(C−1)/C) α^{n_c + n_fc}-Lipschitz function of the input for fixed class probabilities and network parameters.
Here, α is the maximum sum of input weights per neuron (see appendix for formal deï¬nition). Although it is in general unbounded, it can be made arbitrarily small without changing the loss function behavior (i.e. keeping the label of any data point s unchanged). We defer the proof to the appendix and conclude that CNNs enjoy the bound we presented in Theorem 1.
In order to computationally perform active learning, we use this upper bound. In other words, the practical problem of interest becomes min_{s^1: |s^1| ≤ b} δ_{s^0 ∪ s^1}. This problem is equivalent to the k-Center problem (also called the min-max facility location problem) (Wolf, 2011). In the next section, we explain how we solve the k-Center problem in practice using a greedy approximation.
# 4.3 SOLVING THE K-CENTER PROBLEM
We have so far provided an upper bound for the loss function of the core-set selection problem and showed that minimizing it is equivalent to the k-Center problem (minimax facility location (Wolf, 2011)), which can intuitively be defined as follows: choose b center points such that the largest distance between a data point and its nearest center is minimized. Formally, we are trying to solve:
min_{s^1: |s^1| ≤ b}  max_i  min_{j ∈ s^1 ∪ s^0}  Δ(x_i, x_j)   (5)

Algorithm 1 k-Center-Greedy
Input: data x_i, existing pool s^0 and a budget b
Initialize s = s^0
repeat
  u = arg max_{i ∈ [n] \ s} min_{j ∈ s} Δ(x_i, x_j)
  s = s ∪ {u}
until |s| = b + |s^0|
return s \ s^0
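For concreteness, a minimal NumPy sketch of the k-Center-Greedy routine in Algorithm 1 is given below; the function name, the Euclidean choice for Δ, and the in-memory distance computation are illustrative assumptions rather than the authors' released implementation.

```python
import numpy as np

def k_center_greedy(features, labeled_idx, budget):
    """Greedy 2-OPT selection for the k-Center problem (Algorithm 1).

    features    : (n, d) array of per-point embeddings (e.g. final FC activations)
    labeled_idx : list of indices forming the initial labeled pool s^0
    budget      : number b of new points to select
    Returns the indices of the newly selected points (s \ s^0).
    """
    selected = list(labeled_idx)
    # Distance from every point to its nearest currently selected center.
    min_dist = np.min(
        np.linalg.norm(features[:, None, :] - features[None, selected, :], axis=2),
        axis=1,
    )
    new_points = []
    for _ in range(budget):
        # u = arg max_i min_{j in s} Delta(x_i, x_j): the point farthest from all centers.
        u = int(np.argmax(min_dist))
        new_points.append(u)
        selected.append(u)
        # Adding u as a center can only shrink nearest-center distances.
        min_dist = np.minimum(min_dist, np.linalg.norm(features - features[u], axis=1))
    return new_points
```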
Unfortunately this problem is NP-Hard (Cook et al., 1998). However, it is possible to obtain a 2-OPT solution efficiently using a greedy approach shown in Algorithm 1. If OPT = min_{s^1} max_i min_{j ∈ s^1 ∪ s^0} Δ(x_i, x_j), the greedy algorithm shown in Algorithm 1 is proven to have a solution (s^1) such that max_i min_{j ∈ s^1 ∪ s^0} Δ(x_i, x_j) ≤ 2 × OPT. Although the greedy algorithm gives a good initialization, in practice we can improve the 2-OPT solution by iteratively querying upper bounds on the optimal value. In other words, we can design an algorithm which decides if OPT ≤ δ. In order to do so, we define a mixed integer program (MIP) parametrized by δ such that its feasibility indicates min_{s^1} max_i min_{j ∈ s^1 ∪ s^0} Δ(x_i, x_j) ≤ δ. A straight-forward algorithm would be to use this MIP as a sub-routine and perform a binary search between the result of the greedy algorithm and its half, since the optimal solution is guaranteed to be included in that range. While constructing this MIP, we also try to handle one of the weaknesses of the k-Center algorithm, namely robustness. To make the k-Center problem robust, we assume an upper limit on the number of outliers Ξ such that our algorithm can choose not to cover at most Ξ unsupervised data points. This mixed integer program can be written as:
Feasible(b, s^0, δ, Ξ):
  Σ_j u_j = |s^0| + b,
  Σ_j ω_{i,j} = 1   ∀i,
  ω_{i,j} ≤ u_j   ∀i, j,
  u_i = 1   ∀i ∈ s^0,
  Σ_{i,j} ξ_{i,j} ≤ Ξ,
  ω_{i,j} = ξ_{i,j}   ∀i, j | Δ(x_i, x_j) > δ,
  u_i ∈ {0, 1}   ∀i.   (6)
In this formulation, u_i is 1 if the i-th data point is chosen as a center, ω_{i,j} is 1 if the i-th point is covered by the j-th point, and ξ_{i,j} is 1 if the i-th point is an outlier and covered by the j-th point without the δ
constraint, and 0 otherwise. The variables are binary: u_i, ω_{i,j}, ξ_{i,j} ∈ {0, 1}. We further visualize these variables in a diagram in Figure 2, and give the details of the method in Algorithm 2.
# Algorithm 2 Robust k-Center
Input: data x_i, existing pool s^0, budget b and outlier bound Ξ
Initialize s_g = k-Center-Greedy(x_i, s^0, b)
δ_{2-OPT} = max_j min_{i ∈ s_g} Δ(x_i, x_j)
lb = δ_{2-OPT} / 2, ub = δ_{2-OPT}
repeat
  if Feasible(b, s^0, (lb + ub)/2, Ξ) then
    ub = max_{i,j | Δ(x_i, x_j) ≤ (lb + ub)/2} Δ(x_i, x_j)
  else
    lb = min_{i,j | Δ(x_i, x_j) ≥ (lb + ub)/2} Δ(x_i, x_j)
  end if
until ub = lb
return {i s.t. u_i = 1}
Figure 2: Visualizations of the variables. In this solution, the 4th node is chosen as a center and nodes 0, 1, 3 are in a δ ball around it. The 2nd node is marked as an outlier.
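The binary search of Algorithm 2 can be sketched as below; it reuses the `k_center_greedy` sketch above, and `feasible` stands in for the MIP feasibility check of (6) (e.g. implemented with an off-the-shelf solver), so its exact signature is an assumption made for illustration.

```python
import numpy as np

def robust_k_center(features, labeled_idx, budget, outlier_bound, feasible):
    """Robust k-Center (Algorithm 2): tighten the greedy 2-OPT radius by binary search.

    feasible(b, s0, delta, xi) is assumed to solve the MIP in (6) and return
    (is_feasible, selected_centers); it is a hypothetical interface, not a real API.
    """
    new_points = k_center_greedy(features, labeled_idx, budget)
    centers = list(labeled_idx) + new_points
    pairwise = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=2)
    delta_2opt = pairwise[:, centers].min(axis=1).max()   # greedy covering radius
    candidates = np.unique(pairwise)                       # attainable radii

    lb, ub = delta_2opt / 2.0, delta_2opt
    best = new_points
    while lb < ub:
        mid = (lb + ub) / 2.0
        ok, chosen = feasible(budget, labeled_idx, mid, outlier_bound)
        if ok:
            best = chosen
            ub = candidates[candidates <= mid].max()   # largest attainable radius <= mid
        else:
            lb = candidates[candidates >= mid].min()   # smallest attainable radius >= mid
    return best
```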
4.4 IMPLEMENTATION DETAILS
One of the critical design choices is the distance metric Δ(·, ·). We use the l_2 distance between activations of the final fully-connected layer as the distance. For weakly-supervised learning, we used Ladder networks (Rasmus et al., 2015) and for all experiments we used VGG-16 (Simonyan & Zisserman, 2014) as the CNN architecture. We initialized all convolutional filters according to He et al. (2016). We optimized all models using RMSProp with a learning rate of 1e−3 using Tensorflow (Abadi et al., 2016). We train CNNs from scratch after each iteration.
We used the Gurobi (Inc., 2016) framework for checking feasibility of the MIP defined in (6). As an upper bound on outliers, we used Ξ = 1e−4 × n where n is the number of unlabelled points.
# 5 EXPERIMENTAL RESULTS
We tested our algorithm on the problem of classification using three different datasets. We performed experiments on the CIFAR (Krizhevsky & Hinton, 2009) dataset for image classification and on the SVHN (Netzer et al., 2011) dataset for digit classification. The CIFAR (Krizhevsky & Hinton, 2009) dataset has two tasks; one coarse-grained over 10 classes and one fine-grained over 100 classes. We performed experiments on both.
We compare our method with the following baselines: i) Random: Choosing the points to be labelled uniformly at random from the unlabelled pool. ii) Best Empirical Uncertainty: Following the empirical setup in (Gal et al., 2017), we perform active learning using max-entropy, BALD and Variation Ratios, treating soft-max outputs as probabilities. We only report the best performing one for each dataset since they perform similar to each other. iii) Deep Bayesian Active Learning (DBAL) (Gal et al., 2017): We perform Monte Carlo dropout to obtain improved uncertainty measures and report only the best performing acquisition function among max-entropy, BALD and Variation Ratios for each dataset. iv) Best Oracle Uncertainty: We also report a best performing oracle algorithm which uses the label information for the entire dataset. We replace the uncertainty with l(x_i, y_i, A_{s^0}) for all unlabelled examples. We sample the queries from the normalized form of this function by setting the probability of choosing the i-th point to be queried as p_i = l(x_i, y_i, A_{s^0}) / Σ_j l(x_j, y_j, A_{s^0}). v) k-Median: Choosing the points to be labelled as the cluster centers of the k-Median (k is equal to the budget) algorithm. vi) Batch Mode Discriminative-Representative Active Learning (BMDR) (Wang & Ye, 2015): An ERM based approach which uses uncertainty and minimizes MMD between i.i.d. samples from the dataset and the actively chosen points. vii) CEAL (Wang et al., 2016): CEAL (Wang et al., 2016) is a weakly-supervised active learning method proposed specifically for CNNs; we include it in the weakly-supervised analysis.
[Figure panels: classification accuracy (%) vs. number of labelled images (ratio) on CIFAR-10, CIFAR-100 and SVHN, comparing Random, Empirical-Unc., Oracle-Unc., DBAL, BMDR, CEAL, k-Median and Our Method.]
Figure 3: Results on Active Learning for Weakly-Supervised Model (error bars are std-dev)
[Figure panels: classification accuracy (%) vs. number of labelled images (ratio) on CIFAR-10, CIFAR-100 and SVHN, comparing Random, Empirical-Unc., Oracle-Unc., DBAL, BMDR, k-Median and Our Method.]
Figure 4: Results on Active Learning for Fully-Supervised Model (error bars are std-dev)
We conducted experiments on active learning for fully-supervised models as well as active learning for weakly-supervised models. In our experiments, we start with a small set of images sampled uniformly at random from the dataset as an initial pool. The weakly-supervised model has access to labeled examples as well as unlabelled examples. The fully-supervised model only has access to the labeled data points. We run all experiments with five random initializations of the initial pool of labeled points and use the average classification accuracy as a metric. We plot the accuracy vs. the number of labeled points. We also plot error bars as standard deviations. We run the query algorithm iteratively; in other words, we solve the discrete optimization problem min_{s^{k+1}: |s^{k+1}| ≤ b} E_{x,y~p_Z}[l(x, y; A_{s^0 ∪ ... ∪ s^{k+1}})] for each point on the accuracy vs. number of labelled examples graph. We present the results in Figures 3 and 4.
Figures 3 and 4 suggest that our algorithm outperforms all other baselines in all experiments; for the case of weakly-supervised models, by a large margin. We believe the effectiveness of our approach in the weakly-supervised case is due to the better feature learning. Weakly-supervised models provide better feature spaces resulting in accurate geometries. Since our method is geometric, it performs significantly better with better feature spaces. We also observed that our algorithm is less effective in CIFAR-100 when compared with CIFAR-10 and SVHN. This can easily be explained using our theoretical analysis. Our bound over the core-set loss scales with the number of classes, hence it is better to have fewer classes.
One interesting observation is the fact that a state-of-the-art batch mode active learning baseline (BMDR (Wang & Ye, 2015)) does not necessarily perform better than greedy ones. We believe this is due to the fact that it still uses uncertainty information and soft-max probabilities are not a good proxy for uncertainty. Our method does not use any uncertainty. And, incorporating uncertainty into our method in a principled way is an open problem and a fruitful future research direction. On the other hand, a pure clustering based batch active learning baseline (k-Median) is also not effective. We believe this is rather intuitive since cluster centers are likely the points which are well covered with initial i.i.d. samples. Hence, this clustering based method fails to sample the tails of the data distribution.
Our results suggest that both oracle uncertainty information and Bayesian estimation of uncertainty are helpful since they improve over the empirical uncertainty baseline; however, they are still not effective in the batch setting since random sampling outperforms them. We believe this is due to the correlation in the queried labels as a consequence of active learning in the batch setting. We further investigate this with a qualitative analysis via tSNE (Maaten & Hinton, 2008) embeddings. We compute embeddings for all points using the features which are learned using the labelled examples and visualize the points
(a) Uncertainty Oracle (b) Our Method
Figure 5: tSNE embeddings of the CIFAR dataset and behavior of the uncertainty oracle as well as our method. For both methods, the initial labeled pool of 1000 images is shown in blue, 1000 images chosen to be labeled in green and the remaining ones in red. Our algorithm results in queries evenly covering the space. On the other hand, samples chosen by the uncertainty oracle fail to cover a large portion of the space.
Table 1: Average run-time of our algorithm for b = 5k and |s0| = 10k in seconds.
Distance Matrix | Greedy (2-OPT) | MIP (iteration) | MIP (total) | Total
104.2 | 2 | 7.5 | 244.03 | 360.23
Figure 6: We compare our method with k-Center-Greedy. Our algorithm results in a small but important accuracy improvement.
sampled by our method as well as the oracle uncertainty. This visualization suggests that due to the correlation among samples, uncertainty based methods fail to cover a large portion of the space, confirming our hypothesis.
Optimality of the k-Center Solution: Our proposed method uses the greedy 2-OPT solution for the k-Center problem as an initialization and checks the feasibility of a mixed integer program (MIP). We use an LP-relaxation of the defined MIP and use branch-and-bound to obtain integer solutions. The utility obtained by solving this expensive MIP should be investigated. We compare the average run-time of the MIP (measured on an Intel Core i7-5930K@3.50GHz with 64GB memory) with the run-time of the 2-OPT solution in Table 1. We also compare the accuracy obtained with the optimal k-Center solution and the 2-OPT solution in Figure 6 on the CIFAR-100 dataset.
As shown in the Table 1; although the run-time of MIP is not polynomial in worst-case, in practice it converges in a tractable amount of time for a dataset of 50k images. Hence, our algorithm can easily be applied in practice. Figure 6 suggests a small but signiï¬cant drop in the accuracy when the 2-OPT solution is used. Hence, we conclude that unless the scale of the dataset is too restrictive, using our proposed optimal solver is desired. Even with the accuracy drop, our active learning strategy using 2-OPT solution still outperforms the other baselines. Hence, we can conclude that our algorithm can scale to any dataset size with small accuracy drop even if solving MIP is not feasible.
# 6 CONCLUSION
We study the active learning problem for CNNs. Our empirical analysis showed that classical uncertainty based methods have limited applicability to the CNNs due to the correlations caused by batch sampling. We re-formulate the active learning problem as core-set selection and study the core-set problem for CNNs. We further validated our algorithm using an extensive empirical study. Empirical results on three datasets showed state-of-the-art performance by a large margin.
# REFERENCES
Martin Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv:1603.04467, 2016.
C. Berlind and R. Urner. Active nearest neighbors in changing environments. In ICML, 2015.
Klaus Brinker. Incorporating diversity in active learning with support vector machines. In ICML, volume 3, pp. 59–66, 2003.

William J Cook, William H Cunningham, William R Pulleyblank, and Alexander Schrijver. Combinatorial optimization, volume 605. Springer, 1998.
Sanjoy Dasgupta. Analysis of a greedy active learning strategy. In Advances in Neural Information Processing Systems 17, pp. 337–344. MIT Press, 2005. URL http://papers.nips.cc/paper/2636-analysis-of-a-greedy-active-learning-strategy.pdf.
Begüm Demir, Claudio Persello, and Lorenzo Bruzzone. Batch-mode active-learning methods for the interactive classification of remote sensing images. IEEE Transactions on Geoscience and Remote Sensing, 49(3):1014–1031, 2011.

Jeff Donahue, Philipp Krähenbühl, and Trevor Darrell. Adversarial feature learning. arXiv:1605.09782, 2016.
Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Alex Lamb, Martin Arjovsky, Olivier Mastropietro, and Aaron Courville. Adversarially learned inference. arXiv:1606.00704, 2016.
Ehsan Elhamifar, Guillermo Sapiro, Allen Yang, and S Shankar Sasrty. A convex optimization framework for active learning. In ICCV, 2013.
Yoav Freund, H Sebastian Seung, Eli Shamir, and Naftali Tishby. Selective sampling using the query by committee algorithm. Machine learning, 28(2-3), 1997.
Yarin Gal and Zoubin Ghahramani. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In International Conference on Machine Learning, 2016.
Yarin Gal, Riashat Islam, and Zoubin Ghahramani. Deep bayesian active learning with image data. arXiv preprint arXiv:1703.02910, 2017.
Ravi Ganti and Alexander Gray. Upal: Unbiased pool based active learning. In Artiï¬cial Intelligence and Statistics, pp. 422â431, 2012.
Daniel Golovin and Andreas Krause. Adaptive submodularity: Theory and applications in active learning and stochastic optimization. Journal of Artificial Intelligence Research, 42:427–486, 2011.

Alon Gonen, Sivan Sabato, and Shai Shalev-Shwartz. Efficient active learning of halfspaces: an aggressive approach. The Journal of Machine Learning Research, 14(1):2583–2615, 2013.
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In NIPS, 2014.
Andrew Guillory and Jeff Bilmes. Interactive submodular set cover. arXiv:1002.3345, 2010.
Yuhong Guo. Active instance sampling via matrix partition. In Advances in Neural Information Processing Systems, pp. 802–810, 2010.

Yuhong Guo and Dale Schuurmans. Discriminative batch mode active learning. In Advances in Neural Information Processing Systems, pp. 593–600, 2008.

Steve Hanneke. A bound on the label complexity of agnostic active learning. In Proceedings of the 24th International Conference on Machine Learning, pp. 353–360. ACM, 2007.
Sariel Har-Peled and Akash Kushal. Smaller coresets for k-median and k-means clustering. In Annual Symposium on Computational geometry. ACM, 2005.
10
Published as a conference paper at ICLR 2018
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778, 2016.

Steven CH Hoi, Rong Jin, Jianke Zhu, and Michael R Lyu. Batch mode active learning and its application to medical image classification. In Proceedings of the 23rd International Conference on Machine Learning, pp. 417–424. ACM, 2006.
Gurobi Optimization Inc. Gurobi optimizer reference manual, 2016. URL http://www.gurobi.com.
Ajay J Joshi, Fatih Porikli, and Nikolaos Papanikolopoulos. Multi-class active learning for image classiï¬cation. In CVPR, 2009.
A. J. Joshiy, F. Porikli, and N. Papanikolopoulos. Multi-class batch-mode active learning for image classification. In 2010 IEEE International Conference on Robotics and Automation, pp. 1873–1878, May 2010. doi: 10.1109/ROBOT.2010.5509293.
Ashish Kapoor, Kristen Grauman, Raquel Urtasun, and Trevor Darrell. Active learning with gaussian processes for object categorization. In ICCV, 2007.
Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. 2009.
Xin Li and Yuhong Guo. Adaptive active learning for image classiï¬cation. In CVPR, 2013.
Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(Nov):2579–2605, 2008.

David JC MacKay. Information-based objective functions for active data selection. Neural Computation, 4(4):590–604, 1992.

Andrew Kachites McCallum and Kamal Nigam. Employing EM and pool-based active learning for text classification. In ICML, 1998.
Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y Ng. Reading digits in natural images with unsupervised feature learning. In NIPS workshop on deep learning and unsupervised feature learning, volume 2011, pp. 5, 2011.
Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv:1511.06434, 2015.
Antti Rasmus, Mathias Berglund, Mikko Honkala, Harri Valpola, and Tapani Raiko. Semi-supervised learning with ladder networks. In NIPS, 2015.
Nicholas Roy and Andrew McCallum. Toward optimal active learning through monte carlo estimation of error reduction. ICML, 2001.
Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training gans. In NIPS, 2016.
Burr Settles. Active learning literature survey. University of Wisconsin, Madison, 52(55-66):11, 2010.
Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv:1409.1556, 2014.
Fabian Stark, Caner Hazırbas, Rudolph Triebel, and Daniel Cremers. Captcha recognition with active deep learning. In GCPR Workshop on New Challenges in Neural Computation, 2015.
Simon Tong and Daphne Koller. Support vector machine active learning with applications to text classification. JMLR, 2(Nov):45–66, 2001.

Ivor W Tsang, James T Kwok, and Pak-Ming Cheung. Core vector machines: Fast SVM training on very large data sets. JMLR, 6(Apr):363–392, 2005.
11
Published as a conference paper at ICLR 2018
Keze Wang, Dongyu Zhang, Ya Li, Ruimao Zhang, and Liang Lin. Cost-effective active learning for deep image classiï¬cation. Transactions on Circuits and Systems for Video Technology, 2016.
Zheng Wang and Jieping Ye. Querying discriminative and representative samples for batch mode active learning. ACM Transactions on Knowledge Discovery from Data (TKDD), 9(3):17, 2015.
Kai Wei, Yuzong Liu, Katrin Kirchhoff, and Jeff A Bilmes. Using document summarization techniques for speech data subset selection. In HLT-NAACL, 2013.
Kai Wei, Rishabh K Iyer, and Jeff A Bilmes. Submodularity in data subset selection and active learning. In ICML, 2015.
Gert W Wolf. Facility location: concepts, models, algorithms and case studies., 2011.
Huan Xu and Shie Mannor. Robustness and generalization. Machine Learning, 86(3):391–423, 2012.

Yi Yang, Zhigang Ma, Feiping Nie, Xiaojun Chang, and Alexander G Hauptmann. Multi-class active learning by uncertainty sampling with diversity maximization. International Journal of Computer Vision, 113(2):113–127, 2015.

Kai Yu, Jinbo Bi, and Volker Tresp. Active learning via transductive experimental design. In Proceedings of the 23rd International Conference on Machine Learning, pp. 1081–1088. ACM, 2006.
A PROOF FOR LEMMA 1
Proof. We will start with showing that the softmax function defined over C classes is (√(C−1)/C)-Lipschitz continuous. It is easy to show that for any differentiable function f : R^n → R^m,

‖f(x) − f(y)‖_2 ≤ ‖J‖*_F ‖x − y‖_2   ∀x, y ∈ R^n,

where ‖J‖*_F = max_x ‖J‖_F, and J is the Jacobian matrix of f.
Softmax function is deï¬ned as
f(x)_i = exp(x_i) / Σ_{j=1}^{C} exp(x_j)
For brevity, we will denote fi(x) as fi. The Jacobian matrix will be,
J = [ f_1(1 − f_1)   −f_2 f_1   ...   −f_C f_1
      −f_1 f_2   f_2(1 − f_2)   ...   −f_C f_2
      ...
      −f_1 f_C   −f_2 f_C   ...   f_C(1 − f_C) ]
Now, Frobenius norm of above matrix will be,
‖J‖_F = √( Σ_{i=1}^{C} Σ_{j=1, j≠i}^{C} f_i² f_j² + Σ_{i=1}^{C} f_i² (1 − f_i)² )
It is straightforward to show that f_i = 1/C is the optimal solution for ‖J‖*_F = max_x ‖J‖_F. Hence, putting f_i = 1/C in the above equation, we get ‖J‖*_F = √(C − 1)/C.
Now, consider two inputs x and x̃, such that their representations at layer d are x^d and x̃^d. Consider any convolutional or fully-connected layer written as x^{d+1}_i = Σ_j w^d_{i,j} x^d_j. If we assume Σ_j |w^d_{i,j}| ≤ α for all i and d, then for any convolutional or fully-connected layer we can state:

‖x^{d+1} − x̃^{d+1}‖_2 ≤ α ‖x^d − x̃^d‖_2
On the other hand, using |max(0, a) − max(0, b)| ≤ |a − b| and the fact that a max-pool layer can be written as a convolutional layer such that only one weight is 1 and the others are 0, we can state for ReLU and max-pool layers,

‖x^{d+1} − x̃^{d+1}‖_2 ≤ ‖x^d − x̃^d‖_2
Combining with the Lipschitz constant of the soft-max layer,

‖CNN(x; w) − CNN(x̃; w)‖_2 ≤ (√(C − 1)/C) α^{n_c + n_fc} ‖x − x̃‖_2
Using the reverse triangle inequality as
|l(x, y; w) − l(x̃, y; w)| = | ‖CNN(x; w) − y‖_2 − ‖CNN(x̃; w) − y‖_2 | ≤ ‖CNN(x; w) − CNN(x̃; w)‖_2,

we can conclude that the loss function is (√(C − 1)/C) α^{n_c + n_fc}-Lipschitz for any fixed y and w.
# B PROOF FOR THEOREM 1
Before starting our proof, we state Claim 1 from Berlind & Urner (2015). Claim 1. Fix some p, p′ ∈ [0, 1] and y′ ∈ {0, 1}. Then,

p_{y~p}(y ≠ y′) ≤ p_{y~p′}(y ≠ y′) + |p − p′|
Proof. We will start our proof by bounding E_{y_i~η(x_i)}[l(x_i, y_i; A_s)]. We have a condition which states that there exists an x_j in the δ ball around x_i such that x_j has 0 loss.
E_{y_i~η(x_i)}[l(x_i, y_i; A_s)] = Σ_{k∈[C]} p_{y_i~η_k(x_i)}(y_i = k) l(x_i, k; A_s)
≤ Σ_{k∈[C]} p_{y_i~η_k(x_j)}(y_i = k) l(x_i, k; A_s) + Σ_{k∈[C]} |η_k(x_i) − η_k(x_j)| l(x_i, k; A_s)
≤ Σ_{k∈[C]} p_{y_i~η_k(x_j)}(y_i = k) l(x_i, k; A_s) + δ λ^η L C
With a slight abuse of notation, we write {y_i = k} ~ η_k(x_i) as y_i ~ η_k(x_i). We use Claim 1 in the first inequality, and the Lipschitz property of the regression function together with the bound on the loss in the second.
Σ_{k∈[C]} p_{y_i~η_k(x_j)}(y_i = k) l(x_i, k; A_s) = Σ_{k∈[C]} p_{y_i~η_k(x_j)}(y_i = k) [l(x_i, k; A_s) − l(x_j, k; A_s)]
+ Σ_{k∈[C]} p_{y_i~η_k(x_j)}(y_i = k) l(x_j, k; A_s)
≤ δ λ^l
where the last step follows from the Lipschitz continuity of the loss and the fact that the trained classifier is assumed to have 0 loss over the training points. If we combine them,
E_{y_i~η(x_i)}[l(x_i, y_i; A_s)] ≤ δ(λ^l + λ^η L C)
We further use Hoeffding's Bound and conclude that with probability at least 1 − γ,
| (1/n) Σ_{i∈[n]} l(x_i, y_i; A_s) − (1/|s|) Σ_{j∈s} l(x_j, y_j; A_s) | ≤ δ(λ^l + λ^η L C) + √( L² log(1/γ) / (2n) )
| {
"id": "1605.09782"
} |
1707.09457 | Men Also Like Shopping: Reducing Gender Bias Amplification using Corpus-level Constraints | Language is increasingly being used to define rich visual recognition
problems with supporting image collections sourced from the web. Structured
prediction models are used in these tasks to take advantage of correlations
between co-occurring labels and visual input but risk inadvertently encoding
social biases found in web corpora. In this work, we study data and models
associated with multilabel object classification and visual semantic role
labeling. We find that (a) datasets for these tasks contain significant gender
bias and (b) models trained on these datasets further amplify existing bias.
For example, the activity cooking is over 33% more likely to involve females
than males in a training set, and a trained model further amplifies the
disparity to 68% at test time. We propose to inject corpus-level constraints
for calibrating existing structured prediction models and design an algorithm
based on Lagrangian relaxation for collective inference. Our method results in
almost no performance loss for the underlying recognition task but decreases
the magnitude of bias amplification by 47.5% and 40.5% for multilabel
classification and visual semantic role labeling, respectively. | http://arxiv.org/pdf/1707.09457 | Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, Kai-Wei Chang | cs.AI, cs.CL, cs.CV, stat.ML | 11 pages, published in EMNLP 2017 | null | cs.AI | 20170729 | 20170729 |
# Men Also Like Shopping: Reducing Gender Bias Ampliï¬cation using Corpus-level Constraints
# Jieyu Zhao§ Tianlu Wang§ Mark Yatskar‡ Vicente Ordonez§ Kai-Wei Chang§
§University of Virginia {jz4fu, tw8cb, vicente, kc2wc}@virginia.edu â¡University of Washington my89@cs.washington.edu
# Abstract
Language is increasingly being used to define rich visual recognition problems with supporting image collections sourced from the web. Structured prediction models are used in these tasks to take advantage of correlations between co-occurring labels and visual input but risk inadvertently encoding social biases found in web corpora. In this work, we study data and models associated with multilabel object classification and visual semantic role labeling. We find that (a) datasets for these tasks contain significant gender bias and (b) models trained on these datasets further amplify existing bias. For example, the activity cooking is over 33% more likely to involve females than males in a training set, and a trained model further amplifies the disparity to 68% at test time. We propose to inject corpus-level constraints for calibrating existing structured prediction models and design an algorithm based on Lagrangian relaxation for collective inference. Our method results in almost no performance loss for the underlying recognition task but decreases the magnitude of bias amplification by 47.5% and 40.5% for multilabel classification and visual semantic role labeling, respectively.
# 1 Introduction
Visual recognition tasks involving language, such as captioning (Vinyals et al., 2015), visual question answering (Antol et al., 2015), and visual semantic role labeling (Yatskar et al., 2016), have emerged as avenues for expanding the diversity of information that can be recovered from images. These tasks aim at extracting rich semantics from images and require large quantities of labeled data, predominantly retrieved from the web. Methods often combine structured prediction and deep learning to model correlations between labels and images to make judgments that otherwise would have weak visual support. For example, in the first image of Figure 1, it is possible to predict a spatula by considering that it is a common tool used for the activity cooking. Yet such methods run the risk of discovering and exploiting societal biases present in the underlying web corpora. Without properly quantifying and reducing the reliance on such correlations, broad adoption of these models can have the inadvertent effect of magnifying stereotypes.

In this paper, we develop a general framework for quantifying bias and study two concrete tasks, visual semantic role labeling (vSRL) and multilabel object classification (MLC). In vSRL, we use the imSitu formalism (Yatskar et al., 2016, 2017), where the goal is to predict activities, objects and the roles those objects play within an activity. For MLC, we use MS-COCO (Lin et al., 2014; Chen et al., 2015), a recognition task covering 80 object classes. We use gender bias as a running example and show that both supporting datasets for these tasks are biased with respect to a gender binary1. Our analysis reveals that over 45% and 37% of verbs and objects, respectively, exhibit bias toward a gender greater than 2:1. For example, as seen in Figure 1, the cooking activity in imSitu is a heavily biased verb. Furthermore, we show that after training state-of-the-art structured predictors, models amplify the existing bias, by 5.0% for vSRL, and 3.6% in MLC.

1To simplify our analysis, we only consider a gender binary as perceived by annotators in the datasets. We recognize that a more fine-grained analysis would be needed for deployment in a production system. Also, note that the proposed approach can be applied to other NLP tasks and other variables such as identification with a racial or ethnic group.
1To simplify our analysis, we only consider a gender bi- nary as perceived by annotators in the datasets. We recog- nize that a more ï¬ne-grained analysis would be needed for deployment in a production system. Also, note that the pro- posed approach can be applied to other NLP tasks and other variables such as identiï¬cation with a racial or ethnic group.
COOKING _ = COOKING COOKING COOKING COOKING ROLE | VALUE ROLE | VALUE ROLE | VALUE ROLE | VALUE ROLE | VALUE AGENT | WOMAN AGENT | WOMAN AGENT WOMAN AGENT | WOMAN AGENT MAN FOOD PASTA FOOD FRUIT FOOD FOOD @ FOOD @ HEAT STOVE HEAT @ HEAT STOVE HEAT STOVE HEAT. STOVE TOOL SPATULA TOOL KNIFE TOOL SPATULA TOOL SPATULA TOOL | SPATULA PLACE KITCHEN PLACE KITCHEN PLACE (OUTSIDE PLACE (KITCHEN PLACE _ KITCHEN
Figure 1: Five example images from the imSitu visual semantic role labeling (vSRL) dataset. Each im- age is paired with a table describing a situation: the verb, cooking, its semantic roles, i.e agent, and noun values ï¬lling that role, i.e. woman. In the imSitu training set, 33% of cooking images have man in the agent role while the rest have woman. After training a Conditional Random Field (CRF), bias is ampliï¬ed: man ï¬lls 16% of agent roles in cooking images. To reduce this bias ampliï¬cation our cal- ibration method adjusts weights of CRF potentials associated with biased predictions. After applying our methods, man appears in the agent role of 20% of cooking images, reducing the bias ampliï¬cation by 25%, while keeping the CRF vSRL performance unchanged.
To mitigate the role of bias ampliï¬cation when training models on biased corpora, we propose a novel constrained inference framework, called RBA, for Reducing Bias Ampliï¬cation in predic- tions. Our method introduces corpus-level con- straints so that gender indicators co-occur no more often together with elements of the prediction task than in the original training distribution. For ex- ample, as seen in Figure 1, we would like noun man to occur in the agent role of the cooking as often as it occurs in the imSitu training set when evaluating on a development set. We combine our calibration constraint with the original struc- tured predictor and use Lagrangian relaxation (Ko- rte and Vygen, 2008; Rush and Collins, 2012) to reweigh bias creating factors in the original model.
We evaluate our calibration method on imSitu vSRL and COCO MLC and ï¬nd that in both in- stances, our models substantially reduce bias am- pliï¬cation. For vSRL, we reduce the average mag- nitude of bias ampliï¬cation by 40.5%. For MLC, we are able to reduce the average magnitude of bias ampliï¬cation by 47.5%. Overall, our calibra- tion methods do not affect the performance of the underlying visual system, while substantially re- ducing the reliance of the system on socially bi- ased correlations2.
2Code and data are available at https://github. com/uclanlp/reducingbias
# 2 Related Work
As intelligence systems start playing important in- roles in our daily life, ethics in artiï¬cial telligence research has attracted signiï¬cant in- terest. It is known that big-data technologies sometimes inadvertently worsen discrimination due to implicit biases in data (Podesta et al., 2014). Such issues have been demonstrated in var- ious learning systems, including online advertise- ment systems (Sweeney, 2013), word embedding models (Bolukbasi et al., 2016; Caliskan et al., 2017), online news (Ross and Carter, 2011), web search (Kay et al., 2015), and credit score (Hardt et al., 2016). Data collection biases have been discussed in the context of creating image cor- pus (Misra et al., 2016; van Miltenburg, 2016) and text corpus (Gordon and Van Durme, 2013; Van Durme, 2010). In contrast, we show that given a gender biased corpus, structured models such as conditional random ï¬elds, amplify the bias.
The effect of the data imbalance can be easily detected and ï¬xed when the prediction task is sim- ple. For example, when classifying binary data with unbalanced labels (i.e., samples in the major- ity class dominate the dataset), a classiï¬er trained exclusively to optimize accuracy learns to always predict the majority label, as the cost of mak- ing mistakes on samples in the minority class can be neglected. Various approaches have been pro- posed to make a âfairâ binary classiï¬cation (Baro- cas and Selbst, 2014; Dwork et al., 2012; Feldman
et al., 2015; Zliobaite, 2015). For structured pre- diction tasks the effect is harder to quantify and we are the ï¬rst to propose methods to reduce bias ampliï¬cation in this context.
Lagrangian relaxation and dual decomposi- tion techniques have been widely used in NLP tasks (e.g., (Sontag et al., 2011; Rush and Collins, 2012; Chang and Collins, 2011; Peng et al., 2015)) for dealing with instance-level constraints. Simi- lar techniques (Chang et al., 2013; Dalvi, 2015) have been applied in handling corpus-level con- straints for semi-supervised multilabel classiï¬ca- tion. In contrast to previous works aiming for improving accuracy performance, we incorporate corpus-level constraints for reducing gender bias.
# 3 Visualizing and Quantifying Biases
Modern statistical learning approaches capture correlations among output variables in order to make coherent predictions. However, for real-world applications, some implicit correlations are not appropriate, especially if they are amplified. In this section, we present a general framework to analyze inherent biases learned and amplified by a prediction model.
Identifying bias We consider that prediction problems involve several inter-dependent output variables y_1, y_2, ..., y_K, which can be represented as a structure y = {y_1, y_2, ..., y_K} ∈ Y. This is a common setting in NLP applications, including tagging and parsing. For example, in the vSRL task, the output can be represented as a structured table as shown in Fig 1. Modern techniques often model the correlation between the sub-components in y and make a joint prediction over them using a structured prediction model. More details will be provided in Section 4.
We assume there is a subset of output variables g ⊆ y, g ∈ G that reflects demographic attributes such as gender or race (e.g. g ∈ G = {man, woman} is the agent), and there is another subset of the output o ⊆ y, o ∈ O that is correlated with g (e.g., o is the activity present in an image, such as cooking). The goal is to identify the correlations that are potentially amplified by a learned model.
To achieve this, we deï¬ne the bias score of a given output, o, with respect to a demographic
variable, g, as:
b(o, g) = c(o, g) / Σ_{g′∈G} c(o, g′),
where c(o, g) is the number of occurrences of o and g in a corpus. For example, to analyze how genders of agents and activities are correlated in vSRL, we define the gender bias toward man for each verb b(verb, man) as:
c(verb, man) / ( c(verb, man) + c(verb, woman) ).   (1)
If b(o, g) > 1/||G||, then o is positively correlated with g and may exhibit bias.
Evaluating bias amplification To evaluate the degree of bias amplification, we propose to compare bias scores on the training set, b*(o, g), with bias scores on an unlabeled evaluation set of images, b̃(o, g), that has been annotated by a predictor. We assume that the evaluation set is identically distributed to the training set. Therefore, if o is positively correlated with g (i.e., b*(o, g) > 1/||G||) and b̃(o, g) is larger than b*(o, g), we say bias has been amplified. For example, if b*(cooking, woman) = .66, and b̃(cooking, woman) = .84, then the bias of woman toward cooking has been amplified. Finally, we define the mean bias amplification as:
(1/|O|) Σ_g Σ_{o ∈ {o∈O | b*(o,g) > 1/||G||}} ( b̃(o, g) − b*(o, g) ).
This score estimates the average magnitude of bias ampliï¬cation for pairs of o and g which exhibited bias.
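A minimal sketch of these two quantities is given below, assuming co-occurrence statistics are available as (output, gender) count dictionaries; the helper names are our own and not from the authors' released code.

```python
def bias_score(counts, o, g, genders=("man", "woman")):
    """b(o, g): fraction of occurrences of output o whose gender label is g.

    counts: dict mapping (output, gender) pairs to co-occurrence counts.
    """
    total = sum(counts.get((o, g2), 0) for g2 in genders)
    return counts.get((o, g), 0) / total if total else 0.0

def mean_bias_amplification(train_counts, pred_counts, outputs, genders=("man", "woman")):
    """(1/|O|) sum_g sum_{o: b*(o,g) > 1/|G|} (b~(o, g) - b*(o, g))."""
    total = 0.0
    for g in genders:
        for o in outputs:
            b_train = bias_score(train_counts, o, g, genders)
            if b_train > 1.0 / len(genders):   # o is positively correlated with g
                total += bias_score(pred_counts, o, g, genders) - b_train
    return total / len(outputs)
```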
# 4 Calibration Algorithm
In this section, we introduce Reducing Bias Ampliï¬cation, RBA, a debiasing technique for calibrating the predictions from a structured pre- diction model. The intuition behind the algorithm is to inject constraints to ensure the model pre- dictions follow the distribution observed from the training data. For example, the constraints added to the vSRL system ensure the gender ratio of each verb in Eq. (1) are within a given margin based on the statistics of the training data. These constraints are applied at the corpus level, because comput- ing gender ratio requires the predictions of all test
instances. As a result, a joint inference over test instances is required3. Solving such a giant in- ference problem with constraints is hard. There- fore, we present an approximate inference algo- rithm based on Lagrangian relaxation. The advan- tages of this approach are:
⢠Our algorithm is iterative, and at each it- eration, the joint inference problem is de- composed to a per-instance basis. This can be solved by the original inference algo- rithm. That is, our approach works as a meta- algorithm and developers do not need to im- plement a new inference algorithm.
⢠The approach is general and can be applied in any structured model.
⢠Lagrangian relaxation guarantees the solu- tion is optimal if the algorithm converges and all constraints are satisï¬ed.
In practice, it is hard to obtain a solution where all corpus-level constrains are satisï¬ed. However, we show that the performance of the proposed ap- proach is empirically strong. We use imSitu for vSRL as a running example to explain our algo- rithm.
Structured Output Prediction As we men- tioned in Sec. 3, we assume the structured output y â Y consists of several sub-components. Given a test instance i as an input, the inference problem is to ï¬nd
arg max yâY fθ(y, i),
where fθ(y, i) is a scoring function based on a model θ learned from the training data. The struc- tured output y and the scoring function fθ(y, i) can be decomposed into small components based on an independence assumption. For example, in the vSRL task, the output y consists of two types of binary output variables {yv} and {yv,r}. The vari- able yv = 1 if and only if the activity v is chosen. Similarly, yv,r = 1 if and only if both the activity v and the semantic role r are assigned 4. The scoring function fθ(y, i) is decomposed accordingly such that:
foly.t) = So yoso(v.2) + D> yourso(v, 7, i),
3A sufï¬ciently large sample of test instances must be used so that bias statistics can be estimated. In this work we use the entire test set for each respective problem.
4We use r to refer to a combination of role and noun. For example, one possible value indicates an agent is a woman.
represents the overall score of an assignment, and so(v, 7) and s9(v, 7, i) are the potentials of the sub- assignments. The output space Y contains all fea- sible assignments of y,, and ¥,,-, which can be rep- resented as instance-wise constraints. For exam- ple, the constraint, yy Yu = 1 ensures only one activity is assigned to one image.
Corpus-level Constraints Our goal is to inject constraints to ensure the output labels follow a desired distribution. For example, we can set a constraint to ensure the gender ratio for each ac- (1) is within a given margin. Let tivity in Eq. yi = {yi v,r} be the output assignment for test instance i5. For each activity vâ, the con- straints can be written as
y< vi Yo=v* reM <bt +4 < ; : <b" 4 i Mocs rew + Qui Yoru reM (2) be
(2) where bâ â¡ bâ(vâ, man) is the desired gender ra- tio of an activity vâ, γ is a user-speciï¬ed margin. M and W are a set of semantic role-values rep- resenting the agent as a man or a woman, respec- tively.
Note that the constraints in (2) involve all the est instances. Therefore, it requires a joint in- erence over the entire test corpus. In general, these corpus-level constraints can be represented in a form of AM; y! âb < 0, where each row in the matrix A ¢ R'** is the coefficients of one constraint, and b ⬠R!. The constrained inference problem can then be formulated as:
max f(yâ, 4), tty D fala! ) s.t. Ay yi -b<0, i (3)
where {Y i} represents a space spanned by possi- ble combinations of labels for all instances. With- out the corpus-level constraints, Eq. (3) can be optimized by maximizing each instance i
max yiâY i fθ(yi, i),
separately.
Lagrangian Relaxation Eq. (3) can be solved by several combinatorial optimization methods. For example, one can represent the problem as an
5For the sake of simplicity, we abuse the notations and use i to represent both input and data index.
Dataset | Task | Images | O-Type | ||O||
imSitu | vSRL | 60,000 | verb | 212
MS-COCO | MLC | 25,000 | object | 66

Table 1: Statistics for the two recognition problems. In vSRL, we consider gender bias relating to verbs, while in MLC we consider the gender bias related to objects.
integer linear program and solve it using an off- the-shelf solver (e.g., Gurobi (Gurobi Optimiza- tion, 2016)). However, Eq. (3) involves all test in- stances. Solving a constrained optimization prob- lem on such a scale is difï¬cult. Therefore, we con- sider relaxing the constraints and solve Eq. (3) us- ing a Lagrangian relaxation technique (Rush and Collins, 2012). We introduce a Lagrangian multi- plier λj ⥠0 for each corpus-level constraint. The Lagrangian is
L(A, {y"}) = 1 i (4) S- folyâ) â Sod; (4 Soy - s) , i j=l i
where all the λj ⥠0, âj â {1, . . . , l}. The solu- tion of Eq. (3) can be obtained by the following iterative procedure:
1) At iteration t, get the output solution of each instance i
ye = argmax LAY, y) (5) yoy
2) update the Lagrangian multipliers.
\ =max (0 MEDS n( Ay â ») ; a
where λ(0) = 0. η is the learning rate for updat- ing λ. Note that with a ï¬xed λ(tâ1), Eq. (5) can be solved using the original inference algorithms. The algorithm loops until all constraints are satis- ï¬ed (i.e. optimal solution achieved) or reach max- imal number of iterations.
# 5 Experimental Setup
In this section, we provide details about the two vi- sual recognition tasks we evaluated for bias: visual semantic role labeling (vSRL), and multi-label classiï¬cation (MLC). We focus on gender, deï¬n- ing G = {man, woman} and focus on the agent
role in vSRL, and any occurrence in text associ- ated with the images in MLC. Problem statistics are summarized in Table 1. We also provide setup details for our calibration method.
# 5.1 Visual Semantic Role Labeling
Dataset We evaluate on imSitu (Yatskar et al., 2016) where activity classes are drawn from verbs and roles in FrameNet (Baker et al., 1998) and noun categories are drawn from WordNet (Miller et al., 1990). The original dataset includes about 125,000 images with 75,702 for training, 25,200 for developing, and 25,200 for test. However, the dataset covers many non-human oriented activities (e.g., rearing, retrieving, and wagging), so we ï¬lter out these verbs, resulting in 212 verbs, leaving roughly 60,000 of the original 125,000 im- ages in the dataset.
Model We build on the baseline CRF released with the data, which has been shown effective compared to a non-structured prediction base- line (Yatskar et al., 2016). The model decomposes the probability of a realized situation, y, the com- bination of activity, v, and realized frame, a set of semantic (role,noun) pairs (e, ne), given an image i as :
p(y|i; θ) â Ï(v, i; θ) Ï(v, e, ne, i; θ) (e,ne)âRf
where each potential value in the CRF for subpart x, is computed using features fi from the VGG convolutional neural network (Simonyan and Zis- serman, 2014) on an input image, as follows:
Ï(x, i; θ) = ewT x fi+bx,
where w and b are the parameters of an afï¬ne transformation layer. The model explicitly cap- tures the correlation between activities and nouns in semantic roles, allowing it to learn common pri- ors. We use a model pretrained on the original task with 504 verbs.
# 5.2 Multilabel Classiï¬cation
Dataset We use MS-COCO (Lin et al., 2014), a common object detection benchmark, for multi-label object classification. The dataset contains 80 object types but does not make gender distinctions between man and woman. We use the five associated image captions available for each image in this dataset to annotate the gender of people in the images. If any of the captions mention the word man or woman we mark it, removing any images that mention both genders. Finally, we filter any object category not strongly associated with humans by removing objects that do not occur with man or woman at least 100 times in the training set, leaving a total of 66 objects.
Model For this multi-label setting, we adapt a model similar to the structured CRF we use for vSRL. We decompose the joint probability of the output y, consisting of all object categories, c, and gender of the person, g, given an image i as:
p(y \mid i; \theta) \propto \psi(g, i; \theta) \prod_{c \in y} \psi(g, c, i; \theta)
where each potential value for x is computed using features, fi, from a pretrained ResNet-50 convolutional neural network evaluated on the image,
\psi(x, i; \theta) = e^{w_x^{T} f_i + b_x}.
We trained a model using SGD with learning rate 10−5, momentum 0.9 and weight decay 10−4, fine-tuning the initial visual network, for 50 epochs.
# 5.3 Calibration
The inference problems for both models are:
arg max yâY fθ(y, i) = log p(y|i; θ).
We use the algorithm in Sec. (4) to calibrate the predictions using model θ. Our calibration tries to enforce gender statistics derived from the training set of the corpus applicable to each recognition problem. For all experiments, we try to match gender ratios on the test set within a margin of .05 of their value on the training set. While we do adjust the output on the test set, we never use the ground truth on the test set and instead work from the assumption that it should be similarly distributed as the training set. When running the debiasing algorithm, we set η = 10−1 and optimize for 100 iterations.
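For illustration, one way the corpus-level margin constraints could be linearized for a single verb (or object) is sketched below, assuming the output indicator vector has one coordinate for the (verb, man) assignment and one for (verb, woman); the helper and index names are hypothetical, and each returned row of A pairs with a zero entry of b.

```python
def gender_ratio_rows(train_ratio, idx_man, idx_woman, dim, margin=0.05):
    """Linearize  lo <= man / (man + woman) <= hi  over corpus-level counts.

    man/(man+woman) <= hi   <=>   (1 - hi)*man - hi*woman <= 0
    man/(man+woman) >= lo   <=>   (lo - 1)*man + lo*woman <= 0
    """
    lo, hi = train_ratio - margin, train_ratio + margin
    upper = [0.0] * dim
    upper[idx_man], upper[idx_woman] = 1.0 - hi, -hi
    lower = [0.0] * dim
    lower[idx_man], lower[idx_woman] = lo - 1.0, lo
    return upper, lower   # two rows of A; both corresponding b entries are 0
```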
# 6 Bias Analysis
In this section, we use the approaches outlined in Section 3 to quantify the bias and bias amplification in the vSRL and the MLC tasks.
# 6.1 Visual Semantic Role Labeling
imSitu is gender biased In Figure 2(a), along the x-axis, we show the male favoring bias of im- Situ verbs. Overall, the dataset is heavily biased toward male agents, with 64.6% of verbs favoring a male agent by an average bias of 0.707 (roughly 3:1 male). Nearly half of verbs are extremely bi- ased in the male or female direction: 46.95% of verbs favor a gender with a bias of at least 0.7.6 Figure 2(a) contains several activity labels reveal- ing problematic biases. For example, shopping, microwaving and washing are biased toward a female agent. Furthermore, several verbs such as driving, shooting, and coaching are heavily biased toward a male agent.
Training on imSitu amplifies bias In Figure 2(a), along the y-axis, we show the ratio of male agents (% of total people) in predictions on an unseen development set. The mean bias amplification in the development set is high, 0.050 on average, with 45.75% of verbs exhibiting amplification. Biased verbs tend to have stronger amplification: verbs with training bias over 0.7 in either the male or female direction have a mean amplification of 0.072. Several already problematic biases have gotten much worse. For example, serving, which only had a small bias toward females in the training set, 0.402, is now heavily biased toward females, 0.122. The verb tuning, originally heavily biased toward males, 0.878, now has exclusively male agents.
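For concreteness, a small sketch of how such per-verb statistics could be tabulated, taking the bias toward man as the fraction of male agents for a verb and amplification as the shift of the predicted ratio beyond the training ratio in the direction the training set already favored; the count containers and the exact amplification convention here are illustrative assumptions rather than the paper's formal definition.

```python
def gender_bias(counts, verb):
    """Fraction of male agents for a verb: counts maps (verb, gender) -> #people."""
    m, w = counts.get((verb, 'man'), 0), counts.get((verb, 'woman'), 0)
    return m / (m + w) if (m + w) > 0 else None

def bias_amplification(train_counts, pred_counts, verb):
    """Shift of the predicted ratio past the training ratio, signed toward the
    gender the training data already favored."""
    b_train = gender_bias(train_counts, verb)
    b_pred = gender_bias(pred_counts, verb)
    if b_train is None or b_pred is None:
        return None
    sign = 1.0 if b_train >= 0.5 else -1.0
    return sign * (b_pred - b_train)
```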
# 6.2 Multilabel Classiï¬cation
MS-COCO is gender biased In Figure 2(b), along the x-axis, similarly to imSitu, we analyze the bias of objects in MS-COCO with respect to males. MS-COCO is even more heavily biased toward men than imSitu, with 86.6% of objects biased toward men, but with smaller average magnitude, 0.65. One third of the nouns are extremely biased toward males: 37.9% of nouns favor men with a bias of at least 0.7. Some problematic examples include kitchen objects such as knife, fork, or spoon being more biased toward woman. Outdoor recreation related objects such as tennis racket, snowboard and boat tend to be more biased toward men.
6 In this gender binary, the bias toward woman is 1 − the bias toward man.
(a) Bias analysis on imSitu vSRL (b) Bias analysis on MS-COCO MLC
Figure 2: Gender bias analysis of imSitu vSRL and MS-COCO MLC. (a) gender bias of verbs toward man in the training set versus bias on a predicted development set. (b) gender bias of nouns toward man in the training set versus bias on the predicted development set. Values near zero indicate bias toward woman while values near 0.5 indicate unbiased variables. Across both datasets, there is significant bias toward males, and significant bias amplification after training on biased training data.
Training on MS-COCO ampliï¬es bias In Fig- ure 2(b), along the y-axis, we show the ratio of man (% of both gender) in predictions on an un- seen development set. The mean bias ampliï¬ca- tion across all objects is 0.036, with 65.67% of nouns exhibiting ampliï¬cation. Larger training bias again tended to indicate higher bias ampliï¬- cation: biased objects with training bias over 0.7 had mean ampliï¬cation of 0.081. Again, several problematic biases have now been ampliï¬ed. For example, kitchen categories already biased toward females such as knife, fork and spoon have all been ampliï¬ed. Technology oriented categories initially biased toward men such as keyboard and mouse have each increased their bias toward males by over 0.100.
# 6.3 Discussion
We confirmed our hypothesis that (a) both the imSitu and MS-COCO datasets, gathered from the web, are heavily gender biased and that (b) models trained to perform prediction on these datasets amplify the existing gender bias when evaluated on development data. Furthermore, across both datasets, we showed that the degree of bias amplification was related to the size of the initial bias, with highly biased object and verb categories exhibiting more bias amplification. Our results demonstrate that care needs to be taken in deploying such uncalibrated systems: otherwise they could not only reinforce existing social bias but actually make it worse.
# 7 Calibration Results
We test our methods for reducing bias amplification in two problem settings: visual semantic role labeling in the imSitu dataset (vSRL) and multilabel image classification in MS-COCO (MLC). In all settings we derive corpus constraints using the training set and then run our calibration method in batch on either the development or testing set. Our results are summarized in Table 2 and Figure 3.
# 7.1 Visual Semantic Role Labeling
Our quantitative results are summarized in the first two sections of Table 2. On the development set, the number of verbs whose bias exceeds the original bias by over 5% decreases by 30.5% (Viol.). Overall, we are able to significantly reduce bias amplification in vSRL by 52% on the development set (Amp. bias). We evaluate the underlying recognition performance using the standard measure in vSRL: top-1 semantic role accuracy, which tests how often the correct verb was predicted and the noun value was correctly assigned to a semantic role. Our calibration method results in a negligible decrease in performance (Perf.). In Figure 3(c) we can see that the overall distance to the training set distribution after applying RBA decreased significantly, over 39%.
Figure 3(e) demonstrates that across all initial training bias, RBA is able to reduce bias amplification. In general, RBA struggles to remove bias amplification in areas of low initial training bias,
[Figure 3 panels: (a) bias analysis on imSitu vSRL without RBA; (b) bias analysis on MS-COCO MLC without RBA; (c) bias analysis on imSitu vSRL with RBA; (d) bias analysis on MS-COCO MLC with RBA; (e) bias in vSRL with (blue) / without (red) RBA; (f) bias in MLC with (blue) / without (red) RBA. Axes: training gender ratio (x) versus predicted gender ratio (panels a-d) or bias amplification (panels e-f).]
Figure 3: Results of reducing bias amplification using RBA on imSitu vSRL and MS-COCO MLC. Figures 3(a)-(d) show initial training set bias along the x-axis and development set bias along the y-axis. Dotted blue lines indicate the 0.05 margin used in RBA, with points violating the margin shown in red while points meeting the margin are shown in green. Across both settings adding RBA significantly reduces the number of violations, and reduces the bias amplification significantly. Figures 3(e)-(f) demonstrate bias amplification as a function of training bias, with and without RBA. Across all initial training biases, RBA is able to reduce the bias amplification.
Method        Viol.   Amp. bias   Perf. (%)
vSRL: Development Set
CRF           154     0.050       24.07
CRF + RBA     107     0.024       23.97
vSRL: Test Set
CRF           149     0.042       24.14
CRF + RBA     102     0.025       24.01
MLC: Development Set
CRF            40     0.032       45.27
CRF + RBA      24     0.022       45.19
MLC: Test Set
CRF            38     0.040       45.40
CRF + RBA      16     0.021       45.38
Table 2: Number of violated constraints, mean amplified bias, and test performance before and after calibration using RBA. The test performances of vSRL and MLC are measured by top-1 semantic role accuracy and top-1 mean average precision, respectively.
likely because bias is encoded in image statistics and cannot be removed as effectively with an image-agnostic adjustment. Results on the test set support our development set results: we decrease bias amplification by 40.5% (Amp. bias).
# 7.2 Multilabel Classiï¬cation
Our quantitative results on MS-COCO RBA are summarized in the last two sections of Table 2. Similarly to vSRL, we are able to reduce the number of objects whose bias exceeds the original training bias by 5%, by 40% (Viol.). Bias amplification was reduced by 31.3% on the development set (Amp. bias). The underlying recognition system was evaluated by the standard measure: top-1 mean average precision, the precision averaged across object categories. Our calibration method results in a negligible loss in performance. In Figure 3(d), we demonstrate that we substantially reduce the distance between training bias and bias in the development set. Finally, in Figure 3(f) we demonstrate that we decrease bias amplification for all initial training bias settings. Results on the test set support our development results: we decrease bias amplification by 47.5% (Amp. bias).
# 7.3 Discussion
We have demonstrated that RBA can significantly reduce bias amplification. While we were not able to remove all amplification, we have made significant progress with little or no loss in underlying recognition performance. Across both problems, RBA was able to reduce bias amplification at all initial values of training bias.
# 8 Conclusion
Structured prediction models can leverage correlations that allow them to make correct predictions even with very little underlying evidence. Yet such models risk potentially leveraging social bias in their training data. In this paper, we presented a general framework for visualizing and quantifying biases in such models and proposed RBA to calibrate their predictions under two different settings. Taking gender bias as an example, our analysis demonstrates that conditional random fields can amplify social bias from data while our approach RBA can help to reduce the bias.
Our work is the first to demonstrate that structured prediction models amplify bias and the first to propose methods for reducing this effect, but significant avenues for future work remain. While RBA can be applied to any structured predictor, it is unclear whether different predictors amplify bias more or less. Furthermore, we presented only one method for measuring bias. More extensive analysis could explore the interaction among predictor, bias measurement, and bias de-amplification method. Future work also includes applying bias reducing methods in other structured domains, such as pronoun reference resolution (Mitkov, 2014).
Acknowledgement This work was supported in part by National Science Foundation Grant IIS- 1657193 and two NVIDIA Hardware Grants.
# References
Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Mar- garet Mitchell, Dhruv Batra, C Lawrence Zitnick, and Devi Parikh. 2015. Vqa: Visual question an- swering. In Proceedings of the IEEE International Conference on Computer Vision, pages 2425â2433.
Collin F Baker, Charles J Fillmore, and John B Lowe. 1998. The Berkeley framenet project. In Proceed- ings of the Annual Meeting of the Association for Computational Linguistics (ACL), pages 86â90.
Solon Barocas and Andrew D Selbst. 2014. Big dataâs disparate impact. Available at SSRN 2477899.
Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. 2016. Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. In The Conference on Advances in Neural Information Processing Systems (NIPS), pages 4349–4357.
Aylin Caliskan, Joanna J Bryson, and Arvind Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334):183–186.
Kai-Wei Chang, S. Sundararajan, and S. Sathiya Keerthi. 2013. Tractable semi-supervised learning of complex structured prediction models. In Proceedings of the European Conference on Machine Learning (ECML), pages 176–191.
Yin-Wen Chang and Michael Collins. 2011. Exact de- coding of phrase-based translation models through Lagrangian relaxation. In EMNLP, pages 26â37.
Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakr- ishna Vedantam, Saurabh Gupta, Piotr Doll´ar, and C Lawrence Zitnick. 2015. Microsoft coco captions: Data collection and evaluation server. arXiv preprint arXiv:1504.00325.
Constrained Semi- supervised Learning in the Presence of Unantici- pated Classes. Ph.D. thesis, Google Research.
Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel. 2012. Fairness through awareness. In Proceedings of the 3rd Innovations in Theoretical Computer Science Conference, pages 214–226. ACM.
Michael Feldman, Sorelle A Friedler, John Moeller, Carlos Scheidegger, and Suresh Venkatasubramanian. 2015. Certifying and removing disparate impact. In Proceedings of the International Conference on Knowledge Discovery and Data Mining (KDD), pages 259–268.
Jonathan Gordon and Benjamin Van Durme. 2013. Re- porting bias and knowledge extraction. Automated Knowledge Base Construction (AKBC).
Inc. Gurobi Optimization. 2016. Gurobi optimizer ref- erence manual.
Moritz Hardt, Eric Price, Nati Srebro, et al. 2016. Equality of opportunity in supervised learning. In Conference on Neural Information Processing Systems (NIPS), pages 3315–3323.
Matthew Kay, Cynthia Matuszek, and Sean A Munson. 2015. Unequal representation and gender stereo- types in image search results for occupations. In Human Factors in Computing Systems, pages 3819â 3828. ACM.
Bernhard Korte and Jens Vygen. 2008. Combinatorial Optimization: Theory and Application. Springer Verlag.
Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. 2014. Microsoft COCO: Common objects in context. In European Conference on Computer Vision, pages 740–755. Springer.
G. Miller, R. Beckwith, C. Fellbaum, D. Gross, and K.J. Miller. 1990. WordNet: An on-line lexical database. International Journal of Lexicography, 3(4):235–312.
Emiel van Miltenburg. 2016. Stereotyping and bias in the ï¬ickr30k dataset. MMC.
Ishan Misra, C Lawrence Zitnick, Margaret Mitchell, and Ross Girshick. 2016. Seeing through the human reporting bias: Visual classifiers from noisy human-centric labels. In Conference on Computer Vision and Pattern Recognition (CVPR), pages 2930–2939.
Ruslan Mitkov. 2014. Anaphora resolution. Rout- ledge.
Nanyun Peng, Ryan Cotterell, and Jason Eisner. 2015. Dual decomposition inference for graphical models over strings. In Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 917â927.
John Podesta, Penny Pritzker, Ernest J. Moniz, John Holdren, and Jeffrey Zients. 2014. Big data: Seizing opportunities and preserving values. Executive Office of the President.
Karen Ross and Cynthia Carter. 2011. Women and news: A long and winding road. Media, Culture & Society, 33(8):1148â1165.
Alexander M Rush and Michael Collins. 2012. A Tutorial on Dual Decomposition and Lagrangian Relaxation for Inference in Natural Language Processing. Journal of Artificial Intelligence Research, 45:305–362.
Karen Simonyan and Andrew Zisserman. 2014. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
David Sontag, Amir Globerson, and Tommi Jaakkola. 2011. Introduction to dual decomposition for infer- ence. Optimization for Machine Learning, 1:219â 254.
Latanya Sweeney. 2013. Discrimination in online ad delivery. Queue, 11(3):10.
Benjamin D Van Durme. 2010. Extracting implicit knowledge from text. Ph.D. thesis, University of Rochester.
Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2015. Show and tell: A neural im- age caption generator. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recog- nition, pages 3156â3164.
Mark Yatskar, Vicente Ordonez, Luke Zettlemoyer, and Ali Farhadi. 2017. Commonly uncommon: Seman- tic sparsity in situation recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Mark Yatskar, Luke Zettlemoyer, and Ali Farhadi. 2016. Situation recognition: Visual semantic role labeling for image understanding. In Proceedings of the IEEE Conference on Computer Vision and Pat- tern Recognition (CVPR), pages 5534â5542.
Indre Zliobaite. 2015. A survey on measuring indirect discrimination in machine learning. arXiv preprint arXiv:1511.00148. | {
"id": "1511.00148"
} |
1707.08819 | A Downsampled Variant of ImageNet as an Alternative to the CIFAR datasets | The original ImageNet dataset is a popular large-scale benchmark for training
Deep Neural Networks. Since the cost of performing experiments (e.g, algorithm
design, architecture search, and hyperparameter tuning) on the original dataset
might be prohibitive, we propose to consider a downsampled version of ImageNet.
In contrast to the CIFAR datasets and earlier downsampled versions of ImageNet,
our proposed ImageNet32$\times$32 (and its variants ImageNet64$\times$64 and
ImageNet16$\times$16) contains exactly the same number of classes and images as
ImageNet, with the only difference that the images are downsampled to
32$\times$32 pixels per image (64$\times$64 and 16$\times$16 pixels for the
variants, respectively). Experiments on these downsampled variants are
dramatically faster than on the original ImageNet and the characteristics of
the downsampled datasets with respect to optimal hyperparameters appear to
remain similar. The proposed datasets and scripts to reproduce our results are
available at http://image-net.org/download-images and
https://github.com/PatrykChrabaszcz/Imagenet32_Scripts | http://arxiv.org/pdf/1707.08819 | Patryk Chrabaszcz, Ilya Loshchilov, Frank Hutter | cs.CV, cs.LG | null | null | cs.CV | 20170727 | 20170823 |
# A DOWNSAMPLED VARIANT OF IMAGENET AS AN ALTERNATIVE TO THE CIFAR DATASETS
Patryk Chrabaszcz, Ilya Loshchilov & Frank Hutter University of Freiburg Freiburg, Germany, {chrabasp,ilya,fh}@cs.uni-freiburg.de
# ABSTRACT
The original ImageNet dataset is a popular large-scale benchmark for training Deep Neural Networks. Since the cost of performing experiments (e.g., algorithm design, architecture search, and hyperparameter tuning) on the original dataset might be prohibitive, we propose to consider a downsampled version of ImageNet. In contrast to the CIFAR datasets and earlier downsampled versions of ImageNet, our proposed ImageNet32x32 (and its variants ImageNet64x64 and ImageNet16x16) contains exactly the same number of classes and images as ImageNet, with the only difference that the images are downsampled to 32×32 pixels per image (64×64 and 16×16 pixels for the variants, respectively). Experiments on these downsampled variants are dramatically faster than on the original ImageNet and the characteristics of the downsampled datasets with respect to optimal hyperparameters appear to remain similar. The proposed datasets and scripts to reproduce our results are available at http://image-net.org/download-images and https://github.com/PatrykChrabaszcz/Imagenet32_Scripts
# 1 INTRODUCTION
Deep learning research has been substantially facilitated by the availability of realistic and acces- sible benchmark datasets, such as CIFAR-10 and CIFAR-100 (Krizhevsky and Hinton, 2009) (and MNIST (LeCun et al., 1998) in the 1990s). With the progress of machine learning, simple datasets lose some of their relevance, and more complex datasets/tasks become more important. While good results can be achieved on more complex datasets, such as ImageNet (Krizhevsky et al., 2012; Rus- sakovsky et al., 2015), this incurs a large computational burden, making it intractable to achieve state-of-the-art performance without massive compute resources (training a strong ImageNet model typically requires several GPU months).
Due to this computational expense of running experiments on the original ImageNet dataset we propose to explore cheaper alternatives that preserve the datasetâs complexity. In order to check the scalability of new methods, neural architectures and hyperparameters associated with them, one might be interested in a downscaled version of ImageNet which allows for cheaper experimentation. Moreover, a lower resolution of the images would make the classiï¬cation task much more difï¬cult and would thus postpone the saturation of benchmarking results currently observed on CIFAR-10, e.g., 3% error obtained by Gastaldi (2017) compared to roughly 6% obtained by a trained human (Karpathy, 2011).
To address this issue, we provide downsampled variants of the original ImageNet dataset and analyze results on them w.r.t. different hyperparameter settings and network sizes. We obtain surprisingly strong classification results on our downsampled variants and find qualitative results to be very similar across downsampling sizes. This suggests that these downsampled datasets are useful for facilitating cheap experimentation.
The basic contributions of this report are as follows:
• We make available downsampled versions of ImageNet (64×64, 32×32, and 16×16 pixels) to facilitate fast experimentation with different network architectures, training algorithms, and hyperparameters.

• We show that different downsampling techniques yield similar results, except for a nearest neighbor approach, which performed worse in all our experiments.

• Using Wide ResNets (Zagoruyko and Komodakis, 2016), we obtain surprisingly good performance, matching the baseline by the pioneering AlexNet (Krizhevsky et al., 2012) (18.2% top-5 error) while using ImageNet32x32 (whose images have roughly 50× fewer pixels per image than the original ones).

• We show that the range of optimal learning rates does not change much across ImageNet16x16, ImageNet32x32, and ImageNet64x64, as well as across different network widths. This could be exploited by multi-fidelity methods for architecture and hyperparameter search (Li et al., 2016; Klein et al., 2016).
# 2 DOWNSAMPLING IMAGENET
The original ImageNet dataset consists of images released as a part of the ILSVRC-2012 classification dataset (Krizhevsky et al., 2012; Russakovsky et al., 2015). Each image belongs to one of 1000 object classes, with the number of training images per class varying from 732 to 1300; there are 50 validation images per class. The size of the original images varies; therefore, a preprocessing step is usually applied to scale and crop images to the size of 224 × 224 pixels.
We are aware of two datasets that contain low resolution images derived from the ImageNet dataset:
⢠Downsampled ImageNet (Oord et al., 2016), like our datasets, contains all images in ImageNet, but since it was constructed for unsupervised learning, it does not provide the actual image labels and can therefore not be used for supervised learning.
⢠TinyImageNet (available at https://tiny-imagenet.herokuapp.com/) con- tains a subset of 200 classes with 500 images per class.
Mishkin et al. (2016) suggested to use 128x128 pixels ImageNet images to evaluate various deep learning techniques, but their dataset is not available.
We downsample / resize the original images to smaller images of 32x32 pixels to form ImageNet32x32, to images of 64x64 pixels to form ImageNet64x64 and to images of 16x16 pixels to form ImageNet16x16. In contrast to TinyImageNet, we do not reduce the number of classes and number of images. All images are shuffled and then divided into 10 different files so that each file is expected to have images from all classes. The validation data is stored in a separate file; both the training and validation data points are labeled (e.g., indexed starting from 1) according to the mapping file of the ImageNet devkit. Each file contains images, labels and the mean image computed over the whole training set. We keep the same format of files as the one that is commonly used for the CIFAR datasets. ImageNet16x16, ImageNet32x32 and ImageNet64x64 take 1 GB, 4 GB and 16 GB of disk space, respectively.
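Since the files follow the CIFAR-style layout just described (a pickled dict holding image data, 1-indexed labels, and the training-set mean), loading one training file might look like the sketch below; the dictionary key names are assumptions based on that description.

```python
import pickle
import numpy as np

def load_batch(path, img_size=32):
    """Load one ImageNet32x32 training file into (N, H, W, 3) uint8 images and 0-indexed labels."""
    with open(path, 'rb') as f:
        batch = pickle.load(f)                        # assumed keys: 'data', 'labels', 'mean'
    data = np.asarray(batch['data'], dtype=np.uint8)  # (N, 3*H*W), channel-planar like CIFAR
    images = data.reshape(-1, 3, img_size, img_size).transpose(0, 2, 3, 1)
    labels = np.asarray(batch['labels']) - 1          # labels are 1-indexed per the devkit mapping
    return images, labels
```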
We consider 6 different downsampling techniques available in the Pillow library1: lanczos, nearest, bilinear, bicubic, hamming, box (see Figure 1). In order to check the quality of the downsampled images we use them to train Wide Residual Networks (WRNs) by Zagoruyko and Komodakis (2016), expecting that better validation errors will tend to be achieved with downsampling techniques that lose less information.
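The resizing itself is a one-liner per image with Pillow; a small sketch covering the six filters compared here (note that, as in Figure 1, this call does not preserve the aspect ratio).

```python
from PIL import Image

FILTERS = {
    'lanczos': Image.LANCZOS,
    'nearest': Image.NEAREST,
    'bilinear': Image.BILINEAR,
    'bicubic': Image.BICUBIC,
    'hamming': Image.HAMMING,
    'box': Image.BOX,
}

def downsample(path, size=32, method='box'):
    """Resize an image to size x size pixels with the chosen resampling filter."""
    img = Image.open(path).convert('RGB')
    return img.resize((size, size), FILTERS[method])
```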
# 3 EXPERIMENTAL SETUP
We train Wide Residual Networks WRN-N-k by Zagoruyko and Komodakis (2016), where N is the number of layers and k is a multiplicative factor for the number of filters, with k = 1 corresponding to 16 filters in the first residual block; increasing k makes the network wider. We use Stochastic Gradient Descent with momentum factor 0.9, drop the learning rate by a factor of 5.0 every 10
# 1Pillow version 4.1 available at https://python-pillow.org
epochs, and train up to a total budget of 40 epochs. Throughout, we show validation error rates obtained after training for 31 epochs (right after the last drop of the learning rate).
Our experiments on ImageNet32x32 employ the original WRNs designed for the CIFAR datasets with 32 × 32 pixels per image. To adapt WRNs for images with 64 × 64 pixels per image as used in ImageNet64x64, we add an additional stack of residual blocks to reduce the spatial resolution of the last feature map from 16 × 16 to 8 × 8 and thus double the number of features. Analogously, for ImageNet16x16, we remove the last stack of residual blocks. For data augmentation, we flip images horizontally and concatenate them to the original images, effectively doubling the number of images per epoch. We also use random image shifts (up to 4 pixels horizontally and vertically).
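A sketch of this optimization and augmentation setup (SGD with momentum 0.9, learning rate divided by 5 every 10 epochs over a 40-epoch budget, horizontal flips, and shifts of up to 4 pixels); the model constructor and data pipeline are placeholders, not the authors' code.

```python
import torch
import torchvision.transforms as T

def optimizer_and_schedule(model, base_lr=0.01):
    opt = torch.optim.SGD(model.parameters(), lr=base_lr, momentum=0.9)
    # divide the learning rate by 5 every 10 epochs, for a 40-epoch budget
    sched = torch.optim.lr_scheduler.StepLR(opt, step_size=10, gamma=1.0 / 5.0)
    return opt, sched

# augmentation: horizontal flips plus random shifts of up to 4 pixels
train_transform = T.Compose([
    T.RandomHorizontalFlip(),
    T.RandomCrop(32, padding=4),
    T.ToTensor(),
])
```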
# 4 RESULTS
Does the downsampling technique matter? We evaluated the six downsampling techniques described in Section 2 using a small WRN-28-2 network and various initial learning rates LR ∈ {0.001, 0.0025, 0.005, 0.01, 0.025, 0.05}. The results in Figure 2 show that all downsampling techniques performed very similarly, except for the nearest neighbour technique which yielded the worst results for all learning rates. This observation is in line with Figure 1 and also holds for ImageNet16x16 and ImageNet64x64 (results not shown for brevity). For all remaining experiments in this paper, we used the box method.
Do conclusions drawn for cheap evaluations carry over to expensive ones? Next, we studied to what extent conclusions drawn for small networks and downsampled images carry over to larger networks and higher resolution images. This in turn determines the usefulness of these techniques for speeding up the experimental loop of architecture design and hyperparameter optimization. For this, we performed three experiments:
⢠We studied how the results scale with the network size, more speciï¬cally, network width, deï¬ned by k. Table 1 shows that larger k yielded better results independently of the downsampling size. Performance on our downsampled datasets was surprisingly strong; for example, on ImageNet32x32, using k = 10 achieved 40.96% Top-1 validation er- ror and 18.87% Top-5 validation error. Interestingly, this matches the original results by AlexNets (Krizhevsky et al., 2012) (40.7% and 18.2%, respectively) on full-sized ImageNet (which has roughly 50 times more pixels per image). Clearly, greater image resolution yielded better results (e.g., 12.64% top-5 performance for ImageNet64x64).
⢠We studied how optimal learning rates changed across different combinations of downsam- pling sizes and network widths. Figure 3 shows that the region of optimal learning rates remained similar across all our experimental setups, including networks whose space and time complexity differed by up to a factor of 100. Additionally, Figure 4 compares perfor- mance as a function of both learning rate and width multiplier k for downsampling sizes of 32x32 and 16x16, showing qualitatively very similar results in both cases, including the interaction effect that larger values of k favor somewhat larger learning rates than smaller
Figure 1: The original images (first column) and images obtained by 6 downsampling techniques (left to right): bicubic, bilinear, box, hamming, lanczos, nearest. Our resizing procedure changes the aspect ratio of images.
[Figure 2: six panels, one per initial learning rate (LR = 0.05, 0.025, 0.01, 0.005, 0.0025, 0.001) with k = 2; each plots Top-5 error (%) against epochs for the six downsampling techniques.]
Figure 2: The mean Top-5 errors obtained in 3 runs by WRN-28-2 on ImageNet32x32 for different learning rates (indicated at the top of each subfigure as LR) and downsampling algorithms.
k. This suggests that small networks and downsampled images may indeed facilitate faster experimentation.
⢠We also investigated the tradeoffs of performance vs. training time resulting from dif- ferent downsampling and network sizes. Figure 5 and Table 1 show that both mecha- nisms for reducing the computational cost should be considered simultaneously to achieve optimal anytime performance. An additional mechanism could be to perform warm restarts (Loshchilov and Hutter, 2017), which was shown to substantially improve any- time performance over reductions of the learning rate at regular intervals. Since the relative ranking of learning rates was consistent across different downsampling and network sizes, we also envision that architecture and hyperparameter search methods could exploit cheap proxies of computationally more expensive setups based on varying these degrees of free- dom. Possible methods for exploiting these include Li et al. (2016); Klein et al. (2016).
Dataset (model)               width k   # params   Top-1 error   Top-5 error   Time [days]
WRN-20-k on ImageNet16x16     1         0.12M      85.18%        66.12%        0.2
                              2         0.42M      77.00%        54.22%        0.4
                              5         2.3M       66.60%        41.59%        1.0
                              10        8.9M       59.94%        35.10%        2.7
WRN-28-k on ImageNet32x32     0.5       0.13M      79.83%        57.64%        0.5
                              1         0.44M      67.97%        42.49%        0.8
                              2         1.6M       56.92%        30.92%        1.5
                              5         9.5M       45.36%        21.36%        4.9
                              10        37.1M      40.96%        18.87%        13.8
WRN-36-k on ImageNet64x64     0.5       0.44M      62.35%        36.06%        2.1
                              1         1.6M       49.79%        24.17%        3.4
                              2         6.2M       39.55%        16.57%        6.4
                              5         37.6M      32.34%        12.64%        22
Table 1: The mean Top-1 and Top-5 test error rates obtained in 3 runs by WRNs measured right after the last drop of the learning rate, i.e., after epoch 31 (for bigger models training for more than one epoch after the last drop can lead to overfitting). All results are based on a learning rate of 0.01. The timing results are reported for training on a single Titan X GPU.
Figure 3: The mean Top-1 (Left) and Top-5 (Right) errors obtained in 3 runs by WRN-N-k after 31 epochs with different settings of the initial learning rates and different sizes of the downsampled images (ImageNet16x16, ImageNet32x32 and ImageNet64x64). The results for ImageNet64x64 are shown for different k but a single value of the initial learning rate LR=0.01 which seems reasonably good across different settings.
Figure 4: The mean Top-5 errors for 32x32 images (left) and 16x16 images (right), as a function of network width k and learning rate.
Figure 5: The mean Top-5 test error rates according to Table 1 for different models (with different numbers of parameters) vs. training time on a single Titan X GPU. The bottom figure replicates the top one, but also shows semi-transparent curves in the background to represent convergence curves.
Figure 6: Percentage of correct Top-1 (Left) and Top-5 (Right) predictions for different classes obtained by WRN-28-5 on ImageNet32x32. Classes are ordered by this value for better visualization.
# 5 DISCUSSION AND CONCLUSION
Our proposed downsampled versions of the original ImageNet dataset might represent a viable alternative to the CIFAR datasets while dealing with more complex data and classes. Quite surprisingly, even by greatly reducing the resolution of images to 32 × 32 pixels, one can predict image labels quite well (see also Figure 6 and Figure 7).
Classification of low resolution images might also be of interest when (i) data storage is important (the original ImageNet dataset is 145GB), (ii) the input images are corrupted by noise, or (iii) a small subpart of a high resolution image must be classified.
We hope that the provided datasets will fill the gap between the CIFAR datasets and the full ImageNet dataset, representing a good benchmark for experimental studies, such as algorithm design, neural network architecture search and hyperparameter optimization. Our preliminary experiments support the hypothesis that findings obtained on smaller networks for lower resolution images may transfer to larger networks for higher resolution images, while being up to 100 times cheaper to obtain. This could be exploited by multi-fidelity methods for architecture and hyperparameter search (Li et al., 2016; Klein et al., 2016).
# 6 ACKNOWLEDGEMENT
This work has partly been supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme under grant no. 716721 and by the German Research Foundation (DFG), under the BrainLinksBrainTools Cluster of Excellence (grant number EXC 1086). The authors acknowledge support by the High Performance and Cloud Computing Group at the Zentrum für Datenverarbeitung of the University of Tübingen, the state of Baden-Württemberg through bwHPC and the German Research Foundation (DFG) through grant no INST 37/935-1 FUGG.
BIBLIOGRAPHY
Xavier Gastaldi. Shake-shake regularization of 3-branch residual networks. In 5th International Conference on Learning Representations (ICLR 2017), 2017.
Andrej Karpathy. Lessons learned from manually classifying CIFAR-10, 2011. URL http://karpathy.github.io/2011/04/27/manually-classifying-cifar10/. Accessed: 2017-05-19.
Aaron Klein, Stefan Falkner, Simon Bartels, Philipp Hennig, and Frank Hutter. Fast bayesian optimization of machine learning hyperparameters on large datasets. arXiv preprint arXiv:1605.07079, 2016.
Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. 2009.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classiï¬cation with deep convolu- tional neural networks. In Advances in neural information processing systems, pages 1097â1105, 2012.
Yann LeCun, L´eon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278â2324, 1998.
Lisha Li, Kevin Jamieson, Giulia DeSalvo, Afshin Rostamizadeh, and Ameet Talwalkar. Hyperband: A novel bandit-based approach to hyperparameter optimization. arXiv preprint arXiv:1603.06560, 2016.
Ilya Loshchilov and Frank Hutter. SGDR: Stochastic Gradient Descent with Restarts. In Interna- tional Conference on Learning Representations (ICLR 2017), 2017.
Dmytro Mishkin, Nikolay Sergievskiy, and Jiri Matas. Systematic evaluation of CNN advances on the ImageNet. arXiv preprint arXiv:1606.02228, 2016.
Aaron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. arXiv preprint arXiv:1601.06759, 2016.
Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211â252, 2015. doi: 10.1007/s11263-015-0816-y.
Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. arXiv preprint arXiv:1605.07146, 2016.
Figure 7: Subset of Imagenet32x32 validation images from classes with 1st (trolleybus, 100% accuracy), 250th (sloth bear, 88% accuracy), 500th (bulbul, 80% accuracy), 750th (projectile, 70% accuracy) and 1000th (plastic bag, 26% accuracy) best Top-5 accuracy as given in Figure 6. Green image borders indicate correct Top-5 predictions. The results were obtained by WRN-28-5 on ImageNet32x32.
| {
"id": "1603.06560"
} |
1707.08817 | Leveraging Demonstrations for Deep Reinforcement Learning on Robotics Problems with Sparse Rewards | We propose a general and model-free approach for Reinforcement Learning (RL)
on real robotics with sparse rewards. We build upon the Deep Deterministic
Policy Gradient (DDPG) algorithm to use demonstrations. Both demonstrations and
actual interactions are used to fill a replay buffer and the sampling ratio
between demonstrations and transitions is automatically tuned via a prioritized
replay mechanism. Typically, carefully engineered shaping rewards are required
to enable the agents to efficiently explore on high dimensional control
problems such as robotics. They are also required for model-based acceleration
methods relying on local solvers such as iLQG (e.g. Guided Policy Search and
Normalized Advantage Function). The demonstrations replace the need for
carefully engineered rewards, and reduce the exploration problem encountered by
classical RL approaches in these domains. Demonstrations are collected by a
robot kinesthetically force-controlled by a human demonstrator. Results on four
simulated insertion tasks show that DDPG from demonstrations out-performs DDPG,
and does not require engineered rewards. Finally, we demonstrate the method on
a real robotics task consisting of inserting a clip (flexible object) into a
rigid object. | http://arxiv.org/pdf/1707.08817 | Mel Vecerik, Todd Hester, Jonathan Scholz, Fumin Wang, Olivier Pietquin, Bilal Piot, Nicolas Heess, Thomas Rothörl, Thomas Lampe, Martin Riedmiller | cs.AI | null | null | cs.AI | 20170727 | 20181008 | 8 1 0 2
t c O 8 ] I A . s c [
2 v 7 1 8 8 0 . 7 0 7 1 : v i X r a
# Leveraging Demonstrations for Deep Reinforcement Learning on Robotics Problems with Sparse Rewards
Mel Vecerik, Todd Hester, Jonathan Scholz, Fumin Wang Olivier Pietquin, Bilal Piot, Nicolas Heess Thomas Rothörl, Thomas Lampe, Martin Riedmiller Deepmind vec, toddhester, jscholz, awaw pietquin, piot, heess tcr, thomaslampe, riedmiller@google.com
Abstract: We propose a general and model-free approach for Reinforcement Learning (RL) on real robotics with sparse rewards. We build upon the Deep Deterministic Policy Gradient (DDPG) algorithm to use demonstrations. Both demonstrations and actual interactions are used to ï¬ll a replay buffer and the sam- pling ratio between demonstrations and transitions is automatically tuned via a prioritized replay mechanism. Typically, carefully engineered shaping rewards are required to enable the agents to efï¬ciently explore on high dimensional control problems such as robotics. They are also required for model-based acceleration methods relying on local solvers such as iLQG (e.g. Guided Policy Search and Nor- malized Advantage Function). The demonstrations replace the need for carefully engineered rewards, and reduce the exploration problem encountered by classical RL approaches in these domains. Demonstrations are collected by a robot kines- thetically force-controlled by a human demonstrator. Results on four simulated insertion tasks show that DDPG from demonstrations out-performs DDPG, and does not require engineered rewards. Finally, we demonstrate the method on a real robotics task consisting of inserting a clip (ï¬exible object) into a rigid object.
Keywords: Demonstrations, Robot, Learning, Apprenticeship
# Introduction
The latest generation of collaborative robots are designed to eliminate cumbersome path programming by allowing humans to kinesthetically guide a robot through a desired motion. This approach dramatically reduces the time and expertise required to get a robot to solve a novel task, but there is still a fundamental dependence on scripted trajectories. Consider the task of inserting a wire into a connector: it is difï¬cult to imagine any predeï¬ned motion which can handle variability in wire shape and stiffness. To solve these sorts of tasks, it is desirable to have a richer control policy which considers a large amount of feedback including states, forces, and even raw images. Reinforcement Learning (RL) offers, in principle, a method to learn such policies from exploration, but the amount of actual exploration required has prohibited its use in real applications. In this paper we address this challenge by combining the demonstration and RL paradigms into a single framework which uses kinesthetic demonstrations to guide a deep-RL algorithm. Our long-term vision is for it to be possible to provide a few minutes of demonstrations, and have the robot rapidly and safely learn a policy to solve arbitrary manipulation tasks.
The primary alternative to demonstrations for guiding RL agents in continuous control tasks is reward shaping. Shaping is typically achieved using a hand-coded function, such as Cartesian distance to a goal site, which provides a smoothly varying reward signal for every state the agent visits. While attractive in theory, reward shaping can lead to bizarre behavior or premature convergence to local minima, and in practice requires considerable engineering and experimentation to get right [9]. By contrast, it is often quite natural to express a task goal as a sparse reward function, e.g. +1 if the wire is inserted, and 0 otherwise. Our central contribution is to show that off-policy replay-memory-based RL (e.g. DDPG) is a natural vehicle for injecting demonstration data into sparse-reward tasks, and
that it obviates the need for reward-shaping. In contrast to on-policy RL algorithms, such as classical policy gradient, DDPG can accept and learn from arbitrary transition data. Furthermore, the replay memory allows the agent to maintain these transitions for long enough to propagate the sparse rewards throughout the value function.
We present results of simulation experiments on a set of robot insertion problems involving rigid and flexible objects. We then demonstrate the viability of our approach on a real robot task consisting of inserting a clip (flexible object) into a rigid object. This task is realized by a Sawyer robotic arm, using demonstrations collected by kinesthetically controlling an arm by the wrist. Our results suggest that sparse rewards and a few human demonstrations are a practical alternative to shaping for teaching robots to solve challenging continuous control tasks.
# 2 Background
This section provides mathematical background for Markov Decision Processes (MDPs), DDPG, and deep RL techniques such as prioritized replay and n-step returns. We adopt the standard Markov Decision Process (MDP) formalism for this work [15]. An MDP is defined by a tuple (S, A, R, P, γ), which consists of a set of states S, a set of actions A, a reward function R(s, a), a transition function P(s'|s, a), and a discount factor γ. In each state s ∈ S, the agent takes an action a ∈ A. Upon taking this action, the agent receives a reward R(s, a) and reaches a new state s', determined from the probability distribution P(s'|s, a). A deterministic and stationary policy π specifies for each state which action the agent will take. The goal of the agent is to find the policy π mapping states to actions that maximizes the expected discounted total reward over the agent's lifetime. This concept is formalized by the action-value function: Q^π(s, a) = E^π[∑_{t≥0} γ^t R(s_t, a_t)], where E^π is the expectation over the distribution of the admissible trajectories (s_0, a_0, s_1, a_1, ...) obtained by executing the policy π starting from s_0 = s and a_0 = a. Here, we are interested in continuous control problems, and take an actor-critic approach in which both components are represented using neural networks. These methods consist in maximizing a mean value J(θ) = E_{s∼ρ}[Q^{π(·|θ)}(s, π(s|θ))] with respect to the parameters θ that parameterise the policy, where ρ is an initial state distribution. To do so, a gradient approach is considered and the parameters θ are updated as follows: θ ← θ + α ∇_θ J(θ). Deep Deterministic Policy Gradient (DDPG) [7] is an actor-critic algorithm which directly uses the gradient of the Q-function w.r.t. the action to train the policy. DDPG maintains a parameterized policy network π(·|θ^π) (actor function) and a parameterized action-value function network Q(·|θ^Q) (critic function). It produces new transitions e = (s, a, r = R(s, a), s' ∼ P(·|s, a)) by acting according to a = π(s|θ^π) + N, where N is a random process allowing action exploration. Those transitions are added to a replay buffer B. To update the action-value network, a one-step off-policy evaluation is used and consists of minimizing the following loss:

L_1(θ^Q) = E_{(s,a,r,s') ∼ D} [ (R_1 − Q(s, a|θ^Q))^2 ],

where D is a distribution over transitions e = (s, a, r = R(s, a), s' ∼ P(·|s, a)) contained in a replay buffer and the one-step return R_1 is defined as R_1 = r + γ Q'(s', π'(s'|θ^{π'})|θ^{Q'}). Here Q'(·|θ^{Q'}) and π'(·|θ^{π'}) are the associated target networks of Q(·|θ^Q) and π(·|θ^π), which stabilize the learning (updated every N' steps to the values of their associated networks). To update the policy network a gradient step is taken with respect to:

∇_{θ^π} J(θ^π) ≈ E_{(s,a) ∼ D} [ ∇_a Q(s, a|θ^Q)|_{a = π(s|θ^π)} ∇_{θ^π} π(s|θ^π) ].
The off-policy nature of the algorithm allows the use of arbitrary data such as human demonstrations.
Our experiments made use of several general techniques from the deep RL literature which significantly improved the overall performance of DDPG on our test domains. As we discuss in Sec. 5, these improvements had a particularly large impact when combined with demonstration data.
# 3 DDPG from Demonstrations
Our algorithm modifies DDPG to take advantage of demonstrations. The demonstrations are of the form of RL transitions: (s, a, s', r). DDPGfD loads the demonstration transitions into the replay buffer before the training begins and keeps all transitions forever.
DDPGfD uses prioritized replay to enable efficient propagation of the reward information, which is essential in problems with sparse rewards. Prioritized experience replay [13] modifies the agent to sample more important transitions from its replay buffer more frequently. The probability of sampling a particular transition i is proportional to its priority, P(i) = p_i^α / Σ_k p_k^α, where p_i is the priority of the transition. DDPGfD uses p_i = δ_i^2 + λ_3 |∇_a Q(s_i, a_i|θ^Q)|^2 + ε + ε_D, where δ_i is the last TD error calculated for this transition, the second term represents the loss applied to the actor, ε is a small positive constant to ensure all transitions are sampled with some probability, ε_D is a positive constant for demonstration transitions to increase their probability of getting sampled, and λ_3 is used to weight the contributions. To account for the change in the distribution, updates to the network are weighted with importance sampling weights, w_i = (1/N · 1/P(i))^β. DDPGfD uses α = 0.3 and β = 1 as we want to learn about the correct distribution from the very beginning. In addition, the prioritized replay is used to prioritize samples between the demonstration and agent data, controlling the ratio of data between the two in a natural way.
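A sketch of this priority and importance-weight computation, following the symbols above; the replay-buffer interface and default constants are placeholders.

```python
import numpy as np

def priority(td_error, actor_grad_sq_norm, is_demo, lambda3=1.0, eps=1e-3, eps_d=1.0):
    """p_i = delta_i^2 + lambda3 * |grad_a Q(s_i, a_i)|^2 + eps (+ eps_D for demo transitions)."""
    p = td_error ** 2 + lambda3 * actor_grad_sq_norm + eps
    return p + eps_d if is_demo else p

def sample(priorities, batch_size, alpha=0.3, beta=1.0):
    """Sample indices with probability proportional to p_i^alpha and return
    importance-sampling weights (1/N * 1/P(i))^beta."""
    probs = priorities ** alpha
    probs = probs / probs.sum()
    idx = np.random.choice(len(priorities), size=batch_size, p=probs)
    weights = (1.0 / (len(priorities) * probs[idx])) ** beta
    return idx, weights
```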
A second modification for the sparse reward case is to use a mix of 1-step and n-step returns when updating the critic function. Incorporating n-step returns helps propagate the Q-values along the trajectories. The n-step return loss consists of using rollouts (forward view) of size n of a policy close to the current policy π(·|θ^π) in order to evaluate the action-value function Q(·|θ^Q). The idea is to minimize the difference between the action-value at state (s = s_0, π(s) = a_0) and the return of a rollout (s_i, a_i = π(s_i), s'_i ∼ P(·|s_i, a_i), r_i)_{i=0}^{n−1} of size n starting from (s, π(s)) and following π. The n-step return has the following form: R_n = Σ_{i=0}^{n−1} γ^i r_i + γ^n Q(s'_{n−1}, π(s'_{n−1})|θ^Q). The loss corresponding to this particular rollout is then: L_n(θ^Q) = ½ (R_n − Q(s, π(s)|θ^Q))^2.

A third modification is to do multiple learning updates per environment step. If a single learning update per environment step is used, each transition will only be sampled as many times as the size of the minibatch. Choosing a balance between gathering fresher data and doing more learning is in general a complicated trade-off. If our data is stale, the samples from the replay buffer no longer represent the distribution of states our current policy would experience. This can lead to wrong Q values in states which were not previously visited and potentially cause our policy and values to diverge. However in our case we require data efficiency and therefore we need to use each transition several times. In our experiments, we could increase the number of learning updates to 20 without affecting the per-update learning efficiency. In practice, we used the value of 40 which provided a good balance between learning from previous interaction (data efficiency) and stability.
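Returning to the n-step return above, a minimal sketch of computing R_n from a length-n rollout; the bootstrap value would come from the critic (or its target network).

```python
def n_step_return(rewards, bootstrap_q, gamma=0.99):
    """R_n = sum_{i=0}^{n-1} gamma^i * r_i + gamma^n * Q(s'_{n-1}, pi(s'_{n-1}))."""
    n = len(rewards)
    ret = sum((gamma ** i) * r for i, r in enumerate(rewards))
    return ret + (gamma ** n) * bootstrap_q
```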
Finally, L2 regularization on the parameters of the actor and the critic networks are added to stabilize the ï¬nal learning performance.
The ï¬nal loss can be written as:
L_Critic(θ^Q) = L_1(θ^Q) + λ_1 L_n(θ^Q) + λ_2 L^C_reg(θ^Q)

∇_{θ^π} L_Actor(θ^π) = −∇_{θ^π} J(θ^π) + λ_2 ∇_{θ^π} L^A_reg(θ^π)
To summarize, we modiï¬ed the original DDPG algorithm in the following ways:
• Transitions from a human demonstrator are added to the replay buffer.

• Prioritized replay is used for sampling transitions across both the demonstration and agent data.

• A mix of 1-step L_1(θ^Q) and n-step return L_n(θ^Q) losses is used.

• Learning multiple times per environment step.

• L2 regularization losses on the weights of the critic L^C_reg(θ^Q) and the actor L^A_reg(θ^π) are used.
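Putting the pieces together, a sketch of the combined critic and actor objectives (1-step TD, n-step return, and L2 weight regularization), with per-sample importance weights from the prioritized sampler; the module handles and weighting constants are placeholders, not the authors' implementation.

```python
import torch

def critic_loss(q, target_1step, target_nstep, is_weights, critic_params,
                lambda1=0.5, lambda2=1e-5):
    """L_Critic = L_1 + lambda1 * L_n + lambda2 * ||theta_Q||^2 (per-sample weighted TD errors)."""
    l1 = (is_weights * (target_1step - q) ** 2).mean()
    ln = (is_weights * (target_nstep - q) ** 2).mean()
    l2 = sum((p ** 2).sum() for p in critic_params)
    return l1 + lambda1 * ln + lambda2 * l2

def actor_loss(q_at_policy_action, actor_params, lambda2=1e-5):
    """Maximize Q(s, pi(s)) while L2-regularizing the actor weights."""
    l2 = sum((p ** 2).sum() for p in actor_params)
    return -q_at_policy_action.mean() + lambda2 * l2
```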
# 4 Experimental setup
Our approach is designed for problems in which it is easy to specify a goal state, but difficult to specify a smooth distance function for reward shaping that does not lead to sub-optimal behavior. One example of this is insertion tasks in which the goal state for the plug is at the bottom of a socket, but the only path to reach it, and therefore the focus of exploration, is at the socket opening. While this may sound like a minor distinction, we found in our initial experiments that DDPG with a simple
goal-distance reward would quickly find a path to a local minimum on the outside of the socket, and fail to ever explore around the opening.
We therefore sought to design a set of insertion tasks that presented a range of exploration difï¬culties. Our tasks are illustrated in Fig. 1. The ï¬rst (Fig. 1(a)) is a classic peg-in-hole task, in which both bodies are rigid, and the plug is free to rotate along the insertion axis. The second (Fig. 1(b)) models a drive-insertion problem into an ATX-style computer chassis. Both bodies are again rigid, but in this case the drive orientation is relevant. The third task (Fig. 1(c)) models the problem of inserting a two-pronged deformable plastic clip into a housing. The clip is modeled as three separate bodies with hinge joints at the base of each prong. These joints are spring-loaded, and the resting state pinches inwards as is common with physical connectors to maintain pressure on the housing. The ï¬nal task (Fig. 1(d)) is a simpliï¬ed cable insertion task in which the plug is modeled as a 20-link chain of capsules coupled by ball-joints. This cable is highly under-actuated, but otherwise shares the same task speciï¬cation as the peg-in-hole task.
(a) Peg Insertion Task.
(b) Hard-drive Task.
(c) Clip Insertion Task.

(d) Cable Insertion Task.

Figure 1: This figure shows the four different insertion tasks.
We created two reward functions for our experiments. The first is a sparse reward function which returned +10 if the plug was within a small tolerance of the goal site(s) on the socket:
r = \begin{cases} 0, & \sum_{i \in \text{sites}} \lVert W_g (g_i - x_i) \rVert > \epsilon \\ 10, & \sum_{i \in \text{sites}} \lVert W_g (g_i - x_i) \rVert \le \epsilon \end{cases}
where x_i is the position of the i-th tip site on the plug, g_i is the i-th goal site on the socket, W_g contains weighting coefficients for the goal site error vector, and ε is a proximity threshold. If this tolerance was reached, the robot received the reward signal and the episode was immediately terminated.
The second reward function is a shaped reward which composes terms for two movement phases: a reaching phase c_o to align the plug to the socket opening, and an inserting phase c_g to reach the socket goal. Both terms compute a weighted ℓ2-distance between the plug tip(s) and their respective goal site(s). The distance from the goal to the opening site (i.e. the maximum value of c_g) is added to c_o during the reaching phase, such that the reward monotonically increases throughout an insertion:
c_g = \min\Big( \sum_{i \in \text{sites}} \lVert W_g (g_i - x_i) \rVert, \; \sum_{i \in \text{sites}} \lVert W_g (g_i - o_i) \rVert \Big)

c_o = \mathbb{I}\Big( \sum_{i \in \text{sites}} \lVert W_g (g_i - x_i) \rVert > \sum_{i \in \text{sites}} \lVert W_g (g_i - o_i) \rVert \Big) \sum_{i \in \text{sites}} \lVert W_o (o_i - x_i) \rVert

r = \min\big(1, \max(0, -\alpha \log(\beta (c_g + c_o)))\big) - 1
where g_i is the i-th goal site, o_i is the i-th opening site, W_g and W_o are weighting coefficients for the goal and opening site errors, respectively, I is the indicator function, and α and β are scaling parameters for log-transforming these distances into rewards ranging from 0 to 1. Note that tuning the weighting of each dimension in W_g and W_o must be done very carefully for the agent to learn the real desired task. In addition, the shaping of both stages must be balanced out in a delicate manner.
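As a concrete illustration, a sketch of the two reward functions under one plausible reading of the (partially reconstructed) formulas above, assuming tip, goal, and opening site positions are available as arrays; the weighting matrices and constants are placeholders.

```python
import numpy as np

def sparse_reward(tips, goals, W_g, eps=0.01):
    """+10 once the summed weighted tip-to-goal distance falls below the threshold, else 0."""
    dist = sum(np.linalg.norm(W_g @ (g - x)) for x, g in zip(tips, goals))
    return 10.0 if dist <= eps else 0.0

def shaped_reward(tips, goals, openings, W_g, W_o, alpha=1.0, beta=1.0):
    """Two-phase shaped reward: reach the opening first, then insert toward the goal."""
    d_goal = sum(np.linalg.norm(W_g @ (g - x)) for x, g in zip(tips, goals))
    d_goal_open = sum(np.linalg.norm(W_g @ (g - o)) for g, o in zip(goals, openings))
    d_open = sum(np.linalg.norm(W_o @ (o - x)) for x, o in zip(tips, openings))
    c_g = min(d_goal, d_goal_open)                  # inserting term, capped at the opening distance
    c_o = d_open if d_goal > d_goal_open else 0.0   # reaching term, active before the opening is reached
    return min(1.0, max(0.0, -alpha * np.log(beta * (c_g + c_o)))) - 1.0
```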
All tasks utilized a single vertically mounted robot arm. The robot was a Sawyer 7-DOF torque- controlled arm from Rethink Robotics, instrumented with a cuff for kinesthetic teaching. We utilized the Mujoco simulator [19] to simulate the Sawyer using publicly available kinematics and mesh ï¬les. In the simulation experiments the actions were joint veloc- ities, the rewards were sparse or shaped as described above, and the observations included joint position and velocity, joint-torque feedback, and the global pose of the socket and plug. In both the simulation and real world experiments the object being inserted was rigidly attached to the gripper, and the socket was ï¬xed to a table top.
In addition to the four simulation tasks, we also con- structed a real world clip insertion problem using a physical Sawyer robot. In the real robot experiment the clip was rigidly mounted to the robot gripper us- ing a 3D printed attachment. The socket position was provided to the robot, and rewards were computed by evaluating the distance from the clip prongs (avail- able via the robotâs kinematics) to the goal sites in the socket as described above. In real robot experiments the observations included the robot joint position and velocity, gravity-compensated torque feedback from the joints, and the relative pose of the plug tip sites in the socket opening site frames.
Figure 2: Real-robot experiment setup for deformable-clip insertion task. The clip is made of deformable nylon, and is rigidly attached to the robot gripper.
# 4.1 Demonstration data collection
To collect the demonstration data in simulated tasks, we used a Sawyer robotic arm. The arm was kinesthetically force controlled by a human demonstrator. In simulation an agent was running a hard-coded joint space P-controller to match the joint positions of the simulated Sawyer robot to the joint positions of the real one. This agent was using the same action space as the DDPGfD agent which allowed the demonstration transitions to be added directly to the agentâs replay buffer.
For providing demonstration for the real world tasks we used the same setup, this time controlling a second robotic arm. Separating the arm we were controlling and the arm which solved the task ensured that the demonstrator did not affect the dynamics of the environment from the agentâs perspective. For each experiment, we collected 100 episodes of human demonstrations which were on average about 25 steps (â 5s) long. This involved a total of 10-15 minutes of robot interaction time per task.
Figure 3 panels: Peg Insertion, Harddrive Insertion, Clip Insertion, Cable Insertion; y-axis: reward; x-axis: environment interaction time (min); legend: Demonstrations, Sparse reward pure DDPG, Sparse reward with demonstrations, Supervised, Shaping reward pure DDPG, Shaping reward with demonstrations.
Figure 3: Learning curves show the means and 10th and 90th percentiles of rewards for the four approaches on each of the four tasks, with statistics computed over 64 trials. We measure reward against environment interaction time. Each episode was at most 5s long and the agent control rate was about 6Hz. The plots also show the mean and percentiles of the rewards received in each set of human demonstrations and of a supervised imitator which predicts demonstration actions and is trained with an ℓ2 loss. The results show that DDPGfD out-performs DDPG, even when DDPG is given hand-tuned shaping rewards, and that DDPGfD exhibits more robust training behaviour.
# 5 Results
In our first experiment we compared our approach to DDPG on sparse and shaped variants of the four simulated robotic tasks presented in Sec. 4. In addition, we show rewards for the demonstrations themselves as well as supervised imitation of the demonstrations. The DDPG implementation utilized all of the optimizations we incorporated into DDPGfD, including prioritized replay, n-step returns, and ¢-2 regularization. For each task we evaluated the agent with both the shaped and sparse versions of the reward, with results shown in Figure 3. All traces plot the shaped-reward value achieved, regardless of which reward was given to the agent. All of these experiments were performed with fixed hyper-parameters, tuned in advance.
We can see that in the case where we have hand-tuned shaping rewards all algorithms can solve the task. The results show that DDPGfD always out-performs DDPG, even when DDPG is given a well-tuned shaping reward. In contrast, DDPGfD learns nearly as well with sparse rewards as with shaping rewards. DDPGfD even out-performs DDPG on the hard drive insertion task, where the demonstrations are relatively poor. In general, DDPGfD not only learns to solve the task, but learns to solve it more efï¬ciently than the demonstrations, usually learning to insert the object in 2-4x fewer steps than the demonstrations. DDPGfD also learns more reliably, as the percentile plots are much wider for DDPG. Doing purely supervised learning of the demonstration policy performs poorly in every task.
In our second experiment we examined the effect of varying the quantity of demonstration data on agent performance. Fig. 4(a) compares learning curves for DDPGfD agents initialized with 1, 2, 3, 5, 10, and 100 expert trajectories on the sparse-reward clip-insertion task. DDPGfD is capable of solving this task with only a single demonstration, and we see diminishing returns with 50-100
(a) Number of demonstration trajectories. (b) Real robot experiment (legend: sparse reward with demonstrations vs. shaped reward without demonstrations).
Figure 4: (a) Learning curves for DDPGfD on the clip insertion task with varying amounts of demonstration data. DDPGfD can learn to solve the sparse-reward task given only a single trajectory from a human demonstrator. (b) Performance from 2 runs on a real robot. DDPGfD learns faster than DDPG and without the engineered reward function.
demonstrations. This was surprising, since each demonstration contains only one state transition with non-zero reward.
Finally, we show results of DDPGfD learning the clip insertion task on physical Sawyer robot in Figure 4(b). DDPGfD was able to learn a robust insertion policy on the real robot. DDPGfD with sparse rewards outperforms shaped DDPG, showing that DDPGfD achieves faster learning without the extra engineering.
A video demonstrating the performance can be viewed here: https://www.youtube.com/watch? v=WGJwLfeVN9w
# 6 Related work
Imitation learning is primarily concerned with matching expert demonstrations. Our work combines imitation learning with learning from task rewards, so that the agent is able to improve upon the demonstrations it has seen. Imitation learning can be cast into a supervised learning problem (like classiï¬cation) [10, 11]. One popular imitation learning algorithm is DAGGER [12] which iteratively produces new policies based on polling the expert policy outside its original state space. This leads to no-regret over validation data in the online learning sense. DAGGER requires the expert to be available during training to provide additional feedback to the agent.
Imitation can also be achieved through inverse optimal control or inverse RL. The main principle is to learn a cost or a reward function under which the demonstration data is optimal. For instance, in [16, 17] the inverse RL problem is cast into a two-player zero-sum game where one player chooses policies and the other chooses reward functions. However, this approach does not scale to continuous state-action spaces and requires knowledge of the dynamics. To address continuous state spaces and unknown dynamics, [5] solve inverse RL by combining classification and regression, yet their method is restricted to discrete action spaces. Demonstrations have also been used for inverse optimal control in high-dimensional, continuous robotic control problems [1]. However, these approaches only do imitation learning and do not allow for learning from task rewards.
Guided Cost Learning (GCL) [1] and Generative Adversarial Imitation Learning (GAIL) [4] are the ï¬rst efï¬cient imitation learning algorithms to learn from high-dimensional inputs without knowledge of the dynamics and hand-crafted features. They have a very similar algorithmic structure which consists of matching the distribution of the expert trajectories. To do so, they simultaneously learn the reward and the policy that imitates the expert demonstrations. At each step, sampled trajectories of the current policy and the expert policy are used to produce a reward function. Then, this reward is (partially) optimized to produce an updated policy and so on. In GAIL, the reward is obtained from a network trained to discriminate between expert trajectories and (partial) trajectories sampled from a generator (the policy), which is itself trained by TRPO[14]. In GCL, the reward is obtained by
minimization of the Maximum Entropy IRL cost [20], and one could use any RL algorithm (DDPG, TRPO, etc.) to optimize this reward.
Control in continuous state-action domains typically uses smooth shaped rewards that are designed to be amenable to classical analysis yielding closed-form solutions. Such requirements might be difï¬cult to meet in real world applications. For instance, iterative Linear Quadratic Gaussian (iLQG) [18] is a method for nonlinear stochastic systems where the dynamics is known and the reward has to be quadratic (and thus entails hand-crafted task designs). It uses iterative linearization of the dynamics around the current trajectory in order to obtain a noisy linear system (where the noise is a centered Gaussian) and where the reward constraints are quadratic. Then the algorithm uses the Ricatti family of equations to obtain locally linear optimal trajectories that improve on the current trajectory.
Guided Policy Search [6] aims at ï¬nding an optimal policy by decomposing the problem into three steps. First, it uses nominal or expert trajectories, obtained by previous interactions with the environment to learn locally linear approximations of its dynamics. Then, it uses optimal control algorithms such as iLQG or DDP to ï¬nd the locally linear optimal policies corresponding to these dynamics. Finally, via supervised learning, a neural network is trained to ï¬t the trajectories generated by these policies. Here again, there is a quadratic constraint on the reward that must be purposely shaped.
Normalized Advantage Functions (NAF) [2] with model-based acceleration is a model-free RL algorithm using imagination rollouts coming from a model learned with the previous interactions with the environment or via expert demonstrations. NAF is the natural extension of Q-Learning in the continuous case where the advantage function is parameterized as a quadratic function of non-linear state features. The uni-modal nature of this function allows the maximizing action for the Q-function to be obtained directly as the mean policy. This formulation makes the greedy step of Q-Learning tractable for continuous action domains. Then, similarly as GPS, locally linear approximations of the dynamics of the environment are learned and iLQG is used to produce model-guided rollouts to accelerate learning.
The most similar work to ours is DQfD [3], which combines Deep Q Networks (DQN) [8] with learning from demonstrations in a similar way to DDPGfD. It additionally adds a supervised loss to keep the agent close to the policy from the demonstrations. However DQfD is restricted to domains with discrete action spaces and is not applicable to robotics.
# 7 Conclusion
In this paper we presented DDPGfD, an off-policy RL algorithm which uses demonstration trajectories to quickly bootstrap performance on challenging motor tasks speciï¬ed by sparse rewards. DDPGfD utilizes a prioritized replay mechanism to prioritize samples across both demonstration and self- generated agent data. In addition, it incorporates n-step returns to better propagate the sparse rewards across the entire trajectory.
Most work on RL in high-dimensional continuous control problems relies on well-tuned shaping rewards both for communicating the goal to the agent as well as easing the exploration problem. While many of these tasks can be deï¬ned by a terminal goal state fairly easily, tuning a proper shaping reward that does not lead to degenerate solutions is very difï¬cult. This task only becomes more difï¬cult when you move to multi-stage tasks such as insertion. In this work, we replaced these difï¬cult to tune shaping reward functions with demonstrations of the task from a human demonstrator. This eases the exploration problem without requiring careful tuning of shaping rewards.
In our experiments we sought to determine whether demonstrations were a viable alternative to shaping rewards for training object insertion tasks. Insertion is an important subclass of object manipulation, with extensive applications in manufacturing. In addition, it is a challenging set of domains for shaping rewards, as it requires two stages: one for reaching the insertion point, and one for inserting the object. Our results suggest that Deep-RL is poised to have a large impact on real robot applications by extending the learning-from-demonstration paradigm to include richer, force-sensitive policies.
# References
[1] C. Finn, S. Levine, and P. Abbeel. Guided cost learning: Deep inverse optimal control via policy optimization. In Proc. of ICML, 2016.
[2] S. Gu, T. Lillicrap, I. Sutskever, and S. Levine. Continuous deep q-learning with model-based acceleration. In Proc. of ICML, 2016.
[3] T. Hester, M. Vecerik, O. Pietquin, M. Lanctot, T. Schaul, B. Piot, A. Sendonaris, G. Dulac-Arnold, I. Osband, J. Agapiou, et al. Learning from demonstrations for real world reinforcement learning. arXiv preprint arXiv:1704.03732, 2017.
[4] J. Ho and S. Ermon. Generative adversarial imitation learning. In Proc. of NIPS, 2016.
[5] E. Klein, B. Piot, M. Geist, and O. Pietquin. A cascaded supervised learning approach to inverse reinforcement learning. In Proc. of ECML, 2013.
[6] S. Levine and V. Koltun. Guided policy search. In Proc. of ICML, pages 1â9, 2013.
[7] T. Lillicrap, J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra. Continuous control with deep reinforcement learning. In Proc. of ICLR, 2016.
[8] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg, and D. Hassabis. Human-level control through deep reinforcement learning. Nature, 518(7540):529â533, 2015.
[9] A. Y. Ng, D. Harada, and S. Russell. Policy invariance under reward transformations: Theory and application to reward shaping. In Proc. of ICML, volume 99, pages 278â287, 1999.
[10] D. A. Pomerleau. ALVINN: An autonomous land vehicle in a neural network. In Proc. of NIPS, 1989.
[11] N. Ratliff, J. A. Bagnell, and S. S. Srinivasa. Imitation learning for locomotion and manipulation. In 2007 7th IEEE-RAS International Conference on Humanoid Robots, 2007.
[12] S. Ross, G. J. Gordon, and J. A. Bagnell. A reduction of imitation learning and structured prediction to no-regret online learning. In Proc. of AISTATS, 2011.
[13] T. Schaul, J. Quan, I. Antonoglou, and D. Silver. Prioritized experience replay. In Proc. of ICLR, volume abs/1511.05952, 2016.
[14] J. Schulman, S. Levine, P. Moritz, M. I. Jordan, and P. Abbeel. Trust region policy optimization. In Proc. of ICML, 2015.
[15] R. S. Sutton and A. G. Barto. Introduction to reinforcement learning. MIT Press, 1998.
[16] U. Syed and R. E. Schapire. A game-theoretic approach to apprenticeship learning. In Proc. of NIPS, 2007.
[17] U. Syed, M. Bowling, and R. E. Schapire. Apprenticeship learning using linear programming. In Proc. of ICML, 2008.
[18] E. Todorov and W. Li. A generalized iterative lqg method for locally-optimal feedback control of constrained nonlinear stochastic systems. In American Control Conference, 2005. Proceedings of the 2005, pages 300â306. IEEE, 2005.
[19] E. Todorov, T. Erez, and Y. Tassa. Mujoco: A physics engine for model-based control. In Proc. of IROS, pages 5026â5033, 2012.
[20] B. D. Ziebart, A. L. Maas, J. A. Bagnell, and A. K. Dey. Maximum entropy inverse reinforcement learning. In Proc. of AAAI, pages 1433â1438, 2008.
# A Real robot safety
To be able to run DDPG on the real robot we needed to ensure that the agent will not apply excessive force. To do this we created an intermediate impedance controller which subjects the agent's commands to safety constraints before relaying them to the robot. It modifies the target velocity set by the agent according to the externally applied forces.
$$u_{\text{control}} = u_{\text{agent}} \, k_a + f_{\text{applied}} \, k_f$$
where u_agent is the agent's control signal, f_applied are externally applied forces such as the clip pushing against the housing, and k_a and k_f are constants chosen for the correct sensitivity. We further limit the velocity control signal u_control to bound the maximal speed increase while still allowing the agent to stop quickly. This increases the control stability of the system.

This allowed us to keep the agent's control frequency, u_agent, at 5Hz while still having a physically safe system, as f_applied and u_control were updated at 1kHz.
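A minimal sketch of this safety layer is given below. The gains k_a and k_f, the velocity limit v_max, and the function structure are illustrative assumptions, not the actual controller code.

```python
import numpy as np

def safe_velocity_command(u_agent, f_applied, k_a, k_f, v_max):
    """Blend the agent's joint-velocity command with externally applied forces
    and clip the result, as in the impedance-style safety layer above."""
    u_control = u_agent * k_a + f_applied * k_f
    # Limiting the commanded speed keeps the arm physically safe while still
    # allowing it to stop quickly.
    return np.clip(u_control, -v_max, v_max)
```

In the setup described above, this inner loop would run at roughly 1 kHz while the agent only updates u_agent at about 5 Hz.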
# Algorithm 1: DDPG from Demonstrations
Input: Env, the environment; θ^π, initial policy parameters; θ^π′, initial policy target parameters; θ^Q, initial action-value parameters; θ^Q′, initial action-value target parameters; N′, target network replacement frequency; ε, action noise; B, replay buffer initialized with demonstrations; k, number of pre-training gradient updates.
Output: Q(·|θ^Q), the action-value function (critic), and π(·|θ^π), the policy (actor).

/* Learning via interaction with the environment */
1  for episode e ∈ {1, ..., M} do
2      Initialise state s_0 ~ Env
3      for steps t ∈ {1, ..., EpisodeLength} do
4          Sample noise n_t = N(0, ε)
5          Select an action a_t = π(s_{t−1} | θ^π) + n_t
6          Get next state and reward s_t, r_t = T(s_{t−1}, a_t), R(s_t)
7          Add the single-step transition (s_{t−1}, a_t, r_t, γ, s_t) to the replay buffer
8          Add the n-step transition (s_{t−n}, a_{t−n+1}, Σ_{i=0}^{n−1} γ^i r_{t−n+1+i}, γ^n, s_t) to the replay buffer
9      end
10     for steps l ∈ {1, ..., EpisodeLength × LearningSteps} do
11         Sample a minibatch with prioritization from B and calculate L_1(θ^Q) and L_n(θ^Q) as appropriate for each transition
12         Update the critic using a gradient step on the loss L_critic(θ^Q)
13         Update the actor along ∇_{θ^π} L_actor(θ^π)
14         if step ≡ 0 (mod N′) then
15             Update the target networks: θ^π′ ← θ^π and θ^Q′ ← θ^Q
16         end
17     end
18 end
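The bookkeeping behind lines 7-8 of the algorithm (storing both a 1-step and an n-step transition) can be sketched as follows; the buffer representation and the helper function are illustrative assumptions, not the authors' implementation.

```python
from collections import deque

def append_transitions(replay_buffer, recent, s_prev, a, r, s_next, gamma, n):
    """Add the usual single-step transition and, once n steps of history are
    available, an n-step transition carrying the discounted n-step return."""
    replay_buffer.append((s_prev, a, r, gamma, s_next))            # 1-step
    recent.append((s_prev, a, r))
    if len(recent) == n:
        s0, a0, _ = recent[0]
        n_step_return = sum((gamma ** i) * ri for i, (_, _, ri) in enumerate(recent))
        replay_buffer.append((s0, a0, n_step_return, gamma ** n, s_next))  # n-step
        recent.popleft()

# Example usage: replay_buffer = [], recent = deque(), called once per step.
```

Because the buffer is initialized with demonstration transitions, both kinds of samples are drawn with prioritization during the learning steps.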
1707.07012 | Learning Transferable Architectures for Scalable Image Recognition | Developing neural network image classification models often requires
significant architecture engineering. In this paper, we study a method to learn
the model architectures directly on the dataset of interest. As this approach
is expensive when the dataset is large, we propose to search for an
architectural building block on a small dataset and then transfer the block to
a larger dataset. The key contribution of this work is the design of a new
search space (the "NASNet search space") which enables transferability. In our
experiments, we search for the best convolutional layer (or "cell") on the
CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking
together more copies of this cell, each with their own parameters to design a
convolutional architecture, named "NASNet architecture". We also introduce a
new regularization technique called ScheduledDropPath that significantly
improves generalization in the NASNet models. On CIFAR-10 itself, NASNet
achieves 2.4% error rate, which is state-of-the-art. On ImageNet, NASNet
achieves, among the published works, state-of-the-art accuracy of 82.7% top-1
and 96.2% top-5 on ImageNet. Our model is 1.2% better in top-1 accuracy than
the best human-invented architectures while having 9 billion fewer FLOPS - a
reduction of 28% in computational demand from the previous state-of-the-art
model. When evaluated at different levels of computational cost, accuracies of
NASNets exceed those of the state-of-the-art human-designed models. For
instance, a small version of NASNet also achieves 74% top-1 accuracy, which is
3.1% better than equivalently-sized, state-of-the-art models for mobile
platforms. Finally, the learned features by NASNet used with the Faster-RCNN
framework surpass state-of-the-art by 4.0% achieving 43.1% mAP on the COCO
dataset. | http://arxiv.org/pdf/1707.07012 | Barret Zoph, Vijay Vasudevan, Jonathon Shlens, Quoc V. Le | cs.CV, cs.LG, stat.ML | null | null | cs.CV | 20170721 | 20180411

arXiv:1707.07012v4 [cs.CV] 11 Apr 2018
# Learning Transferable Architectures for Scalable Image Recognition
# Barret Zoph Google Brain barretzoph@google.com
# Vijay Vasudevan Google Brain vrv@google.com
# Jonathon Shlens Google Brain shlens@google.com
# Quoc V. Le Google Brain qvl@google.com
# Abstract
Developing neural network image classiï¬cation models often requires signiï¬cant architecture engineering. In this paper, we study a method to learn the model architectures directly on the dataset of interest. As this approach is ex- pensive when the dataset is large, we propose to search for an architectural building block on a small dataset and then transfer the block to a larger dataset. The key contribu- tion of this work is the design of a new search space (which we call the âNASNet search spaceâ) which enables trans- ferability. In our experiments, we search for the best con- volutional layer (or âcellâ) on the CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking to- gether more copies of this cell, each with their own parame- ters to design a convolutional architecture, which we name a âNASNet architectureâ. We also introduce a new regu- larization technique called ScheduledDropPath that signif- icantly improves generalization in the NASNet models. On CIFAR-10 itself, a NASNet found by our method achieves 2.4% error rate, which is state-of-the-art. Although the cell is not searched for directly on ImageNet, a NASNet con- structed from the best cell achieves, among the published works, state-of-the-art accuracy of 82.7% top-1 and 96.2% top-5 on ImageNet. Our model is 1.2% better in top-1 accu- racy than the best human-invented architectures while hav- ing 9 billion fewer FLOPS â a reduction of 28% in compu- tational demand from the previous state-of-the-art model. When evaluated at different levels of computational cost, accuracies of NASNets exceed those of the state-of-the-art human-designed models. For instance, a small version of NASNet also achieves 74% top-1 accuracy, which is 3.1% better than equivalently-sized, state-of-the-art models for mobile platforms. Finally, the image features learned from image classiï¬cation are generically useful and can be trans- ferred to other computer vision problems. On the task of ob- ject detection, the learned features by NASNet used with the Faster-RCNN framework surpass state-of-the-art by 4.0% achieving 43.1% mAP on the COCO dataset.
# 1. Introduction

Developing neural network image classification models often requires significant architecture engineering. Starting from the seminal work of [32] on using convolutional architectures [17, 34] for ImageNet [11] classification, successive advancements through architecture engineering have achieved impressive results [53, 59, 20, 60, 58, 68].
In this paper, we study a new paradigm of designing con- volutional architectures and describe a scalable method to optimize convolutional architectures on a dataset of inter- est, for instance the ImageNet classiï¬cation dataset. Our approach is inspired by the recently proposed Neural Ar- chitecture Search (NAS) framework [71], which uses a re- inforcement learning search method to optimize architec- ture conï¬gurations. Applying NAS, or any other search methods, directly to a large dataset, such as the ImageNet dataset, is however computationally expensive. We there- fore propose to search for a good architecture on a proxy dataset, for example the smaller CIFAR-10 dataset, and then transfer the learned architecture to ImageNet. We achieve this transferrability by designing a search space (which we call âthe NASNet search spaceâ) so that the complexity of the architecture is independent of the depth of the network and the size of input images. More concretely, all convolu- tional networks in our search space are composed of convo- lutional layers (or âcellsâ) with identical structure but dif- ferent weights. Searching for the best convolutional archi- tectures is therefore reduced to searching for the best cell structure. Searching for the best cell structure has two main beneï¬ts: it is much faster than searching for an entire net- work architecture and the cell itself is more likely to gener- alize to other problems. In our experiments, this approach signiï¬cantly accelerates the search for the best architectures using CIFAR-10 by a factor of 7à and learns architectures that successfully transfer to ImageNet.
Our main result is that the best architecture found on CIFAR-10, called NASNet, achieves state-of-the-art ac- curacy when transferred to ImageNet classiï¬cation with- out much modiï¬cation. On ImageNet, NASNet achieves, among the published works, state-of-the-art accuracy of 82.7% top-1 and 96.2% top-5. This result amounts to a
1.2% improvement in top-1 accuracy than the best human- invented architectures while having 9 billion fewer FLOPS. On CIFAR-10 itself, NASNet achieves 2.4% error rate, which is also state-of-the-art.
Additionally, by simply varying the number of the con- volutional cells and number of ï¬lters in the convolutional cells, we can create different versions of NASNets with dif- ferent computational demands. Thanks to this property of the cells, we can generate a family of models that achieve accuracies superior to all human-invented models at equiv- alent or smaller computational budgets [60, 29]. Notably, the smallest version of NASNet achieves 74.0% top-1 ac- curacy on ImageNet, which is 3.1% better than previously engineered architectures targeted towards mobile and em- bedded vision tasks [24, 70].
Finally, we show that the image features learned by NASNets are generically useful and transfer to other com- puter vision problems. In our experiments, the features learned by NASNets from ImageNet classiï¬cation can be combined with the Faster-RCNN framework [47] to achieve state-of-the-art on COCO object detection task for both the largest as well as mobile-optimized models. Our largest NASNet model achieves 43.1% mAP, which is 4% better than previous state-of-the-art.
# 2. Related Work
The proposed method is related to previous work in hy- perparameter optimization [44, 4, 5, 54, 55, 6, 40] â es- pecially recent approaches in designing architectures such as Neural Fabrics [48], DiffRNN [41], MetaQNN [3] and DeepArchitect [43]. A more ï¬exible class of methods for designing architecture is evolutionary algorithms [65, 16, 57, 30, 46, 42, 67], yet they have not had as much success at large scale. Xie and Yuille [67] also transferred learned architectures from CIFAR-10 to ImageNet but performance of these models (top-1 accuracy 72.1%) are notably below previous state-of-the-art (Table 2).
The concept of having one neural network interact with a second neural network to aid the learning process, or learn- ing to learn or meta-learning [23, 49] has attracted much attention in recent years [1, 62, 14, 19, 35, 45, 15]. Most of these approaches have not been scaled to large problems like ImageNet. An exception is the recent work focused on learning an optimizer for ImageNet classiï¬cation that achieved notable improvements [64].
The design of our search space took much inspira- tion from LSTMs [22], and Neural Architecture Search Cell [71]. The modular structure of the convolutional cell is also related to previous methods on ImageNet such as VGG [53], Inception [59, 60, 58], ResNet/ResNext [20, 68], and Xception/MobileNet [9, 24].
# 3. Method
Our work makes use of search methods to ï¬nd good con- volutional architectures on a dataset of interest. The main search method we use in this work is the Neural Architec- ture Search (NAS) framework proposed by [71]. In NAS, a controller recurrent neural network (RNN) samples child networks with different architectures. The child networks are trained to convergence to obtain some accuracy on a held-out validation set. The resulting accuracies are used to update the controller so that the controller will generate better architectures over time. The controller weights are updated with policy gradient (see Figure 1).
Figure 1. Overview of Neural Architecture Search [71]. A con- troller RNN predicts architecture A from a search space with prob- ability p. A child network with architecture A is trained to con- vergence achieving accuracy R. Scale the gradients of p by R to update the RNN controller.
The main contribution of this work is the design of a novel search space, such that the best architecture found on the CIFAR-10 dataset would scale to larger, higher- resolution image datasets across a range of computational settings. We name this search space the NASNet search space as it gives rise to NASNet, the best architecture found in our experiments. One inspiration for the NASNet search space is the realization that architecture engineering with CNNs often identiï¬es repeated motifs consisting of com- binations of convolutional ï¬lter banks, nonlinearities and a prudent selection of connections to achieve state-of-the-art results (such as the repeated modules present in the Incep- tion and ResNet models [59, 20, 60, 58]). These observa- tions suggest that it may be possible for the controller RNN to predict a generic convolutional cell expressed in terms of these motifs. This cell can then be stacked in series to han- dle inputs of arbitrary spatial dimensions and ï¬lter depth.
In our approach, the overall architectures of the convo- lutional nets are manually predetermined. They are com- posed of convolutional cells repeated many times where each convolutional cell has the same architecture, but dif- ferent weights. To easily build scalable architectures for images of any size, we need two types of convolutional cells to serve two main functions when taking in a feature map
Figure 2. Scalable architectures for image classification consist of two repeated motifs termed Normal Cell and Reduction Cell. This diagram highlights the model architecture for CIFAR-10 and ImageNet. The number of times the Normal Cell is stacked between Reduction Cells, N, can vary in our experiments.
as input: (1) convolutional cells that return a feature map of the same dimension, and (2) convolutional cells that return a feature map where the feature map height and width is re- duced by a factor of two. We name the ï¬rst type and second type of convolutional cells Normal Cell and Reduction Cell respectively. For the Reduction Cell, we make the initial operation applied to the cellâs inputs have a stride of two to reduce the height and width. All of our operations that we consider for building our convolutional cells have an option of striding.
Figure 2 shows our placement of Normal and Reduction Cells for CIFAR-10 and ImageNet. Note on ImageNet we have more Reduction Cells, since the incoming image size is 299x299 compared to 32x32 for CIFAR. The Reduction and Normal Cell could have the same architecture, but we empirically found it beneï¬cial to learn two separate archi- tectures. We use a common heuristic to double the number of ï¬lters in the output whenever the spatial activation size is reduced in order to maintain roughly constant hidden state dimension [32, 53]. Importantly, much like Inception and ResNet models [59, 20, 60, 58], we consider the number of motif repetitions N and the number of initial convolutional ï¬lters as free parameters that we tailor to the scale of an image classiï¬cation problem.
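The macro-structure described above (stacks of N Normal Cells separated by Reduction Cells, with the filter count doubled at every reduction) can be sketched as a simple plan builder. The function, the default number of stacks, and the tuple representation are illustrative assumptions; the cells themselves are placeholders for the searched architectures.

```python
def backbone_plan(initial_filters, N, num_stacks=3):
    """Return a list of (cell_type, filters, stride) tuples describing the
    stacking of Normal and Reduction Cells with filter doubling."""
    plan, filters = [], initial_filters
    for stack in range(num_stacks):
        plan.extend([("normal_cell", filters, 1)] * N)    # keep spatial size
        if stack < num_stacks - 1:
            filters *= 2                                   # double filters ...
            plan.append(("reduction_cell", filters, 2))    # ... when striding
    return plan

print(backbone_plan(initial_filters=32, N=4))
```

Choosing N and the initial filter count is exactly how the models of different computational cost in the experiments are produced.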
What varies in the convolutional nets is the structures of
the Normal and Reduction Cells, which are searched by the controller RNN. The structures of the cells can be searched within a search space deï¬ned as follows (see Appendix, Figure 7 for schematic). In our search space, each cell re- ceives as input two initial hidden states hi and hiâ1 which are the outputs of two cells in previous two lower layers or the input image. The controller RNN recursively pre- dicts the rest of the structure of the convolutional cell, given these two initial hidden states (Figure 3). The predictions of the controller for each cell are grouped into B blocks, where each block has 5 prediction steps made by 5 distinct softmax classiï¬ers corresponding to discrete choices of the elements of a block:
Step 1. Select a hidden state from hi, hiâ1 or from the set of hidden states created in previous blocks.
Step 2. Select a second hidden state from the same options as in Step 1.
Step 3. Select an operation to apply to the hidden state selected in Step 1.
Step 4. Select an operation to apply to the hidden state selected in Step 2.
Step 5. Select a method to combine the outputs of Step 3 and 4 to create a new hidden state.
The algorithm appends the newly-created hidden state to the set of existing hidden states as a potential input in sub- sequent blocks. The controller RNN repeats the above 5 prediction steps B times corresponding to the B blocks in a convolutional cell. In our experiments, selecting B = 5 provides good results, although we have not exhaustively searched this space due to computational limitations.
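A toy version of these five prediction steps is sketched below. Decisions are drawn uniformly at random for brevity (the controller RNN instead emits them through softmax classifiers), and the operation list is a reduced subset used purely for illustration.

```python
import random

OPS = ["identity", "3x3 conv", "3x3 avg pool", "3x3 max pool",
       "5x5 separable conv", "7x7 separable conv"]   # illustrative subset
COMBINERS = ["add", "concat"]

def sample_block(num_hidden_states):
    """Sample the five discrete decisions that define one block."""
    h_a = random.randrange(num_hidden_states)   # Step 1: first input state
    h_b = random.randrange(num_hidden_states)   # Step 2: second input state
    op_a = random.choice(OPS)                   # Step 3: op for first input
    op_b = random.choice(OPS)                   # Step 4: op for second input
    combiner = random.choice(COMBINERS)         # Step 5: how to combine
    return (h_a, op_a, h_b, op_b, combiner)

def sample_cell(B=5):
    """Sample B blocks; each new block's output becomes selectable later."""
    num_states, blocks = 2, []                  # h_i and h_{i-1} to start
    for _ in range(B):
        blocks.append(sample_block(num_states))
        num_states += 1
    return blocks
```

Sampling uniformly in this way is essentially the random-search baseline discussed in Section 4.4.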
In steps 3 and 4, the controller RNN selects an operation to apply to the hidden states. We collected the following set of operations based on their prevalence in the CNN litera- ture:
• identity
• 1x3 then 3x1 convolution
• 1x7 then 7x1 convolution
• 3x3 dilated convolution
• 3x3 average pooling
• 3x3 max pooling
• 5x5 max pooling
• 7x7 max pooling
• 1x1 convolution
• 3x3 convolution
• 3x3 depthwise-separable conv
• 5x5 depthwise-separable conv
• 7x7 depthwise-separable conv
In step 5 the controller RNN selects a method to combine the two hidden states, either (1) element-wise addition be- tween two hidden states or (2) concatenation between two hidden states along the ï¬lter dimension. Finally, all of the unused hidden states generated in the convolutional cell are concatenated together in depth to provide the ï¬nal cell out- put.
To allow the controller RNN to predict both Normal Cell and Reduction Cell, we simply make the controller have 2 à 5B predictions in total, where the ï¬rst 5B predictions are for the Normal Cell and the second 5B predictions are for the Reduction Cell.
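To illustrate how the final cell output is assembled, the sketch below tracks which hidden states each block consumes and concatenates the names of the unused ones. Operations are kept symbolic, so this only mirrors the bookkeeping rather than real tensor computation; the example block list is hypothetical.

```python
def cell_output(blocks, inputs=("h_i", "h_i-1")):
    """Given block decisions (index_a, op_a, index_b, op_b, combiner), return a
    symbolic expression for the cell output: the depth-concatenation of all
    hidden states that were never used as an input to any block."""
    states, used = list(inputs), set()
    for ia, op_a, ib, op_b, comb in blocks:
        used.update([ia, ib])
        states.append(f"{comb}({op_a}[{states[ia]}], {op_b}[{states[ib]}])")
    unused = [s for i, s in enumerate(states) if i not in used]
    return "concat(" + ", ".join(unused) + ")"

example = [(0, "sep3x3", 1, "identity", "add"), (2, "avg3x3", 0, "sep5x5", "add")]
print(cell_output(example))
```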
Figure 3. Controller model architecture for recursively constructing one block of a convolutional cell. Each block requires selecting 5 discrete parameters, each of which corresponds to the output of a softmax layer. Example constructed block shown on right. A convolu- tional cell contains B blocks, hence the controller contains 5B softmax layers for predicting the architecture of a convolutional cell. In our experiments, the number of blocks B is 5.
Finally, our work makes use of the reinforcement learn- ing proposal in NAS [71]; however, it is also possible to use random search to search for architectures in the NAS- Net search space. In random search, instead of sampling the decisions from the softmax classiï¬ers in the controller RNN, we can sample the decisions from the uniform distri- bution. In our experiments, we ï¬nd that random search is slightly worse than reinforcement learning on the CIFAR- 10 dataset. Although there is value in using reinforcement learning, the gap is smaller than what is found in the original work of [71]. This result suggests that 1) the NASNet search space is well-constructed such that random search can per- form reasonably well and 2) random search is a difï¬cult baseline to beat. We will compare reinforcement learning against random search in Section 4.4.
# 4. Experiments and Results
In this section, we describe our experiments with the method described above to learn convolutional cells. In summary, all architecture searches are performed using the CIFAR-10 classiï¬cation task [31]. The controller RNN was trained using Proximal Policy Optimization (PPO) [51] by employing a global workqueue system for generating a pool of child networks controlled by the RNN. In our experi- ments, the pool of workers in the workqueue consisted of 500 GPUs.
The result of this search process over 4 days yields several candidate convolutional cells. We note that this search procedure is almost 7× faster than previous approaches [71] that took 28 days.1 Additionally, we demonstrate below that the resulting architecture is superior in accuracy.

Figure 4 shows a diagram of the top performing Normal Cell and Reduction Cell. Note the prevalence of separable convolutions and the number of branches compared with competing architectures [53, 59, 20, 60, 58]. Subsequent experiments focus on this convolutional cell architecture, although we examine the efficacy of other, top-ranked convolutional cells in ImageNet experiments (described in Appendix B) and report their results as well. We call the three networks constructed from the best three searches NASNet-A, NASNet-B and NASNet-C.

We demonstrate the utility of the convolutional cells by employing this learned architecture on CIFAR-10 and a family of ImageNet classification tasks. The latter family of tasks is explored across a few orders of magnitude in computational budget. After having learned the convolutional cells, several hyper-parameters may be explored to build a final network for a given task: (1) the number of cell repeats N and (2) the number of filters in the initial convolutional cell. After selecting the number of initial filters, we use a common heuristic to double the number of filters whenever the stride is 2. Finally, we define a simple notation, e.g., 4 @ 64, to indicate these two parameters in all networks, where 4 and 64 indicate the number of cell repeats and the number of filters in the penultimate layer of the network, respectively.
1In particular, we note that previous architecture search [71] used 800 GPUs for 28 days resulting in 22,400 GPU-hours. The method in this pa- per uses 500 GPUs across 4 days resulting in 2,000 GPU-hours. The for- mer effort used Nvidia K40 GPUs, whereas the current efforts used faster NVidia P100s. Discounting the fact that the we use faster hardware, we estimate that the current procedure is roughly about 7à more efï¬cient.
For complete details of the architecture learning algorithm and the controller system, please refer to Appendix A. Importantly, when training NASNets, we discovered ScheduledDropPath, a modified version of DropPath [33], to be an effective regularization method for NASNet. In DropPath [33], each path in the cell is stochastically dropped with some fixed probability during training. In our modified version, ScheduledDropPath, each path in the cell is dropped out with a probability that is linearly increased over the course of training. We find that DropPath does not work well for NASNets, while ScheduledDropPath significantly improves the final performance of NASNets in both CIFAR and ImageNet experiments.
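A sketch of ScheduledDropPath on a batch of path activations is given below. The linear ramp follows the description above; the per-example Bernoulli mask and the rescaling by the keep probability are standard DropPath conventions assumed here rather than details confirmed by the paper.

```python
import numpy as np

def scheduled_drop_path(x, final_drop_prob, progress, rng, training=True):
    """Drop an entire cell path with probability final_drop_prob * progress,
    where progress in [0, 1] is the fraction of training completed, and
    rescale the surviving examples."""
    drop_prob = final_drop_prob * progress
    if not training or drop_prob == 0.0:
        return x
    keep_prob = 1.0 - drop_prob
    # One Bernoulli draw per example, broadcast over the remaining dimensions.
    mask = rng.binomial(1, keep_prob, size=(x.shape[0],) + (1,) * (x.ndim - 1))
    return x * mask / keep_prob

# Example: rng = np.random.default_rng(0); scheduled_drop_path(acts, 0.3, 0.5, rng)
```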
Figure 4. Architecture of the best convolutional cells (NASNet-A) with B = 5 blocks identified with CIFAR-10. The input (white) is the hidden state from previous activations (or input image). The output (pink) is the result of a concatenation operation across all resulting branches. Each convolutional cell is the result of B blocks. A single block corresponds to two primitive operations (yellow) and a combination operation (green). Note that colors correspond to operations in Figure 3.
# 4.1. Results on CIFAR-10 Image Classiï¬cation
For the task of image classiï¬cation with CIFAR-10, we set N = 4 or 6 (Figure 2). The test accuracies of the best architectures are reported in Table 1 along with other state-of-the-art models. As can be seen from the Table, a large NASNet-A model with cutout data augmentation [12] achieves a state-of-the-art error rate of 2.40% (averaged across 5 runs), which is slightly better than the previous best record of 2.56% by [12]. The best single run from our model achieves 2.19% error rate.
# 4.2. Results on ImageNet Image Classiï¬cation
We performed several sets of experiments on ImageNet with the best convolutional cells learned from CIFAR-10. We emphasize that we merely transfer the architectures from CIFAR-10 but train all ImageNet model weights from scratch.
Results are summarized in Tables 2 and 3 and Figure 5. In the first set of experiments, we train several image classification systems operating on 299x299 or 331x331 resolution images with different experiments scaled in computational demand to create models that are roughly on par in computational cost with Inception-v2 [29], Inception-v3 [60] and PolyNet [69]. We show that this family of models achieve state-of-the-art performance with fewer floating point operations and parameters than comparable architectures. Second, we demonstrate that by adjusting the scale of the model we can achieve state-of-the-art performance at smaller computational budgets, exceeding streamlined CNNs hand-designed for this operating regime [24, 70].
Note we do not have residual connections between con- volutional cells as the models learn skip connections on their own. We empirically found manually inserting resid- ual connections between cells to not help performance. Our training setup on ImageNet is similar to [60], but please see Appendix A for details. Table 2 shows that
the convolutional cells discov- ered with CIFAR-10 generalize well to ImageNet prob- lems. In particular, each model based on the convolu- tional cells exceeds the predictive performance of the cor- responding hand-designed model. Importantly, the largest model achieves a new state-of-the-art performance for Ima- geNet (82.7%) based on single, non-ensembled predictions, surpassing previous best published result by â¼1.2% [8]. Among the unpublished works, our model is on par with the best reported result of 82.7% [25], while having signif- icantly fewer ï¬oating point operations. Figure 5 shows a complete summary of our results in comparison with other published results. Note the family of models based on con- volutional cells provides an envelope over a broad class of human-invented architectures.
Finally, we test how well the best convolutional cells may perform in a resource-constrained setting, e.g., mobile devices (Table 3). In these settings, the number of ï¬oat- ing point operations is severely constrained and predictive performance must be weighed against latency requirements on a device with limited computational resources. Mo- bileNet [24] and Shufï¬eNet [70] provide state-of-the-art re- sults obtaining 70.6% and 70.9% accuracy, respectively on
| model | depth | # params | error rate (%) |
|---|---|---|---|
| DenseNet (L = 40, k = 12) [26] | 40 | 1.0M | 5.24 |
| DenseNet (L = 100, k = 12) [26] | 100 | 7.0M | 4.10 |
| DenseNet (L = 100, k = 24) [26] | 100 | 27.2M | 3.74 |
| DenseNet-BC (L = 190, k = 40) [26] | 190 | 25.6M | 3.46 |
| Shake-Shake 26 2x32d [18] | 26 | 2.9M | 3.55 |
| Shake-Shake 26 2x96d [18] | 26 | 26.2M | 2.86 |
| Shake-Shake 26 2x96d + cutout [12] | 26 | 26.2M | 2.56 |
| NAS v3 [71] | 39 | 7.1M | 4.47 |
| NAS v3 [71] | 39 | 37.4M | 3.65 |
| NASNet-A (6 @ 768) | - | 3.3M | 3.41 |
| NASNet-A (6 @ 768) + cutout | - | 3.3M | 2.65 |
| NASNet-A (7 @ 2304) | - | 27.6M | 2.97 |
| NASNet-A (7 @ 2304) + cutout | - | 27.6M | 2.40 |
| NASNet-B (4 @ 1152) | - | 2.6M | 3.73 |
| NASNet-C (4 @ 640) | - | 3.1M | 3.59 |
Table 1. Performance of Neural Architecture Search and other state-of-the-art models on CIFAR-10. All results for NASNet are the mean accuracy across 5 runs.
Figure 5. Accuracy versus computational demand (left) and number of parameters (right) across top performing published CNN architec- tures on ImageNet 2012 ILSVRC challenge prediction task. Computational demand is measured in the number of ï¬oating-point multiply- add operations to process a single image. Black circles indicate previously published results and red squares highlight our proposed models.
224x224 images using ~550M multiply-add operations. An architecture constructed from the best convolutional cells achieves superior predictive performance (74.0% accuracy), surpassing previous models but with comparable computational demand. In summary, we find that the learned convolutional cells are flexible across model scales, achieving state-of-the-art performance across almost 2 orders of magnitude in computational budget.
# 4.3. Improved features for object detection
Image classiï¬cation networks provide generic image fea- tures that may be transferred to other computer vision prob-
lems [13]. One of the most important problems is the spa- tial localization of objects within an image. To further validate the performance of the family of NASNet-A net- works, we test whether object detection systems derived from NASNet-A lead to improvements in object detection [28].
To address this question, we plug in the family of NASNet-A networks pretrained on ImageNet into the Faster-RCNN object detection pipeline [47] using an open- source software platform [28]. We retrain the resulting ob- ject detection pipeline on the combined COCO training plus validation dataset excluding 8,000 mini-validation images.
| Model | image size | # parameters | Mult-Adds | Top 1 Acc. (%) | Top 5 Acc. (%) |
|---|---|---|---|---|---|
| Inception V2 [29] | 224×224 | 11.2 M | 1.94 B | 74.8 | 92.2 |
| NASNet-A (5 @ 1538) | 299×299 | 10.9 M | 2.35 B | 78.6 | 94.2 |
| Inception V3 [60] | 299×299 | 23.8 M | 5.72 B | 78.8 | 94.4 |
| Xception [9] | 299×299 | 22.8 M | 8.38 B | 79.0 | 94.5 |
| Inception ResNet V2 [58] | 299×299 | 55.8 M | 13.2 B | 80.1 | 95.1 |
| NASNet-A (7 @ 1920) | 299×299 | 22.6 M | 4.93 B | 80.8 | 95.3 |
| ResNeXt-101 (64 x 4d) [68] | 320×320 | 83.6 M | 31.5 B | 80.9 | 95.6 |
| PolyNet [69] | 331×331 | 92 M | 34.7 B | 81.3 | 95.8 |
| DPN-131 [8] | 320×320 | 79.5 M | 32.0 B | 81.5 | 95.8 |
| SENet [25] | 320×320 | 145.8 M | 42.3 B | 82.7 | 96.2 |
| NASNet-A (6 @ 4032) | 331×331 | 88.9 M | 23.8 B | 82.7 | 96.2 |
Table 2. Performance of architecture search and other published state-of-the-art models on ImageNet classiï¬cation. Mult-Adds indicate the number of composite multiply-accumulate operations for a single image. Note that the composite multiple-accumulate operations are calculated for the image size reported in the table. Model size for [25] calculated from open-source implementation.
| Model | # parameters | Mult-Adds | Top 1 Acc. (%) | Top 5 Acc. (%) |
|---|---|---|---|---|
| Inception V1 [59] | 6.6 M | 1,448 M | 69.8 † | 89.9 |
| MobileNet-224 [24] | 4.2 M | 569 M | 70.6 | 89.5 |
| ShuffleNet (2x) [70] | ~5 M | 524 M | 70.9 | 89.8 |
| NASNet-A (4 @ 1056) | 5.3 M | 564 M | 74.0 | 91.6 |
| NASNet-B (4 @ 1536) | 5.3 M | 488 M | 72.8 | 91.3 |
| NASNet-C (3 @ 960) | 4.9 M | 558 M | 72.5 | 91.0 |
Table 3. Performance on ImageNet classification on a subset of models operating in a constrained computational setting, i.e., < 1.5 B multiply-accumulate operations per image. All models use 224x224 images. † indicates top-1 accuracy not reported in [59] but from an open-source implementation.
| Model | resolution | mAP (mini-val) | mAP (test-dev) |
|---|---|---|---|
| MobileNet-224 [24] | 600 × 600 | 19.8% | - |
| ShuffleNet (2x) [70] | 600 × 600 | 24.5% | - |
| NASNet-A (4 @ 1056) | 600 × 600 | 29.6% | - |
| ResNet-101-FPN [36] | 800 (short side) | - | 36.2% |
| Inception-ResNet-v2 (G-RMI) [28] | 600 × 600 | 35.7% | 35.6% |
| Inception-ResNet-v2 (TDM) [52] | 600 × 1000 | 37.3% | 36.8% |
| NASNet-A (6 @ 4032) | 800 × 800 | 41.3% | 40.7% |
| NASNet-A (6 @ 4032) | 1200 × 1200 | 43.2% | 43.1% |
| ResNet-101-FPN (RetinaNet) [37] | 800 (short side) | - | 39.1% |
Table 4. Object detection performance on COCO on mini-val and test-dev datasets across a variety of image featurizations. All results are with the Faster-RCNN object detection framework [47] from a single crop of an image. Top rows highlight mobile-optimized image featurizations, while bottom rows indicate computationally heavy image featurizations geared towards achieving best results. All mini-val results employ the same 8K subset of validation images in [28].
We perform single model evaluation using 300-500 RPN proposals per image. In other words, we only pass a sin- gle image through a single network. We evaluate the model on the COCO mini-val [28] and test-dev dataset and report the mean average precision (mAP) as computed with the standard COCO metric library [38]. We perform a simple search over learning rate schedules to identify the best pos- sible model. Finally, we examine the behavior of two object
detection systems employing the best performing NASNet- A image featurization (NASNet-A, 6 @ 4032) as well as the image featurization geared towards mobile platforms (NASNet-A, 4 @ 1056).
For the mobile-optimized network, our resulting system achieves a mAP of 29.6%, exceeding previous mobile-optimized networks that employ Faster-RCNN by over 5.0% (Table 4). For the best NASNet network, our resulting
network operating on images of the same spatial resolution (800 × 800) achieves mAP = 40.7%, exceeding equivalent object detection systems based on lesser-performing image featurization (i.e. Inception-ResNet-v2) by 4.0% [28, 52] (see Appendix for example detections on images and side-by-side comparisons). Finally, increasing the spatial resolution of the input image results in the best reported, single model result for object detection of 43.1%, surpassing the previous best by over 4.0% [37].2 These results provide further evidence that NASNet provides superior, generic image features that may be transferred across other computer vision tasks. Figure 10 and Figure 11 in Appendix C show four examples of object detection results produced by NASNet-A with the Faster-RCNN framework.
# 4.4. Efï¬ciency of architecture search methods
Figure 6 plot: accuracy at 20 epochs versus number of models sampled; legend: RL top-1, top-5, and top-25 unique models, and RS top-1, top-5, and top-25 unique models.
Figure 6. Comparing the efï¬ciency of random search (RS) to re- inforcement learning (RL) for learning neural architectures. The x-axis measures the total number of model architectures sampled, and the y-axis is the validation performance on CIFAR-10 after 20 epochs of training.
Though what search method to use is not the focus of the paper, an open question is how effective the reinforcement learning search method is. In this section, we study the effectiveness of reinforcement learning for architecture search on the CIFAR-10 image classification problem and compare it to brute-force random search (considered to be a very strong baseline for black-box optimization [5]) given an equivalent amount of computational resources.
Figure 6 shows the performance of reinforcement learning (RL) and random search (RS) as more model architectures are sampled. Note that the best model identified with RL is significantly better than the best model found by RS by over 1% as measured on CIFAR-10. Additionally, RL finds an entire range of models that are of superior quality to random search. We observe this in the mean performance of the top-5 and top-25 models identified in RL versus RS. We take these results to indicate that although RS may provide a viable search strategy, RL finds better architectures in the NASNet search space.

2A primary advance in the best reported object detection system is the introduction of a novel loss [37]. Pairing this loss with NASNet-A image featurization may lead to even further performance gains. Additionally, performance gains are achievable through ensembling multiple inferences across multiple model instances and image crops (e.g., [28]).
# 5. Conclusion
In this work, we demonstrate how to learn scalable, con- volutional cells from data that transfer to multiple image classiï¬cation tasks. The learned architecture is quite ï¬ex- ible as it may be scaled in terms of computational cost and parameters to easily address a variety of problems. In all cases, the accuracy of the resulting model exceeds all human-designed models â ranging from models designed for mobile applications to computationally-heavy models designed to achieve the most accurate results.
The key insight in our approach is to design a search space that decouples the complexity of an architecture from the depth of a network. This resulting search space per- mits identifying good architectures on a small dataset (i.e., CIFAR-10) and transferring the learned architecture to im- age classiï¬cations across a range of data and computational scales.
The resulting architectures approach or exceed state- of-the-art performance in both CIFAR-10 and ImageNet datasets with less computational demand than human- designed architectures [60, 29, 69]. The ImageNet re- sults are particularly important because many state-of-the- art computer vision problems (e.g., object detection [28], face detection [50], image localization [63]) derive im- age features or architectures from ImageNet classiï¬cation models. For instance, we ï¬nd that image features ob- tained from ImageNet used in combination with the Faster- RCNN framework achieves state-of-the-art object detection results. Finally, we demonstrate that we can use the re- sulting learned architecture to perform ImageNet classiï¬- cation with reduced computational budgets that outperform streamlined architectures targeted to mobile and embedded platforms [24, 70].
# References
[1] M. Andrychowicz, M. Denil, S. Gomez, M. W. Hoffman, D. Pfau, T. Schaul, and N. de Freitas. Learning to learn by gradient descent by gradient descent. In Advances in Neural Information Processing Systems, pages 3981â3989, 2016.
[2] J. L. Ba, J. R. Kiros, and G. E. Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
[3] B. Baker, O. Gupta, N. Naik, and R. Raskar. Designing neural network architectures using reinforcement learning. In International Conference on Learning Representations, 2016.

[4] J. Bergstra, R. Bardenet, Y. Bengio, and B. Kégl. Algorithms for hyper-parameter optimization. In Neural Information Processing Systems, 2011.
[5] J. Bergstra and Y. Bengio. Random search for hyper- parameter optimization. Journal of Machine Learning Re- search, 2012.
[6] J. Bergstra, D. Yamins, and D. D. Cox. Making a science of model search: Hyperparameter optimization in hundreds of dimensions for vision architectures. International Confer- ence on Machine Learning, 2013.
[7] J. Chen, R. Monga, S. Bengio, and R. Jozefowicz. Revisiting distributed synchronous sgd. In International Conference on Learning Representations Workshop Track, 2016.
[8] Y. Chen, J. Li, H. Xiao, X. Jin, S. Yan, and J. Feng. Dual path networks. arXiv preprint arXiv:1707.01083, 2017. [9] F. Chollet. Xception: Deep learning with depthwise separa- ble convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
[10] D.-A. Clevert, T. Unterthiner, and S. Hochreiter. Fast and accurate deep network learning by exponential linear units (elus). In International Conference on Learning Representa- tions, 2016.
[11] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. Imagenet: A large-scale hierarchical image database. In IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2009.
[12] T. DeVries and G. W. Taylor. Improved regularization of convolutional neural networks with cutout. arXiv preprint arXiv:1708.04552, 2017.
[13] J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T. Darrell. Decaf: A deep convolutional activation feature for generic visual recognition. In International Conference on Machine Learning, volume 32, pages 647–655, 2014.
[14] Y. Duan, J. Schulman, X. Chen, P. L. Bartlett, I. Sutskever, and P. Abbeel. RL2: Fast reinforcement learning via slow reinforcement learning. arXiv preprint arXiv:1611.02779, 2016.
[15] C. Finn, P. Abbeel, and S. Levine. Model-agnostic meta- In Interna- learning for fast adaptation of deep networks. tional Conference on Machine Learning, 2017.
[16] D. Floreano, P. D¨urr, and C. Mattiussi. Neuroevolution: from architectures to learning. Evolutionary Intelligence, 2008.
[17] K. Fukushima. A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in po- sition. Biological Cybernetics, page 93202, 1980.
[18] X. Gastaldi. Shake-shake regularization of 3-branch residual networks. In International Conference on Learning Repre- sentations Workshop Track, 2017.
[19] D. Ha, A. Dai, and Q. V. Le. Hypernetworks. In International Conference on Learning Representations, 2017.
[20] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition, 2016.
[21] K. He, X. Zhang, S. Ren, and J. Sun. Identity mappings in deep residual networks. In European Conference on Com- puter Vision, 2016.
[22] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 1997.
[23] S. Hochreiter, A. Younger, and P. Conwell. Learning to learn using gradient descent. Artiï¬cial Neural Networks, pages 87â94, 2001.
[24] A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam. Mobilenets: Efï¬- cient convolutional neural networks for mobile vision appli- cations. arXiv preprint arXiv:1704.04861, 2017.
[25] J. Hu, L. Shen, and G. Sun. Squeeze-and-excitation net- works. arXiv preprint arXiv:1709.01507, 2017.
[26] G. Huang, Z. Liu, and K. Q. Weinberger. Densely connected convolutional networks. In IEEE Conference on Computer Vision and Pattern Recognition, 2017.
[27] G. Huang, Y. Sun, Z. Liu, D. Sedra, and K. Weinberger. Deep networks with stochastic depth. In European Conference on Computer Vision, 2016.
[28] J. Huang, V. Rathod, C. Sun, M. Zhu, A. Korattikara, A. Fathi, I. Fischer, Z. Wojna, Y. Song, S. Guadarrama, et al. Speed/accuracy trade-offs for modern convolutional object detectors. In IEEE Conference on Computer Vision and Pat- tern Recognition, 2017.
[29] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Learning Representations, 2015.
[30] R. Jozefowicz, W. Zaremba, and I. Sutskever. An empirical exploration of recurrent network architectures. In International Conference on Machine Learning, 2015.
[31] A. Krizhevsky. Learning multiple layers of features from tiny images. Technical report, University of Toronto, 2009.
[32] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, 2012.
[33] G. Larsson, M. Maire, and G. Shakhnarovich. Fractalnet: Ultra-deep neural networks without residuals. arXiv preprint arXiv:1605.07648, 2016.
[34] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient- based learning applied to document recognition. Proceed- ings of the IEEE, 1998.
[35] K. Li and J. Malik. Learning to optimize neural nets. arXiv preprint arXiv:1703.00441, 2017.
[36] T.-Y. Lin, P. Doll´ar, R. Girshick, K. He, B. Hariharan, and S. Belongie. Feature pyramid networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
[37] T.-Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár. Focal loss for dense object detection. arXiv preprint arXiv:1708.02002, 2017.
[38] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ra- manan, P. Doll´ar, and C. L. Zitnick. Microsoft coco: Com- mon objects in context. In European Conference on Com- puter Vision, pages 740â755. Springer, 2014.
[39] I. Loshchilov and F. Hutter. SGDR: Stochastic gradient descent with warm restarts. In International Conference on Learning Representations, 2017.
[40] H. Mendoza, A. Klein, M. Feurer, J. T. Springenberg, and F. Hutter. Towards automatically-tuned neural networks. In Proceedings of the 2016 Workshop on Automatic Machine Learning, pages 58â65, 2016.
[41] T. Miconi. Neural networks with differentiable structure. arXiv preprint arXiv:1606.06216, 2016.
[42] R. Miikkulainen, J. Liang, E. Meyerson, A. Rawal, D. Fink, O. Francon, B. Raju, A. Navruzyan, N. Duffy, and B. Hodjat. Evolving deep neural networks. arXiv preprint arXiv:1703.00548, 2017.
[43] R. Negrinho and G. Gordon. DeepArchitect: Automatically designing and training deep architectures. arXiv preprint arXiv:1704.08792, 2017.
[44] N. Pinto, D. Doukhan, J. J. DiCarlo, and D. D. Cox. A high- throughput screening approach to discovering good forms of biologically inspired visual representation. PLoS Computa- tional Biology, 5(11):e1000579, 2009.
[45] S. Ravi and H. Larochelle. Optimization as a model for few- shot learning. In International Conference on Learning Rep- resentations, 2017.
[46] E. Real, S. Moore, A. Selle, S. Saxena, Y. L. Suematsu, Q. Le, and A. Kurakin. Large-scale evolution of image classifiers. In International Conference on Machine Learning, 2017.
[47] S. Ren, K. He, R. Girshick, and J. Sun. Faster R-CNN: To- wards real-time object detection with region proposal net- works. In Advances in Neural Information Processing Sys- tems, pages 91â99, 2015.
[48] S. Saxena and J. Verbeek. Convolutional neural fabrics. In Advances in Neural Information Processing Systems, 2016. [49] T. Schaul and J. Schmidhuber. Metalearning. Scholarpedia,
2010.
[50] F. Schroff, D. Kalenichenko, and J. Philbin. Facenet: A uni- ï¬ed embedding for face recognition and clustering. In Pro- ceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 815â823, 2015.
[51] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
[52] A. Shrivastava, R. Sukthankar, J. Malik, and A. Gupta. Be- yond skip connections: Top-down modulation for object de- tection. arXiv preprint arXiv:1612.06851, 2016.
[53] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In International Conference on Learning Representations, 2015.
[54] J. Snoek, H. Larochelle, and R. P. Adams. Practical Bayesian optimization of machine learning algorithms. In Neural In- formation Processing Systems, 2012.
[55] J. Snoek, O. Rippel, K. Swersky, R. Kiros, N. Satish, N. Sun- daram, M. Patwary, M. Ali, R. P. Adams, et al. Scalable Bayesian optimization using deep neural networks. In Inter- national Conference on Machine Learning, 2015.
[56] N. Srivastava, G. E. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929–1958, 2014.
[57] K. O. Stanley, D. B. D'Ambrosio, and J. Gauci. A hypercube-based encoding for evolving large-scale neural networks. Artificial Life, 2009.
[58] C. Szegedy, S. Ioffe, V. Vanhoucke, and A. Alemi. Inception- v4, Inception-Resnet and the impact of residual connections on learning. In International Conference on Learning Rep- resentations Workshop Track, 2016.
[59] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. In IEEE Conference on Going deeper with convolutions. Computer Vision and Pattern Recognition, 2015.
[60] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna. Rethinking the Inception architecture for computer vision. In IEEE Conference on Computer Vision and Pattern Recogni- tion, 2016.
[61] D. Ulyanov, A. Vedaldi, and V. Lempitsky. Instance normal- ization: The missing ingredient for fast stylization. arXiv preprint arXiv:1607.08022, 2016.
[62] J. X. Wang, Z. Kurth-Nelson, D. Tirumala, H. Soyer, J. Z. Leibo, R. Munos, C. Blundell, D. Kumaran, and M. Botvinick. Learning to reinforcement learn. arXiv preprint arXiv:1611.05763, 2016.
[63] T. Weyand, I. Kostrikov, and J. Philbin. Planet-photo ge- olocation with convolutional neural networks. In European Conference on Computer Vision, 2016.
[64] O. Wichrowska, N. Maheswaranathan, M. W. Hoffman, S. G. Colmenarejo, M. Denil, N. de Freitas, and J. Sohl-Dickstein. Learned optimizers that scale and generalize. arXiv preprint arXiv:1703.04813, 2017.
[65] D. Wierstra, F. J. Gomez, and J. Schmidhuber. Modeling systems with internal state using evolino. In The Genetic and Evolutionary Computation Conference, 2005.
[66] R. J. Williams. Simple statistical gradient-following algo- rithms for connectionist reinforcement learning. In Machine Learning, 1992.
[67] L. Xie and A. Yuille. Genetic CNN. arXiv preprint arXiv:1703.01513, 2017.
[68] S. Xie, R. Girshick, P. Doll´ar, Z. Tu, and K. He. Aggregated residual transformations for deep neural networks. In Pro- ceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
[69] X. Zhang, Z. Li, C. C. Loy, and D. Lin. Polynet: A pursuit In Proceed- of structural diversity in very deep networks. ings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
[70] X. Zhang, X. Zhou, L. Mengxiao, and J. Sun. Shufï¬enet: An extremely efï¬cient convolutional neural network for mobile devices. arXiv preprint arXiv:1707.01083, 2017.
[71] B. Zoph and Q. V. Le. Neural architecture search with rein- forcement learning. In International Conference on Learning Representations, 2017.
# Appendix
# A. Experimental Details
# A.1. Dataset for Architecture Search
The CIFAR-10 dataset [31] consists of 60,000 32x32 RGB images across 10 classes (50,000 train and 10,000 test images). We partition a random subset of 5,000 images from the training set to use as a validation set for the controller RNN. All images are whitened and then undergo several data augmentation steps: we randomly crop 32x32 patches from upsampled images of size 40x40 and apply random horizontal flips. This data augmentation procedure is common among related work.
# A.2. Controller architecture
The controller RNN is a one-layer LSTM [22] with 100 hidden units at each layer and 2 × 5B softmax predictions for the two convolutional cells (where B is typically 5) associated with each architecture decision. Each of the 10B predictions of the controller RNN is associated with a probability. The joint probability of a child network is the product of all probabilities at these 10B softmaxes. This joint probability is used to compute the gradient for the controller RNN. The gradient is scaled by the validation accuracy of the child network to update the controller RNN such that the controller assigns low probabilities to bad child networks and high probabilities to good child networks.
Unlike [71], who used the REINFORCE rule [66] to update the controller, we employ Proximal Policy Optimization (PPO) [51] with learning rate 0.00035 because training with PPO is faster and more stable. To encourage exploration we also use an entropy penalty with a weight of 0.00001. In our implementation, the baseline function is an exponential moving average of previous rewards with a weight of 0.95. The weights of the controller are initialized uniformly between -0.1 and 0.1.
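In code, the core of this update can be sketched as follows. This is a simplified REINFORCE-style illustration of scaling the joint log-probability by the reward minus the moving-average baseline, with an entropy bonus; the actual controller is trained with PPO's clipped objective, and all names below are illustrative rather than taken from the released implementation.

```python
# Simplified sketch of the controller update described above (the real training
# uses PPO; this only illustrates reward scaling, the EMA baseline and the
# entropy bonus). All names are illustrative.
import torch

ENTROPY_WEIGHT = 0.00001
BASELINE_DECAY = 0.95

def controller_loss(decision_logits, sampled_actions, reward, baseline):
    """decision_logits: one [num_choices] tensor per architecture decision (10B in total);
    sampled_actions: the sampled index per decision; reward: child validation accuracy."""
    log_prob, entropy = 0.0, 0.0
    for logits, action in zip(decision_logits, sampled_actions):
        log_p = torch.log_softmax(logits, dim=-1)
        log_prob = log_prob + log_p[action]        # joint log-probability of the child
        entropy = entropy - (log_p.exp() * log_p).sum()
    advantage = reward - baseline                  # scale gradient by (accuracy - baseline)
    return -(advantage * log_prob) - ENTROPY_WEIGHT * entropy

def update_baseline(baseline, reward):
    return BASELINE_DECAY * baseline + (1.0 - BASELINE_DECAY) * reward
```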
# A.3. Training of the Controller
For distributed training, we use a workqueue system where all the samples generated from the controller RNN are added to a global workqueue. A free "child" worker in a distributed worker pool asks the controller for new work from the global workqueue. Once the training of the child network is complete, the accuracy on a held-out validation set is computed and reported to the controller RNN. In our experiments we use a child worker pool size of 450, which means there are 450 networks being trained on 450 GPUs concurrently at any time. Upon receiving enough child model training results, the controller RNN will perform a gradient update on its weights using PPO and then sample another batch of architectures that go into the global workqueue. This process continues until a predetermined number of architectures have been sampled. In our experiments, this predetermined number of architectures is 20,000, which means the search process is terminated after 20,000 child models have been trained. Additionally, we update the controller RNN with minibatches of 20 architectures. Once the search is over, the top 250 architectures are then chosen to train until convergence on CIFAR-10 to determine the very best architecture.
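A single-process mock-up of this workqueue protocol is sketched below; the queue objects and the `sample_architecture`, `ppo_update` and `train_and_eval` callables are hypothetical stand-ins for the distributed infrastructure described above.

```python
# Illustrative mock-up of the global-workqueue search loop (not the distributed system itself).
import queue

MAX_CHILD_MODELS = 20000   # stop after this many child models have been trained
UPDATE_BATCH = 20          # controller minibatch size

work_q, result_q = queue.Queue(), queue.Queue()

def child_worker(train_and_eval):
    spec = work_q.get()                    # a free worker asks for new work
    val_acc = train_and_eval(spec)         # train the child and evaluate on held-out data
    result_q.put((spec, val_acc))          # report the reward back to the controller

def controller_loop(sample_architecture, ppo_update):
    trained = 0
    while trained < MAX_CHILD_MODELS:
        for _ in range(UPDATE_BATCH):      # keep the workqueue filled
            work_q.put(sample_architecture())
        batch = [result_q.get() for _ in range(UPDATE_BATCH)]
        ppo_update(batch)                  # gradient step on the controller weights
        trained += UPDATE_BATCH
```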
# A.4. Details of architecture search space
We performed preliminary experiments to identify a flexible, expressive search space for neural architectures that learn effectively. Generally, our strategy for preliminary experiments involved small-scale explorations to identify how to run large-scale architecture search.
• All convolutions employ ReLU nonlinearity. Experiments with ELU nonlinearity [10] showed minimal benefit.
• To ensure that the shapes always match in convolutional cells, 1x1 convolutions are inserted as necessary.
• Unlike [24], all depthwise separable convolutions do not employ Batch Normalization and/or a ReLU between the depthwise and pointwise operations.
• All convolutions follow an ordering of ReLU, convolution operation and Batch Normalization, following [21].
• Whenever a separable convolution is selected as an operation by the model architecture, the separable convolution is applied twice to the hidden state. We found this empirically to improve overall performance.
# A.5. Training with ScheduledDropPath
We performed several experiments with various stochastic regularization methods. Naively applying dropout [56] across convolutional filters degraded performance. However, we discovered a new technique called ScheduledDropPath, a modified version of DropPath [33], that works well in regularizing NASNets. In DropPath, we stochastically drop out each path (i.e., edge with a yellow box in Figure 4) in the cell with some fixed probability. This is similar to [27] and [69], where they drop out full parts of their model during training and then at test time scale the path by the probability of keeping that path during training. Interestingly, we also found that DropPath alone does not help NASNet training much, but DropPath with a linearly increasing probability of dropping out a path over the course of training significantly improves the final performance for both CIFAR and ImageNet experiments. We name this method ScheduledDropPath.
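A minimal sketch of ScheduledDropPath is given below, assuming PyTorch-style tensors whose first dimension is the batch; the final drop probability and the per-example masking granularity are assumptions rather than details taken from the text.

```python
# Minimal ScheduledDropPath sketch: each path is dropped with a probability that
# grows linearly over the course of training, and kept paths are rescaled.
import torch

def scheduled_drop_path(x, final_drop_prob, progress, training):
    """x: output of one path, shape [batch, ...]; progress: fraction of training completed."""
    if not training or final_drop_prob == 0.0:
        return x
    drop_prob = final_drop_prob * progress               # linear schedule
    keep_prob = 1.0 - drop_prob
    mask_shape = [x.shape[0]] + [1] * (x.dim() - 1)      # one keep/drop decision per example
    mask = torch.bernoulli(torch.full(mask_shape, keep_prob, device=x.device))
    return x / keep_prob * mask
```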
Figure 7. Schematic diagram of the NASNet search space. Network motifs are constructed recursively in stages termed blocks. Each block consists of the controller selecting a pair of hidden states (dark gray), operations to perform on those hidden states (yellow) and a combination operation (green). The resulting hidden state is retained in the set of potential hidden states to be selected on subsequent blocks.
# A.6. Training of CIFAR models
All of our CIFAR models use a single-period cosine decay as in [39, 18]. All models use the momentum optimizer with momentum rate set to 0.9. All models also use L2 weight decay. Each architecture is trained for a fixed 20 epochs on CIFAR-10 during the architecture search process. Additionally, we found it beneficial to use the cosine learning rate decay during the 20 epochs the CIFAR models were trained, as this helped to further differentiate good architectures. We also found that having the CIFAR models use a small N = 2 during the architecture search process allowed models to train quite quickly, while still finding cells that work well once more cells were stacked.
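The single-period cosine decay can be written compactly as below; the base learning rate is a placeholder, not a value reported here.

```python
# Single-period cosine learning-rate decay (one period over the whole run).
import math

def cosine_lr(step, total_steps, base_lr=0.025):  # base_lr is an illustrative placeholder
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * step / total_steps))
```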
# A.7. Training of ImageNet models
We use ImageNet 2012 ILSVRC challenge data for large scale image classification. The dataset consists of ~1.2M images labeled across 1,000 classes [11]. Overall our training and testing procedures are almost identical to [60]. ImageNet models are trained and evaluated on 299x299 or 331x331 images using the same data augmentation procedures as described previously [60]. We use distributed synchronous SGD to train the ImageNet model with 50 workers (and 3 backup workers), each with a Tesla K40 GPU [7]. We use RMSProp with a decay of 0.9 and epsilon of 1.0. Evaluations are calculated using a running average of parameters over time with a decay rate of 0.9999. We use label smoothing with a value of 0.1 for all ImageNet models as done in [60]. Additionally, all models use an auxiliary classifier located at 2/3 of the way up the network. The loss of the auxiliary classifier is weighted by 0.4 as done in [60]. We empirically found our network to be insensitive to the number of parameters associated with this auxiliary classifier along with the weight associated with its loss. All models also use L2 regularization. The learning rate decay scheme is the exponential decay scheme used in [60]. Dropout is applied to the final softmax matrix with probability 0.5.
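A sketch of the resulting ImageNet training loss, combining label-smoothed cross-entropy on the main softmax with the 0.4-weighted auxiliary classifier loss, is shown below (PyTorch-style; function names are illustrative).

```python
# Sketch of label smoothing (0.1) plus the auxiliary classifier loss weighted by 0.4.
import torch
import torch.nn.functional as F

def smoothed_ce(logits, targets, smoothing=0.1):
    n_classes = logits.size(-1)
    log_probs = F.log_softmax(logits, dim=-1)
    one_hot = F.one_hot(targets, n_classes).float()
    soft_targets = one_hot * (1.0 - smoothing) + smoothing / n_classes
    return -(soft_targets * log_probs).sum(dim=-1).mean()

def total_loss(main_logits, aux_logits, targets):
    return smoothed_ce(main_logits, targets) + 0.4 * smoothed_ce(aux_logits, targets)
```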
# B. Additional Experiments
We now present two additional cells that performed well on CIFAR and ImageNet. The search spaces used for these cells are slightly different from the one used for NASNet-A. For the NASNet-B model in Figure 8 we do not concatenate all of the unused hidden states generated in the convolutional cell. Instead, all of the hidden states created within the convolutional cell, even if they are currently used, are fed into the next layer. Note that B = 4 and there are 4 hidden states as input to the cell, as these numbers must match for this cell to be valid. We also allow addition followed by layer normalization [2] or instance normalization [61] to be predicted as two of the combination operations within the cell, along with addition or concatenation.
For NASNet-C (Figure 9), we concatenate all of the un- used hidden states generated in the convolutional cell like in NASNet-A, but now we allow the prediction of addition followed by layer normalization or instance normalization like in NASNet-B.
# Normal Cell
Reduction Cell
Figure 8. Architecture of NASNet-B convolutional cell with B = 4 blocks identified with CIFAR-10. The input (white) is the hidden state from previous activations (or input image). Each convolutional cell is the result of B blocks. A single block corresponds to two primitive operations (yellow) and a combination operation (green). As we do not concatenate the output hidden states, each output hidden state is used as a hidden state in the future layers. Each cell takes in 4 hidden states and thus needs to also create 4 output hidden states. Each output hidden state is therefore labeled with 0, 1, 2, 3 to represent the next four layers in that order.
# C. Example object detection results
Finally, we will present examples of object detection re- sults on the COCO dataset in Figure 10 and Figure 11. As can be seen from the ï¬gures, NASNet-A featurization works well with Faster-RCNN and gives accurate localiza- tion of objects.
# Normal Cell

# Reduction Cell
Figure 9. Architecture of NASNet-C convolutional cell with B = 4 blocks identified with CIFAR-10. The input (white) is the hidden state from previous activations (or input image). The output (pink) is the result of a concatenation operation across all resulting branches. Each convolutional cell is the result of B blocks. A single block corresponds to two primitive operations (yellow) and a combination operation (green).
Figure 10. Example detections showing improvements of object detection over previous state-of-the-art model for Faster-RCNN with Inception-ResNet-v2 featurization [28] (top) and NASNet-A featurization (bottom).
Figure 11. Example detections of best performing NASNet-A fea- turization with Faster-RCNN trained on COCO dataset. Top and middle images courtesy of http://wikipedia.org. Bottom image courtesy of Jonathan Huang | {
"id": "1708.02002"
} |
1707.06875 | Why We Need New Evaluation Metrics for NLG | The majority of NLG evaluation relies on automatic metrics, such as BLEU . In
this paper, we motivate the need for novel, system- and data-independent
automatic evaluation methods: We investigate a wide range of metrics, including
state-of-the-art word-based and novel grammar-based ones, and demonstrate that
they only weakly reflect human judgements of system outputs as generated by
data-driven, end-to-end NLG. We also show that metric performance is data- and
system-specific. Nevertheless, our results also suggest that automatic metrics
perform reliably at system-level and can support system development by finding
cases where a system performs poorly. | http://arxiv.org/pdf/1707.06875 | Jekaterina Novikova, Ondřej Dušek, Amanda Cercas Curry, Verena Rieser | cs.CL | accepted to EMNLP 2017 | Proceedings of the 2017 Conference on Empirical Methods in Natural
Language Processing, pages 2231-2242, Copenhagen, Denmark, September 7-11,
2017 | cs.CL | 20170721 | 20170721 |
# Why We Need New Evaluation Metrics for NLG
Jekaterina Novikova, Ondřej Dušek, Amanda Cercas Curry and Verena Rieser
School of Mathematical and Computer Sciences, Heriot-Watt University, Edinburgh
j.novikova, o.dusek, ac293, v.t.rieser@hw.ac.uk
# Abstract
The majority of NLG evaluation relies on automatic metrics, such as BLEU. In this paper, we motivate the need for novel, system- and data-independent automatic evaluation methods: We investigate a wide range of metrics, including state-of-the-art word-based and novel grammar-based ones, and demonstrate that they only weakly reflect human judgements of system outputs as generated by data-driven, end-to-end NLG. We also show that metric performance is data- and system-specific. Nevertheless, our results also suggest that automatic metrics perform reliably at system-level and can support system development by finding cases where a system performs poorly.
# Introduction
Automatic evaluation measures, such as BLEU (Pa- pineni et al., 2002), are used with increasing fre- quency to evaluate Natural Language Generation (NLG) systems: Up to 60% of NLG research published between 2012â2015 relies on automatic metrics (Gkatzia and Mahamood, 2015). Auto- matic evaluation is popular because it is cheaper and faster to run than human evaluation, and it is needed for automatic benchmarking and tuning of algorithms. The use of such metrics is, however, only sensible if they are known to be sufï¬ciently correlated with human preferences. This is rarely the case, as shown by various studies in NLG (Stent et al., 2005; Belz and Reiter, 2006; Reiter and Belz, 2009), as well as in related ï¬elds, such as dialogue systems (Liu et al., 2016), machine translation (MT) (Callison-Burch et al., 2006), and image captioning (Elliott and Keller, 2014; Kilick- aya et al., 2017). This paper follows on from the
above previous work and presents another evalu- ation study into automatic metrics with the aim to ï¬rmly establish the need for new metrics. We consider this paper to be the most complete study to date, across metrics, systems, datasets and do- mains, focusing on recent advances in data-driven NLG. In contrast to previous work, we are the ï¬rst to: ⢠Target end-to-end data-driven NLG, where we compare 3 different approaches. In contrast to NLG methods evaluated in previous work, our sys- tems can produce ungrammatical output by (a) generating word-by-word, and (b) learning from noisy data. ⢠Compare a large number of 21 automated met- rics, including novel grammar-based ones. ⢠Report results on two different domains and three different datasets, which allows us to draw more general conclusions. ⢠Conduct a detailed error analysis, which sug- gests that, while metrics can be reasonable indi- cators at the system-level, they are not reliable at the sentence-level. ⢠Make all associated code and data publicly avail- able, including detailed analysis results.1
# 2 End-to-End NLG Systems
In this paper, we focus on recent end-to-end, data-driven NLG methods, which jointly learn sentence planning and surface realisation from non-aligned data (Dušek and Jurčíček, 2015; Wen et al., 2015; Mei et al., 2016; Wen et al., 2016; Sharma et al., 2016; Dušek and Jurčíček, 2016; Lampouras and Vlachos, 2016). These approaches do not require costly semantic alignment between Meaning Representations (MR) and human references (also referred to as "ground truth" or "targets"), but are
1Available for download at: https://github.com/ jeknov/EMNLP_17_submission
System | BAGEL | SFREST | SFHOTEL | Total
LOLS | 202 | 581 | 398 | 1,181
RNNLG | - | 600 | 477 | 1,077
TGEN | 202 | - | - | 202
Total | 404 | 1,181 | 875 | 2,460

Table 1: Number of NLG system outputs from different datasets and systems used in this study.
based on parallel datasets, which can be collected in sufficient quality and quantity using effective crowdsourcing techniques, e.g. (Novikova et al., 2016), and as such, enable rapid development of NLG components in new domains. In particular, we compare the performance of the following systems:
• RNNLG:2 The system by Wen et al. (2015) uses a Long Short-term Memory (LSTM) network to jointly address sentence planning and surface realisation. It augments each LSTM cell with a gate that conditions it on the input MR, which allows it to keep track of MR contents generated so far.
• TGEN:3 The system by Dušek and Jurčíček (2015) learns to incrementally generate deep-syntax dependency trees of candidate sentence plans (i.e. which MR elements to mention and the overall sentence structure). Surface realisation is performed using a separate, domain-independent rule-based module.
• LOLS:4 The system by Lampouras and Vlachos (2016) learns sentence planning and surface realisation using Locally Optimal Learning to Search (LOLS), an imitation learning framework which learns using BLEU and ROUGE as non-decomposable loss functions.
# 3 Datasets
We consider the following crowdsourced datasets, which target utterance generation for spoken dia- logue systems. Table 1 shows the number of sys- tem outputs for each dataset. Each data instance consists of one MR and one or more natural lan- guage references as produced by humans, such as the following example, taken from the BAGEL dataset:5
2 https://github.com/shawnwun/RNNLG
3 https://github.com/UFAL-DSG/tgen
4 https://github.com/glampouras/JLOLS_NLG
5Note that we use lexicalised versions of SFHOTEL and SFREST and a partially lexicalised version of BAGEL, where proper names and place names are replaced by placeholders (âXâ), in correspondence with the outputs generated by the
MR: type=restaurant) Reference: âX is a moderately priced restaurant in X.â
SFHOTEL & SFREST (Wen et al., 2015) pro- vide information about hotels and restaurants in San Francisco. There are 8 system dialogue act types, such as inform, conï¬rm, goodbye etc. Each domain contains 12 attributes, where some are common to both domains, such as name, type, pricerange, address, area, etc., and the others are domain-speciï¬c, e.g. food and kids-allowed for restaurants; hasinternet and dogs-allowed for ho- tels. For each domain, around 5K human refer- ences were collected with 2.3K unique human ut- terances for SFHOTEL and 1.6K for SFREST. The number of unique system outputs produced is 1181 for SFREST and 875 for SFHOTEL. ⢠BAGEL (Mairesse et al., 2010) provides informa- tion about restaurants in Cambridge. The dataset contains 202 aligned pairs of MRs and 2 corre- sponding references each. The domain is a subset of SFREST, including only the inform act and 8 at- tributes.
# 4 Metrics
# 4.1 Word-based Metrics (WBMs)
NLG evaluation has borrowed a number of automatic metrics from related fields, such as MT, summarisation or image captioning, which compare output texts generated by systems to ground-truth references produced by humans. We refer to this group as word-based metrics. In general, the higher these scores are, the better or more similar to the human references the output is.6 The following order reflects the degree these metrics move from simple n-gram overlap to also considering term frequency (TF-IDF) weighting and semantically similar words.
• Word-overlap Metrics (WOMs): We consider frequently used metrics, including TER (Snover et al., 2006), BLEU (Papineni et al., 2002), ROUGE (Lin, 2004), NIST (Doddington, 2002), LEPOR (Han et al., 2012), CIDEr (Vedantam et al., 2015), and METEOR (Lavie and Agarwal, 2007).
• Semantic Similarity (SIM): We calculate the Semantic Text Similarity measure designed by Han et al. (2013). This measure is based on distributional similarity and Latent Semantic Analysis
systems, as provided by the system authors. 6Except for TER whose scale is reversed.
(LSA) and is further complemented with semantic relations extracted from WordNet.
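As an illustration of how a word-overlap metric scores a single system output against its human references, the sketch below uses NLTK's smoothed sentence-level BLEU as a stand-in; our experiments rely on the metrics' reference implementations, and the example strings are invented.

```python
# Illustrative word-overlap scoring of one system output against multiple references.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def bleu4(references, hypothesis):
    refs = [r.lower().split() for r in references]
    hyp = hypothesis.lower().split()
    return sentence_bleu(refs, hyp, weights=(0.25, 0.25, 0.25, 0.25),
                         smoothing_function=SmoothingFunction().method2)

score = bleu4(["x is a moderately priced restaurant in x ."],
              "x is a restaurant in x with moderate prices .")
```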
# 4.2 Grammar-based metrics (GBMs)
Grammar-based measures have been explored in related ï¬elds, such as MT (Gim´enez and M`arquez, 2008) or grammatical error correction (Napoles et al., 2016), and, in contrast to WBMs, do not rely on ground-truth references. To our knowledge, we are the ï¬rst to consider GBMs for sentence-level NLG evaluation. We focus on two important prop- erties of texts here â readability and grammatical- ity:
• Readability quantifies the difficulty with which a reader understands a text, as used e.g. for evaluating summarisation (Kan et al., 2001) or text simplification (Francois and Bernhard, 2014). We measure readability by the Flesch Reading Ease score (RE) (Flesch, 1979), which calculates a ratio between the number of characters per sentence, the number of words per sentence, and the number of syllables per word (a small sketch of this computation is given below). A higher RE score indicates a less complex utterance that is easier to read and understand. We also consider related measures, such as characters per utterance (len) and per word (cpw), words per sentence (wps), syllables per sentence (sps) and per word (spw), as well as polysyllabic words per utterance (pol) and per word (ppw). The higher these scores, the more complex the utterance.
• Grammaticality: In contrast to previous NLG methods, our corpus-based end-to-end systems can produce ungrammatical output by (a) generating word-by-word, and (b) learning from noisy data. As a first approximation of grammaticality, we measure the number of misspellings (msp) and the parsing score as returned by the Stanford parser (prs). The lower the msp, the more grammatically correct an utterance is. The Stanford parser score is not designed to measure grammaticality; however, it will generally prefer a grammatical parse to a non-grammatical one.7 Thus, lower parser scores indicate less grammatically-correct utterances. In future work, we aim to use specifically designed grammar-scoring functions, e.g. (Napoles et al., 2016), once they become publicly available.
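A rough sketch of the Flesch Reading Ease computation mentioned above is given below; the syllable counter is a crude heuristic, not the exact tool used in our experiments.

```python
# Rough Flesch Reading Ease sketch; the syllable heuristic is approximate.
import re

def count_syllables(word):
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / max(1, len(sentences)))
            - 84.6 * (syllables / max(1, len(words))))
```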
7http://nlp.stanford.edu/software/ parser-faq.shtml
# 5 Human Data Collection
To collect human rankings, we presented the MR together with 2 utterances generated by different systems side-by-side to crowdworkers, who were asked to score each utterance on a 6-point Likert scale for:
• Informativeness: Does the utterance provide all the useful information from the meaning representation?
• Naturalness: Could the utterance have been produced by a native speaker?
• Quality: How do you judge the overall quality of the utterance in terms of its grammatical correctness and fluency?
Each system output (see Table 1) was scored by 3 different crowdworkers. To reduce participantsâ bias, the order of appearance of utterances pro- duced by each system was randomised and crowd- workers were restricted to evaluate a maximum of 20 utterances. The crowdworkers were selected from English-speaking countries only, based on their IP addresses, and asked to conï¬rm that En- glish was their native language.
To assess the reliability of ratings, we calculated the intra-class correlation coefï¬cient (ICC), which measures inter-observer reliability on ordinal data for more than two raters (Landis and Koch, 1977). The overall ICC across all three datasets is 0.45 (p < 0.001), which corresponds to a moderate agreement. In general, we ï¬nd consistent differ- ences in inter-annotator agreement per system and dataset, with lower agreements for LOLS than for RNNLG and TGEN. Agreement is highest for the SFHOTEL dataset, followed by SFREST and BAGEL (details provided in supplementary material).
# 6 System Evaluation
Table 2 summarises the individual systemsâ over- all corpus-level performance in terms of automatic and human scores (details are provided in the sup- plementary material).
All WOMs produce similar results, with SIM showing different results for the restaurant domain (BAGEL and SFREST). Most GBMs show the same trend (with different levels of statistical signiï¬- cance), but RE is showing inverse results. System performance is dataset-speciï¬c: For WBMs, the LOLS system consistently produces better results on BAGEL compared to TGEN, while for SFREST and SFHOTEL, LOLS is outperformed by RNNLG in
BAGEL SFHOTEL SFREST metric TGEN LOLS RNNLG LOLS RNNLG LOLS WOMs SIM GBMs More similar Better grammar(*) More overlap More overlap* More similar* Better grammar(*) More overlap* Better grammar More similar RE inform natural quality 4.77(Sd=1.09) 4.76(Sd=1.26) 4.77(Sd=1.19) More complex* 4.91(Sd=1.23) 4.67(Sd=1.25) 4.54(Sd=1.28) 5.47*(Sd=0.81) 4.99*(Sd=1.13) 4.54 (Sd=1.18) More complex* 5.27(Sd=1.02) 4.62(Sd=1.28) 4.53(Sd=1.26) 5.29*(Sd=0.94) 4.86 (Sd=1.13) 4.51 (Sd=1.14) More complex* 5.16(Sd=1.07) 4.74(Sd=1.23) 4.58(Sd=1.33)
Table 2: System performance per dataset (summarised over metrics), where â*â denotes p < 0.05 for all the metrics and â(*)â shows signiï¬cance on p < 0.05 level for the majority of the metrics.
terms of WBMs. We observe that human informa- tiveness ratings follow the same pattern as WBMs, while the average similarity score (SIM) seems to be related to human quality ratings.
Looking at GBMs, we observe that they seem to be related to naturalness and quality ratings. Less complex utterances, as measured by read- ability (RE) and word length (cpw), have higher naturalness ratings. More complex utterances, as measured in terms of their length (len), number of words (wps), syllables (sps, spw) and polysyl- lables (pol, ppw), have lower quality evaluation. Utterances measured as more grammatical are on average evaluated higher in terms of naturalness. These initial results suggest a relation between automatic metrics and human ratings at system level. However, average scores can be mislead- ing, as they do not identify worst-case scenarios. This leads us to inspect the correlation of human and automatic metrics for each MR-system output pair at utterance level.
# 7 Relation of Human and Automatic Metrics
# 7.1 Human Correlation Analysis
We calculate the correlation between automatic metrics and human ratings using the Spearman coefficient (ρ). We split the data per dataset and system in order to make valid pairwise comparisons. To handle outliers within human ratings, we use the median score of the three human raters.8 Following Kilickaya et al. (2017), we use the Williams' test (Williams, 1959) to determine significant differences between correlations. Table 3 summarises the utterance-level correlation
results between automatic metrics and human ratings, listing the best (i.e. highest absolute ρ) results for each type of metric (details provided in supplementary material). Our results suggest that:
• In sum, no metric produces an even moderate correlation with human ratings, independently of dataset, system, or aspect of human rating. This contrasts with our initially promising results on the system level (see Section 6) and will be further discussed in Section 8. Note that similar inconsistencies between document- and sentence-level evaluation results are observed in MT (Specia et al., 2010).
• Similar to our results in Section 6, we find that WBMs show better correlations to human ratings of informativeness (which reflects content selection), whereas GBMs show better correlations to quality and naturalness.
• Human ratings for informativeness, naturalness and quality are highly correlated with each other, with the highest correlation between the latter two (ρ = 0.81), reflecting that they both target surface realisation.
• All WBMs produce similar results (see Figures 1 and 2): They are strongly correlated with each other, and most of them produce correlations with human ratings which are not significantly different from each other. GBMs, on the other hand, show greater diversity.
• Correlation results are system- and dataset-specific (details provided in supplementary material). We observe the highest correlation for TGEN on BAGEL (Figures 1 and 2) and LOLS on SFREST, whereas RNNLG often shows low correlation between metrics and human ratings. This lets us conclude that WBMs and GBMs are sensitive to different systems and datasets.
• The highest positive correlation is observed between the number of words (wps) and informativeness
8As an alternative to using the median human judgment for each item, a more effective way to use all the human judgments could be to use Hovy et al. (2013)âs MACE tool for inferring the reliability of judges.
BAGEL SFHOTEL SFREST TGEN LOLS RNNLG LOLS RNNLG LOLS 0.30* (BLEU-1) -0.19* (TER) -0.16* (TER) 0.33* (wps) -0.25* (len) -0.19* (cpw) 0.20* (ROUGE) -0.19* (TER) 0.16* (METEOR) 0.16* (ppw) -0.28* (wps) 0.31* (prs) 0.09 (BLEU-1) 0.10* (METEOR) 0.10* (METEOR) -0.09 (ppw) -0.17* (len) -0.16* (ppw) 0.14* (LEPOR) -0.20* (TER) -0.12* (TER) 0.13* (cpw) -0.18* (sps) -0.17* (spw) 0.13* (SIM) 0.17* (ROUGE) 0.09* (METEOR) 0.11* (len) -0.19* (wps) 0.11* (prs)
Table 3: Highest absolute Spearman correlation between metrics and human ratings, with â*â denoting p < 0.05 (metric with the highest absolute value of Ï given in brackets).
WBM TER -0.8 |-0.9 -0.9|-0.8 -0.9 -0.7/-0.6 -0.8 -0.8 -0.1}-0.2 -0.2 -0.: @ & 09 09 08/08 07/07 07 08 0.8 0.7 0.7 0.8 0.8 os 0.7/0.6 0.9 0.8/0.1 [ ) B4 0.9/0.7 /0.6 0.9 0.8/0.1 o4 0.8 0.6 0.8 0.9/0.1 02 OOO OO @'1 0s 08 08 02 0 COOCCOe§ 0.2 0.4 0.6 08 GBM RE -0.8 -0.5 -0.2/-0.7 -1 0.7 0.7 0.3/0.1 @cw 0.5 0.1/0.5 0.8/0.5 0.5 0.3/0.1 08 @ @ cn 09 09 04 06/01 03 08 06 ee @wes 08/0 0.4 0.1/0.1 0.9 os @ e@e@:: 0.5 0.7 0.3/0.2 0.7 : r Y * @srwos 07 03 |-0.1 02 @cecee⢠: 0.2 0.4 0.6 0.8
Figure 1: Spearman correlation results for TGEN on BAGEL. Bordered area shows correlations between human ratings and automatic metrics, the rest shows correlations among the metrics. Blue colour of circles indicates positive correlation, while red indicates negative correlation. The size of circles denotes the correlation strength.
Informativeness Naturainess. Quality Te 1a 4 â Pa Peo et Informativonoss. a
Figure 2: Williams test results: X represents a non-signiï¬cant difference between correlations (p < 0.05; top: WBMs, bottom: GBMs).
ness for the TGEN system on BAGEL (Ï = 0.33, p < 0.01, see Figure 1). However, the wps met- ric (amongst most others) is not robust across sys- tems and datasets: Its correlation on other datasets is very weak, (Ï â¤ .18) and its correlation with in-
informativeness ratings of LOLS outputs is insignificant.
• As a sanity check, we also measure a random score [0.0, 1.0] which proves to have a close-to-zero correlation with human ratings (highest ρ = 0.09).
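For concreteness, the per-utterance correlation computation can be sketched as follows, using SciPy's Spearman implementation and the median of the three crowdworker ratings (the input layout is an assumption).

```python
# Sketch: Spearman's rho between a metric's scores and median human ratings.
import numpy as np
from scipy.stats import spearmanr

def metric_human_correlation(metric_scores, human_ratings):
    """metric_scores: [n_outputs]; human_ratings: [n_outputs, 3] Likert scores."""
    medians = np.median(np.asarray(human_ratings), axis=1)
    rho, p_value = spearmanr(metric_scores, medians)
    return rho, p_value
```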
# 7.2 Accuracy of Relative Rankings
We now evaluate a more coarse measure, namely the metrics' ability to predict relative human ratings. That is, we compute the score of each metric for two system output sentences corresponding to the same MR. The prediction of a metric is correct if it orders the sentences in the same way as the median human ratings (note that ties are allowed). Following previous work (Vedantam et al., 2015; Kilickaya et al., 2017), we mainly concentrate on WBMs. Results summarised in Table 4 show that most metrics' performance is not significantly different from that of a random score (Wilcoxon signed rank test). While the random score fluctuates between 25.4–44.5% prediction accuracy, the metrics achieve an accuracy of between 30.6–49.8%. Again, the performance of the metrics is dataset-specific: Metrics perform best on BAGEL data; for SFHOTEL, metrics show mixed performance, while for SFREST, metrics perform worst.
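The 3-way prediction accuracy described above can be computed as in the following sketch (the parallel-array input format is an assumption).

```python
# Sketch: a metric's prediction for an MR is correct if it orders (or ties)
# the two system outputs the same way as the median human ratings.
import numpy as np

def ranking_accuracy(metric_a, metric_b, human_a, human_b):
    correct = 0
    for ma, mb, ha, hb in zip(metric_a, metric_b, human_a, human_b):
        if np.sign(ma - mb) == np.sign(ha - hb):   # ties count when both differences are zero
            correct += 1
    return correct / len(metric_a)
```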
informat. naturalness quality BAGEL SFHOTEL raw data raw data TER, BLEU1-4, ROUGE, NIST, LEPOR, CIDEr, METEOR, SIM TER, BLEU1-4, ROUGE, LEPOR, CIDEr, METEOR, TER, BLEU1-4, ROUGE, NIST, LEPOR, CIDEr, METEOR, SIM METEOR TER, BLEU1-4, ROUGE, NIST, LEPOR, CIDEr, METEOR, SIM N/A SIM SFREST raw data SIM LEPOR N/A quant. data TER, BLEU1-4, ROUGE, NIST, LEPOR, CIDEr, N/A N/A METEOR SIM
Table 4: Metrics predicting relative human rating with signiï¬cantly higher accuracy than a random baseline.
Discussion: Our data differs from the one used in previous work (Vedantam et al., 2015; Kilickaya et al., 2017), which uses explicit relative rankings ("Which output do you prefer?"), whereas we compare two Likert-scale ratings. As such, we have 3 possible outcomes (allowing ties). This way, we can account for equally valid system outputs, which is one of the main drawbacks of forced-choice approaches (Hodosh and Hockenmaier, 2016). Our results are akin to previous work: Kilickaya et al. (2017) report results between 60-74% accuracy for binary classification on machine-machine data, which is comparable to our results for 3-way classification.

Still, we observe a mismatch between the ordinal human ratings and the continuous metrics. For example, humans might rate system A and system B both as a 6, whereas BLEU, for example, might assign 0.98 and 1.0 respectively, meaning that BLEU will declare system B as the winner. In order to account for this mismatch, we quantise our metric data to the same scale as the median scores from our human ratings.9 Applied to SFREST, where we previously got our worst
9Note that this mismatch can also be accounted for by continuous rating scales, as suggested by Belz and Kow (2011).
results, we can see an improvement for predicting informativeness, where all WBMs now perform significantly better than the random baseline (see Table 4). In the future, we will investigate related discriminative approaches, e.g. (Hodosh and Hockenmaier, 2016; Kannan and Vinyals, 2017), where the task is simplified to distinguishing correct from incorrect output.
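The quantisation step can be illustrated with a simple min-max mapping of a continuous metric onto the 1-6 scale of the median human ratings; this is only one plausible mapping, shown for illustration.

```python
# Illustrative quantisation of continuous metric scores onto a 1-6 Likert-style scale.
import numpy as np

def quantise(scores, low=1, high=6):
    scores = np.asarray(scores, dtype=float)
    span = scores.max() - scores.min()
    if span == 0:
        return np.full_like(scores, low)
    return np.rint(low + (high - low) * (scores - scores.min()) / span)
```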
# 8 Error Analysis
In this section, we attempt to uncover why auto- matic metrics perform so poorly.
# 8.1 Scales
We ï¬rst explore the hypothesis that metrics are good in distinguishing extreme cases, i.e. system outputs which are rated as clearly good or bad by the human judges, but do not perform well for ut- terances rated in the middle of the Likert scale, as suggested by Kilickaya et al. (2017). We âbinâ our data into three groups: bad, which comprises low ratings (â¤2); good, comprising high ratings (â¥5); and ï¬nally a group comprising average ratings.
We ï¬nd that utterances with low human ratings of informativeness and naturalness correlate sig- niï¬cantly better (p < 0.05) with automatic metrics than those with average and good human ratings. For example, as shown in Figure 3, the correlation between WBMs and human ratings for utterances with low informativeness scores ranges between 0.3 â¤ Ï â¤ 0.5 (moderate correlation), while the highest correlation for utterances of average and high informativeness barely reaches Ï â¤ 0.2 (very weak correlation). The same pattern can be ob- served for correlations with quality and natural- ness ratings.
This discrepancy in correlation results between low and other user ratings, together with the fact that the majority of system outputs are rated âgoodâ for informativeness (79%), naturalness (64%) and quality (58%), whereas low ratings do not exceed 7% in total, could explain why the overall correlations are low (Section 7) despite the observed trends in relationship between average system-level performance scores (Section 6). It also explains why the RNNLG system, which con- tains very few instances of low user ratings, shows poor correlation between human ratings and auto- matic metrics.
system MR system output human reference WOMs SIM humans 1 2 3 LOLS LOLS TGEN inform(name = the donatello, hasinternet = yes) inform nomatch(area = embarcadero, kidsallowed= yes, pricerange = expensive) inform(name = X, area = riverside, eattype = restaurant, food = fastfood, pricerange = cheap) well there is a hotel with in- ternet access called the do- natello i but i but i but i but i but i but i but i but i but i x is a restaurant on the riverside called located at the riverside and at is the donatello has internet unfortunately i could not ï¬nd any expensive restaurants in embarcadero that allow kids. x is a cheap fastfood restau- rant located near the riverside 1.4 1.1 2.4 5 1 4 6 1 1 4 RNNLG inform nomatch(kidsallowed = yes, food = moroccan) i am sorry, i did not ï¬nd any restaurants that allows kids and serve moroccan. there are no restau- sorry, rants allowing kids and serv- ing moroccan food 1.85 4 5
Table 5: Example pairs of MRs and system outputs from our data, contrasting the average of word- overlap metrics (normalised in the 1-6 range) and semantic similarity (SIM) with human ratings (median of all measures).
Figure 3: Correlation between automatic metrics (WBMs) and human ratings for utterances of bad informativeness (top), and average and good infor- mativeness (bottom).
# Impact of Target Data
Characteristics of Data: In Section 7.1, we ob- served that datasets have a signiï¬cant impact on how well automatic metrics reï¬ect human ratings. A closer inspection shows that BAGEL data differs signiï¬cantly from SFREST and SFHOTEL, both in terms of grammatical and MR properties. BAGEL has signiï¬cantly shorter references both in terms of number of characters and words compared to the other two datasets. Although being shorter, the words in BAGEL references are signiï¬cantly more often polysyllabic. Furthermore, BAGEL only con- sists of utterances generated from inform MRs, while SFREST and SFHOTEL also have less complex MR types, such as conï¬rm, goodbye, etc. Utter- ances produced from inform MRs are signiï¬cantly longer and have a signiï¬cantly higher correlation with human ratings of informativeness and natu- ralness than non-inform utterance types. In other words, BAGEL is the most complex dataset to gen-
erate from. Even though it is more complex, met- rics perform most reliably on BAGEL here (note that the correlation is still only weak). One possible explanation is that BAGEL only contains two human references per MR, whereas SFHOTEL and SFREST both contain 5.35 references per MR on average. Having more references means that WBMs natu- rally will return higher scores (âanything goesâ). This problem could possibly be solved by weight- ing multiple references according to their quality, as suggested by (Galley et al., 2015), or following a reference-less approach (Specia et al., 2010). Quality of Data: Our corpora contain crowd- sourced human references that have grammatical errors, e.g. âFifth Floor does not allow childsâ (SFREST reference). Corpus-based methods may pick up these errors, and word-based metrics will rate these system utterances as correct, whereas we can expect human judges to be sensitive to ungrammatical utterances. Note that the pars- ing score (while being a crude approximation of grammaticality) achieves one of our highest cor- relation results against human ratings, with |Ï| = .31. Grammatical errors raise questions about the quality of the training data, especially when be- ing crowdsourced. For example, Belz and Reiter (2006) ï¬nd that human experts assign low rank- ings to their original corpus text. Again, weighting (Galley et al., 2015) or reference-less approaches (Specia et al., 2010) might remedy this issue.
# 8.3 Example-based Analysis
As shown in previous sections, word-based met- rics moderately agree with humans on bad quality output, but cannot distinguish output of good or medium quality. Table 5 provides examples from
Study | Sentence Planning | Surface Realisation | Domain
this paper | weak positive (ρ = 0.33, WPS) | weak negative (ρ = −0.31, parser) | NLG, restaurant/hotel search
(Reiter and Belz, 2009) | none | strong positive (Pearson's r = 0.96, NIST) | NLG, weather forecast
(Stent et al., 2005) | weak positive (ρ = 0.47, LSA) | negative (ρ = −0.56, NIST) | paraphrasing of news
(Liu et al., 2016) | weak positive (ρ = 0.35, BLEU-4) | N/A | dialogue/Twitter pairs
(Elliott and Keller, 2014) | positive (ρ = 0.53, METEOR) | N/A | image caption
(Kilickaya et al., 2017) | positive (ρ = 0.64, SPICE) | N/A | image caption
(Cahill, 2009) | N/A | negative (ρ = −0.64, ROUGE) | NLG, German news texts
(Espinosa et al., 2010) | weak positive (ρ = 0.43, TER) | positive (ρ = 0.62, BLEU-4) | NLG, news texts
Table 6: Best correlation results achieved by our and previous work. Dimensions of human ratings targeted towards Sentence Planning include "accuracy", "adequacy", "correctness", "informativeness". Dimensions for Surface Realisation include "clarity", "fluency", "naturalness".
our three systems.10 Again, we observe differ- ent behaviour between WOMs and SIM scores. In Example 1, LOLS generates a grammatically cor- rect English sentence, which represents the mean- ing of the MR well, and, as a result, this utter- ance received high human ratings (median = 6) for informativeness, naturalness and quality. How- ever, WOMs rate this utterance low, i.e. scores of BLEU1-4, NIST, LEPOR, CIDEr, ROUGE and METEOR nor- malised into the 1-6 range all stay below 1.5. This is because the system-generated utterance has low overlap with the human/corpus references. Note that the SIM score is high (5), as it ignores human references and computes distributional semantic similarity between the MR and the system output. Examples 2 and 3 show outputs which receive low scores from both automatic metrics and humans. WOMs score these system outputs low due to lit- tle or no overlap with human references, whereas humans are sensitive to ungrammatical output and missing information (the former is partially cap- tured by GBMs). Examples 2 and 3 also illus- trate inconsistencies in human ratings since sys- tem output 2 is clearly worse than output 3 and both are rated by human with a median score of 1. Example 4 shows an output of the RNNLG system which is semantically very similar to the reference (SIM=4) and rated high by humans, but WOMs fail to capture this similarity. GBMs show more accu- rate results for this utterance, with mean of read- ability scores 4 and parsing score 3.5.
# 9 Related Work
Table 6 summarises results published by previous studies in related ï¬elds which investigate the re- lation between human scores and automatic met-
10Please note that WBMs tend to match against the refer- ence that is closest to the generated output. Therefore, we only include the closest match in Table 5 for simplicity.
rics. These studies mainly considered WBMs, while we are the ï¬rst study to consider GBMs. Some studies ask users to provide separate ratings for surface realisation (e.g. asking about âclarityâ or âï¬uencyâ), whereas other studies focus only on sentence planning (e.g. âaccuracyâ, âadequacyâ, or âcorrectnessâ). In general, correlations reported by previous work range from weak to strong. The re- sults conï¬rm that metrics can be reliable indica- tors at system-level (Reiter and Belz, 2009), while they perform less reliably at sentence-level (Stent et al., 2005). Also, the results show that the met- rics capture realization better than sentence plan- ning. There is a general trend showing that best- performing metrics tend to be the more complex ones, combining word-overlap, semantic similar- ity and term frequency weighting. Note, however, that the majority of previous works do not report whether any of the metric correlations are signiï¬- cantly different from each other.
# 10 Conclusions
This paper shows that state-of-the-art automatic evaluation metrics for NLG systems do not suf- ï¬ciently reï¬ect human ratings, which stresses the need for human evaluations. This result is opposed to the current trend of relying on automatic evalua- tion identiï¬ed in (Gkatzia and Mahamood, 2015). A detailed error analysis suggests that auto- matic metrics are particularly weak in distinguish- ing outputs of medium and good quality, which can be partially attributed to the fact that hu- man judgements and metrics are given on differ- ent scales. We also show that metric performance is data- and system-speciï¬c.
Nevertheless, our results also suggest that auto- matic metrics can be useful for error analysis by helping to ï¬nd cases where the system is perform- ing poorly. In addition, we ï¬nd reliable results on
system-level, which suggests that metrics can be useful for system development.
# 11 Future Directions
Word-based metrics make two strong assump- tions: They treat human-generated references as a gold standard, which is correct and complete. We argue that these assumptions are invalid for corpus-based NLG, especially when using crowd- sourced datasets. Grammar-based metrics, on the other hand, do not rely on human-generated ref- erences and are not inï¬uenced by their quality. However, these metrics can be easily manipulated with grammatically correct and easily readable output that is unrelated to the input. We have experimented with combining WBMs and GBMs using ensemble-based learning. However, while our model achieved high correlation with humans within a single domain, its cross-domain perfor- mance is insufï¬cient.
Our paper clearly demonstrates the need for more advanced metrics, as used in related ï¬elds, including: assessing output quality within the di- alogue context, e.g. (DuËsek and JurËc´ıËcek, 2016); extrinsic evaluation metrics, such as NLGâs con- tribution to task success, e.g. (Rieser et al., 2014; Gkatzia et al., 2016; Hastie et al., 2016); building discriminative models, e.g. (Hodosh and Hock- enmaier, 2016), (Kannan and Vinyals, 2017); or reference-less quality prediction as used in MT, e.g. (Specia et al., 2010). We see our paper as a ï¬rst step towards reference-less evaluation for NLG by introducing grammar-based metrics. In current work (DuËsek et al., 2017), we investigate a reference-less quality estimation approach based on recurrent neural networks, which predicts a quality score for a NLG system output by compar- ing it to the source meaning representation only.
Finally, note that the datasets considered in this study are fairly small (between 404 and 2.3k hu- man references per domain). To remedy this, sys- tems train on de-lexicalised versions (Wen et al., 2015), which bears the danger of ungrammatical lexicalisation (Sharma et al., 2016) and a possi- ble overlap between testing and training set (Lam- pouras and Vlachos, 2016). There are ongoing ef- forts to release larger and more diverse data sets, e.g. (Novikova et al., 2016, 2017).
# Acknowledgements
This research received funding from the EPSRC projects DILiGENt (EP/M005429/1) and MaDrI- gAL (EP/N017536/1). The Titan Xp used for this research was donated by the NVIDIA Corpora- tion.
# References
Anja Belz and Eric Kow. 2011. Discrete vs. con- tinuous rating scales for language evaluation in In Proceedings of the 49th Annual Meet- NLP. ing of the Association for Computational Linguis- tics: Human Language Technologies: Short Pa- pers â Volume 2. Association for Computational Linguistics, Portland, OR, USA, pages 230â235. http://aclweb.org/anthology/P11-2040.
Anja Belz and Ehud Reiter. 2006. Comparing au- tomatic and human evaluation of NLG systems. In Proceedings of the 11th Conference of the Eu- ropean Chapter of the Association for Computa- tional Linguistics. Trento, Italy, pages 313â320. http://aclweb.org/anthology/E06-1040.
Aoife Cahill. 2009. Correlating human and automatic evaluation of a German surface realiser. In Proceedings of the ACL-IJCNLP 2009 Conference Short Papers. Association for Computational Linguistics, Suntec, Singapore, pages 97–100. https://aclweb.org/anthology/P09-2025.
Chris Callison-Burch, Miles Osborne, and Philipp Koehn. 2006. Re-evaluating the role of BLEU in machine translation research. In Proceedings of the 11th Conference of the European Chapter of the Association for Computational Linguistics. Trento, Italy, pages 249–256. http://aclweb.org/anthology/E06-1032.
George Doddington. 2002. Automatic evaluation of machine translation quality using n-gram co-occurrence statistics. In Proceedings of the Second International Conference on Human Language Technology Research. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, pages 138–145. http://dl.acm.org/citation.cfm?id=1289189.1289273.
Ondřej Dušek, Jekaterina Novikova, and Verena Rieser. 2017. Referenceless quality estimation for natural language generation. In Proceedings of the 1st Workshop on Learning to Generate Natural Language.
Ondřej Dušek and Filip Jurčíček. 2015. Training a natural language generator from unaligned data. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Beijing, China, pages 451–461. http://aclweb.org/anthology/P15-1044.
Ondřej Dušek and Filip Jurčíček. 2016. A context-aware natural language generator for dialogue systems. In Proceedings of the 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue. Association for Computational Linguistics, Los Angeles, CA, USA. arXiv:1608.07076. http://aclweb.org/anthology/W16-3622.
Ondřej Dušek and Filip Jurčíček. 2016. Sequence-to-sequence generation for spoken dialogue via deep syntax trees and strings. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. Berlin, Germany, pages 45–51. arXiv:1606.05491. http://aclweb.org/anthology/P16-2008.
Desmond Elliott and Frank Keller. 2014. Comparing automatic evaluation measures for image description. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Association for Computational Linguistics, Baltimore, MD, USA, pages 452–457. http://aclweb.org/anthology/P14-2074.
Dominic Espinosa, Rajakrishnan Rajkumar, Michael White, and Shoshana Berleant. 2010. Further meta-evaluation of broad-coverage surface realization. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 564–574. http://aclweb.org/anthology/D10-1055.
Rudolf Franz Flesch. 1979. How to write plain English: A book for lawyers and consumers. HarperCollins.
Thomas Francois and Delphine Bernhard, editors. 2014. Recent Advances in Automatic Readability Assessment and Text Simplification, volume 165:2 of International Journal of Applied Linguistics. John Benjamins. http://doi.org/10.1075/itl.165.2.
Michel Galley, Chris Brockett, Alessandro Sordoni, Yangfeng Ji, Michael Auli, Chris Quirk, Margaret Mitchell, Jianfeng Gao, and Bill Dolan. 2015. deltaBLEU: A discriminative metric for generation tasks with intrinsically diverse targets. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers). Association for Computational Linguistics, Beijing, China, pages 445–450. http://aclweb.org/anthology/P15-2073.
Jesús Giménez and Lluís Màrquez. 2008. A smorgasbord of features for automatic MT evaluation. In Proceedings of the Third Workshop on Statistical Machine Translation. Association for Computational Linguistics, Columbus, OH, USA, pages 195–198. http://aclweb.org/anthology/W08-0332.
Dimitra Gkatzia, Oliver Lemon, and Verena Rieser. 2016. Natural language generation enhances human decision-making with uncertain information. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Berlin, Germany, pages 264–268. arXiv:1606.03254. http://aclweb.org/anthology/P16-2043.
Dimitra Gkatzia and Saad Mahamood. 2015. A snapshot of NLG evaluation practices 2005–2014. In Proceedings of the 15th European Workshop on Natural Language Generation (ENLG). Association for Computational Linguistics, Brighton, UK, pages 57–60. https://doi.org/10.18653/v1/W15-4708.
Aaron L. F. Han, Derek F. Wong, and Lidia S. Chao. 2012. LEPOR: A robust evaluation metric for machine translation with augmented factors. In Proceedings of COLING 2012: Posters. The COLING 2012 Organizing Committee, Mumbai, India, pages 441–450. http://aclweb.org/anthology/C12-2044.
Lushan Han, Abhay Kashyap, Tim Finin, James Mayfield, and Jonathan Weese. 2013. UMBC EBIQUITY-CORE: Semantic textual similarity systems. In Proceedings of the Second Joint Conference on Lexical and Computational Semantics (*SEM). Atlanta, Georgia, volume 1, pages 44–52. http://aclweb.org/anthology/S13-1005.
Helen Hastie, Heriberto Cuayahuitl, Nina Dethlefs, Simon Keizer, and Xingkun Liu. 2016. Why bother? Is evaluation of NLG in an end-to-end Spoken Dialogue System worth it? In Proceedings of the International Workshop on Spoken Dialogue Systems (IWSDS). Saariselkä, Finland.
Micah Hodosh and Julia Hockenmaier. 2016. Focused evaluation for image description with binary forced-choice tasks. In Proceedings of the 5th Workshop on Vision and Language. Berlin, Germany, pages 19–28. http://aclweb.org/anthology/W16-3203.
Dirk Hovy, Taylor Berg-Kirkpatrick, Ashish Vaswani, and Eduard H. Hovy. 2013. Learning whom to trust with MACE. In Proceedings of NAACL-HLT. Atlanta, GA, USA, pages 1120–1130. http://aclweb.org/anthology/N13-1132.
Min-Yen Kan, Kathleen R. McKeown, and Judith L. Klavans. 2001. Applying natural language generation to indicative summarization. In Proceedings of the 8th European Workshop on Natural Language Generation. Association for Computational Linguistics, Toulouse, France, pages 1–9. https://doi.org/10.3115/1117840.1117853.
Anjuli Kannan and Oriol Vinyals. 2017. Adversarial evaluation of dialogue models. CoRR abs/1701.08198. https://arxiv.org/abs/1701.08198.
Mert Kilickaya, Aykut Erdem, Nazli Ikizler-Cinbis, and Erkut Erdem. 2017. Re-evaluating automatic metrics for image captioning. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics. Association for Computational Linguistics, Valencia, Spain. arXiv:1612.07600. https://arxiv.org/abs/1612.07600.
Gerasimos Lampouras and Andreas Vlachos. 2016. Imitation learning for language generation from unaligned data. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers. The COLING 2016 Organizing Committee, Osaka, Japan, pages 1101–1112. http://aclweb.org/anthology/C16-1105.
J. Richard Landis and Gary G. Koch. 1977. The measurement of observer agreement for categorical data. Biometrics 33(1):159–174. https://doi.org/10.2307/2529310.
Alon Lavie and Abhaya Agarwal. 2007. METEOR: An automatic metric for MT evaluation with high levels of correlation with human judgments. In Proceedings of the Second Workshop on Statistical Machine Translation. Association for Computational Linguistics, Prague, Czech Republic, pages 228–231. http://aclweb.org/anthology/W07-0734.
Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text summarization branches out: Proceedings of the ACL-04 workshop. Barcelona, Spain, pages 74–81. http://aclweb.org/anthology/W04-1013.
Chia-Wei Liu, Ryan Lowe, Iulian Serban, Michael Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How NOT to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Austin, TX, USA, pages 2122–2132. arXiv:1603.08023. http://aclweb.org/anthology/D16-1230.
François Mairesse, Milica Gašić, Filip Jurčíček, Simon Keizer, Blaise Thomson, Kai Yu, and Steve Young. 2010. Phrase-based statistical language generation using graphical models and active learning. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Uppsala, Sweden, pages 1552–1561. http://aclweb.org/anthology/P10-1157.
Hongyuan Mei, Mohit Bansal, and Matthew R. Walter. 2016. What to talk about and how? Selective generation using LSTMs with coarse-to-fine alignment. In Proceedings of NAACL-HLT 2016. San Diego, CA, USA. arXiv:1509.00838. http://aclweb.org/anthology/N16-1086.
Courtney Napoles, Keisuke Sakaguchi, and Joel Tetreault. 2016. There's no comparison: Reference-less evaluation metrics in grammatical error correction. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Austin, TX, USA, pages 2109–2115. arXiv:1610.02124. http://aclweb.org/anthology/D16-1228.
Jekaterina Novikova, Ondřej Dušek, and Verena Rieser. 2017. The E2E dataset: New challenges for end-to-end generation. In Proceedings of the 18th Annual Meeting of the Special Interest Group on Discourse and Dialogue. Saarbrücken, Germany. arXiv:1706.09254. https://arxiv.org/abs/1706.09254.
Jekaterina Novikova, Oliver Lemon, and Verena Rieser. 2016. Crowd-sourcing NLG data: Pictures elicit better data. In Proceedings of the 9th International Natural Language Generation Conference. Edinburgh, UK, pages 265–273. arXiv:1608.00339. http://aclweb.org/anthology/W16-2302.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Philadelphia, PA, USA, pages 311–318. http://aclweb.org/anthology/P02-1040.
Ehud Reiter and Anja Belz. 2009. An investigation into the validity of some metrics for automatically evaluating natural language generation systems. Computational Linguistics 35(4):529–558. https://doi.org/10.1162/coli.2009.35.4.35405.
Verena Rieser, Oliver Lemon, and Simon Keizer. 2014. Natural language generation as incremental planning under uncertainty: Adaptive information presentation for statistical dialogue systems. IEEE/ACM Transactions on Audio, Speech, and Language Processing 22(5):979–993. https://doi.org/10.1109/TASL.2014.2315271.
Shikhar Sharma, Jing He, Kaheer Suleman, Hannes Schulz, and Philip Bachman. 2016. Natural language generation in dialogue using lexicalized and delexicalized data. CoRR abs/1606.03632. http://arxiv.org/abs/1606.03632.
Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of the 7th Conference of the Association for Machine Translation of the Americas. Cambridge, MA, USA, pages 223–231. http://mt-archive.info/AMTA-2006-Snover.pdf.
Lucia Specia, Dhwaj Raj, and Marco Turchi. 2010. Machine translation evaluation versus quality estimation. Machine Translation 24(1):39–50. https://doi.org/10.1007/s10590-010-9077-2.
Amanda Stent, Matthew Marge, and Mohit Singhai. 2005. Evaluating evaluation methods for generation in the presence of variation. In Computational Linguistics and Intelligent Text Processing: 6th International Conference, CICLing 2005, Mexico City, Mexico, February 13–19, 2005. Proceedings. Springer, Berlin/Heidelberg, pages 341–351. https://doi.org/10.1007/978-3-540-30586-6_38.
Ramakrishna Vedantam, C. Lawrence Zitnick, and Devi Parikh. 2015. CIDEr: Consensus-based image description evaluation. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Boston, MA, USA, pages 4566–4575. https://doi.org/10.1109/CVPR.2015.7299087.
Tsung-Hsien Wen, Milica Gašić, Nikola Mrkšić, Lina Maria Rojas-Barahona, Pei-hao Su, David Vandyke, and Steve J. Young. 2016. Multi-domain neural network language generation for spoken dialogue systems. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. San Diego, CA, USA, pages 120–129. arXiv:1603.01232. http://aclweb.org/anthology/N16-1015.
Tsung-Hsien Wen, Milica Gašić, Nikola Mrkšić, Pei-Hao Su, David Vandyke, and Steve Young. 2015. Semantically conditioned LSTM-based natural language generation for spoken dialogue systems. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Lisbon, Portugal, pages 1711–1721. http://aclweb.org/anthology/D15-1199.
Evan James Williams. 1959. Regression analysis. John Wiley & Sons, New York, NY, USA.
# Appendix A: Detailed Results
BAGEL: Inf 0.16*, Nat 0.36*, Qua 0.38*; TGEN 0.42*, LOLS 0.24*; Total BAGEL 0.31*
SFHOTEL: Inf 0.41*, Nat 0.47*, Qua 0.52*; RNNLG 0.52*, LOLS 0.45*; Total SFHOTEL 0.50*
SFREST: Inf 0.35*, Nat 0.29*, Qua 0.35*; RNNLG 0.28*, LOLS 0.38*; Total SFREST 0.35*
Total all data: 0.45*
Table 7: Intra-class correlation coefficient (ICC) for human ratings across the three datasets. '*' denotes statistical significance (p < 0.05).
metric TER BLEU1 BLEU2 BLEU3 BLEU4 ROUGE NIST LEPOR CIDEr METEOR SIM RE msp prs len wps sps cpw spw pol ppw informativeness naturalness quality BAGEL TGEN Avg / StDev 0.36/0.24 0.75*/0.21 0.68/0.23 0.60/0.28 0.52/0.32 0.76/0.18 4.44*/2.05 0.46*/0.22 2.92/2.40 0.50/0.22 0.66/0.09 86.79/19.48 0.04*/0.21 84.51*/25.78 38.20*/14.22 10.08*/3.10 13.15*/4.98 3.77/0.60 1.30/0.22 2.22/1.21 0.22/0.09 4.77/1.09 4.76/1.26 4.77/1.19 LOLS Avg / StDev 0.33/0.24 0.81*/0.16 0.72/0.21 0.63/0.26 0.53/0.33 0.78/0.17 4.91*/2.04 0.50*/0.19 3.01/2.27 0.53/0.23 0.65/0.12 83.39/20.41 0.14*/0.37 93.30*/27.04 42.54*/14.11 10.94*/3.19 14.61*/5.13 3.88/0.59 1.33/0.23 2.40/1.16 0.22/0.09 4.91/1.23 4.67/1.25 4.54/1.28 SFHOTEL LOLS Avg / StDev 0.65*/0.32 0.66*/0.23 0.54*/0.28 0.42*/0.33 0.28*/0.33 0.64*/0.21 3.49*/1.99 0.30*/0.16 1.66*/1.67 0.44*/0.20 0.73*/0.14 69.62/19.14 0.69/0.77 107.90*/36.41 51.69*/17.30 12.07*/4.17 17.02*/5.90 4.36/0.63 1.43/0.26 1.33/1.04 0.12/0.09 5.27/1.02 4.62/1.28 4.53/1.26 RNNLG Avg / StDev 0.28*/0.27 0.85*/0.18 0.78*/0.25 0.69*/0.32 0.56*/0.40 0.83*/0.18 4.37*/2.19 0.52*/0.23 3.08*/2.05 0.62*/0.27 0.76*/0.15 70.90/17.07 0.68/0.78 97.58*/32.58 49.06*/15.77 11.43*/3.63 16.03*/4.88 4.34/0.58 1.43/0.23 1.24/1.04 0.11/0.10 5.47*/0.81 4.99*/1.13 4.54/1.18 SFREST RNNLG Avg / StDev 0.41*/0.35 0.73*/0.24 0.62*/0.31 0.52*/0.37 0.44*/0.41 0.72*/0.24 4.86*/2.55 0.51*/0.25 3.39*/2.53 0.54*/0.28 0.76/0.13 64.67/19.07 0.78/0.82 93.74/34.98 53.27*/19.50 11.15*/4.37 16.39*/6.17 4.86*/0.64 1.50/0.26 1.69/1.12 0.16/0.11 5.29*/0.94 4.86/1.13 4.51/1.14 LOLS Avg / StDev 0.65*/0.27 0.59*/0.23 0.45*/0.29 0.34*/0.33 0.24*/0.32 0.58*/0.22 4.01*/2.07 0.30*/0.17 2.09*/1.73 0.41*/0.19 0.77/0.14 64.27/22.22 0.85/0.89 97.20/39.30 50.92*/18.74 10.52*/4.21 15.41*/5.92 4.94*/0.76 1.50/0.29 1.57/1.07 0.16/0.12 5.16/1.07 4.74/1.23 4.58/1.33
Table 8: The systems' performance for all datasets. Avg denotes a mean value, StDev stands for standard deviation, '*' denotes a statistically significant difference (p < 0.05) between the two systems on the given dataset.
Table 9: Spearman correlation between metrics and human ratings for individual datasets and systems. '*' denotes statistically significant correlation (p < 0.05), bold font denotes significantly stronger correlation when comparing two systems on the same dataset.
inf BAGEL nat qual inf SFHOTEL nat qual inf SFREST nat qual TER BLEU1 BLEU2 BLEU3 BLEU4 ROUGE NIST LEPOR CIDEr METEOR SIM RE cpw len wps sps spw pol ppw msp prs -0.19* 0.23* 0.21* 0.19* 0.18* 0.20* 0.21* 0.07 0.21* 0.25* 0.15* -0.08 0.05 0.14* 0.14* 0.14* 0.05 0.13* 0.06 0.02 -0.10 -0.19* 0.14* 0.15* 0.15* 0.14* 0.13* 0.09 0.07 0.16* 0.13* 0.09 0.03 -0.04 -0.22* -0.23* -0.19* 0.00 -0.05 0.11* -0.04 0.22* -0.15* 0.11* 0.12* 0.11* 0.10* 0.11* 0.06 0.01 0.12* 0.12* 0.07 0.09 -0.12* -0.24* -0.23* -0.21* -0.06 -0.10* 0.04 -0.11* 0.25* -0.10* 0.11* 0.10* 0.09* 0.08* 0.09* 0.07* 0.13* 0.10* 0.11* 0.01 0.01 0.07* 0.05 0.03 -0.01 -0.10* -0.04 -0.06 0.02 -0.05 -0.19* 0.18* 0.17* 0.16* 0.10* 0.15* 0.13* 0.15* 0.16* 0.15* -0.04 0.04 0.05 -0.14* -0.14* -0.18* -0.06 -0.10* -0.04 -0.06 0.12* -0.07* 0.07* 0.07* 0.07* 0.06 0.06 0.06 0.05 0.05 0.08* -0.09* 0.10* -0.02 -0.07* -0.06 -0.12* -0.14* -0.14* -0.13* -0.06 0.07 -0.09* 0.11* 0.09* 0.08* 0.09* 0.09* 0.10* 0.16* 0.08* 0.15* 0.15* 0.02 0.04 0.16* 0.14* 0.10* -0.11* -0.03 -0.11* 0.08* -0.13* -0.15* 0.14* 0.13* 0.12* 0.09* 0.15* 0.08* 0.12* 0.12* 0.18* -0.02 0.02 0.10* -0.15* -0.17* -0.18* -0.02 -0.08* 0.01 0.01 0.18* -0.08* 0.07* 0.06* 0.06* 0.05 0.06* 0.03 0.04 0.04 0.11* -0.02 0.06 0.06 -0.09* -0.10* -0.12* -0.07* -0.08* -0.04 0.01 0.13*
Table 10: Spearman correlation between metrics and human ratings for each dataset. '*' denotes statistically significant correlation (p < 0.05).
inf TGEN nat qual inf LOLS nat qual inf RNNLG nat qual TER BLEU1 BLEU2 BLEU3 BLEU4 ROUGE NIST LEPOR CIDEr METEOR SIM RE cpw len wps sps spw pol ppw msp prs -0.21* 0.30* 0.30* 0.27* 0.23* 0.20* 0.25* 0.17* 0.26* 0.29* 0.16* -0.06 0.03 0.25* 0.33* 0.25* 0.01 0.16* -0.02 -0.02 -0.23* -0.19* 0.15* 0.17* 0.17* 0.15* 0.11 0.07 0.12 0.14* 0.09 0.04 0.09 -0.12 -0.25* -0.17* -0.20* -0.07 -0.06 0.06 -0.06 0.18* -0.16* 0.13 0.14 0.12 0.11 0.09 0.02 0.07 0.10 0.09 0.06 0.13 -0.19* -0.21* -0.12 -0.17* -0.13 -0.07 0.00 -0.11 0.13 -0.07* 0.08* 0.05 0.04 0.04 0.05 0.07* 0.13* 0.05 0.14* 0.14* -0.02 0.11* 0.17* 0.11* 0.09* -0.07* -0.02 -0.08* 0.10* -0.12* -0.15* 0.12* 0.11* 0.09* 0.04 0.09* 0.11* 0.13* 0.13* 0.13* 0.02 0.04 0.11* -0.12* -0.17* -0.19* -0.06* -0.09* 0.00 0.00 0.16* -0.11* 0.08* 0.07* 0.07* 0.04 0.05 0.09* 0.11* 0.09* 0.12* 0.00 0.07* 0.08* -0.10* -0.13* -0.17* -0.10* -0.11* -0.05 0.02 0.15* -0.02 0.07* 0.06* 0.06 0.06 0.07* 0.04 0.02 0.04 0.08* 0.05 0.02 -0.02 0.06 0.07* 0.03 -0.09* -0.08* -0.11* 0.02 -0.07* -0.13* 0.13* 0.14* 0.13* 0.11* 0.15* 0.06* 0.05 0.10* 0.15* -0.08* -0.01 0.02 -0.18* -0.17* -0.17* 0.01 -0.08* 0.00 -0.04 0.14* -0.08* 0.07* 0.08* 0.08* 0.08* 0.09* 0.01 0.00 0.02 0.10* -0.09* 0.06* -0.05 -0.08* -0.06 -0.08* -0.07* -0.09* -0.07* -0.07* 0.10*
Table 11: Spearman correlation between metrics and human ratings for each system. '*' denotes statistical significance (p < 0.05).
Table 12: Accuracy of metrics predicting relative human ratings, with '*' denoting statistical significance (p < 0.05).
informativeness Bad Good and avg Bad naturalness Good and avg Bad TER BLEU1 BLEU2 BLEU3 BLEU4 ROUGE NIST LEPOR CIDEr METEOR 0.48* 0.45* 0.49* 0.40* 0.41* 0.50* 0.26 0.40* 0.42* 0.45* 0.37* 0.07* 0.11* 0.09* 0.08* 0.07* 0.08* 0.08* 0.09* 0.09* 0.14* 0.12* 0.31* 0.26* 0.29* 0.25* 0.21* 0.28* 0.23* 0.23* 0.21* 0.24* 0.29* 0.15* 0.13* 0.13* 0.13* 0.08* 0.13* 0.08* 0.10* 0.12* 0.15* -0.03 0.08 0.07 0.05 0.01 0.01 0.07 0.08 0.03 0.02 0.03 0.21* 0.06* 0.04 0.04* 0.05* 0.04 0.04* 0.03 0.01 0.04 0.08* -0.08* SIM
Table 13: Spearman correlation between WBM scores and human ratings for utterances from the Bad bin and utterances from the Good and Average bins. '*' denotes statistically significant correlation (p < 0.05), bold font denotes significantly stronger correlation for the Bad bin compared to the Good and Average bins.
naturalness Inform Not inform Inform Not inform Inform Not inform informativeness quality TER BLEU1 BLEU2 BLEU3 BLEU4 ROUGE NIST LEPOR CIDEr METEOR SIM cpw len wps sps spw pol ppw msp prs -0.08* 0.11* 0.09* 0.07* 0.06* 0.08* 0.08* 0.09* 0.10* 0.14* 0.15* 0.12* 0.17* 0.11* 0.09* -0.06* -0.08* -0.14* 0.11* -0.10* -0.10 0.09 0.10 0.11* 0.11* 0.12* 0.05 0.16* 0.01 0.17* 0.09 -0.15* 0.08 0.19* 0.18* 0.09 0.05 -0.01 -0.03 -0.18* -0.17* 0.14* 0.14* 0.13* 0.09* 0.14* 0.10* 0.11* 0.16* 0.15* -0.01 0.09* -0.15* -0.19* -0.20* -0.03 -0.10* 0.00 0.00 0.18* -0.18* 0.20* 0.20* 0.20* 0.18* 0.22* 0.06 0.16* 0.04 0.22* -0.03 -0.14* -0.12* -0.03 -0.02 0.01 -0.03 -0.03 -0.08 0.04 -0.09* 0.07* 0.07* 0.06* 0.05* 0.06* 0.07* 0.05* 0.07* 0.09* -0.05* 0.01 -0.12* -0.12* -0.17* -0.12* -0.09* -0.03 -0.03 0.15* -0.11* 0.11* 0.13* 0.14* 0.14* 0.16* -0.06 0.04 0.02 0.18* -0.10 -0.11* -0.05 0.01 0.02 0.01 -0.03 -0.05 -0.08 0.02
Table 14: Spearman correlation between automatic metrics and human ratings for utterances of the inform MR type and utterances of other MR types. '*' denotes statistically significant correlation (p < 0.05), bold font denotes significantly stronger (absolute) correlation for inform MRs compared to non-inform MRs. | {
"id": "1612.07600"
} |
1707.06342 | ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression | We propose an efficient and unified framework, namely ThiNet, to
simultaneously accelerate and compress CNN models in both training and
inference stages. We focus on the filter level pruning, i.e., the whole filter
would be discarded if it is less important. Our method does not change the
original network structure, thus it can be perfectly supported by any
off-the-shelf deep learning libraries. We formally establish filter pruning as
an optimization problem, and reveal that we need to prune filters based on
statistics information computed from its next layer, not the current layer,
which differentiates ThiNet from existing methods. Experimental results
demonstrate the effectiveness of this strategy, which has advanced the
state-of-the-art. We also show the performance of ThiNet on ILSVRC-12
benchmark. ThiNet achieves 3.31$\times$ FLOPs reduction and 16.63$\times$
compression on VGG-16, with only 0.52$\%$ top-5 accuracy drop. Similar
experiments with ResNet-50 reveal that even for a compact network, ThiNet can
also reduce more than half of the parameters and FLOPs, at the cost of roughly
1$\%$ top-5 accuracy drop. Moreover, the original VGG-16 model can be further
pruned into a very small model with only 5.05MB model size, preserving AlexNet
level accuracy but showing much stronger generalization ability. | http://arxiv.org/pdf/1707.06342 | Jian-Hao Luo, Jianxin Wu, Weiyao Lin | cs.CV | To appear in ICCV 2017 | null | cs.CV | 20170720 | 20170720 |
# ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression
Jian-Hao Luo1, Jianxin Wu1, and Weiyao Lin2 1National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China 2Shanghai Jiao Tong University, Shanghai, China luojh@lamda.nju.edu.cn, wujx2001@nju.edu.cn, wylin@sjtu.edu.cn
# Abstract
We propose an efficient and unified framework, namely ThiNet, to simultaneously accelerate and compress CNN models in both training and test stages with minor performance degradation. We focus on the filter level pruning, i.e., the whole filter would be discarded if it is less important. Our method does not change the original network structure, thus it can be perfectly supported by any off-the-shelf deep learning libraries. We formally establish filter pruning as an optimization problem, and reveal that we need to prune filters based on statistics information computed from its next layer, not the current layer, which differentiates ThiNet from existing methods. Experimental results demonstrate the effectiveness of this strategy, which has advanced the state-of-the-art. We also show the performance of ThiNet on the ILSVRC-12 benchmark. ThiNet achieves 3.31× FLOPs reduction and 16.63× compression on VGG-16, with only 0.52% top-5 accuracy drop. Similar experiments with ResNet-50 reveal that even for a compact network, ThiNet can also reduce more than half of the parameters and FLOPs, at the cost of roughly 1% top-5 accuracy drop. Moreover, the original VGG-16 model can be further pruned into a very small model with only 5.05MB model size, preserving AlexNet level accuracy but showing much stronger generalization ability.
# 1. Introduction
In the past few years, we have witnessed a rapid development of deep neural networks in the field of computer vision, from basic image classification tasks such as the ImageNet recognition challenge [18, 28, 11], to some more advanced applications, e.g., object detection [7], semantic segmentation [24], image captioning [16] and many others. Deep neural networks have achieved state-of-the-art performance in these fields compared with traditional methods based on manually designed visual features.
In spite of its great success, a typical deep model is hard to deploy on resource constrained devices, e.g., mobile phones or embedded gadgets. A resource constrained scenario means a computing task must be accomplished with limited resource supply, such as computing time, storage space, battery power, etc. One of the main issues of deep neural networks is its huge computational cost and storage overhead, which constitute a serious challenge for a mobile device. For instance, the VGG-16 model [28] has 138.34 million parameters, taking up more than 500MB storage space,1 and needs 30.94 billion floating point operations (FLOPs) to classify a single image. Such a cumbersome model can easily exceed the computing limit of small devices. Thus, network compression has drawn a significant amount of interest from both academia and industry.

Pruning is one of the most popular methods to reduce network complexity, which has been widely studied in the model compression community. In the 1990s, LeCun et al. [20] had observed that several unimportant weights can be removed from a trained network with negligible loss in accuracy. A similar strategy was also explored in [2]. This process resembles the biological phenomena in the mammalian brain, where the number of neuron synapses reaches its peak in early childhood, followed by gradual pruning during development. However, these methods are mainly based on the second derivative, thus are not applicable for today's deep models due to expensive memory and computation costs. Recently, Han et al. [10] introduced a simple pruning strategy: all connections with weights below a threshold are removed, followed by fine-tuning to recover accuracy. This iterative procedure is performed several times, generating a very sparse model. However, such a non-structured sparse model cannot be supported by off-the-shelf libraries, thus specialized hardware and software are needed for efficient inference, which is difficult and expensive in real-world applications. On the other hand, the non-structured random connectivity ignores cache and memory access issues. As indicated in [32], due to the poor cache locality and jumping memory access caused by random connectivity, the practical acceleration is very limited (sometimes it even slows down), even though the actual sparsity is relatively high.
To avoid the limitations of non-structured pruning mentioned above, we suggest that filter level pruning would be a better choice. The benefits of removing a whole unimportant filter are manifold: 1) The pruned model has no difference in network structure, thus it can be perfectly supported by any off-the-shelf deep learning libraries. 2) Memory footprint would be reduced dramatically. Such memory reduction comes not only from the model parameters themselves, but also from the intermediate activations, which are rarely considered in previous studies. 3) Since the pruned network structure has not been damaged, it can be further compressed and accelerated by other compression methods, e.g., the parameter quantization approach [33]. 4) More vision tasks, such as object detection or semantic segmentation, can be accelerated greatly using the pruned model.

Footnote 1: 1 MB = 2^20 ≈ 1.048 million bytes, and 1 million is 10^6.
In this paper, we propose a unified framework, namely ThiNet (stands for "Thin Net"), to prune the unimportant filters to simultaneously accelerate and compress CNN models in both training and test stages with minor performance degradation. With our pruned network, some important transfer tasks such as object detection or fine-grained recognition can run much faster (both training and inference), especially in small devices. Our main insight is that we establish a well-defined optimization problem, which shows that whether a filter can be pruned depends on the outputs of its next layer, not its own layer. This novel finding differentiates ThiNet from existing methods which prune filters using statistics calculated from their own layer.

We then compare the proposed method with other state-of-the-art criteria. Experimental results show that our approach is significantly better than existing methods, especially when the compression rate is relatively high. We evaluate ThiNet on the large-scale ImageNet classification task. ThiNet achieves 3.31× FLOPs reduction and 16.63× compression on the VGG-16 model [28], with only 0.52% top-5 accuracy drop. The ResNet-50 model [11] has less redundancy compared with classic CNN models. ThiNet can still reduce 2.26× FLOPs and 2.06× parameters with roughly 1% top-5 accuracy drop. To explore the limits of ThiNet, we show that the original VGG-16 model can even be pruned into 5.05MB, while still preserving AlexNet level accuracy.

In addition, we also explore the performance of ThiNet in a more practical task, i.e., transfer learning on small-scale datasets. Experimental results demonstrate the excellent effectiveness of ThiNet, which achieves the best trade-off between model size and accuracy.
The key advantages and major contributions of this paper can be summarized as follows.

• We propose a simple yet effective framework, namely ThiNet, to simultaneously accelerate and compress CNN models. ThiNet shows significant improvements over existing methods on numerous tasks.

• We formally establish filter pruning as an optimization problem, and reveal that we need to prune filters using statistics information computed from its next layer, not the current layer, which differentiates ThiNet from existing methods.

• In experiments, the VGG-16 model can be pruned into 5.05MB, showing promising generalization ability on transfer learning. Higher accuracy could be preserved with a more accurate model using ThiNet.
# 2. Related work
Many researchers have found that deep models suffer from heavy over-parameterization. For example, Denil et al. [4] demonstrated that a network can be efficiently reconstructed with only a small subset of its original parameters. However, this redundancy seems necessary during model training, since the highly non-convex optimization is hard to solve with current techniques [5, 13]. Hence, there is a great need to reduce model size after training.

Some methods have been proposed to pursue a balance between model size and accuracy. Han et al. [10] proposed an iterative pruning method to remove the redundancy in deep models. Their main insight is that small-weight connectivity below a threshold should be discarded. In practice, this can be aided by applying ℓ1 or ℓ2 regularization to push connectivity values to become smaller. The major weakness of this strategy is the loss of universality and flexibility, thus it seems to be less practical in real applications.

In order to avoid these weaknesses, some attention has been focused on group-wise sparsity. Lebedev and Lempitsky [19] explored group-sparse convolution by introducing the group-sparsity regularization to the loss function; some entire groups of weights then shrink to zeros and can be removed. Similarly, Wen et al. [32] proposed the Structured Sparsity Learning (SSL) method to regularize filter, channel, filter shape and depth structures. In spite of their success, the original network structure has been destroyed. As a result, some dedicated libraries are needed for an efficient inference speed-up.

In line with our work, some filter level pruning strategies have been explored too. The core is to evaluate neuron importance, which has been widely studied in the community [34, 27, 21, 14, 23]. The simplest possible method is based on the magnitude of weights. Li et al. [21] measured the importance of each filter by calculating its absolute weight sum. Another practical criterion is to measure the sparsity of activations after the ReLU function. Hu et al. [14] believed that if most outputs of some neurons are zero, these activations should be expected to be redundant. They compute the Average Percentage of Zeros (APoZ) of each filter as its importance score. These two criteria are simple and straightforward, but not directly related to the final loss. Inspired by this observation, Molchanov et al. [23] adopted Taylor expansion to approximate the influence on the loss function induced by removing each filter.
Figure 1. Illustration of ThiNet. First, we focus on the dotted box part to determine several weak channels and their corresponding filters (highlighted in yellow in the first row). These channels (and their associated filters) have little contribution to the overall performance, thus can be discarded, leading to a pruned model. Finally, the network is fine-tuned to recover its accuracy. (This figure is best viewed in color.)

Beyond pruning, there are also other strategies to obtain small CNN models. One popular approach is parameter quantization [8, 3, 33, 9]. Low-rank approximation is also widely studied [5, 29]. Note that these methods are complementary to filter pruning, and can be combined with ThiNet for further improvement.
# 3. ThiNet
In this section, we will give a comprehensive introduction to our filter level pruning approach: ThiNet. First, the overall framework will be presented. Next, a more detailed description of our selection algorithm follows. Finally, we will show our pruning strategy, which takes both efficiency and effectiveness into consideration.
# 3.1. Framework of ThiNet
Pruning is a classic method used for reducing model complexity. Although vast differences exist (such as different criteria in selecting what should be pruned), the overall framework is similar in pruning filters inside a deep neural network. It can be summarized in one sentence: evaluate the importance of each neuron, remove those unimportant ones, and fine-tune the whole network.

This framework is illustrated in Figure 1. In the next subsection, we will focus on the dotted box part to introduce our data-driven channel selection method, which determines the channels (and their associated filters) that are to be pruned away.

Given a pre-trained model, it would be pruned layer by layer with a predefined compression rate. We summarize our framework as follows:
1. Filter selection. Unlike existing methods that use layer i's statistics to guide the pruning of layer i's filters, we use layer i + 1 to guide the pruning in layer i. The key idea is: if we can use a subset of channels in layer (i + 1)'s input to approximate the output in layer i + 1, the other channels can be safely removed from the input of layer i + 1. Note that one channel in layer (i + 1)'s input is produced by one filter in layer i, hence we can safely prune the corresponding filter in layer i.
2. Pruning. Weak channels in layer (i + 1)'s input and their corresponding filters in layer i would be pruned away, leading to a much smaller model. Note that the pruned network has exactly the same structure but with fewer filters and channels. In other words, the original wide network is becoming much thinner. That is why we call our method "ThiNet".

3. Fine-tuning. Fine-tuning is a necessary step to recover the generalization ability damaged by filter pruning. But it will take very long for large datasets and complex models. For time-saving considerations, we fine-tune one or two epochs after the pruning of one layer. In order to get an accurate model, more additional epochs would be carried out when all layers have been pruned.

4. Iterate to step 1 to prune the next layer.
# 3.2. Data-driven channel selection
We use a triplet ⟨I_i, W_i, ∗⟩ to denote the convolution process in layer i, where I_i ∈ R^{C×H×W} is the input tensor, which has C channels, H rows and W columns, and W_i ∈ R^{D×C×K×K} is a set of filters with K × K kernel size, which generates a new tensor with D channels.
Our goal is to remove some unimportant filters in W_i. Note that, if a filter in W_i is removed, its corresponding channel in I_{i+1} and W_{i+1} would also be discarded. However, since the filter number in layer i + 1 has not been changed, the size of its output tensor, i.e., I_{i+2}, would be kept exactly the same. Inspired by this observation, we believe that if we can remove several filters that have little influence on I_{i+2} (which is also the output of layer i + 1), it would have little influence on the overall performance too. In other words, minimizing the reconstruction error of I_{i+2} is closely related to the network's classification performance.
# 3.2.1 Collecting training examples
In order to determine which channel can be removed safely, a training set used for importance evaluation would be collected. As illustrated in Figure 2, an element, denoted by y, is randomly sampled from the tensor I_{i+2} (before ReLU). A corresponding filter \hat{W} ∈ R^{C×K×K} and sliding window x ∈ R^{C×K×K} (after ReLU) can also be determined according to its location. Here, some index notations are omitted for a clearer presentation. Normally, the convolution operation can be computed with a corresponding bias b as follows:
y = \sum_{c=1}^{C} \sum_{k_1=1}^{K} \sum_{k_2=1}^{K} \hat{W}_{c,k_1,k_2} \times x_{c,k_1,k_2} + b.   (1)
Figure 2. Illustration of data sampling and variables' relationship.
Now, if we further define:

\hat{x}_c = \sum_{k_1=1}^{K} \sum_{k_2=1}^{K} \hat{W}_{c,k_1,k_2} \times x_{c,k_1,k_2},   (2)

Eq. 1 can be simplified as:

\hat{y} = \sum_{c=1}^{C} \hat{x}_c,   (3)
in which \hat{y} = y − b. It is worthwhile to keep in mind that \hat{x} and \hat{y} are random variables whose instantiations require fixed spatial locations indexed by c, k_1, and k_2. A key observation is that the channels of \hat{x} = (\hat{x}_1, \hat{x}_2, . . . , \hat{x}_C) are independent: \hat{x}_c only depends on x_{c,:,:}, and has no dependency on x_{c',:,:} if c' ≠ c.
In other words, if we can find a subset S ⊂ {1, 2, . . . , C} and the equality

\hat{y} = \sum_{c \in S} \hat{x}_c   (4)

always holds, then we do not need any \hat{x}_c if c ∉ S and these variables can be safely removed without changing the CNN model's result.
Of course, Eq. 4 cannot always be true for all instances of the random variables \hat{x} and \hat{y}. However, we can manually extract instances of them to find a subset S such that Eq. 4 is approximately correct.
Given an input image, we first apply the CNN model in the forward run to find the input and output of layer i + 1. Then for any feasible (c, k_1, k_2) triplet, we can obtain a C-dimensional vector variable \hat{x} = {\hat{x}_1, \hat{x}_2, . . . , \hat{x}_C} and a scalar value \hat{y} using Eq. 1 to Eq. 3. Since \hat{x} and \hat{y} can be viewed as random variables, more instances can be sampled by choosing different input images, different channels, and different spatial locations.
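For concreteness, the following NumPy sketch mirrors this sampling procedure for a single convolutional layer. All shapes, sizes, and variable names here are illustrative choices of ours, not taken from any released implementation.

```python
import numpy as np

# Illustrative sketch: sample (x_hat, y_hat) pairs from one conv layer, following Eq. 1-3.
rng = np.random.default_rng(0)

C, H, W_sp, D, K = 8, 16, 16, 4, 3                         # channels, height, width, filters, kernel
acts = np.maximum(rng.standard_normal((C, H, W_sp)), 0)    # input of layer i+1 (after ReLU)
weights = rng.standard_normal((D, C, K, K))                 # filters of layer i+1
bias = rng.standard_normal(D)

def sample_example(acts, weights, bias, rng):
    """Return (x_hat, y_hat) for one random filter and spatial location."""
    d = rng.integers(weights.shape[0])                      # random output channel
    r = rng.integers(acts.shape[1] - K + 1)                 # random window position
    c_ = rng.integers(acts.shape[2] - K + 1)
    window = acts[:, r:r + K, c_:c_ + K]                    # x in R^{C x K x K}
    x_hat = np.einsum('ckl,ckl->c', weights[d], window)     # per-channel sums, Eq. 2
    y = x_hat.sum() + bias[d]                               # Eq. 1
    return x_hat, y - bias[d]                               # y_hat = y - b, Eq. 3

X_hat, Y_hat = zip(*(sample_example(acts, weights, bias, rng) for _ in range(1000)))
X_hat, Y_hat = np.stack(X_hat), np.array(Y_hat)             # m x C matrix and length-m vector
assert np.allclose(X_hat.sum(axis=1), Y_hat)                # y_hat equals the channel-wise sum
```

Stacking many such samples yields the m × C matrix and the length-m target vector used by the selection procedure in the next subsection.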
# 3.2.2 A greedy algorithm for channel selection
Algorithm 1 A greedy algorithm for minimizing Eq. 6
Input: Training set {(\hat{x}_i, \hat{y}_i)}, and compression rate r
Output: The subset of removed channels: T
1: T ← ∅; I ← {1, 2, . . . , C};
2: while |T| < C × (1 − r) do
3:   min_value ← +∞;
4:   for each item i ∈ I do
5:     tmpT ← T ∪ {i};
6:     compute value from Eq. 6 using tmpT;
7:     if value < min_value then
8:       min_value ← value; min_i ← i;
9:     end if
10:  end for
11:  move min_i from I into T;
12: end while

Now, given a set of m (the product of the number of images and the number of locations) training examples {(\hat{x}_i, \hat{y}_i)}, the original channel selection problem becomes the following optimization problem:

\arg\min_{S} \sum_{i=1}^{m} \Big( \hat{y}_i - \sum_{j \in S} \hat{x}_{i,j} \Big)^2   (5)
s.t.  |S| = C × r,  S ⊂ {1, 2, . . . , C}.
2 argmin ) > 9 -â Yo %,5 s 4 fea (5) st. |S] =Cxr, SC {1,2,...,C}.
Here, |S| is the number of elements in a subset S, and r is a pre-deï¬ned compression rate (i.e., how many channels are preserved). Equivalently, let T be the subset of removed channels (i.e., S ⪠T = {1, 2, . . . , C} and S â© T = â
), we can minimize the following alternative objective:
arg min)? > Rij 6)
|T | = C Ã (1 â r), T â {1, 2, . . . , C}.
Eq. 6 is equivalent to Eq. 5, but has faster speed because |T | is usually smaller than |S|. Solving Eq. 6 is still NP hard, thus we use a greedy strategy (illustrated in algorithm 1). We add one element to T at a time, and choose the channel leading to the smallest objective value in the current iteration. Obviously, this greedy solution is sub-optimal. But the gap can be compensated by ï¬ne-tuning. We have also tried some other sophisticated algorithms, such as sparse coding (speciï¬cally, the homotopy method [6]). However, our sim- ple greedy approach has better performance and faster speed according to our experiments.
# 3.2.3 Minimize the reconstruction error
So far, we have obtained the subset T such that the n-th channel in each ï¬lter of layer i + 1 can be safely removed if n â T . Hence, the corresponding ï¬lters in the previous layer i can be pruned too.
Now we will further minimize the reconstruction error (c.f . Eq. 5) by weighing the channels, which can be deï¬ned as:
WwW = argmin Gj; â wi *), 7 g Y i
where Ëxâ i indicates the training examples after channel se- lection. Eq. 7 is a classic linear regression problem, which has a unique closed-form solution using the ordinary least squares approach: Ëw = (XTX)â1XTy.
Each element in Ëw can be regarded as a scaling factor of corresponding ï¬lter channel such that W:,i,:,: = ËwiW:,i,:,:. From another point of view, this scaling operation provides a better initialization for ï¬ne-tuning, hence the network is more likely to reach higher accuracy.
# 3.3. Pruning strategy
There are mainly two types of different network archi- tectures: the traditional convolutional/fully-connected archi- tecture, and recent structural variants. The former is repre- sented by AlexNet [18] or VGGNet [28], while the latter mainly includes some recent networks like GoogLeNet [30] and ResNet [11]. The main difference between these two types is that more recent networks usually replace the FC (fully-connected) layers with a global average pooling layer [22, 34], and adopt some novel network structures like Inception in GoogLeNet or residual blocks in ResNet.
We use different strategies to prune these two types of net- works. For VGG-16, we notice that more than 90% FLOPs exist in the ï¬rst 10 layers (conv1-1 to conv4-3), while the FC layers contribute nearly 86.41% parameters. Hence, we prune the ï¬rst 10 layers for acceleration consideration, but replace the FC layers with a global average pooling layer. Although the proposed method is also valid for FC layers, we believe removing them is simpler and more efï¬cient.
For ResNet, there exist some restrictions due to its special structure. For example, the channel number of each block in the same group needs to be consistent in order to ï¬nish the sum operation (see [11] for more details). Thus it is hard to prune the last convolutional layer of each residual block directly. Since most parameters are located in the ï¬rst two layers, pruning the ï¬rst two layers is a good choice, which is illustrated in Figure 3.
# 4. Experiments
We empirically study the performance of ThiNet in this section. First, a comparison among several different ï¬l- ter selection criteria would be presented. Experimental re- sults show that our method is signiï¬cantly better than others. Then, we would report the performance on ILSCVR-12 [26]. Two widely used networks are pruned: VGG-16 [28] and ResNet-50 [11]. Finally, we focus on a more practical sce- nario to show the advantages of ThiNet. All the experiments
5
256-d 64%256x 1x1 relu 64x64%3%3 prune 50% >> relu 256%64x11 ReLU 256-d
Figure 3. Illustration of the ResNet pruning strategy. For each residual block, we only prune the ï¬rst two convolutional layers, keeping the block output dimension unchanged.
are conducted within Caffe [17].
# 4.1. Different ï¬lter selection criteria
There exist some heuristic criteria to evaluate the impor- tance of each ï¬lter in the literature. We compare our selec- tion method with two recently proposed criteria to demon- strate the effectiveness of our evaluation criterion. These criteria are brieï¬y summarized as follows:
e Weight sum [21]. Filters with smaller kernel weights tend to produce weaker activations. Thus, in this strat- egy the absolute sum of each filter is calculated as its importance score: s; = )> |W(i,:,:,:)|-
e APoZ (Average Percentage of Zeros) [14]. This criterion calculates the sparsity of each channel in output activations as its importance score: s; = Tes b DULG, == 0), where |Z(i,:,:)| is the elements number in i-th channel of tensor Z (af- ter ReLU), and I(-) denotes the indicator function.
To compare these different selection methods, we evalu- ate their performance on the widely used ï¬ne-grained dataset: CUB-200 [31], which contains 11,788 images of 200 differ- ent bird species (5994/5794 images for training/test, respec- tively). Except for labels, no additional supervised informa- tion (e.g., bounding box) is used.
Following the pruning strategy in Section 3.3, all the FC layers in VGG-16 are removed, and replaced with a global average pooling layer, and ï¬ne-tuned on new datasets. Start- ing from this ï¬ne-tuned model, we then prune the network layer by layer with different compression rate. Each prun- ing is followed by one epoch ï¬ne-tuning, and 12 epochs are performed in the ï¬nal layer to improve accuracy. This procedure is repeated several times with different channel selection strategies. Due to the random nature of ThiNet, we repeated our method 4 times and report the averaged result. For a fair comparison, all the settings are kept the same, except the selection method.
Figure 4 shows the pruning results on the CUB bird dataset. We also evaluated the performance of random se- lection with the same pruning strategy. In addition, another
S in Random Weight sum APoZ ThiNet w/o W ThiNet Top-1 Accuracy oo bo oR ed iv S Box 80% 60% 40% 20% 0% FLOPs Reduction
Figure 4. Performance comparison of different channel selection methods: the VGG-16-GAP model pruned on CUB-200 with dif- ferent compression rates. (This ï¬gure is best viewed in color and zoomed in.)
version of ThiNet without least squares (denoted by âThiNet w/o Ëwâ) is also evaluated to demonstrate the effectiveness of least squares in our method. Obviously, ThiNet achieves con- sistently and signiï¬cantly higher accuracy compared with other selection methods.
One interesting result is: random selection shows pretty good performance, even better than heuristic criteria in some cases. In fact, according to the property of distributed repre- sentations (i.e., each concept is represented by many neurons; and, each neuron participates in the representation of many concepts [12, 1]), randomly selected channels may be quite powerful in theory. However, this criterion is not robust. As shown in Figure 4, it can lead to very bad result and the accuracy is very low after all layers are compressed. Thus, random selection is not applicable in practice.
Weight sum has pretty poor accuracy on CUB-200. This result is reasonable, since it only takes the magnitude of ker- nel weights into consideration, which is not directly related to the ï¬nal classiï¬cation accuracy. In fact, small weights could still have large impact on the loss function. When we discard a large number of small ï¬lters at the same time, the ï¬nal accuracy can be damaged greatly. For example, if we removed 60% ï¬lters in conv1-1 using the small weight crite- rion, the top-1 accuracy is only 40.99% (before ï¬ne-tuning), while random criterion is 51.26%. By contrast, our method (ThiNet w/o w) can reach 68.24%, and even 70.75% with least squares (ThiNet). The accuracy loss of weight sum is so large that ï¬ne-tuning cannot completely recover it from the drop.
In contrast, our method shows much higher and robust results. The least squares approach does indeed aid to get a better weight initialization for ï¬ne-tuning, especially when the compression rate is relatively high.
6
# 4.2. VGG-16 on ImageNet
We now evaluate the performance of the proposed ThiNet method on large-scale ImageNet classiï¬cation task. The ILSCVR-12 dataset [26] consists of over one million train- ing images drawn from 1000 categories. We randomly select 10 images from each category in the training set to comprise our evaluation set (i.e., collected training examples for chan- nel selection). And for each input image, 10 instances are randomly sampled with different channels and different spa- tial locations as described in section 3.2.1. Hence, there are in total 100,000 training samples used for ï¬nding the optimal channel subset via Algorithm 1. We compared several dif- ferent choices of image and location number, and found that the current choice (10 images per class and 10 locations per image) is enough for neuron importance evaluation. Finally, top-1 and top-5 classiï¬cation performance are reported on the 50k standard validation set, using the single-view testing approach (central patch only).
During ï¬ne-tuning, images are resized to 256 à 256, then 224 à 224 random cropping is adopted to feed the data into network. Horizontal ï¬ip is also used for data augmentation. At the inference stage, we center crop the resized images to 224 à 224. No more tricks are used here. The whole network is pruned layer by layer and ï¬ne-tuned in one epoch with 10â3 learning rate. Since the last layer of each group (i.e., conv1-2, conv2-2, conv3-3) is more important (pruning these layers would lead to a big accuracy drop), we ï¬ne-tune these layers with additional one epoch of 10â4 learning rate to prevent accuracy drop too much. When pruning the last layer, more epochs (12 epochs) are adopted to get an accurate result with learning rate varying from 10â3 to 10â5. We use SGD with mini-batch size of 128, and other parameters are kept the same as the original VGG paper [28].
We summarize the performance of the ThiNet approach in Table 1. Here, âThiNet-Convâ refers to the model in which only the ï¬rst 10 convolutional layers are pruned with compression rate 0.5 (i.e., half of the ï¬lters are removed in each layer till conv4-3) as stated above. Because some useless ï¬lters are discarded, the pruned model can even outperform the original VGG-16 model. However, if we train this model from scratch, the top-1/top-5 accuracy are only 67.00%/87.45% respectively, which is much worse than our pruned network. Then the FC layers are removed, replaced with a GAP (global average pooling) layer and ï¬ne- tuned in 12 epochs with the same hyper-parameters, which is denoted by âThiNet-GAPâ. The classiï¬cation accuracy of GAP model is slightly lower than the original model, since the model size has been reduced dramatically. Further reduction can be obtained with a higher compression rate (denoted by âThiNet-Tinyâ), which would be discussed later. The actual speed-up of ThiNet is also reported. We test the forward/backward running time of each model using the ofï¬cial âtimeâ command in Caffe. This evaluation is
Table 1. Pruning results of VGG-16 on ImageNet using ThiNet. Here, M/B means million/billion (10^6/10^9), respectively; f./b. denotes the forward/backward timing in milliseconds tested on one M40 GPU with batch size 32.

Model               Top-1    Top-5    #Param.   #FLOPs(1)  f./b. (ms)
Original(2)         68.34%   88.44%   138.34M   30.94B     189.92/407.56
ThiNet-Conv         69.80%   89.53%   131.44M   9.58B      76.71/152.05
Train from scratch  67.00%   87.45%   131.44M   9.58B      76.71/152.05
ThiNet-GAP          67.34%   87.92%   8.32M     9.34B      71.73/145.51
ThiNet-Tiny         59.34%   81.97%   1.32M     2.01B      29.51/55.83
SqueezeNet [15]     57.67%   80.39%   1.24M     1.72B      37.30/68.62

(1) In this paper, we only consider the FLOPs of convolution operations, which is commonly used for computation complexity comparison.
(2) For a fair comparison, the accuracy of the original VGG-16 model is evaluated on resized center-cropped images using the pre-trained model as adopted in [10, 14]. The same strategy is also used in ResNet-50.
Table 2. Comparison among several state-of-the-art pruning methods on the VGG-16 network. Some exact values are not reported in the original papers and cannot be computed, thus we use ≈ to denote approximate values.
Method            Top-1 Acc.  Top-5 Acc.  #Param.   #FLOPs
APoZ-1 [14]       -2.16%      -0.84%      ≈2.04×    ≈1×
APoZ-2 [14]       +1.81%      +1.25%      ≈2.70×    ≈1×
Taylor-1 [23]     ≈           -1.44%      ≈1×       ≈2.68×
Taylor-2 [23]     ≈           -3.94%      ≈1×       ≈3.86×
ThiNet-WS [21]    +1.01%      +0.69%      1.05×     3.23×
ThiNet-Conv       +1.46%      +1.09%      1.05×     3.23×
ThiNet-GAP        -1.00%      -0.52%      16.63×    3.31×
conducted on one M40 GPU with batch size 32, accelerated by cuDNN v5.1. Since convolution operations dominate the computational cost of VGG-16, reducing FLOPs greatly accelerates inference speed, as shown in Table 1. We then compare our approach with several state-of-the-art pruning methods on the VGG-16 model in Table 2. These methods also focus on filter-level pruning, but with totally different selection criteria.
APoZ [14] aims to reduce the number of parameters, but its performance is limited. APoZ-1 prunes few layers (conv4, conv5 and the FC layers), but leads to significant accuracy degradation. APoZ-2 then prunes only conv5-3 and the FC layers. Its accuracy is improved, but this model barely reduces the FLOPs. Hence, there is a great need for compressing the convolutional layers.
In contrast, Molchanov et al. [23] focus on model acceleration and only prune the convolutional layers. They argue that a filter can be removed safely if it has little influence on the loss function. Since computing this influence directly is very time-consuming, they use a Taylor expansion to approximate the loss change. Their motivation and goals are similar to ours, but with a totally different selection criterion and training framework. As shown in Table 2, the ThiNet-Conv model is significantly better than the Taylor method. Our model can even improve classification accuracy with more FLOPs reduction.
As for weight sum [21], its performance on VGG-16 has not been explored. Hence we simply replace our selection method with weight sum in the ThiNet framework and report the final accuracy, denoted by "ThiNet-WS". All the parameters are kept the same except for the selection criterion. Note that a different fine-tuning framework may lead to very different results; the accuracy might differ if Li et al. [21] had done this with their own framework. Because the rest of the setup is the same, it is fair to compare ThiNet-WS and ThiNet, and ThiNet obtains better results.
To explore the limits of ThiNet, we prune VGG-16 with a larger compression rate of 0.25, achieving a 16× parameter reduction in the convolutional layers. The conv5 layers are also pruned to get a smaller model. As for conv5-3, which is directly related to the final feature representation, we only prune half of its filters for accuracy considerations.
Using these smaller filter-keeping ratios, we train a very small model. Denoted as "ThiNet-Tiny" in Table 1, it takes only 5.05MB of disk space (1MB = 2^20 bytes) but still has AlexNet-level accuracy (the top-1/top-5 accuracy of AlexNet is 57.2%/80.3%, respectively). ThiNet-Tiny has exactly the same level of model complexity as the recently proposed compact network SqueezeNet [15], but shows higher accuracy. Although ThiNet-Tiny needs more FLOPs, its actual speed is even faster than SqueezeNet because it has a much simpler network structure. SqueezeNet adopts a special structure, the Fire module, which is parameter-efficient but relies on manual network structure design. In contrast, ThiNet is a unified framework, and higher accuracy would be obtained if we start from a more accurate model.
# 4.3. ResNet-50 on ImageNet
We also explore the performance of ThiNet on the recently proposed powerful CNN architecture ResNet [11]. We select ResNet-50 as the representative of the ResNet family; it shares the same basic architecture as the other members of the family and differs from them mainly in depth.
Similar to VGG-16, we prune ResNet-50 from block 2a to 5c iteratively. Besides the filters themselves, the corresponding channels in the batch-normalization layers are also discarded. After pruning, the model is fine-tuned for one epoch with a fixed learning rate of 10^-4, and 9 epochs of fine-tuning with the learning rate decayed from 10^-3 to 10^-5 are performed in the last round to gain higher accuracy. Other parameters are kept the same as in our VGG-16 pruning experiment.
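The mechanical part of discarding filters together with the matching batch-normalization channels can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation (which uses Caffe); the helper name and the toy layers are assumptions, and slicing the next convolution's input channels is the usual consequence of removing output filters.

```python
# Minimal PyTorch sketch of removing output filters of a convolution together
# with the matching batch-norm channels; `keep` lists the retained channels.
import torch
import torch.nn as nn

def prune_conv_bn(conv, bn, next_conv, keep):
    keep = torch.as_tensor(keep, dtype=torch.long)
    # Slice the pruned conv's output filters (and bias, if present).
    conv.weight = nn.Parameter(conv.weight.data[keep].clone())
    if conv.bias is not None:
        conv.bias = nn.Parameter(conv.bias.data[keep].clone())
    conv.out_channels = len(keep)
    # Slice the corresponding batch-norm channels and running statistics.
    bn.weight = nn.Parameter(bn.weight.data[keep].clone())
    bn.bias = nn.Parameter(bn.bias.data[keep].clone())
    bn.running_mean = bn.running_mean[keep].clone()
    bn.running_var = bn.running_var[keep].clone()
    bn.num_features = len(keep)
    # The next conv now receives fewer input channels.
    next_conv.weight = nn.Parameter(next_conv.weight.data[:, keep].clone())
    next_conv.in_channels = len(keep)

conv1 = nn.Conv2d(3, 8, 3, padding=1)
bn1 = nn.BatchNorm2d(8)
conv2 = nn.Conv2d(8, 16, 3, padding=1)
prune_conv_bn(conv1, bn1, conv2, keep=[0, 2, 4, 6])
print(conv2(torch.relu(bn1(conv1(torch.randn(1, 3, 32, 32))))).shape)
```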
Because ResNet is a recently proposed model, the literature lacks work on compressing this network. We report the performance of ThiNet on pruning ResNet-50 in Table 3. We prune this model with 3 different compression rates (preserving 70%, 50%, and 30% of the filters in each block, respectively). Unlike VGG-16, ResNet is more compact: there is less redundancy, so pruning a large number of filters is more challenging. In spite of this, our ThiNet-50 model can still prune more than
Table 3. Overall performance of pruning ResNet-50 on ImageNet via ThiNet with different compression rates. Here, M/B means million/billion respectively; f./b. denotes the forward/backward speed tested on one M40 GPU with batch size 32.

Model      Top-1    Top-5    #Param.   #FLOPs  f./b. (ms)
Original   72.88%   91.14%   25.56M    7.72B   188.27/269.32
ThiNet-70  72.04%   90.67%   16.94M    4.88B   169.38/243.37
ThiNet-50  71.01%   90.02%   12.38M    3.41B   153.60/212.29
ThiNet-30  68.42%   88.30%   8.66M     2.20B   144.45/200.67
half of the parameters with roughly 1% top-5 accuracy drop. Further pruning can also be carried out, leading to a much smaller model at the cost of more accuracy loss.
However, reduced FLOPs cannot bring the same level of acceleration in ResNet. Due to the structure of ResNet-50, non-tensor layers (e.g., batch normalization and pooling layers) take up more than 40% of the inference time on GPU. Hence, there is a great need to accelerate these non-tensor layers as well, which should be explored in the future.
In this experiment, we only prune the first two layers of each block in ResNet for simplicity, leaving the block output and projection shortcuts unchanged. Pruning these parts would lead to further compression, but can be quite difficult, if not entirely impossible. This exploration seems a promising extension for future work.
# 4.4. Domain adaptation ability of the pruned model
One of the main advantages of ThiNet is that it does not change the network structure, so a model pruned on ImageNet can be easily transferred to other domains.
To help us better understand this benefit, let us consider a more practical scenario: obtaining a small model on a domain-specific dataset. This is a very common requirement in real-world applications, since we will not directly apply ImageNet models in a real application. To achieve this goal, there are two feasible strategies: start from a pre-trained ImageNet model and then prune on the new dataset, or train a small model from scratch. In this section, we argue that a better choice is to fine-tune a model that has already been pruned on ImageNet.
These strategies are compared on two different domain-specific datasets: CUB-200 [31] for fine-grained classification and Indoor-67 [25] for scene recognition. We introduced CUB-200 in Section 4.1. As for Indoor-67, we follow the official train/test split (5360 training and 1340 test images) to organize this dataset. All the models are fine-tuned with the same hyper-parameters and number of epochs for a fair comparison. Their performance is shown in Table 4.
We first fine-tune the pre-trained VGG-16 model on the new dataset, which is a popular strategy adopted in numerous recognition tasks. As we can see, the fine-tuned model has the highest accuracy at the cost of a huge model size and slow inference speed. Then, we use the proposed ThiNet approach to prune some unimportant filters (denoted by "FT
Table 4. Comparison of different strategies for obtaining a small model on CUB-200 and Indoor-67. "FT" stands for "Fine-Tune".
Dataset     Strategy            #Param.   #FLOPs  Top-1
CUB-200     VGG-16              135.07M   30.93B  72.30%
            FT & prune          7.91M     9.34B   66.90%
            Train from scratch  7.91M     9.34B   44.27%
            ThiNet-Conv         128.16M   9.58B   70.90%
            ThiNet-GAP          7.91M     9.34B   69.43%
            ThiNet-Tiny         1.12M     2.01B   65.45%
            AlexNet             57.68M    1.44B   57.28%
Indoor-67   VGG-16              134.52M   30.93B  72.46%
            FT & prune          7.84M     9.34B   64.70%
            Train from scratch  7.84M     9.34B   38.81%
            ThiNet-Conv         127.62M   9.57B   72.31%
            ThiNet-GAP          7.84M     9.34B   70.22%
            ThiNet-Tiny         1.08M     2.01B   62.84%
            AlexNet             57.68M    1.44B   59.55%
& prune"), converting the cumbersome model into a much smaller one. With small-scale training examples, the accuracy cannot be recovered completely, i.e., the pruned model can easily be trapped in bad local minima. However, if we train a network from scratch with the same structure, its accuracy is much lower.
We suggest fine-tuning the ThiNet model that has first been pruned using the ImageNet data. As shown in Table 4, this strategy gets the best trade-off between model size and classification accuracy. It is worth noting that the ThiNet-Conv model can even obtain accuracy similar to the original VGG-16, while being smaller and much faster.
We also report the performance of ThiNet-Tiny on these two datasets. Although ThiNet-Tiny has the same level of accuracy as AlexNet on ImageNet, it shows much stronger generalization ability. This tiny model achieves 3%~8% higher classification accuracy than AlexNet when transferred to domain-specific tasks, with 50× fewer parameters. Its model size is small enough to be deployed on resource-constrained devices.
# 5. Conclusion
In this paper, we proposed a unified framework, namely ThiNet, for CNN model acceleration and compression. The proposed filter-level pruning method shows significant improvements over existing methods.
In the future, we would like to prune the projection shortcuts of ResNet. Alternative methods for better channel selection are also worth studying. In addition, extensive exploration of more vision tasks (such as object detection or semantic segmentation) with the pruned networks is an interesting direction; the pruned networks would greatly accelerate these vision tasks.
# Acknowledgements
This work was supported in part by the National Natural Science Foundation of China under Grant No. 61422203.
# References
[1] Y. Bengio, A. Courville, and P. Vincent. Representation learning: A review and new perspectives. TPAMI, 35(8):1798â 1828, 2013. 6
[2] G. Chechik, I. Meilijson, and E. Ruppin. Synaptic pruning in development: A computational account. Neural computation, 10(7):1759â1777, 1998. 1
[3] W. Chen, J. Wilson, S. Tyree, K. Weinberger, and Y. Chen. Compressing neural networks with the hashing trick. In ICML, pages 2285â2294, 2015. 3
[4] M. Denil, B. Shakibi, L. Dinh, and N. de Freitas. Predicting parameters in deep learning. In NIPS, pages 2148â2156, 2013. 2
[5] E. L. Denton, W. Zaremba, J. Bruna, Y. LeCun, and R. Fergus. Exploiting linear structure within convolutional networks for efficient evaluation. In NIPS, pages 1269–1277, 2014. 2, 3

[6] D. L. Donoho and Y. Tsaig. Fast solution of ℓ1-norm minimization problems when the solution may be sparse. IEEE Trans. Information Theory, 54(11):4789–4812, 2008. 4

[7] R. Girshick. Fast R-CNN. In ICCV, pages 1440–1448, 2015. 1
[8] Y. Gong, L. Liu, M. Yang, and L. Bourdev. Compressing deep convolutional networks using vector quantization. In arXiv preprint arXiv:1412.6115, pages 1â10, 2014. 3
[9] S. Han, H. Mao, and W. J. Dally. Deep compression: Com- pressing deep neural networks with pruning, trained quan- tization and huffman coding. In ICLR, pages 1â14, 2016. 3
[10] S. Han, J. Pool, J. Tran, and W. Dally. Learning both weights and connections for efï¬cient neural network. In NIPS, pages 1135â1143, 2015. 1, 2, 7
[11] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, pages 770â778, 2016. 1, 2, 5, 7
[12] G. Hinton. Learning distributed representations of concepts. In CogSci, pages 1â12, 1986. 6
[13] G. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. R. Salakhutdinov. Improving neural networks by pre- venting co-adaptation of feature detectors. In arXiv preprint arXiv:1207.0580, pages 1â18, 2012. 2
[14] H. Hu, R. Peng, Y. W. Tai, and C. K. Tang. Network trimming: A data-driven neuron pruning approach towards efï¬cient deep architectures. In arXiv preprint arXiv:1607.03250, pages 1â9, 2016. 2, 5, 7
[15] F. N. Iandola, S. Han, M. W. Moskewicz, K. Ashraf, W. J. Dally, and K. Keutzer. SqueezeNet: AlexNet-level accuracy with 50Ã fewer parameters and <0.5 MB model size. In arXiv preprint arXiv:1602.07360, pages 1â13, 2016. 7 [16] X. Jia, E. Gavves, B. Fernando, and T. Tuytelaars. Guiding the long-short term memory model for image caption generation. In ICCV, pages 2407â2415, 2015. 1
[17] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional architecture for fast feature embedding. In ACM MM, pages 675–678, 2014. 5
[18] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, pages 1097–1105, 2012. 1, 5
[19] V. Lebedev and V. Lempitsky. Fast convnets using group-wise brain damage. In CVPR, pages 2554â2564, 2016. 2
[20] Y. LeCun, J. S. Denker, and S. A. Solla. Optimal brain damage. In NIPS, pages 598â605, 1990. 1
[21] H. Li, A. Kadav, I. Durdanovic, H. Samet, and H. P. Graf. Pruning ï¬lters for efï¬cient ConvNets. In ICLR, pages 1â13, 2017. 2, 5, 7
[22] M. Lin, Q. Chen, and S. Yan. Network in network. In arXiv preprint arXiv:1312.4400, pages 1â10, 2013. 5
[23] P. Molchanov, S. Tyree, T. Karras, T. Aila, and J. Kautz. Pruning convolutional neural networks for resource efï¬cient transfer learning. In ICLR, pages 1â17, 2017. 2, 7
[24] H. Noh, S. Hong, and B. Han. Learning deconvolution net- work for semantic segmentation. In ICCV, pages 1520â1528, 2015. 1
[25] A. Quattoni and A.Torralba. Recognizing indoor scenes. In CVPR, pages 413â420, 2009. 8
[26] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and F.-F. Li. ImageNet large scale visual recognition chal- lenge. IJCV, 115(3):211â252, 2015. 5, 6
[27] R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra. Grad-CAM: Visual explanations from deep networks via gradient-based localization. In arXiv preprint arXiv:1610.02391, pages 1â24, 2016. 2
[28] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, pages 1â14, 2015. 1, 2, 5, 6
[29] V. Sindhwani, T. Sainath, and S. Kumar. Structured trans- forms for small-footprint deep learning. In NIPS, pages 3088â 3096, 2015. 3
[30] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In CVPR, pages 1â9, 2015. 5
[31] C. Wah, S. Branson, P. Welinder, P. Perona, and S. Belongie. The Caltech-UCSD birds-200-2011 dataset. Technical Report CNS-TR-2011-001, California Institute of Technology, 2011. 5, 8
[32] W. Wen, C. Wu, Y. Wang, Y. Chen, and H. Li. Learning structured sparsity in deep neural networks. In NIPS, pages 2074â2082, 2016. 1, 2
[33] J. Wu, C. Leng, Y. Wang, Q. Hu, and J. Cheng. Quantized convolutional neural networks for mobile devices. In CVPR, pages 4820â4828, 2016. 2, 3
[34] B. Zhou, A. Khosla, A. Lapedriza, A. Oliva, and A. Torralba. Learning deep features for discriminative localization. In NIPS, pages 2921â2929, 2016. 2, 5 | {
"id": "1602.07360"
} |
1707.06658 | RAIL: Risk-Averse Imitation Learning | Imitation learning algorithms learn viable policies by imitating an expert's
behavior when reward signals are not available. Generative Adversarial
Imitation Learning (GAIL) is a state-of-the-art algorithm for learning policies
when the expert's behavior is available as a fixed set of trajectories. We
evaluate in terms of the expert's cost function and observe that the
distribution of trajectory-costs is often more heavy-tailed for GAIL-agents
than the expert at a number of benchmark continuous-control tasks. Thus,
high-cost trajectories, corresponding to tail-end events of catastrophic
failure, are more likely to be encountered by the GAIL-agents than the expert.
This makes the reliability of GAIL-agents questionable when it comes to
deployment in risk-sensitive applications like robotic surgery and autonomous
driving. In this work, we aim to minimize the occurrence of tail-end events by
minimizing tail risk within the GAIL framework. We quantify tail risk by the
Conditional-Value-at-Risk (CVaR) of trajectories and develop the Risk-Averse
Imitation Learning (RAIL) algorithm. We observe that the policies learned with
RAIL show lower tail-end risk than those of vanilla GAIL. Thus the proposed
RAIL algorithm appears as a potent alternative to GAIL for improved reliability
in risk-sensitive applications. | http://arxiv.org/pdf/1707.06658 | Anirban Santara, Abhishek Naik, Balaraman Ravindran, Dipankar Das, Dheevatsa Mudigere, Sasikanth Avancha, Bharat Kaul | cs.LG, cs.AI | Accepted for presentation in Deep Reinforcement Learning Symposium at
NIPS 2017 | null | cs.LG | 20170720 | 20171129 | arXiv:1707.06658v4 [cs.LG] 29 Nov 2017
# RAIL: Risk-Averse Imitation Learning
# Anirban Santaraâ IIT Kharagpur anirban_santara@iitkgp.ac.in
# Abhishek Naikâ Balaraman Ravindran IIT Madras {anaik,ravi}@cse.iitm.ac.in
# Dipankar Das Dheevatsa Mudigere Sasikanth Avancha Bharat Kaul
# Parallel Computing Lab - Intel Labs, India {dipankar.das,dheevatsa.mudigere,sasikanth.avancha,bharat.kaul}@intel.com
# Abstract
Imitation learning algorithms learn viable policies by imitating an expertâs behavior when reward signals are not available. Generative Adversarial Imitation Learning (GAIL) is a state-of-the-art algorithm for learning policies when the expertâs behavior is available as a ï¬xed set of trajectories. We evaluate in terms of the expertâs cost function and observe that the distribution of trajectory-costs is often more heavy-tailed for GAIL-agents than the expert at a number of benchmark continuous-control tasks. Thus, high-cost trajectories, corresponding to tail-end events of catastrophic failure, are more likely to be encountered by the GAIL- agents than the expert. This makes the reliability of GAIL-agents questionable when it comes to deployment in risk-sensitive applications like robotic surgery and autonomous driving. In this work, we aim to minimize the occurrence of tail-end events by minimizing tail risk within the GAIL framework. We quantify tail risk by the Conditional-Value-at-Risk (CV aR) of trajectories and develop the Risk-Averse Imitation Learning (RAIL) algorithm. We observe that the policies learned with RAIL show lower tail-end risk than those of vanilla GAIL. Thus the proposed RAIL algorithm appears as a potent alternative to GAIL for improved reliability in risk-sensitive applications.
# Introduction
Reinforcement learning (RL) [Sutton and Barto, 1998] is used to learn an effective policy of choosing actions in order to achieve a speciï¬ed goal in an environment. The goal is communicated to the agent through a scalar cost and the agent learns a policy that minimizes the expected total cost incurred over a trajectory. RL algorithms, along with efï¬cient function approximators like deep neural networks, have achieved human-level or beyond human-level performance at many challenging planning tasks like continuous-control [Lillicrap et al., 2015, Schulman et al., 2015] and game-playing [Silver et al., 2016, Mnih et al., 2015]. In classical RL, the cost function is handcrafted based on heuristic assumptions about the goal and the environment. This is challenging in most real-world applications and also prone to subjectivity induced bias. Imitation learning or Learning from Demonstration (LfD) [Argall et al., 2009, Schaal, 1997, Atkeson and Schaal, 1997, Abbeel and Ng, 2011, 2004, Ng et al., 2000] addresses this challenge by providing methods of learning policies through imitation of an expertâs behavior without the need of a handcrafted cost function. In this paper we study the reliability of existing imitation learning algorithms when it comes to learning solely from a ï¬xed set of trajectories demonstrated by an expert with no interaction between the agent and the expert during training.
*Authors contributed equally as a part of their internship at Parallel Computing Lab - Intel Labs, India.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
[Figure 1: side-by-side histograms (Expert vs. GAIL) of trajectory costs for Hopper-v1 and Humanoid-v1; x-axis: cost, y-axis: fraction of trajectories, with insets zooming into the tails.]

Figure 1: Histograms of the costs of 250 trajectories generated by the expert and GAIL agents on the high-dimensional continuous-control tasks Hopper-v1 and Humanoid-v1 from OpenAI Gym. The inset diagrams show zoomed-in views of the tails of these distributions (the region beyond 2σ of the mean). We observe that the GAIL agents produce heavier tails than the expert, indicating that GAIL is more prone to generating high-cost trajectories.
Imitation learning algorithms fall into two broad categories. The ï¬rst category, known as Behavioral Cloning [Pomerleau, 1989, Bojarski et al., 2016, 2017], uses supervised learning to ï¬t a policy function to the state-action pairs from expert-demonstrated trajectories. Despite its simplicity, Behavioral Cloning fails to work well when only a limited amount of data is available. These algorithms assume that observations are i.i.d. and learn to ï¬t single time-step decisions. Whereas, in sequential decision making problems where predicted actions affect the future observations (e.g. driving), the i.i.d. assumption is violated. As a result, these algorithms suffer from the problem of compounding error due to covariate shift [Ross and Bagnell, 2010, Ross et al., 2011]. Approaches to ameliorate the issue of compounding error like SMILe [Ross and Bagnell, 2010], SEARN [Daumé et al., 2009], CPI [Kakade and Langford, 2002] suffer from instability in practical applications [Ross et al., 2011] while DAGGER [Ross et al., 2011] and AGGREVATE [Ross and Bagnell, 2014] require the agent to query the expert during training which is not allowed in our setting of learning from a ï¬xed set of expert demonstrations. Another drawback of Behavioral Cloning is that it does not allow the agent to explore alternate policies for achieving the same objective that might be efï¬cient in some sense other than what the expert cared for.
The second category of algorithms is known as Inverse Reinforcement Learning (IRL) (Russell [1998], Ng et al. [2000], Abbeel and Ng [2011]). It attempts to uncover the underlying reward function that the expert is trying to maximize from a set of expert-demonstrated trajectories. This reward function succinctly encodes the expertâs behavior and can be used by an agent to learn a policy through an RL algorithm. The method of learning policies through RL after IRL is known as Apprenticeship Learning (Abbeel and Ng [2004]). IRL algorithms ï¬nd reward functions that prioritize entire trajectories over others. Unlike behavioral cloning, they do not ï¬t single time-step decisions, and hence they do not suffer from the issue of compounding error. However, IRL algorithms are indirect because they learn a reward function that explains expert behavior but do not tell the learner how to act directly (Ho and Ermon [2016]). The job of learning an actionable policy is left to RL algorithms. Moreover, IRL algorithms are computationally expensive and have scalability issues in large environments (Finn et al. [2016], Levine and Koltun [2012]).
The recently proposed Generative Adversarial Imitation Learning (GAIL) algorithm [Ho and Ermon, 2016] presents a novel mathematical framework in which the agent learns to act by directly extracting a policy from expert-demonstrated trajectories, as if it were obtained by RL following IRL. The authors show that unlike Behavioral Cloning, this method is not prone to the issue of compounding error and it is also scalable to large environments. Currently, GAIL provides state-of-the-art performance at several benchmark control tasks, including those in Table 1.
Risk sensitivity is integral to human learning [Nagengast et al., 2010, Niv et al., 2012], and risk- sensitive decision-making problems, in the context of MDPs, have been investigated in various ï¬elds, e.g., in ï¬nance [Ruszczy´nski, 2010], operations research [Howard and Matheson, 1972, Borkar, 2002], machine learning [Heger, 1994, Mihatsch and Neuneier, 2002] and robotics [Shalev-Shwartz et al., 2016, 2017, Abbeel et al., 2007, Rajeswaran et al., 2016]. [Garcıa and Fernández, 2015] give a comprehensive overview of different risk-sensitive RL algorithms. They fall in two broad categories. The ï¬rst category includes methods that constrain the agent to safe states during exploration while the second modiï¬es the optimality criterion of the agent to embed a term for minimizing risk. Studies on risk-minimization are rather scarce in the imitation learning literature. [Majumdar et al., 2017] take inspiration from studies like [Glimcher and Fehr, 2013, Shen et al., 2014, Hsu et al., 2005] on modeling risk in human decision-making and conservatively approximate the expertâs risk preferences by ï¬nding an outer approximation of the risk envelope. Much of the literature on imitation learning has been developed with average-case performance at the center, overlooking tail-end events. In this work, we aim to take an inclusive and direct approach to minimizing tail risk of GAIL-learned policies at test time irrespective of the expertâs risk preferences.
In order to evaluate the worst-case risk of deploying GAIL-learned policies, we studied the distribu- tions (see Figure 1) of trajectory-costs (according to the expertâs cost function) for the GAIL agents and experts at different control tasks (see Table 1). We observed that the distributions for GAIL are more heavy-tailed than the expert, where the tail corresponds to occurrences of high trajectory-costs. In order to quantify tail risk, we use Conditional-Value-at-Risk (CV aR) [Rockafellar and Uryasev, 2000]. CV aR is deï¬ned as the expected cost above a given level of conï¬dence and is a popular and coherent tail risk measure. The heavier the tail, the higher the value of CV aR. We observe that the value of CV aR is much higher for GAIL than the experts at most of the tasks (see Table 1) which again suggests that the GAIL agents encounter high-cost trajectories more often than the experts. Since high trajectory-costs may correspond to events of catastrophic failure, GAIL agents are not reliable in risk-sensitive applications. In this work, we aim to explicitly minimize expected worst-case risk for a given conï¬dence bound (quantiï¬ed by CV aR) along with the GAIL objective, such that the learned policies are more reliable than GAIL, when deployed, while still preserving the average performance of GAIL. [Chow and Ghavamzadeh, 2014] developed policy gradient and actor-critic algorithms for mean-CV aR optimization for learning policies in the classic RL setting. However these algorithms are not directly applicable in our setting of learning a policy from a set of expert-demonstrated trajectories. We take inspiration from this work and make the following contributions:
1. We formulate the Risk-Averse Imitation Learning (RAIL) algorithm which optimizes CV aR in addition to the original GAIL objective.
2. We evaluate RAIL at a number of benchmark control tasks and demonstrate that it obtains policies with lesser tail risk at test time than GAIL.
The rest of the paper is organized as follows. Section 2 builds the mathematical foundation of the paper by introducing essential concepts of imitation learning. Section 3 deï¬nes relevant risk- measures and describes the proposed Risk-Averse Imitation Learning algorithm. Section 4 speciï¬es our experimental setup and Section 5 outlines the evaluation metrics. Finally, Section 6 presents the results of our experiments comparing RAIL with GAIL followed by a discussion of the same and Section 7 concludes the paper with scope of future work.
# 2 Mathematical Background
Let us consider a Markov Decision Process (MDP), M = (S, A, T, c, p_0, γ), where S denotes the set of all possible states, A denotes the set of all possible actions that the agent can take, T : S × A × S → [0, 1] is the state transition function such that T(s'|s, a) is a probability distribution over next states s' ∈ S given the current state s ∈ S and action a ∈ A, c : S × A → R is the cost function which returns a real number as feedback for every state-action pair, p_0 : S → [0, 1] gives the initial state distribution, and γ is a temporal discount factor.
3
A policy Ï : S à A â [0, 1] is a function such that Ï(a|s) gives a probability distribution over actions, a â A in a given state, s â S. Let ξ = (s0, a0, s1, . . . , sLξ ) denote a trajectory of length Lξ, obtained by following a policy Ï. We deï¬ne expectation of a function f (·, ·) deï¬ned on S à A with respect to a policy Ï as follows:
E_π[f(s, a)] = E_ξ [ Σ_{t=0}^{L_ξ − 1} γ^t f(s_t, a_t) ]    (1)
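A Monte-Carlo estimate of equation (1) is straightforward to write down. The following small sketch is illustrative; the trajectory format (a list of (state, action) pairs) and the toy data are assumptions made here, not part of the paper.

```python
# Sketch of equation (1): discounted sum along a trajectory, averaged over
# sampled trajectories to estimate the expectation under the policy.
import numpy as np

def discounted_sum(traj, f, gamma):
    return sum((gamma ** t) * f(s, a) for t, (s, a) in enumerate(traj))

def expectation_under_policy(trajectories, f, gamma):
    return np.mean([discounted_sum(traj, f, gamma) for traj in trajectories])

# Toy usage: f is a dummy cost, states/actions are floats.
trajs = [[(0.1, 0.0), (0.2, 1.0)], [(0.0, 0.5)]]
print(expectation_under_policy(trajs, f=lambda s, a: s + a, gamma=0.99))
```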
# 2.1 Generative Adversarial Imitation Learning
Apprenticeship learning, or Apprenticeship Learning via Inverse Reinforcement Learning [Abbeel and Ng, 2004], first estimates the expert's reward function using IRL and then finds the optimal policy for the recovered reward function using RL. Mathematically, this problem can be described as:

RL ∘ IRL(π_E) = argmin_{π ∈ Π} max_{c ∈ C}  E_π[c(s, a)] − E_{π_E}[c(s, a)] − H(π)    (2)

where π_E denotes the expert policy, c(·, ·) denotes the cost function, Π and C denote the hypothesis classes for policy and cost functions, and H(π) denotes the entropy of policy π. The term −H(π) provides causal-entropy regularization [Ziebart, 2010, Ziebart et al., 2008], which helps make the policy optimization algorithm unbiased to factors other than the expected reward.
[Ho and Ermon, 2016] proposed Generative Adversarial Imitation Learning (GAIL), which packs the two-step process of RL ∘ IRL(π_E) into a single optimization problem with special consideration for scalability in large environments. The name stems from the fact that this objective function can be optimized within the Generative Adversarial Network (GAN) [Goodfellow et al., 2014] framework. The objective function of GAIL is:

argmin_{π ∈ Π} max_{D ∈ (0,1)^{S×A}}  E_π[log(D(s, a))] + E_{π_E}[log(1 − D(s, a))] − H(π)    (3)

Here, the agent's policy π acts as a generator of state-action pairs. D is a discriminative binary classifier of the form D : S × A → (0, 1), known as the discriminator, which, given a state-action pair (s, a), predicts the likelihood of it being generated by the generator. A two-player adversarial game is played, wherein the generator tries to generate (s, a) pairs that closely match the expert, while the discriminator tries to correctly classify the (s, a) pairs of the expert and the agent. At convergence, the agent's actions resemble those of the expert in any given state.
The generator and the discriminator are assigned parameterized models π_θ and D_w respectively. The training algorithm alternates between a gradient ascent step with respect to the discriminator parameters w and a policy-gradient descent step with respect to the generator parameters θ. Following [Ho and Ermon, 2016], we use multi-layer perceptrons (neural networks with fully-connected layers) [Haykin, 1998] to model both the generator and the discriminator.
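For concreteness, the following hedged PyTorch sketch shows one discriminator ascent step implied by equation (3). The layer sizes, optimizer, learning rate, input dimensionality, and function names are illustrative assumptions (not the authors' exact setup), and the TRPO policy-gradient step on the generator is omitted.

```python
# Hedged sketch of a discriminator update for equation (3). D(s, a) outputs the
# likelihood that a pair was generated by the learner, so the discriminator
# ascends log D on learner pairs and log(1 - D) on expert pairs.
import torch
import torch.nn as nn

class MLP(nn.Module):
    def __init__(self, in_dim, out_dim, hidden=100):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, hidden), nn.Tanh(),
                                 nn.Linear(hidden, out_dim))
    def forward(self, x):
        return self.net(x)

def discriminator_step(D, opt, learner_sa, expert_sa):
    d_learner = torch.sigmoid(D(learner_sa))
    d_expert = torch.sigmoid(D(expert_sa))
    # Negate the objective so that minimizing `loss` maximizes it.
    loss = -(torch.log(d_learner + 1e-8).mean() + torch.log(1 - d_expert + 1e-8).mean())
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

sa_dim = 14  # e.g. state dim + action dim (illustrative)
D = MLP(sa_dim, 1)
opt = torch.optim.Adam(D.parameters(), lr=3e-4)
print(discriminator_step(D, opt, torch.randn(64, sa_dim), torch.randn(64, sa_dim)))
```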
# 3 Risk-Averse Imitation Learning
In this section, we develop the mathematical formulation of the proposed Risk-Averse Imitation Learning (RAIL) algorithm. We introduce CVaR [Rockafellar and Uryasev, 2000] as a measure of tail risk and apply it within the GAIL framework to minimize the tail risk of learned policies.
# 3.1 Conditional-Value-at-Risk
In the portfolio-risk optimization literature, tail risk is a form of portfolio risk that arises when the possibility that an investment moving more than three standard deviations away from the mean is greater than what is shown by a normal distribution [Investopedia, 2017]. Tail risk corresponds to events that have a small probability of occurring. When the distribution of market returns is heavy-tailed, tail risk is high because there is a probability, which may be small, that an investment will move beyond three standard deviations.
Conditional-Value-at-Risk (CV aR) [Rockafellar and Uryasev, 2000] is the most conservative mea- sure of tail risk [Dalleh, 2011]. Unlike other measures like Variance and Value at Risk (V aR), it can
be applied when the distribution of returns is not normal. Mathematically, let Z be a random variable and let α ∈ [0, 1] denote a probability value. The Value-at-Risk of Z with respect to confidence level α, denoted by VaR_α(Z), is defined as the minimum value z ∈ R such that, with probability α, Z will not exceed z:

VaR_α(Z) = min( z | P(Z ≤ z) ≥ α )    (4)

CVaR_α(Z) is defined as the conditional expectation of losses above VaR_α(Z):

CVaR_α(Z) = E[ Z | Z ≥ VaR_α(Z) ] = min_{ν ∈ R} H_α(Z, ν)    (5)
where H_α(Z, ν) is given by:

H_α(Z, ν) = ν + (1 / (1 − α)) E[ (Z − ν)^+ ],  with (x)^+ = max(x, 0)    (6)
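Both characterizations of CVaR in equations (4)–(6) are easy to check numerically. The following NumPy sketch uses one common quantile-based empirical estimator; the estimator choice and the heavy-tailed toy sample are assumptions made for illustration, not necessarily what the paper uses.

```python
# Empirical VaR/CVaR of a sample of trajectory costs, two equivalent ways.
import numpy as np

def var_cvar(costs, alpha=0.9):
    costs = np.asarray(costs, dtype=float)
    var = np.quantile(costs, alpha)                 # VaR_alpha
    cvar = costs[costs >= var].mean()               # E[Z | Z >= VaR_alpha]
    return var, cvar

def cvar_via_nu(costs, alpha=0.9):
    # Formulation (5)-(6): CVaR_alpha = min_nu { nu + E[(Z - nu)^+] / (1 - alpha) };
    # for an empirical distribution the minimizer can be taken at a sample point.
    costs = np.asarray(costs, dtype=float)
    h = lambda nu: nu + np.maximum(costs - nu, 0.0).mean() / (1.0 - alpha)
    return min(h(nu) for nu in costs)

rng = np.random.default_rng(0)
z = rng.standard_t(df=3, size=2_000)                # a heavy-tailed cost sample
print(var_cvar(z), cvar_via_nu(z))
```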
# 3.2 RAIL Framework
We use CVaR to quantify the tail risk of the trajectory-cost variable R^π(ξ|c(D)), defined in the context of GAIL as:

R^π(ξ|c(D)) = Σ_{t=0}^{L_ξ − 1} γ^t c(D(s_t, a_t))    (7)
where c(·) is order-preserving. Next, we formulate the problem of optimizing the CVaR of R^π(ξ|c(D)) as:

min_π max_c CVaR_α(R^π(ξ|c(D))) = min_{π,ν} max_c H_α(R^π(ξ|c(D)), ν)    (8)
Integrating this with the GAIL objective of equation 3, we have the following:
min_{π,ν} max_D { −H(π) + E_π[log(D(s, a))] + E_{π_E}[log(1 − D(s, a))] + λ_CVaR · H_α(R^π(ξ|c(D)), ν) }    (9)
Note that since c(·) is order-preserving, the maximization with respect to c in equation 8 is equivalent to maximization with respect to D in equation 9. λ_CVaR is a constant that controls the weight given to CVaR optimization relative to the original GAIL objective. Equation 9 constitutes the objective function of the proposed Risk-Averse Imitation Learning (RAIL) algorithm; Algorithm 1 gives the pseudo-code, and Appendix A derives the gradients of the CVaR term H_α(R^π(ξ|c(D)), ν) with respect to π, D, and ν. When α → 0, i.e., the risk-neutral case, CVaR equals the mean of all trajectory costs and hence RAIL → GAIL. We use the Adam algorithm [Diederik Kingma, 2015] for gradient ascent on the discriminator and Trust Region Policy Optimization (TRPO) [Schulman et al., 2015] for the policy-gradient descent step on the generator. The CVaR parameter ν is trained by batch gradient descent [Haykin, 1998].
# 4 Experimental Setup
We compare the tail risk of policies learned by GAIL and RAIL for ï¬ve continuous control tasks listed in Table 1. All these environments, were simulated using MuJoCo Physics Simulator [Todorov et al., 2012]. Each of these environments come packed with a âtrue" reward function in OpenAI Gym [Brockman et al., 2016]. [Ho and Ermon, 2016] trained neural network policies using Trust Region Policy Optimization (TRPO) [Schulman et al., 2015] on these reward functions to achieve state-of-the-art performance and have made the pre-trained models publicly available for all these environments as a part of their repository [OpenAI-GAIL, 2017]. They used these policies to generate the expert trajectories in their work on GAIL [Ho and Ermon, 2016]. For a fair comparison, we use the same policies to generate expert trajectories in our experiments. Table 1 gives the number of expert trajectories sampled for each environment. These numbers correspond to the best results reported in [Ho and Ermon, 2016].
# Algorithm 1 Risk-Averse Imitation learning (RAIL)
Input: Expert trajectories ξ_E ∼ π_E; hyper-parameters α, β, λ_CVaR
Output: Optimized learner's policy π
1: Initialization: θ ← θ_0, w ← w_0, ν ← ν_0, λ ← λ_CVaR
2: repeat
3:   Sample trajectories ξ_i ∼ π_θi
4:   Estimate Ĥ_α(R^π(ξ|c(D)), ν) = ν + (1 / (1 − α)) Ê_ξi[ (R^π(ξ|c(D)) − ν)^+ ]
5:   Gradient ascent on the discriminator parameters using:
       ∇_wi J = Ê_ξi[∇_wi log(D(s, a))] + Ê_ξE[∇_wi log(1 − D(s, a))] + λ_CVaR ∇_wi H_α(R^π(ξ|c(D)), ν)
6:   KL-constrained natural gradient descent step (TRPO) on the policy parameters using:
       ∇_θi J = E_(s,a)∼ξi[∇_θi log π_θ(a|s) Q(s, a)] − ∇_θi H(π_θ) + λ_CVaR ∇_θi H_α(R^π(ξ|c(D)), ν)
       where Q(s̄, ā) = E_(s,a)∼ξi[ log(D_wi+1(s, a)) | s_0 = s̄, a_0 = ā ]
7:   Gradient descent on the CVaR parameter: ∇_νi J = ∇_νi H_α(R^π(ξ|c(D)), ν)
8: until i == max_iter
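The CVaR bookkeeping in steps 4 and 7 of Algorithm 1 is simple enough to sketch directly. The snippet below estimates Ĥ_α from a batch of trajectory costs and takes gradient steps on ν using the subgradient derived in Appendix A (equation A.6); the learning rate, batch size, and toy cost distribution are illustrative assumptions.

```python
# Hedged NumPy sketch of the CVaR estimate (step 4) and the nu update (step 7).
import numpy as np

def estimate_H_alpha(traj_costs, nu, alpha):
    traj_costs = np.asarray(traj_costs, dtype=float)
    return nu + np.maximum(traj_costs - nu, 0.0).mean() / (1.0 - alpha)

def nu_gradient_step(traj_costs, nu, alpha, lr=0.01):
    traj_costs = np.asarray(traj_costs, dtype=float)
    # Subgradient of H_alpha w.r.t. nu (cf. equation A.6 in the appendix).
    grad = 1.0 - (traj_costs > nu).mean() / (1.0 - alpha)
    return nu - lr * grad

costs = np.random.default_rng(0).normal(loc=10.0, scale=2.0, size=256)
nu = 0.0
for _ in range(1000):
    nu = nu_gradient_step(costs, nu, alpha=0.9)
print(nu, estimate_H_alpha(costs, nu, alpha=0.9))
```

At convergence ν settles near the empirical VaR of the batch, so the estimated Ĥ_α approximates the batch CVaR.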
Again, following [Ho and Ermon, 2016], we model the generator (policy), discriminator and value function (used for advantage estimation [Sutton and Barto, 1998] for the generator) with multi-layer perceptrons of the following architecture: observationDim - fc_100 - tanh - fc_100 - tanh - outDim, where fc_100 means fully connected layer with 100 nodes, tanh represents the hyperbolic-tangent activation function of the hidden layers, observationDim stands for the dimensionality of the observed feature space, outDim is equal to 1 for the discriminator and value function networks and equal to the twice of the dimensionality of the action space (for mean and standard deviation of the Gaussian from which the action should be sampled) for the policy network. For example, in case of Humanoid-v1, observationDim = 376 and outDim = 34 in the policy network. The value of the CV aR coefï¬cient λCV aR is set as given by Table 1 after a coarse hyperparameter search. All other hyperparameters corresponding to the GAIL component of the algorithm are set identical to those used in [Ho and Ermon, 2016] and their repository [OpenAI-GAIL, 2017] for all the experiments. The value of α in the CV aR term is set to 0.9 and its lone parameter, ν, is trained by batch gradient descent with learning rate 0.01.
# 5 Evaluation Metrics
In this section we define the metrics used to evaluate the efficacy of RAIL at reducing the tail risk of GAIL-learned policies. Given an agent A's policy π_A, we roll out N trajectories T = {ξ_i}_{i=1}^N from it and estimate VaR_α and CVaR_α as defined in Section 3.1. VaR_α denotes the value under
Table 1: Hyperparameters for the RAIL experiments on various continuous-control tasks from OpenAI Gym. For a fair comparison, the number of training iterations and expert trajectories are the same as those used by [Ho and Ermon, 2016].

Task            #training iterations  #expert trajectories  λ_CVaR
Reacher-v1      200                   18                    0.25
HalfCheetah-v1  500                   25                    0.5
Hopper-v1       500                   25                    0.5
Walker-v1       500                   25                    0.25
Humanoid-v1     1500                  240                   0.75
[Figure 2: five panels (Reacher-v1, HalfCheetah-v1, Hopper-v1, Walker-v1, Humanoid-v1) plotting mean trajectory-cost (y-axis) against training iterations (x-axis) for the expert, GAIL, and RAIL.]

Figure 2: Convergence of the mean trajectory-cost during training. The faded curves correspond to the raw mean trajectory-cost, which varies greatly between successive iterations. The data is smoothed with a moving-average filter of window size 21 to show the prevailing behavior and plotted with solid curves. RAIL converges almost as fast as GAIL on all five continuous-control tasks, and at times even faster.
which the trajectory-cost remains with probability α, and CVaR_α gives the expected value of the cost above VaR_α. Intuitively, CVaR_α gives the average cost of the worst cases whose total probability is no more than (1 − α). The lower the value of both these metrics, the lower the tail risk.
In order to compare the tail risk of an agent A with respect to the expert E, we define the percentage relative-VaR_α as follows:

VaR_α(A|E) = 100 × ( VaR_α(E) − VaR_α(A) ) / |VaR_α(E)| %    (10)

Similarly, we define the percentage relative-CVaR_α as:

CVaR_α(A|E) = 100 × ( CVaR_α(E) − CVaR_α(A) ) / |CVaR_α(E)| %    (11)

The higher these numbers, the lower the tail risk of agent A. We define the Gain in Reliability (GR) as the difference in percentage relative tail risk between the RAIL and GAIL agents.
GR-VaR = VaR_α(RAIL|E) − VaR_α(GAIL|E)    (12)

GR-CVaR = CVaR_α(RAIL|E) − CVaR_α(GAIL|E)    (13)
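These metrics are one-liners once the VaR/CVaR values have been estimated (for instance with an empirical estimator such as the one sketched in Section 3.1). The example values plugged in below are the Reacher-v1 VaR numbers reported in Table 2; the function names are illustrative.

```python
# Sketch of equations (10)-(13): relative tail risk and gain in reliability.
def relative_tail_risk(expert_value, agent_value):
    return 100.0 * (expert_value - agent_value) / abs(expert_value)

def gain_in_reliability(expert_value, gail_value, rail_value):
    return (relative_tail_risk(expert_value, rail_value)
            - relative_tail_risk(expert_value, gail_value))

print(relative_tail_risk(5.88, 9.55))             # GAIL vs expert, approx. -62.4%
print(gain_in_reliability(5.88, 9.55, 7.28))      # GR-VaR for Reacher-v1
```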
Table 2: Comparison of the expert, GAIL, and RAIL in terms of the tail-risk metrics VaR_0.9 and CVaR_0.9. All scores are computed on samples of 50 trajectories. With smaller values of VaR and CVaR, our method outperforms GAIL on all 5 continuous-control tasks and also outperforms the expert in many cases.

Environment      Obs. dim  Act. dim  VaR_0.9 (Expert / GAIL / Ours)        CVaR_0.9 (Expert / GAIL / Ours)
Reacher-v1       11        2         5.88 / 9.55 / 7.28                    6.34 / 13.25 / 9.41
Hopper-v1        11        3         -3754.71 / -1758.19 / -3745.90        -2674.65 / -1347.60 / -3727.94
HalfCheetah-v1   17        6         -3431.59 / -2688.34 / -3150.31        -3356.67 / -2220.64 / -2945.76
Walker-v1        17        6         -5402.52 / -5314.05 / -5404.00        -2310.54 / -3359.29 / -3939.99
Humanoid-v1      376       17        -9839.79 / -2641.14 / -9252.29        -4591.43 / -1298.80 / -4640.42
Table 3: Percentage relative tail-risk measures and gains in reliability from using RAIL instead of GAIL on different continuous-control tasks.

Environment      VaR_0.9(A|E) (%): GAIL / RAIL   GR-VaR (%)   CVaR_0.9(A|E) (%): GAIL / RAIL   GR-CVaR (%)
Reacher-v1       -62.41 / -23.81                 38.61        -108.99 / -48.42                 60.57
Hopper-v1        -53.17 / -0.23                  52.94        -49.62 / 39.38                   89.00
HalfCheetah-v1   -21.66 / -8.20                  13.46        -33.84 / -12.24                  21.60
Walker-v1        -1.64 / 0.03                    1.66         45.39 / 70.52                    25.13
Humanoid-v1      -73.16 / -5.97                  67.19        -71.71 / 1.07                    72.78
# 6 Experimental Results and Discussion
In this section, we present and discuss the results of the comparison between GAIL and RAIL, with the expert's performance as a benchmark. Tables 2 and 3 present the values of our evaluation metrics on the different continuous-control tasks. We set α = 0.9 for VaR_α and CVaR_α and estimate all metrics with N = 50 sampled trajectories (following [Ho and Ermon, 2016]). The following are some interesting observations:
• RAIL obtains better performance than GAIL on both tail-risk measures, VaR_0.9 and CVaR_0.9, without increasing sample complexity. This shows that RAIL is a better choice than GAIL for imitation learning in risk-sensitive applications.
• The applicability of RAIL is not limited to environments in which the distribution of trajectory-costs is heavy-tailed for GAIL. [Rockafellar and Uryasev, 2000] showed that if the distribution of the risk variable Z is normal, then CVaR_α(Z) = μ_Z + a(α)σ_Z, where a(α) is a constant for a given α and μ_Z, σ_Z are the mean and standard deviation of Z (see the numerical check after this list). Thus, in the absence of a heavy tail, minimizing the CVaR_α of the trajectory cost aids in learning better policies by contributing to the minimization of the mean and standard deviation of the trajectory cost. The results on Reacher-v1 corroborate this claim: although the histogram does not show a heavy tail (Figure 3 in Appendix B), the mean converges well (Figure 2) and the tail-risk scores improve (Table 2), which indicates that the distribution of trajectory-costs is more concentrated around the mean than for GAIL. Thus we can use RAIL instead of GAIL regardless of whether the distribution of trajectory costs under GAIL is heavy-tailed.
• Figure 2 shows the variation of the mean trajectory cost over training iterations for GAIL and RAIL. We observe that RAIL converges almost as fast as GAIL on all the continuous-control tasks under discussion, and at times even faster.
• The success of RAIL in learning a viable policy for Humanoid-v1 suggests that RAIL is scalable to large environments. Scalability is one of the salient features of GAIL, and RAIL preserves it while showing lower tail risk.
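As a quick sanity check of the normal-distribution identity cited in the second observation above, the following snippet compares the closed form μ_Z + a(α)σ_Z, with a(α) = φ(Φ^{-1}(α)) / (1 − α), against an empirical CVaR estimate. The SciPy dependency, sample size, and particular values of μ, σ, and α are illustrative assumptions.

```python
# Numerical check of CVaR_alpha(Z) = mu + a(alpha) * sigma for normal Z.
import numpy as np
from scipy.stats import norm

mu, sigma, alpha = 2.0, 3.0, 0.9
a = norm.pdf(norm.ppf(alpha)) / (1 - alpha)
closed_form = mu + a * sigma

z = np.random.default_rng(0).normal(mu, sigma, size=200_000)
var = np.quantile(z, alpha)
empirical = z[z >= var].mean()
print(closed_form, empirical)   # the two agree up to sampling noise
```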
RAIL agents show lower tail risk than GAIL agents after training is complete. However, RAIL still requires the agent to act in the real world and sample trajectories (line 3 in Algorithm 1) during training. One way to rule out environmental interaction during training is to make the agent act in a simulator while learning from the expert's real-world demonstrations; the setting then changes to that of third-person imitation learning [Stadie et al., 2017]. The RAIL formulation can easily be ported to this framework, but we do not evaluate that in this paper.
# 7 Conclusion
This paper presents the RAIL algorithm, which incorporates CVaR optimization within the original GAIL algorithm to minimize tail risk and thus improve the reliability of learned policies. We report significant improvements over GAIL on several evaluation metrics across five continuous-control tasks. The proposed algorithm is therefore a viable step toward learning low-risk policies by imitation learning in complex environments, especially in risk-sensitive applications like robotic surgery and autonomous driving. We plan to test RAIL on fielded robotic applications in the future.
Acknowledgments The authors would like to thank Apoorv Vyas of Intel Labs and Sapana Chaudhary of IIT Madras for helpful discussions. Anirban Santaraâs travel was supported by Google India under the Google India PhD Fellowship Award.
# References
Pieter Abbeel and Andrew Y Ng. Apprenticeship learning via inverse reinforcement learning. In Proceedings of the twenty-ï¬rst international conference on Machine learning, page 1. ACM, 2004.
Pieter Abbeel and Andrew Y Ng. Inverse reinforcement learning. In Encyclopedia of machine learning, pages 554â558. Springer, 2011.
Pieter Abbeel, Adam Coates, Morgan Quigley, and Andrew Y Ng. An application of reinforcement learning to aerobatic helicopter ï¬ight. In Advances in neural information processing systems, pages 1â8, 2007.
Brenna D. Argall, Sonia Chernova, Manuela Veloso, and Brett Browning. A survey of robot learning from demonstration. Robotics and Autonomous Systems, 57(5):469 â 483, 2009. ISSN 0921-8890. doi: http://dx.doi.org/10.1016/j.robot.2008.10.024. URL http://www.sciencedirect. com/science/article/pii/S0921889008001772.
Christopher G Atkeson and Stefan Schaal. Robot learning from demonstration. In ICML, volume 97, pages 12â20, 1997.
Mariusz Bojarski, Davide Del Testa, Daniel Dworakowski, Bernhard Firner, Beat Flepp, Prasoon Goyal, Lawrence D Jackel, Mathew Monfort, Urs Muller, Jiakai Zhang, et al. End to end learning for self-driving cars. arXiv preprint arXiv:1604.07316, 2016.
Mariusz Bojarski, Philip Yeres, Anna Choromanska, Krzysztof Choromanski, Bernhard Firner, Lawrence Jackel, and Urs Muller. Explaining how a deep neural network trained with end-to-end learning steers a car. arXiv preprint arXiv:1704.07911, 2017.
Vivek S Borkar. Q-learning for risk-sensitive control. Mathematics of operations research, 27(2): 294â311, 2002.
Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. Openai gym. arXiv preprint arXiv:1606.01540, 2016.
Yinlam Chow and Mohammad Ghavamzadeh. Algorithms for cvar optimization in mdps. In Advances in neural information processing systems, pages 3509â3517, 2014.
Nivine Dalleh. Why is CVaR superior to VaR?(c2009). PhD thesis, 2011.
Hal Daumé, John Langford, and Daniel Marcu. Search-based structured prediction. Machine learning, 75(3):297â325, 2009.
Jimmy Ba Diederik Kingma. Adam: A method for stochastic optimization. arXiv:1310.5107 [cs.CV], 2015.
Chelsea Finn, Sergey Levine, and Pieter Abbeel. Guided cost learning: Deep inverse optimal control via policy optimization. In International Conference on Machine Learning, pages 49â58, 2016.
Javier Garcıa and Fernando Fernández. A comprehensive survey on safe reinforcement learning. Journal of Machine Learning Research, 16(1):1437â1480, 2015.
Paul W Glimcher and Ernst Fehr. Neuroeconomics: Decision making and the brain. Academic Press, 2013.
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in neural informa- tion processing systems, pages 2672â2680, 2014.
Simon Haykin. Neural Networks: A Comprehensive Foundation. Prentice Hall PTR, Upper Saddle River, NJ, USA, 2nd edition, 1998. ISBN 0132733501.
Matthias Heger. Consideration of risk in reinforcement learning. In Proceedings of the Eleventh International Conference on Machine Learning, pages 105â111, 1994.
Jonathan Ho and Stefano Ermon. Generative adversarial imitation learning. In Advances in Neural Information Processing Systems, pages 4565â4573, 2016.
Ronald A Howard and James E Matheson. Risk-sensitive markov decision processes. Management science, 18(7):356â369, 1972.
Ming Hsu, Meghana Bhatt, Ralph Adolphs, Daniel Tranel, and Colin F Camerer. Neural systems responding to degrees of uncertainty in human decision-making. Science, 310(5754):1680â1683, 2005.
Investopedia. Deï¬nition of tail risk. http://www.investopedia.com/terms/t/ tailrisk.asp, 2017. Accessed: 2017-09-11.
Sham Kakade and John Langford. Approximately optimal approximate reinforcement learning. In ICML, volume 2, pages 267â274, 2002.
Sergey Levine and Vladlen Koltun. Continuous inverse optimal control with locally optimal examples. arXiv preprint arXiv:1206.4617, 2012.
Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. CoRR, abs/1509.02971, 2015. URL http://arxiv.org/abs/1509.02971.
Anirudha Majumdar, Sumeet Singh, Ajay Mandlekar, and Marco Pavone. Risk-sensitive inverse reinforcement learning via coherent risk models. 2017.
Oliver Mihatsch and Ralph Neuneier. Risk-sensitive reinforcement learning. Machine learning, 49 (2-3):267â290, 2002.
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529â533, 2015.
Arne J Nagengast, Daniel A Braun, and Daniel M Wolpert. Risk-sensitive optimal feedback control accounts for sensorimotor behavior under uncertainty. PLoS computational biology, 6(7):e1000857, 2010.
Andrew Y Ng, Stuart J Russell, et al. Algorithms for inverse reinforcement learning. In Icml, pages 663â670, 2000.
Yael Niv, Jeffrey A Edlund, Peter Dayan, and John P OâDoherty. Neural prediction errors reveal a risk-sensitive reinforcement-learning process in the human brain. Journal of Neuroscience, 32(2): 551â562, 2012.
OpenAI-GAIL. Imitation learning github repository. https://github.com/openai/ imitation.git, 2017. Accessed: 2017-06-27.
Dean A Pomerleau. Alvinn: An autonomous land vehicle in a neural network. In Advances in neural information processing systems, pages 305â313, 1989.
Aravind Rajeswaran, Sarvjeet Ghotra, Sergey Levine, and Balaraman Ravindran. Epopt: Learning robust neural network policies using model ensembles. 5th International Conference on Learning Representations, 2016.
R Tyrrell Rockafellar and Stanislav Uryasev. Optimization of conditional value-at-risk. Journal of risk, 2:21â42, 2000.
Stéphane Ross and Drew Bagnell. Efï¬cient reductions for imitation learning. In Proceedings of the thirteenth international conference on artiï¬cial intelligence and statistics, pages 661â668, 2010.
Stephane Ross and J Andrew Bagnell. Reinforcement and imitation learning via interactive no-regret learning. arXiv preprint arXiv:1406.5979, 2014.
Stéphane Ross, Geoffrey J Gordon, and Drew Bagnell. A reduction of imitation learning and structured prediction to no-regret online learning. In International Conference on Artiï¬cial Intelligence and Statistics, pages 627â635, 2011.
Stuart Russell. Learning agents for uncertain environments. In Proceedings of the eleventh annual conference on Computational learning theory, pages 101â103. ACM, 1998.
Andrzej Ruszczy´nski. Risk-averse dynamic programming for markov decision processes. Mathemat- ical programming, 125(2):235â261, 2010.
Stefan Schaal. Learning from demonstration. In Advances in neural information processing systems, pages 1040â1046, 1997.
John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, and Pieter Abbeel. Trust region policy optimization. CoRR, abs/1502.05477, 2015. URL http://arxiv.org/abs/1502. 05477.
Shai Shalev-Shwartz, Shaked Shammah, and Amnon Shashua. Safe, multi-agent, reinforcement learning for autonomous driving. arXiv preprint arXiv:1610.03295, 2016.
Shai Shalev-Shwartz, Shaked Shammah, and Amnon Shashua. On a formal model of safe and scalable self-driving cars. arXiv preprint arXiv:1708.06374, 2017.
Yun Shen, Michael J Tobia, Tobias Sommer, and Klaus Obermayer. Risk-sensitive reinforcement learning. Neural computation, 26(7):1298â1328, 2014.
David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of go with deep neural networks and tree search. Nature, 529(7587):484â489, 2016.
Bradly C Stadie, Pieter Abbeel, and Ilya Sutskever. Third-person imitation learning. arXiv preprint arXiv:1703.01703, 2017.
R.S. Sutton and A.G. Barto. Reinforcement Learning: An Introduction. A Bradford book. Bradford Book, 1998. ISBN 9780262193986. URL https://books.google.co.in/books?id= CAFR6IBF4xYC.
Emanuel Todorov, Tom Erez, and Yuval Tassa. Mujoco: A physics engine for model-based control. In Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, pages 5026â5033. IEEE, 2012.
Brian D Ziebart. Modeling Purposeful Adaptive Behavior with the Principle of Maximum Causal Entropy. PhD thesis, Carnegie Mellon University, 2010.
Brian D Ziebart, Andrew L Maas, J Andrew Bagnell, and Anind K Dey. Maximum entropy inverse reinforcement learning. In AAAI, volume 8, pages 1433â1438. Chicago, IL, USA, 2008.
# Appendix
# A Calculation of Gradients of the CVaR term
In this section we derive expressions for the gradients of the CVaR term in equation 9 w.r.t. π, D, and ν. Let us denote H_α(R^π(ξ|c(D)), ν) by L_CVaR. Our derivations are inspired by those of Chow and Ghavamzadeh [2014].

• Gradient of L_CVaR w.r.t. D:

∇_D L_CVaR = ∇_D { ν + (1 / (1 − α)) E_ξ[ (R^π(ξ|c(D)) − ν)^+ ] }
           = (1 / (1 − α)) E_ξ[ ∇_D R^π(ξ|c(D)) · 1(R^π(ξ|c(D)) > ν) ]    (A.1)

where 1(·) denotes the indicator function. Now,

∇_D R^π(ξ|c(D)) = ∇_c R^π(ξ|c(D)) ∇_D c(D)    (A.2)

∇_c R^π(ξ|c(D)) = ∇_c Σ_{t=0}^{L_ξ − 1} γ^t c(s_t, a_t) = Σ_{t=0}^{L_ξ − 1} γ^t = (1 − γ^{L_ξ}) / (1 − γ)    (A.3)

Substituting equation A.3 in A.2 and then A.2 in A.1, we have the following:

∇_D L_CVaR = (1 / (1 − α)) E_ξ[ ((1 − γ^{L_ξ}) / (1 − γ)) · 1(R^π(ξ|c(D)) > ν) ∇_D c(D) ]    (A.4)

• Gradient of L_CVaR w.r.t. π:

∇_π L_CVaR = ∇_π { ν + (1 / (1 − α)) E_{ξ∼π}[ (R^π(ξ|c(D)) − ν)^+ ] }
           = (1 / (1 − α)) E_{ξ∼π}[ (∇_π log P(ξ|π)) (R^π(ξ|c(D)) − ν)^+ ]    (A.5)

• Gradient of L_CVaR w.r.t. ν:

∇_ν L_CVaR = ∇_ν { ν + (1 / (1 − α)) E_ξ[ (R^π(ξ|c(D)) − ν)^+ ] }
           = 1 + (1 / (1 − α)) E_ξ[ ∇_ν (R^π(ξ|c(D)) − ν)^+ ]
           = 1 − (1 / (1 − α)) E_ξ[ 1(R^π(ξ|c(D)) > ν) ]    (A.6)
# B Additional figures

[Figure 3: histogram of trajectory costs for a GAIL-learned Reacher-v1 policy; x-axis: trajectory-cost, y-axis: fraction of trajectories.]

Figure 3: Histogram of the costs of 250 trajectories generated by a GAIL-learned policy for Reacher-v1. The distribution shows no heavy tail. From Table 2 and Figure 2, we observe that RAIL performs as well as GAIL even in cases where the distribution of trajectory costs is not heavy-tailed.
13 | {
"id": "1703.01703"
} |
1707.06347 | Proximal Policy Optimization Algorithms | We propose a new family of policy gradient methods for reinforcement
learning, which alternate between sampling data through interaction with the
environment, and optimizing a "surrogate" objective function using stochastic
gradient ascent. Whereas standard policy gradient methods perform one gradient
update per data sample, we propose a novel objective function that enables
multiple epochs of minibatch updates. The new methods, which we call proximal
policy optimization (PPO), have some of the benefits of trust region policy
optimization (TRPO), but they are much simpler to implement, more general, and
have better sample complexity (empirically). Our experiments test PPO on a
collection of benchmark tasks, including simulated robotic locomotion and Atari
game playing, and we show that PPO outperforms other online policy gradient
methods, and overall strikes a favorable balance between sample complexity,
simplicity, and wall-time. | http://arxiv.org/pdf/1707.06347 | John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, Oleg Klimov | cs.LG | null | null | cs.LG | 20170720 | 20170828 | 7 1 0 2
g u A 8 2
] G L . s c [
2 v 7 4 3 6 0 . 7 0 7 1 : v i X r a
# Proximal Policy Optimization Algorithms
John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, Oleg Klimov
OpenAI
{joschu, filip, prafulla, alec, oleg}@openai.com
# Abstract
We propose a new family of policy gradient methods for reinforcement learning, which alternate between sampling data through interaction with the environment, and optimizing a "surrogate" objective function using stochastic gradient ascent. Whereas standard policy gradient methods perform one gradient update per data sample, we propose a novel objective function that enables multiple epochs of minibatch updates. The new methods, which we call proximal policy optimization (PPO), have some of the benefits of trust region policy optimization (TRPO), but they are much simpler to implement, more general, and have better sample complexity (empirically). Our experiments test PPO on a collection of benchmark tasks, including simulated robotic locomotion and Atari game playing, and we show that PPO outperforms other online policy gradient methods, and overall strikes a favorable balance between sample complexity, simplicity, and wall-time.
# 1 Introduction
In recent years, several different approaches have been proposed for reinforcement learning with neural network function approximators. The leading contenders are deep Q-learning [Mni+15], "vanilla" policy gradient methods [Mni+16], and trust region / natural policy gradient methods [Sch+15b]. However, there is room for improvement in developing a method that is scalable (to large models and parallel implementations), data efficient, and robust (i.e., successful on a variety of problems without hyperparameter tuning). Q-learning (with function approximation) fails on many simple problems¹ and is poorly understood, vanilla policy gradient methods have poor data efficiency and robustness, and trust region policy optimization (TRPO) is relatively complicated and is not compatible with architectures that include noise (such as dropout) or parameter sharing (between the policy and value function, or with auxiliary tasks).
This paper seeks to improve the current state of affairs by introducing an algorithm that attains the data efficiency and reliable performance of TRPO, while using only first-order optimization. We propose a novel objective with clipped probability ratios, which forms a pessimistic estimate (i.e., lower bound) of the performance of the policy. To optimize policies, we alternate between sampling data from the policy and performing several epochs of optimization on the sampled data.
Our experiments compare the performance of various different versions of the surrogate objective, and find that the version with the clipped probability ratios performs best. We also compare PPO to several previous algorithms from the literature. On continuous control tasks, it performs better than the algorithms we compare against. On Atari, it performs significantly better (in terms of sample complexity) than A2C and similarly to ACER, though it is much simpler.
¹While DQN works well on game environments like the Arcade Learning Environment [Bel+15] with discrete action spaces, it has not been demonstrated to perform well on continuous control benchmarks such as those in OpenAI Gym [Bro+16] and described by Duan et al. [Dua+16].
# 2 Background: Policy Optimization
# 2.1 Policy Gradient Methods
Policy gradient methods work by computing an estimator of the policy gradient and plugging it into a stochastic gradient ascent algorithm. The most commonly used gradient estimator has the form
ĝ = Ê_t[ ∇_θ log π_θ(a_t | s_t) Â_t ]    (1)
where π_θ is a stochastic policy and Â_t is an estimator of the advantage function at timestep t. Here, the expectation Ê_t[...] indicates the empirical average over a finite batch of samples, in an algorithm that alternates between sampling and optimization. Implementations that use automatic differentiation software work by constructing an objective function whose gradient is the policy gradient estimator; the estimator ĝ is obtained by differentiating the objective
L^PG(θ) = Ê_t[ log π_θ(a_t | s_t) Â_t ].    (2)
While it is appealing to perform multiple steps of optimization on this loss L^PG using the same trajectory, doing so is not well-justified, and empirically it often leads to destructively large policy updates (see Section 6.1; results are not shown but were similar or worse than the "no clipping or penalty" setting).
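For concreteness, a minimal sketch of the empirical objective in Equation (2) is shown below; the function name and the toy numbers are illustrative, not from the paper. Differentiating this quantity with respect to the policy parameters yields the estimator in Equation (1).

```python
import numpy as np

def policy_gradient_objective(log_probs, advantages):
    """Empirical L^PG: mean of log pi_theta(a_t | s_t) * A_hat_t over a batch."""
    return float(np.mean(np.asarray(log_probs) * np.asarray(advantages)))

# Hypothetical batch of four timesteps.
print(policy_gradient_objective(np.log([0.3, 0.5, 0.2, 0.7]),
                                [1.0, -0.5, 2.0, 0.3]))
```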
# 2.2 Trust Region Methods
In TRPO [Sch+15b], an objective function (the "surrogate" objective) is maximized subject to a constraint on the size of the policy update. Specifically,
maximize_θ  Ê_t[ (π_θ(a_t | s_t) / π_θ_old(a_t | s_t)) Â_t ]    (3)
subject to  Ê_t[ KL[π_θ_old(· | s_t), π_θ(· | s_t)] ] ≤ δ.    (4)
Here, θ_old is the vector of policy parameters before the update. This problem can efficiently be approximately solved using the conjugate gradient algorithm, after making a linear approximation to the objective and a quadratic approximation to the constraint.
The theory justifying TRPO actually suggests using a penalty instead of a constraint, i.e., solving the unconstrained optimization problem
maximize_θ  Ê_t[ (π_θ(a_t | s_t) / π_θ_old(a_t | s_t)) Â_t - β KL[π_θ_old(· | s_t), π_θ(· | s_t)] ]    (5)
for some coefficient β. This follows from the fact that a certain surrogate objective (which computes the max KL over states instead of the mean) forms a lower bound (i.e., a pessimistic bound) on the performance of the policy π. TRPO uses a hard constraint rather than a penalty because it is hard to choose a single value of β that performs well across different problems, or even within a single problem, where the characteristics change over the course of learning. Hence, to achieve our goal of a first-order algorithm that emulates the monotonic improvement of TRPO, experiments show that it is not sufficient to simply choose a fixed penalty coefficient β and optimize the penalized objective Equation (5) with SGD; additional modifications are required.
# 3 Clipped Surrogate Objective
Let r_t(θ) denote the probability ratio r_t(θ) = π_θ(a_t | s_t) / π_θ_old(a_t | s_t), so r_t(θ_old) = 1. TRPO maximizes a "surrogate" objective
L^CPI(θ) = Ê_t[ (π_θ(a_t | s_t) / π_θ_old(a_t | s_t)) Â_t ] = Ê_t[ r_t(θ) Â_t ].    (6)
The superscript CPI refers to conservative policy iteration [KL02], where this objective was proposed. Without a constraint, maximization of L^CPI would lead to an excessively large policy update; hence, we now consider how to modify the objective, to penalize changes to the policy that move r_t(θ) away from 1.
The main objective we propose is the following:
L^CLIP(θ) = Ê_t[ min(r_t(θ) Â_t, clip(r_t(θ), 1 - ε, 1 + ε) Â_t) ]    (7)
where epsilon is a hyperparameter, say, ε = 0.2. The motivation for this objective is as follows. The first term inside the min is L^CPI. The second term, clip(r_t(θ), 1 - ε, 1 + ε) Â_t, modifies the surrogate objective by clipping the probability ratio, which removes the incentive for moving r_t outside of the interval [1 - ε, 1 + ε]. Finally, we take the minimum of the clipped and unclipped objective, so the final objective is a lower bound (i.e., a pessimistic bound) on the unclipped objective. With this scheme, we only ignore the change in probability ratio when it would make the objective improve, and we include it when it makes the objective worse. Note that L^CLIP(θ) = L^CPI(θ) to first order around θ_old (i.e., where r = 1); however, they become different as θ moves away from θ_old. Figure 1 plots a single term (i.e., a single t) in L^CLIP; note that the probability ratio r is clipped at 1 - ε or 1 + ε depending on whether the advantage is positive or negative.
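The following sketch evaluates the clipped surrogate of Equation (7) for a batch of precomputed probability ratios and advantages; the function name and the toy numbers are illustrative.

```python
import numpy as np

def clipped_surrogate(ratios, advantages, eps=0.2):
    """L^CLIP: mean of min(r_t * A_t, clip(r_t, 1 - eps, 1 + eps) * A_t)."""
    r = np.asarray(ratios)
    a = np.asarray(advantages)
    unclipped = r * a
    clipped = np.clip(r, 1.0 - eps, 1.0 + eps) * a
    return float(np.mean(np.minimum(unclipped, clipped)))

# Hypothetical ratios and advantages for a small batch.
print(clipped_surrogate([0.9, 1.3, 1.05, 0.7], [1.0, 1.0, -0.5, -2.0]))
```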
Figure 1: Plots showing one term (i.e., a single timestep) of the surrogate function L^CLIP as a function of the probability ratio r, for positive advantages (left) and negative advantages (right). The red circle on each plot shows the starting point for the optimization, i.e., r = 1. Note that L^CLIP sums many of these terms.
Figure 2 provides another source of intuition about the surrogate objective L^CLIP. It shows how several objectives vary as we interpolate along the policy update direction, obtained by proximal policy optimization (the algorithm we will introduce shortly) on a continuous control problem. We can see that L^CLIP is a lower bound on L^CPI, with a penalty for having too large of a policy update.
[Figure 2 plot: Ê_t[KL], L^CPI = Ê_t[r_t Â_t], Ê_t[clip(r_t, 1 - ε, 1 + ε) Â_t], and L^CLIP = Ê_t[min(r_t Â_t, clip(r_t, 1 - ε, 1 + ε) Â_t)] as functions of the linear interpolation factor]
Figure 2: Surrogate objectives, as we interpolate between the initial policy parameter θ_old and the updated policy parameter, which we compute after one iteration of PPO. The updated policy has a KL divergence of about 0.02 from the initial policy, and this is the point at which L^CLIP is maximal. This plot corresponds to the first policy update on the Hopper-v1 problem, using hyperparameters provided in Section 6.1.
# 4 Adaptive KL Penalty Coefficient
Another approach, which can be used as an alternative to the clipped surrogate objective, or in addition to it, is to use a penalty on KL divergence, and to adapt the penalty coefficient so that we achieve some target value of the KL divergence d_targ each policy update. In our experiments, we found that the KL penalty performed worse than the clipped surrogate objective, however, we've included it here because it's an important baseline.
In the simplest instantiation of this algorithm, we perform the following steps in each policy update:
• Using several epochs of minibatch SGD, optimize the KL-penalized objective
L^KLPEN(θ) = Ê_t[ (π_θ(a_t | s_t) / π_θ_old(a_t | s_t)) Â_t - β KL[π_θ_old(· | s_t), π_θ(· | s_t)] ]    (8)
• Compute d = Ê_t[ KL[π_θ_old(· | s_t), π_θ(· | s_t)] ]
  - If d < d_targ / 1.5, β ← β / 2
  - If d > d_targ × 1.5, β ← β × 2
The updated β is used for the next policy update. With this scheme, we occasionally see policy updates where the KL divergence is significantly different from d_targ, however, these are rare, and β quickly adjusts. The parameters 1.5 and 2 above are chosen heuristically, but the algorithm is not very sensitive to them. The initial value of β is another hyperparameter but is not important in practice because the algorithm quickly adjusts it.
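A minimal sketch of this adaptive coefficient update is given below; the function name is illustrative, while the thresholds 1.5 and 2 are the heuristic values from the text.

```python
def update_kl_penalty(beta, d, d_targ):
    """Halve beta if the measured KL d is well below target; double it if well above."""
    if d < d_targ / 1.5:
        beta /= 2.0
    elif d > d_targ * 1.5:
        beta *= 2.0
    return beta

# Example: measured KL far above target, so beta doubles.
print(update_kl_penalty(beta=1.0, d=0.05, d_targ=0.01))  # -> 2.0
```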
# 5 Algorithm
The surrogate losses from the previous sections can be computed and differentiated with a minor change to a typical policy gradient implementation. For implementations that use automatic differentiation, one simply constructs the loss L^CLIP or L^KLPEN instead of L^PG, and one performs multiple steps of stochastic gradient ascent on this objective.
Most techniques for computing variance-reduced advantage-function estimators make use of a learned state-value function V(s); for example, generalized advantage estimation [Sch+15a], or the
finite-horizon estimators in [Mni+16]. If using a neural network architecture that shares parameters between the policy and value function, we must use a loss function that combines the policy surrogate and a value function error term. This objective can further be augmented by adding an entropy bonus to ensure sufficient exploration, as suggested in past work [Wil92; Mni+16]. Combining these terms, we obtain the following objective, which is (approximately) maximized each iteration:
L_t^{CLIP+VF+S}(θ) = Ê_t[ L_t^CLIP(θ) - c_1 L_t^VF(θ) + c_2 S[π_θ](s_t) ]    (9)
where c_1, c_2 are coefficients, S denotes an entropy bonus, and L_t^VF is a squared-error loss (V_θ(s_t) - V_t^targ)^2.
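A small sketch of Equation (9) evaluated on per-timestep arrays is shown below; all names and the default coefficients are illustrative, not prescribed by the paper.

```python
import numpy as np

def combined_objective(l_clip, v_pred, v_targ, entropy, c1=1.0, c2=0.01):
    """Mean of L^CLIP_t - c1 * (V_theta(s_t) - V_t^targ)^2 + c2 * S[pi_theta](s_t)."""
    l_vf = (np.asarray(v_pred) - np.asarray(v_targ)) ** 2
    return float(np.mean(np.asarray(l_clip) - c1 * l_vf + c2 * np.asarray(entropy)))

# Hypothetical per-timestep terms for a batch of three.
print(combined_objective(l_clip=[0.1, -0.2, 0.05],
                         v_pred=[1.0, 0.5, 0.2], v_targ=[1.2, 0.4, 0.0],
                         entropy=[1.3, 1.1, 1.2]))
```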
One style of policy gradient implementation, popularized in [Mni+16] and well-suited for use with recurrent neural networks, runs the policy for T timesteps (where T is much less than the episode length), and uses the collected samples for an update. This style requires an advantage estimator that does not look beyond timestep T. The estimator used by [Mni+16] is
Â_t = -V(s_t) + r_t + γ r_{t+1} + ... + γ^{T-t+1} r_{T-1} + γ^{T-t} V(s_T)    (10)
where t specifies the time index in [0, T], within a given length-T trajectory segment. Generalizing this choice, we can use a truncated version of generalized advantage estimation, which reduces to Equation (10) when A = 1:
Â_t = δ_t + (γλ) δ_{t+1} + ... + (γλ)^{T-t+1} δ_{T-1}    (11)
where δ_t = r_t + γ V(s_{t+1}) - V(s_t)    (12)
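The following sketch computes the truncated advantage estimates of Equations (11)-(12) by iterating backwards over one length-T segment; the helper name and the toy inputs are ours.

```python
import numpy as np

def truncated_gae(rewards, values, last_value, gamma=0.99, lam=0.95):
    """Compute A_hat_t = sum_k (gamma * lam)^k * delta_{t+k} for one segment."""
    v = np.append(np.asarray(values, dtype=float), last_value)
    adv = np.zeros(len(rewards))
    gae = 0.0
    for t in reversed(range(len(rewards))):
        delta = rewards[t] + gamma * v[t + 1] - v[t]   # Eq. (12)
        gae = delta + gamma * lam * gae                # accumulates Eq. (11)
        adv[t] = gae
    return adv

# Toy segment of length 4 with hypothetical rewards and values.
print(truncated_gae([1.0, 0.0, 0.0, 1.0], [0.5, 0.4, 0.3, 0.2], last_value=0.1))
```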
A proximal policy optimization (PPO) algorithm that uses fixed-length trajectory segments is shown below. Each iteration, each of N (parallel) actors collect T timesteps of data. Then we construct the surrogate loss on these NT timesteps of data, and optimize it with minibatch SGD (or usually for better performance, Adam [KB14]), for K epochs.
Algorithm 1 PPO, Actor-Critic Style
for iteration = 1, 2, ... do
    for actor = 1, 2, ..., N do
        Run policy π_θ_old in environment for T timesteps
        Compute advantage estimates Â_1, ..., Â_T
    end for
    Optimize surrogate L wrt θ, with K epochs and minibatch size M ≤ NT
    θ_old ← θ
end for
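A high-level Python sketch of this outer loop is given below; `collect_segment` and `sgd_step` are assumed callables standing in for environment interaction and the surrogate optimization, so this is a schematic rather than the paper's implementation.

```python
import numpy as np

def ppo_train(collect_segment, sgd_step, iterations, num_actors=8,
              horizon=128, epochs=3, minibatch_size=256):
    """Collect N*T timesteps with the old policy, then run K epochs of
    minibatch SGD on the surrogate, then sync theta_old <- theta."""
    for _ in range(iterations):
        segments = [collect_segment(horizon) for _ in range(num_actors)]
        data = {k: np.concatenate([s[k] for s in segments]) for k in segments[0]}
        n = len(next(iter(data.values())))
        for _ in range(epochs):
            order = np.random.permutation(n)
            for start in range(0, n, minibatch_size):
                idx = order[start:start + minibatch_size]
                sgd_step({k: v[idx] for k, v in data.items()})
```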
# 6 Experiments
# 6.1 Comparison of Surrogate Objectives
First, we compare several different surrogate objectives under different hyperparameters. Here, we compare the surrogate objective L^CLIP to several natural variations and ablated versions.
• No clipping or penalty: L_t(θ) = r_t(θ) Â_t
• Clipping: L_t(θ) = min(r_t(θ) Â_t, clip(r_t(θ), 1 - ε, 1 + ε) Â_t)
• KL penalty (fixed or adaptive): L_t(θ) = r_t(θ) Â_t - β KL[π_θ_old, π_θ]
For the KL penalty, one can either use a fixed penalty coefficient β or an adaptive coefficient as described in Section 4 using target KL value d_targ. Note that we also tried clipping in log space, but found the performance to be no better.
Because we are searching over hyperparameters for each algorithm variant, we chose a computationally cheap benchmark to test the algorithms on. Namely, we used 7 simulated robotics tasks² implemented in OpenAI Gym [Bro+16], which use the MuJoCo [TET12] physics engine. We do one million timesteps of training on each one. Besides the hyperparameters used for clipping (ε) and the KL penalty (β, d_targ), which we search over, the other hyperparameters are provided in Table 3.
To represent the policy, we used a fully-connected MLP with two hidden layers of 64 units and tanh nonlinearities, outputting the mean of a Gaussian distribution, with variable standard deviations, following [Sch+15b; Dua+16]. We don't share parameters between the policy and value function (so coefficient c_1 is irrelevant), and we don't use an entropy bonus.
Each algorithm was run on all 7 environments, with 3 random seeds on each. We scored each run of the algorithm by computing the average total reward of the last 100 episodes. We shifted and scaled the scores for each environment so that the random policy gave a score of 0 and the best result was set to 1, and averaged over 21 runs to produce a single scalar for each algorithm setting.
The results are shown in Table 1. Note that the score is negative for the setting without clipping or penalties, because for one environment (half cheetah) it leads to a very negative score, which is worse than the initial random policy.
algorithm                      avg. normalized score
No clipping or penalty         -0.39
Clipping, ε = 0.1               0.76
Clipping, ε = 0.2               0.82
Clipping, ε = 0.3               0.70
Adaptive KL, d_targ = 0.003     0.68
Adaptive KL, d_targ = 0.01      0.74
Adaptive KL, d_targ = 0.03      0.71
Fixed KL, β = 0.3               0.62
Fixed KL, β = 1.                0.71
Fixed KL, β = 3.                0.72
Fixed KL, β = 10.               0.69
Table 1: Results from continuous control benchmark. Average normalized scores (over 21 runs of the algorithm, on 7 environments) for each algorithm / hyperparameter setting. β was initialized at 1.
# 6.2 Comparison to Other Algorithms in the Continuous Domain
Next, we compare PPO (with the "clipped" surrogate objective from Section 3) to several other methods from the literature, which are considered to be effective for continuous problems. We compared against tuned implementations of the following algorithms: trust region policy optimization [Sch+15b], cross-entropy method (CEM) [SL06], vanilla policy gradient with adaptive stepsize³,
²HalfCheetah, Hopper, InvertedDoublePendulum, InvertedPendulum, Reacher, Swimmer, and Walker2d, all "-v1".
³After each batch of data, the Adam stepsize is adjusted based on the KL divergence of the original and updated policy, using a rule similar to the one shown in Section 4. An implementation is available at https://github.com/berkeleydeeprlcourse/homework/tree/master/hw4.
A2C [Mni+16], A2C with trust region [Wan+16]. A2C stands for advantage actor critic, and is a synchronous version of A3C, which we found to have the same or better performance than the asynchronous version. For PPO, we used the hyperparameters from the previous section, with ε = 0.2. We see that PPO outperforms the previous methods on almost all the continuous control environments.
[Figure 3 panels: HalfCheetah-v1, Hopper-v1, InvertedDoublePendulum-v1, InvertedPendulum-v1, Reacher-v1, Swimmer-v1, Walker2d-v1; legend: A2C, A2C + Trust Region, CEM, PPO (Clip), Vanilla PG (Adaptive), TRPO]
Figure 3: Comparison of several algorithms on several MuJoCo environments, training for one million timesteps.
# 6.3. Showcase in the Continuous Domain: Humanoid Running and Steering
To showcase the performance of PPO on high-dimensional continuous control problems, we train on a set of problems involving a 3D humanoid, where the robot must run, steer, and get up off the ground, possibly while being pelted by cubes. The three tasks we test on are (1) Ro- boschoolHumanoid: forward locomotion only, (2) RoboschoolHumanoidFlagrun: position of target is randomly varied every 200 timesteps or whenever the goal is reached, (3) RoboschoolHumanoid- FlagrunHarder, where the robot is pelted by cubes and needs to get up off the ground. See Figure 5 for still frames of a learned policy, and Figure 4 for learning curves on the three tasks. Hyperpa- rameters are provided in Table 4. In concurrent work, Heess et al. [Hee+17] used the adaptive KL variant of PPO (Section 4) to learn locomotion policies for 3D robots.
[Figure 4 panels: RoboschoolHumanoid-v0, RoboschoolHumanoidFlagrun-v0, RoboschoolHumanoidFlagrunHarder-v0; reward vs. timestep, up to 100M]
Figure 4: Learning curves from PPO on 3D humanoid control tasks, using Roboschool.
Figure 5: Still frames of the policy learned from RoboschoolHumanoidFlagrun. In the first six frames, the robot runs towards a target. Then the position is randomly changed, and the robot turns and runs toward the new target.

# 6.4 Comparison to Other Algorithms on the Atari Domain

We also ran PPO on the Arcade Learning Environment [Bel+15] benchmark and compared against well-tuned implementations of A2C [Mni+16] and ACER [Wan+16]. For all three algorithms, we used the same policy network architecture as used in [Mni+16]. The hyperparameters for PPO are provided in Table 5. For the other two algorithms, we used hyperparameters that were tuned to maximize performance on this benchmark.

A table of results and learning curves for all 49 games is provided in Appendix B. We consider the following two scoring metrics: (1) average reward per episode over the entire training period (which favors fast learning), and (2) average reward per episode over the last 100 episodes of training (which favors final performance). Table 2 shows the number of games "won" by each algorithm, where we compute the victor by averaging the scoring metric across three trials.

                                                A2C  ACER  PPO  Tie
(1) avg. episode reward over all of training      1    18   30    0
(2) avg. episode reward over last 100 episodes    1    28   19    1

Table 2: Number of games "won" by each algorithm, where the scoring metric is averaged across three trials.

# 7 Conclusion

We have introduced proximal policy optimization, a family of policy optimization methods that use multiple epochs of stochastic gradient ascent to perform each policy update. These methods have the stability and reliability of trust-region methods but are much simpler to implement, requiring only a few lines of code change to a vanilla policy gradient implementation, applicable in more general settings (for example, when using a joint architecture for the policy and value function), and have better overall performance.

# 8 Acknowledgements

Thanks to Rocky Duan, Peter Chen, and others at OpenAI for insightful comments.
# References
[Bel+15] M. Bellemare, Y. Naddaf, J. Veness, and M. Bowling. "The arcade learning environment: An evaluation platform for general agents". In: Twenty-Fourth International Joint Conference on Artificial Intelligence. 2015.

[Bro+16] G. Brockman, V. Cheung, L. Pettersson, J. Schneider, J. Schulman, J. Tang, and W. Zaremba. "OpenAI Gym". In: arXiv preprint arXiv:1606.01540 (2016).

[Dua+16] Y. Duan, X. Chen, R. Houthooft, J. Schulman, and P. Abbeel. "Benchmarking Deep Reinforcement Learning for Continuous Control". In: arXiv preprint arXiv:1604.06778 (2016).

[Hee+17] N. Heess, S. Sriram, J. Lemmon, J. Merel, G. Wayne, Y. Tassa, T. Erez, Z. Wang, A. Eslami, M. Riedmiller, et al. "Emergence of Locomotion Behaviours in Rich Environments". In: arXiv preprint arXiv:1707.02286 (2017).

[KL02] S. Kakade and J. Langford. "Approximately optimal approximate reinforcement learning". In: ICML. Vol. 2. 2002, pp. 267-274.

[KB14] D. Kingma and J. Ba. "Adam: A method for stochastic optimization". In: arXiv preprint arXiv:1412.6980 (2014).

[Mni+15] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, et al. "Human-level control through deep reinforcement learning". In: Nature 518.7540 (2015), pp. 529-533.

[Mni+16] V. Mnih, A. P. Badia, M. Mirza, A. Graves, T. P. Lillicrap, T. Harley, D. Silver, and K. Kavukcuoglu. "Asynchronous methods for deep reinforcement learning". In: arXiv preprint arXiv:1602.01783 (2016).

[Sch+15a] J. Schulman, P. Moritz, S. Levine, M. Jordan, and P. Abbeel. "High-dimensional continuous control using generalized advantage estimation". In: arXiv preprint arXiv:1506.02488 (2015).

[Sch+15b] J. Schulman, S. Levine, P. Moritz, M. I. Jordan, and P. Abbeel. "Trust region policy optimization". In: CoRR abs/1502.05477 (2015).

[SL06] I. Szita and A. Lorincz. "Learning Tetris using the noisy cross-entropy method". In: Neural computation 18.12 (2006), pp. 2936-2941.

[TET12] E. Todorov, T. Erez, and Y. Tassa. "MuJoCo: A physics engine for model-based control". In: Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on. IEEE. 2012, pp. 5026-5033.

[Wan+16] Z. Wang, V. Bapst, N. Heess, V. Mnih, R. Munos, K. Kavukcuoglu, and N. de Freitas. "Sample Efficient Actor-Critic with Experience Replay". In: arXiv preprint arXiv:1611.01224 (2016).

[Wil92] R. J. Williams. "Simple statistical gradient-following algorithms for connectionist reinforcement learning". In: Machine learning 8.3-4 (1992), pp. 229-256.
# A Hyperparameters
Hyperparameter       Value
Horizon (T)          2048
Adam stepsize        3 × 10^-4
Num. epochs          10
Minibatch size       64
Discount (γ)         0.99
GAE parameter (λ)    0.95
Table 3: PPO hyperparameters used for the Mujoco 1 million timestep benchmark.
Hyperparameter                      Value
Horizon (T)                         512
Adam stepsize                       *
Num. epochs                         15
Minibatch size                      4096
Discount (γ)                        0.99
GAE parameter (λ)                   0.95
Number of actors                    32 (locomotion), 128 (flagrun)
Log stdev. of action distribution   LinearAnneal(-0.7, -1.6)
Table 4: PPO hyperparameters used for the Roboschool experiments. Adam stepsize was adjusted based on the target value of the KL divergence.
Hyperparameter               Value
Horizon (T)                  128
Adam stepsize                2.5 × 10^-4 × α
Num. epochs                  3
Minibatch size               32 × 8
Discount (γ)                 0.99
GAE parameter (λ)            0.95
Number of actors             8
Clipping parameter ε         0.1 × α
VF coeff. c_1 (Eq. 9)        1
Entropy coeff. c_2 (Eq. 9)   0.01
Table 5: PPO hyperparameters used in Atari experiments. a is linearly annealed from 1 to 0 over the course of learning.
# B- Performance on More Atari Games
Here we include a comparison of PPO against A2C on a larger collection of 49 Atari games. Figure 6 shows the learning curves of each of three random seeds, while Table 6 shows the mean performance.
[Figure 6 panels: learning curves for each of the 49 Atari games; legend: A2C, ACER, PPO]
Figure 6: Comparison of PPO and A2C on all 49 ATARI games included in OpenAI Gym at the time of publication.
Game                A2C        ACER       PPO
Alien               1141.7     1655.4     1850.3
Amidar              380.8      827.6      674.6
Assault             1562.9     4653.8     4971.9
Asterix             3176.3     6801.2     4532.5
Asteroids           1653.3     2389.3     2097.5
Atlantis            729265.3   1841376.0  2311815.0
BankHeist           1095.3     1177.5     1280.6
BattleZone          3080.0     8983.3     17366.7
BeamRider           3031.7     3863.3     1590.0
Bowling             30.1       33.3       40.1
Boxing              17.7       98.9       94.6
Breakout            303.0      456.4      274.8
Centipede           3496.5     8904.8     4386.4
ChopperCommand      1171.7     5287.7     3516.3
CrazyClimber        107770.0   132461.0   110202.0
DemonAttack         6639.1     38808.3    11378.4
DoubleDunk          -16.2      -13.2      -14.9
Enduro              0.0        0.0        758.3
FishingDerby        20.6       34.7       17.8
Freeway             0.0        0.0        32.5
Frostbite           261.8      285.6      314.2
Gopher              1500.9     37802.3    2932.9
Gravitar            194.0      225.3      737.2
IceHockey           -6.4       -5.9       -4.2
Jamesbond           52.3       261.8      560.7
Kangaroo            45.3       50.0       9928.7
Krull               8367.4     7268.4     7942.3
KungFuMaster        24900.3    27599.3    23310.3
MontezumaRevenge    0.0        0.3        42.0
MsPacman            1626.9     2718.5     2096.5
NameThisGame        5961.2     8488.0     6254.9
Pitfall             -55.0      -16.9      -32.9
Pong                19.7       20.7       20.7
PrivateEye          91.3       182.0      69.5
Qbert               10065.7    15316.6    14293.3
Riverraid           7653.5     9125.1     8393.6
RoadRunner          32810.0    35466.0    25076.0
Robotank            2.2        2.5        5.5
Seaquest            1714.3     1739.5     1204.5
SpaceInvaders       744.5      1213.9     942.5
StarGunner          26204.0    49817.7    32689.0
Tennis              -22.2      -17.6      -14.8
TimePilot           2898.0     4175.7     4342.0
Tutankham           206.8      280.8      254.4
UpNDown             17369.8    145051.4   95445.0
Venture             0.0        0.0        0.0
VideoPinball        19735.9    156225.6   37389.0
WizardOfWor         859.0      2308.3     4185.3
Zaxxon              16.3       29.0       5008.7
Table 6: Mean final scores (last 100 episodes) of PPO and A2C on Atari games after 40M game frames (10M timesteps).
1707.06203 | Imagination-Augmented Agents for Deep Reinforcement Learning | We introduce Imagination-Augmented Agents (I2As), a novel architecture for
deep reinforcement learning combining model-free and model-based aspects. In
contrast to most existing model-based reinforcement learning and planning
methods, which prescribe how a model should be used to arrive at a policy, I2As
learn to interpret predictions from a learned environment model to construct
implicit plans in arbitrary ways, by using the predictions as additional
context in deep policy networks. I2As show improved data efficiency,
performance, and robustness to model misspecification compared to several
baselines. | http://arxiv.org/pdf/1707.06203 | Théophane Weber, Sébastien Racanière, David P. Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adria Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, Razvan Pascanu, Peter Battaglia, Demis Hassabis, David Silver, Daan Wierstra | cs.LG, cs.AI, stat.ML | null | null | cs.LG | 20170719 | 20180214 |
# Imagination-Augmented Agents for Deep Reinforcement Learning
Théophane Weberâ Sébastien Racanièreâ David P. Reichertâ Lars Buesing Arthur Guez Danilo Rezende Adria Puigdomènech Badia Oriol Vinyals Nicolas Heess Yujia Li Razvan Pascanu Peter Battaglia Demis Hassabis David Silver Daan Wierstra DeepMind
# Abstract
We introduce Imagination-Augmented Agents (I2As), a novel architecture for deep reinforcement learning combining model-free and model-based aspects. In con- trast to most existing model-based reinforcement learning and planning methods, which prescribe how a model should be used to arrive at a policy, I2As learn to interpret predictions from a learned environment model to construct implicit plans in arbitrary ways, by using the predictions as additional context in deep policy networks. I2As show improved data efï¬ciency, performance, and robustness to model misspeciï¬cation compared to several baselines.
# Introduction
A hallmark of an intelligent agent is its ability to rapidly adapt to new circumstances and "achieve goals in a wide range of environments" [1]. Progress has been made in developing capable agents for numerous domains using deep neural networks in conjunction with model-free reinforcement learning (RL) [2â4], where raw observations directly map to values or actions. However, this approach usually requires large amounts of training data and the resulting policies do not readily generalize to novel tasks in the same environment, as it lacks the behavioral ï¬exibility constitutive of general intelligence.
Model-based RL aims to address these shortcomings by endowing agents with a model of the world, synthesized from past experience. By using an internal model to reason about the future, here also referred to as imagining, the agent can seek positive outcomes while avoiding the adverse consequences of trial-and-error in the real environment â including making irreversible, poor decisions. Even if the model needs to be learned ï¬rst, it can enable better generalization across states, remain valid across tasks in the same environment, and exploit additional unsupervised learning signals, thus ultimately leading to greater data efï¬ciency. Another appeal of model-based methods is their ability to scale performance with more computation by increasing the amount of internal simulation.
The neural basis for imagination, model-based reasoning and decision making has generated a lot of interest in neuroscience [5â7]; at the cognitive level, model learning and mental simulation have been hypothesized and demonstrated in animal and human learning [8â11]. Its successful deployment in artiï¬cial model-based agents however has hitherto been limited to settings where an exact transition model is available [12] or in domains where models are easy to learn â e.g. symbolic environments or low-dimensional systems [13â16]. In complex domains for which a simulator is not available to the agent, recent successes are dominated by model-free methods [2, 17]. In such domains, the performance of model-based agents employing standard planning methods usually suffers from model errors resulting from function approximation [18, 19]. These errors compound during planning, causing over-optimism and poor agent performance. There are currently no planning
âEqual contribution, corresponding authors: {theophane, sracaniere, reichert}@google.com.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
or model-based methods that are robust against model imperfections which are inevitable in complex domains, thereby preventing them from matching the success of their model-free counterparts.
We seek to address this shortcoming by proposing Imagination-Augmented Agents, which use approximate environment models by "learning to interpret" their imperfect predictions. Our algorithm can be trained directly on low-level observations with little domain knowledge, similarly to recent model-free successes. Without making any assumptions about the structure of the environment model and its possible imperfections, our approach learns in an end-to-end way to extract useful knowledge gathered from model simulations â in particular not relying exclusively on simulated returns. This allows the agent to beneï¬t from model-based imagination without the pitfalls of conventional model-based planning. We demonstrate that our approach performs better than model- free baselines in various domains including Sokoban. It achieves better performance with less data, even with imperfect models, a signiï¬cant step towards delivering the promises of model-based RL.
# 2 The I2A architecture
[Figure 1 panels: a) Imagination core (rollout policy and environment model); b) Single imagination rollout (1. imagine future, 2. encode); c) Full I2A architecture (model-based path with rollout encoders and aggregator, model-free path, output policy π and value V)]
Figure 1: I2A architecture. Ë· notation indicates imagined quantities. a): the imagination core (IC) predicts the next time step conditioned on an action sampled from the rollout policy ËÏ. b): the IC imagines trajectories of features Ëf = (Ëo, Ër), encoded by the rollout encoder. c): in the full I2A, aggregated rollout encodings and input from a model-free path determine the output policy Ï.
In order to augment model-free agents with imagination, we rely on environment models â models that, given information from the present, can be queried to make predictions about the future. We use these environment models to simulate imagined trajectories, which are interpreted by a neural network and provided as additional context to a policy network.
In general, an environment model is any recurrent architecture which can be trained in an unsupervised fashion from agent trajectories: given a past state and current action, the environment model predicts the next state and any number of signals from the environment. In this work, we will consider in particular environment models that build on recent successes of action-conditional next-step predictors [20â22], which receive as input the current observation (or history of observations) and current action, and predict the next observation, and potentially the next reward. We roll out the environment model over multiple time steps into the future, by initializing the imagined trajectory with the present time real observation, and subsequently feeding simulated observations into the model.
The actions chosen in each rollout result from a rollout policy ËÏ (explained in Section 3.1). The environment model together with ËÏ constitute the imagination core module, which predicts next time steps (Fig 1a). The imagination core is used to produce n trajectories ËT1, . . . , ËTn. Each imagined trajectory ËT is a sequence of features ( Ëft+1, . . . , Ëft+Ï ), where t is the current time, Ï the length of the rollout, and Ëft+i the output of the environment model (i.e. the predicted observation and/or reward).
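A schematic of this rollout loop is sketched below; `env_model` and `rollout_policy` are assumed callables standing in for the learned networks, so this is a sketch rather than the paper's implementation.

```python
def imagine_trajectory(env_model, rollout_policy, obs, rollout_length):
    """Unroll the learned model for `rollout_length` steps, starting from a
    real observation and feeding each predicted observation back in."""
    features = []
    current = obs
    for _ in range(rollout_length):
        action = rollout_policy(current)                  # a ~ pi_hat
        next_obs, pred_reward = env_model(current, action)
        features.append((next_obs, pred_reward))          # f_hat_{t+i}
        current = next_obs
    return features
```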
Despite recent progress in training better environment models, a key issue addressed by I2As is that a learned model cannot be assumed to be perfect; it might sometimes make erroneous or nonsensical predictions. We therefore do not want to rely solely on predicted rewards (or values predicted
2
# path
[Figure 2 schematic: input observations and tiled one-hot action → stacked context → ConvNet → predicted observation and predicted reward]
Figure 2: Environment model. The input action is broadcast and concate- nated to the observation. A convolu- tional network transforms this into a pixel-wise probability distribution for the output image, and a distribution for the reward.
from predicted states), as is often done in classical planning. Additionally, trajectories may contain information beyond the reward sequence (a trajectory could contain an informative subsequence, for instance solving a subproblem, which did not result in higher reward). For these reasons, we use a rollout encoder E that processes the imagined rollout as a whole and learns to interpret it, i.e. by extracting any information useful for the agent's decision, or even ignoring it when necessary (Fig 1b). Each trajectory is encoded separately as a rollout embedding ei = E( ËTi). Finally, an aggregator A converts the different rollout embeddings into a single imagination code cia = A(e1, . . . , en).
The ï¬nal component of the I2A is the policy module, which is a network that takes the information cia from model-based predictions, as well as the output cmf of a model-free path (a network which only takes the real observation as input; see Fig 1c, right), and outputs the imagination-augmented policy vector Ï and estimated value V . The I2A therefore learns to combine information from its model-free and imagination-augmented paths; note that without the model-based path, I2As reduce to a standard model-free network [3]. I2As can thus be thought of as augmenting model-free agents by providing additional information from model-based planning, and as having strictly more expressive power than the underlying model-free agent.
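Putting these pieces together, a hedged sketch of the forward pass is shown below; every argument other than `obs` and `rollouts` is an assumed callable standing in for a learned network, not the paper's API.

```python
import numpy as np

def i2a_forward(obs, rollouts, encode, aggregate, model_free_path, policy_head):
    """Encode each imagined rollout, aggregate, concatenate with the
    model-free features, and produce policy logits and a value estimate."""
    embeddings = [encode(trajectory) for trajectory in rollouts]   # e_i = E(T_i)
    c_ia = aggregate(embeddings)                                   # imagination code
    c_mf = model_free_path(obs)                                    # model-free path
    logits, value = policy_head(np.concatenate([c_ia, c_mf]))
    return logits, value
```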
# 3 Architectural choices and experimental setup
# 3.1 Rollout strategy
For our experiments, we perform one rollout for each possible action in the environment. The ï¬rst action in the ith rollout is the ith action of the action set A, and subsequent actions for all rollouts are produced by a shared rollout policy ËÏ. We investigated several types of rollout policies (random, pre- trained) and found that a particularly efï¬cient strategy was to distill the imagination-augmented policy into a model-free policy. This distillation strategy consists in creating a small model-free network ËÏ(ot), and adding to the total loss a cross entropy auxiliary loss between the imagination-augmented policy Ï(ot) as computed on the current observation, and the policy ËÏ(ot) as computed on the same observation. By imitating the imagination-augmented policy, the internal rollouts will be similar to the trajectories of the agent in the real environment; this also ensures that the rollout corresponds to trajectories with high reward. At the same time, the imperfect approximation results in a rollout policy with higher entropy, potentially striking a balance between exploration and exploitation.
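A minimal sketch of the auxiliary distillation term described above, the cross entropy between the imagination-augmented policy (treated as a fixed target) and the small rollout network's logits, is given below; the names and numbers are illustrative.

```python
import numpy as np

def distillation_loss(target_probs, rollout_logits):
    """Cross entropy H(pi, pi_hat) for one observation."""
    logits = np.asarray(rollout_logits, dtype=float)
    log_pi_hat = logits - np.log(np.sum(np.exp(logits)))   # log-softmax
    return float(-np.sum(np.asarray(target_probs) * log_pi_hat))

# Hypothetical 4-action example.
print(distillation_loss([0.7, 0.1, 0.1, 0.1], [2.0, 0.5, 0.2, 0.1]))
```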
# I2A components and environment models
In our experiments, the encoder is an LSTM with convolutional encoder which sequentially processes a trajectory T . The features Ëft are fed to the LSTM in reverse order, from Ëft+Ï to Ëft+1, to mimic Bellman type backup operations.2 The aggregator simply concatenates the summaries. For the model-free path of the I2A, we chose a standard network of convolutional layers plus one fully connected one [e.g. 3]. We also use this architecture on its own as a baseline agent.
Our environment model (Fig. 2) deï¬nes a distribution which is optimized by using a negative log- likelihood loss lmodel. We can either pretrain the environment model before embedding it (with frozen weights) within the I2A architecture, or jointly train it with the agent by adding lmodel to the total loss as an auxiliary loss. In practice we found that pre-training the environment model led to faster runtime of the I2A architecture, so we adopted this strategy.
2The choice of forward, backward or bi-directional processing seems to have relatively little impact on the performance of the I2A, however, and should not preclude investigating different strategies.
3
For all environments, training data for our environment model was generated from trajectories of a partially trained standard model-free agent (deï¬ned below). We use partially pre-trained agents because random agents see few rewards in some of our domains. However, this means we have to account for the budget (in terms of real environment steps) required to pretrain the data-generating agent, as well as to then generate the data. In the experiments, we address this concern in two ways: by explicitly accounting for the number of steps used in pretraining (for Sokoban), or by demonstrating how the same pretrained model can be reused for many tasks (for MiniPacman).
# 3.3 Agent training and baseline agents
Using a ï¬xed pretrained environment model, we trained the remaining I2A parameters with asyn- chronous advantage actor-critic (A3C) [3]. We added an entropy regularizer on the policy Ï to encourage exploration and the auxiliary loss to distill Ï into the rollout policy ËÏ as explained above. We distributed asynchronous training over 32 to 64 workers; we used the RMSprop optimizer [23]. We report results after an initial round of hyperparameter exploration (details in Appendix A). Learning curves are averaged over the top three agents unless noted otherwise.
A separate hyperparameter search was carried out for each agent architecture in order to ensure optimal performance. In addition to the I2A, we ran the following baseline agents (see Appendix B for architecture details for all agents).
Standard model-free agent. For our main baseline agent, we chose a model-free standard architec- ture similar to [3], consisting of convolutional layers (2 for MiniPacman, and 3 for Sokoban) followed by a fully connected layer. The ï¬nal layer, again fully connected, outputs the policy logits and the value function. For Sokoban, we also tested a âlargeâ standard architecture, where we double the number of all feature maps (for convolutional layers) and hidden units (for fully connected layers). The resulting architecture has a slightly larger number of parameters than I2A.
Copy-model agent. Aside from having an internal environment model, the I2A architecture is very different from the one of the standard agent. To verify that the information contained in the environment model rollouts contributed to an increase in performance, we implemented a baseline where we replaced the environment model in the I2A with a "copy" model that simply returns the input observation. Lacking a model, this agent does not use imagination, but uses the same architecture, has the same number of learnable parameters (the environment model is kept constant in the I2A), and benefits from the same amount of computation (which in both cases increases linearly with the length of the rollouts). This model effectively corresponds to an architecture where policy logits and value are the final output of an LSTM network with skip connections.
# 4 Sokoban experiments
We now demonstrate the performance of I2A over baselines in a puzzle environment, Sokoban. We address the issue of dealing with imperfect models, highlighting the strengths of our approach over planning baselines. We also analyze the importance of the various components of the I2A.
Sokoban is a classic planning problem, where the agent has to push a number of boxes onto given target locations. Because boxes can only be pushed (as opposed to pulled), many moves are irreversible, and mistakes can render the puzzle unsolvable. A human player is thus forced to plan moves ahead of time. We expect that artiï¬cial agents will similarly beneï¬t from internal simulation. Our implementation of Sokoban procedurally generates a new level each episode (see Appendix D.4 for details, Fig. 3 for examples). This means an agent cannot memorize speciï¬c puzzles.3 Together with the planning aspect, this makes for a very challenging environment for our model-free baseline agents, which solve less than 60% of the levels after a billion steps of training (details below). We provide videos of agents playing our version of Sokoban online [24].
While the underlying game logic operates in a 10 × 10 grid world, our agents were trained directly on RGB sprite graphics as shown in Fig. 4 (image size 80 × 80 pixels). There are no aspects of I2As that make them specific to grid world games.
3Out of 40 million levels generated, less than 0.7% were repeated. Training an agent on 1 billion frames requires less than 20 million episodes.
4
Figure 3: Random examples of procedurally generated Sokoban levels. The player (green sprite) needs to push all 4 boxes onto the red target squares to solve a level, while avoiding irreversible mistakes. Our agents receive sprite graphics (shown above) as observations.
# I2A performance vs. baselines on Sokoban
Figure 4 (left) shows the learning curves of the I2A architecture and various baselines explained throughout this section. First, we compare I2A (with rollouts of length 5) against the standard model-free agent. I2A clearly outperforms the latter, reaching a performance of 85% of levels solved vs. a maximum of under 60% for the baseline. The baseline with increased capacity reaches 70% - still signiï¬cantly below I2A. Similarly, for Sokoban, I2A far outperforms the copy-model.
[Figure 4 plots: left, Sokoban performance (fraction of levels solved vs. environment steps, up to 1e9) for I2A and baselines; right, unroll depth analysis (fraction of levels solved for different unroll depths)]
Figure 4: Sokoban learning curves. Left: training curves of I2A and baselines. Note that I2A use additional environment observations to pretrain the environment model, see main text for discussion. Right: I2A training curves for various values of imagination depth.
Since using imagined rollouts is helpful for this task, we investigate how the length of individual rollouts affects performance. The latter was one of the hyperparameters we searched over. A breakdown by number of unrolling/imagination steps in Fig. 4 (right) shows that using longer rollouts, while not increasing the number of parameters, increases performance: 3 unrolling steps improves speed of learning and top performance signiï¬cantly over 1 unrolling step, 5 outperforms 3, and as a test for signiï¬cantly longer rollouts, 15 outperforms 5, reaching above 90% of levels solved. However, in general we found diminishing returns with using I2A with longer rollouts. It is noteworthy that 5 steps is relatively small compared to the number of steps taken to solve a level, for which our best agents need about 50 steps on average. This implies that even such short rollouts can be highly informative. For example, they allow the agent to learn about moves it cannot recover from (such as pushing boxes against walls, in certain contexts). Because I2A with rollouts of length 15 are signiï¬cantly slower, in the rest of this section, we choose rollouts of length 5 to be our canonical I2A architecture.
It terms of data efï¬ciency, it should be noted that the environment model in the I2A was pretrained (see Section 3.2). We conservatively measured the total number of frames needed for pretraining to be lower than 1e8. Thus, even taking pretraining into account, I2A outperforms the baselines after seeing about 3e8 frames in total (compare again Fig. 4 (left)). Of course, data efï¬ciency is even better if the environment model can be reused to solve multiple tasks in the same environment (Section 5).
# 4.2 Learning with imperfect models
One of the key strengths of I2As is being able to handle learned and thus potentially imperfect environment models. However, for the Sokoban task, our learned environment models actually perform quite well when rolling out imagined trajectories. To demonstrate that I2As can deal with less reliable predictions, we ran another experiment where the I2A used an environment model that had shown much worse performance (due to a smaller number of parameters), with strong artifacts accumulating over iterated rollout predictions (Fig. 5, left). As Fig. 5 (right) shows, even with such a
5
clearly ï¬awed environment model, I2A performs similarly well. This implies that I2As can learn to ignore the latter parts of the rollout as errors accumulate, but still use initial predictions when errors are less severe. Finally, note that in our experiments, surprisingly, the I2A agent with poor model ended outperforming the I2A agent with good model. We posit this was due to random initialization, though we cannot exclude the noisy model providing some form of regularization â more work will be required to investigate this effect.
[Figure 5 right plot: Sokoban good vs. bad models; fraction of levels solved vs. environment steps for I2A (good/poor model) and MC search (good/poor model)]
Figure 5: Experiments with a noisy environment model. Left: each row shows an example 5-step rollout after conditioning on an environment observation. Errors accumulate and lead to various artefacts, including missing or duplicate sprites. Right: comparison of Monte-Carlo (MC) search and I2A when using either the accurate or the noisy model for rollouts.
Learning a rollout encoder is what enables I2As to deal with imperfect model predictions. We can further demonstrate this point by comparing them to a setup without a rollout encoder: as in the classic Monte-Carlo search algorithm of Tesauro and Galperin [25], we now explicitly estimate the value of each action from rollouts, rather than learning an arbitrary encoding of the rollouts, as in I2A. We then select actions according to those values. Specifically, we learn a value function V from states, and, using a rollout policy ËÏ, sample a trajectory rollout for each initial action, and compute the corresponding estimated Monte Carlo return Σ_{t=0..τ} γ^t r_t^a + V(x_τ^a), where ((x_t^a, r_t^a))_{t=0..τ} comes from a trajectory initialized with action a. Action a is chosen with probability proportional to exp((Σ_{t=0..τ} γ^t r_t^a + V(x_τ^a)) / T), where the temperature T is learned. This can be thought of as a form of I2A with a fixed summarizer (which computes returns), no model-free path, and a very simple policy head. In this architecture, only V, ËÏ and the temperature are learned.⁴
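A small sketch of the action distribution used by this encoder-free baseline, a softmax of the per-action Monte Carlo returns scaled by the learned temperature, is shown below; the names and numbers are illustrative.

```python
import numpy as np

def mc_action_probs(estimated_returns, temperature):
    """Probability of each initial action proportional to exp(return / temperature)."""
    z = np.asarray(estimated_returns, dtype=float) / temperature
    z -= z.max()                      # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

# Hypothetical estimated returns for five initial actions.
print(mc_action_probs([1.2, 0.4, -0.3, 0.9, 0.1], temperature=0.5))
```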
We ran this rollout encoder-free agent on Sokoban with both the accurate and the noisy environment model. We chose the length of the rollout to be optimal for each environment model (from the same range as for I2A, i.e. from 1 to 5). As can be seen in Fig. 5 (right),5 when using the high accuracy environment model, the performance of the encoder-free agent is similar to that of the baseline standard agent. However, unlike I2A, its performance degrades catastrophically when using the poor model, showcasing the susceptibility to model misspeciï¬cation.
# 4.3 Further insights into the workings of the I2A architecture
So far, we have studied the role of the rollout encoder. To show the importance of various other components of the I2A, we performed additional control experiments. Results are plotted in Fig. 4 (left) for comparison. First, I2A with the copy model (Section 3.3) performs far worse, demonstrating that the environment model is indeed crucial. Second, we trained an I2A where the environment model was predicting no rewards, only observations. This also performed worse. However, after much longer training (3e9 steps), these agents did recover performance close to that of the original I2A (see Appendix D.2), which was never the case for the baseline agent even with that many steps. Hence, reward prediction is helpful but not absolutely necessary in this task, and imagined observations alone are informative enough to obtain high performance on Sokoban. Note this is in contrast to many classical planning and model-based reinforcement learning methods, which often rely on reward prediction.
⁴The rollout policy is still learned by distillation from the output policy.
⁵Note: the MC curves in Fig. 5 only used a single agent rather than averages.
6
# model
# model
# model
# model
Imagination efï¬ciency and comparison with perfect-model planning methods
Method            Model simulation steps per level
I2A@87            ∼ 1400
I2A MC search@95  ∼ 4000
MCTS@87           ∼ 25000
MCTS@95           ∼ 100000
Random search     ∼ millions

Boxes         1     2    3    4    5    6    7
I2A (%)       99.5  97   92   87   77   66   53
Standard (%)  97    87   72   60   47   32   23
Table 1: Imagination efï¬ciency of various architectures.
Table 2: Generalization of I2A to environments with different number of boxes.
In previous sections, we illustrated that I2As can be used to efï¬ciently solve planning problems and can be robust in the face of model misspeciï¬cation. Here, we ask a different question â if we do assume a nearly perfect model, how does I2A compare to competitive planning methods? Beyond raw performance we focus particularly on the efï¬ciency of planning, i.e. the number of imagination steps required to solve a ï¬xed ratio of levels. We compare our regular I2A agent to a variant of Monte Carlo Tree Search (MCTS), which is a modern guided tree search algorithm [12, 26]. For our MCTS implementation, we aimed to have a strong baseline by using recent ideas: we include transposition tables [27], and evaluate the returns of leaf nodes by using a value network (in this case, a deep residual value network trained with the same total amount of data as I2A; see appendix D.3 for further details).
Running MCTS on Sokoban, we ï¬nd that it can achieve high performance, but at a cost of a much higher number of necessary environment model simulation steps: MCTS reaches the I2A performance of 87% of levels solved when using 25k model simulation steps on average to solve a level, compared to 1.4k environment model calls for I2A. Using even more simulation steps, MCTS performance increases further, e.g. reaching 95% with 100k steps.
If we assume access to a high-accuracy environment model (including the reward prediction), we can also push I2A performance further, by performing basic Monte-Carlo search with a trained I2A for the rollout policy: we let the agent play whole episodes in simulation (where I2A itself uses the environment model for short-term rollouts, hence corresponding to using a model-within-a-model), and execute a successful action sequence if found, up to a maximum number of retries; this is reminiscent of nested rollouts [28]. With a ï¬xed maximum of 10 retries, we obtain a score of 95% (up from 87% for the I2A itself). The total average number of model simulation steps needed to solve a level, including running the model in the outer loop, is now 4k, again much lower than the corresponding MCTS run with 100k steps. Note again, this approach requires a nearly perfect model; we donât expect I2A with MC search to perform well with approximate models. See Table 1 for a summary of the imagination efï¬ciency for the different methods.
# 4.5 Generalization experiments
Lastly, we probe the generalization capabilities of I2As, beyond handling random level layouts in Sokoban. Our agents were trained on levels with 4 boxes. Table 2 shows the performance of I2A when such an agent was tested on levels with different numbers of boxes, and that of the standard model-free agent for comparison. We found that I2As generalizes well; at 7 boxes, the I2A agent is still able to solve more than half of the levels, nearly as many as the standard agent on 4 boxes.
# 5 Learning one model for many tasks in MiniPacman
In our ï¬nal set of experiments, we demonstrate how a single model, which provides the I2A with a general understanding of the dynamics governing an environment, can be used to solve a collection of different tasks. We designed a simple, light-weight domain called MiniPacman, which allows us to easily deï¬ne multiple tasks in an environment with shared state transitions and which enables us to do rapid experimentation.
In MiniPacman (Fig. 6, left), the player explores a maze that contains food while being chased by ghosts. The maze also contains power pills; when eaten, for a ï¬xed number of steps, the player moves faster, and the ghosts run away and can be eaten. These dynamics are common to all tasks. Each task
7
is defined by a vector w_rew ∈ R^5, associating a reward to each of the following five events: moving, eating food, eating a power pill, eating a ghost, and being eaten by a ghost. We consider five different reward vectors inducing five different tasks. Empirically we found that the reward schemes were sufficiently different to lead to very different high-performing policies⁶ (for more details on the game and tasks, see appendix C).
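As an illustration, the per-step reward for a given task is just the dot product between the observed event indicators and that task's reward vector; the reward values below are made up for the example and are not the paper's reward schemes.

```python
import numpy as np

# Event order: move, eat food, eat power pill, eat ghost, eaten by ghost.
def task_reward(event_indicators, w_rew):
    """Reward for one step of a MiniPacman task defined by w_rew."""
    return float(np.dot(event_indicators, w_rew))

w_rew_example = np.array([0.0, 1.0, 2.0, 5.0, -10.0])   # hypothetical task vector
events = np.array([1, 1, 0, 0, 0])                      # player moved and ate food
print(task_reward(events, w_rew_example))
```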
To illustrate the beneï¬ts of model-based methods in this multi-task setting, we train a single environ- ment model to predict both observations (frames) and events (as deï¬ned above, e.g. "eating a ghost"). Note that the environment model is effectively shared across all tasks, so that the marginal cost of learning the model is nil. During training and testing, the I2As have access to the frame and reward predictions generated by the model; the latter was computed from model event predictions and the task reward vector wrew. As such, the reward vector wrew can be interpreted as an âinstructionâ about which task to solve in the same environment [cf. the Frostbite challenge of 11]. For a fair comparison, we also provide all baseline agents with the event variable as input.7
We trained baseline agents and I2As separately on each task. Results in Fig. 6 (right) indicate the beneï¬t of the I2A architecture, outperforming the standard agent in all tasks, and the copy-model baseline in all but one task. Moreover, we found that the performance gap between I2As and baselines is particularly high for tasks 4 & 5, where rewards are particularly sparse, and where the anticipation of ghost dynamics is especially important. We posit that the I2A agent can leverage its environment and reward model to explore the environment much more effectively.
Task Name | Standard model-free | Copy-model | I2A
Regular   | 192                 | 919        | 859
Avoid     | -16                 | 3          | 23
Hunt      | -35                 | 33         | 334
Ambush    | -40                 | -30        | 294
Rush      | 1.3                 | 178        | 214
Figure 6: Minipacman environment. Left: Two frames from a minipacman game. Frames are 15 × 19 RGB images. The player is green, dangerous ghosts red, food dark blue, empty corridors black, power pills in cyan. After eating a power pill (right frame), the player can eat the 4 weak ghosts (yellow). Right: Performance after 300 million environment steps for different agents and all tasks. Note I2A clearly outperforms the other two agents on all tasks with sparse rewards.
# 6 Related work
Some recent work has focused on applying deep learning to model-based RL. A common approach is to learn a neural model of the environment, including from raw observations, and use it in classical planning algorithms such as trajectory optimization [29â31]. These studies however do not address a possible mismatch between the learned model and the true environment.
Model imperfection has attracted particular attention in robotics, when transferring policies from simulation to real environments [32â34]. There, the environment model is given, not learned, and used for pretraining, not planning at test time. Liu et al. [35] also learn to extract information from trajectories, but in the context of imitation learning. Bansal et al. [36] take a Bayesian approach to model imperfection, by selecting environment models on the basis of their actual control performance.
The problem of making use of imperfect models was also approached in simplified environments by Talvitie [18, 19], using techniques similar to scheduled sampling [37]; however, these techniques break down in stochastic environments; they mostly address the compounding error issue but do not address fundamental model imperfections.
A principled way to deal with imperfect models is to capture model uncertainty, e.g. by using Gaussian Process models of the environment, see Deisenroth and Rasmussen [15]. The disadvantage of this method is its high computational cost; it also assumes that the model uncertainty is well calibrated and lacks a mechanism that can learn to compensate for possible miscalibration of uncertainty. Cutler et al. [38] consider RL with a hierarchy of models of increasing (known) ï¬delity. A recent multi-task
6For example, in the âavoidâ game, any event is negatively rewarded, and the optimal strategy is for the agent to clear a small space from food and use it to continuously escape the ghosts.
7It is not necessary to provide the reward vector wrew to the baseline agents, as it is equivalent to a constant bias.
GP extension of this study can further help to mitigate the impact of model misspeciï¬cation, but again suffers from high computational burden in large domains, see Marco et al. [39].
A number of approaches use models to create additional synthetic training data, starting from Dyna [40], to more recent work e.g. Gu et al. [41] and Venkatraman et al. [42]; these models increase data efï¬ciency, but are not used by the agent at test time.
Tamar et al. [43], Silver et al. [44], and Oh et al. [45] all present neural networks whose architectures mimic classical iterative planning algorithms, and which are trained by reinforcement learning or to predict user-deï¬ned, high-level features; in these, there is no explicit environment model. In our case, we use explicit environment models that are trained to predict low-level observations, which allows us to exploit additional unsupervised learning signals for training. This procedure is expected to be beneï¬cial in environments with sparse rewards, where unsupervised modelling losses can complement return maximization as learning target as recently explored in Jaderberg et al. [46] and Mirowski et al. [47].
Internal models can also be used to improve the credit assignment problem in reinforcement learning: Henaff et al. [48] learn models of discrete actions environments, and exploit the effective differentia- bility of the model with respect to the actions by applying continuous control planning algorithms to derive a plan; Schmidhuber [49] uses an environment model to turn environment cost minimization into a network activity minimization.
Kansky et al. [50] learn symbolic networks models of the environment and use them for planning, but are given the relevant abstractions from a hand-crafted vision system.
Close to our work is a study by Hamrick et al. [51]: they present a neural architecture that queries learned expert models, but focus on meta-control for continuous contextual bandit problems. Pascanu et al. [52] extend this work by focusing on explicit planning in sequential environments, and learn how to construct a plan iteratively.
The general idea of learning to leverage an internal model in arbitrary ways was also discussed by Schmidhuber [53].
# 7 Discussion
We presented I2A, an approach combining model-free and model-based ideas to implement imagination-augmented RL: learning to interpret environment models to augment model-free deci- sions. I2A outperforms model-free baselines on MiniPacman and on the challenging, combinatorial domain of Sokoban. We demonstrated that, unlike classical model-based RL and planning methods, I2A is able to successfully use imperfect models (including models without reward predictions), hence signiï¬cantly broadening the applicability of model-based RL concepts and ideas.
Like all model-based RL methods, I2As trade off environment interactions for computation by pondering before acting. This is essential in irreversible domains, where actions can have catastrophic outcomes, such as in Sokoban. In our experiments, the I2A was always less than an order of magnitude slower per interaction than the model-free baselines. The amount of computation can be varied (it grows linearly with the number and depth of rollouts); we therefore expect I2As to greatly benefit from advances on dynamic compute resource allocation (e.g. Graves [54]). Another avenue for future research is on abstract environment models: learning predictive models at the "right" level of complexity and that can be evaluated efficiently at test time will help to scale I2As to richer domains.
Remarkably, on Sokoban I2As compare favourably to a strong planning baseline (MCTS) with a perfect environment model: at comparable performance, I2As require far fewer function calls to the model than MCTS, because their model rollouts are guided towards relevant parts of the state space by a learned rollout policy. This points to further potential improvement by training rollout policies that "learn to query" imperfect models in a task-relevant way.
# Acknowledgements
We thank Victor Valdes for designing and implementing the Sokoban environment, Joseph Modayil for reviewing an early version of this paper, and Ali Eslami, Hado Van Hasselt, Neil Rabinowitz, Tom Schaul, Yori Zwols for various help and feedback.
# References
[1] Shane Legg and Marcus Hutter. Universal intelligence: A deï¬nition of machine intelligence. Minds and Machines, 17(4):391â444, 2007.
[2] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013.
[3] Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In International Conference on Machine Learning, pages 1928â1937, 2016.
[4] John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. Trust region policy optimization. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pages 1889â1897, 2015.
[5] Demis Hassabis, Dharshan Kumaran, and Eleanor A Maguire. Using imagination to understand the neural basis of episodic memory. Journal of Neuroscience, 27(52):14365â14374, 2007.
[6] Daniel L Schacter, Donna Rose Addis, Demis Hassabis, Victoria C Martin, R Nathan Spreng, and Karl K Szpunar. The future of memory: remembering, imagining, and the brain. Neuron, 76(4):677â694, 2012.
[7] Demis Hassabis, Dharshan Kumaran, Seralynne D Vann, and Eleanor A Maguire. Patients with hippocam- pal amnesia cannot imagine new experiences. Proceedings of the National Academy of Sciences, 104(5): 1726â1731, 2007.
[8] Edward C Tolman. Cognitive maps in rats and men. Psychological Review, 55(4):189, 1948.
[9] Anthony Dickinson and Bernard Balleine. The Role of Learning in the Operation of Motivational Systems. John Wiley & Sons, Inc., 2002.
[10] Brad E Pfeiffer and David J Foster. Hippocampal place-cell sequences depict future paths to remembered goals. Nature, 497(7447):74â79, 2013.
[11] Brenden M Lake, Tomer D Ullman, Joshua B Tenenbaum, and Samuel J Gershman. Building machines that learn and think like people. arXiv preprint arXiv:1604.00289, 2016.
[12] David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of go with deep neural networks and tree search. Nature, 529(7587):484â489, 2016.
[13] Jing Peng and Ronald J Williams. Efï¬cient learning and planning within the dyna framework. Adaptive Behavior, 1(4):437â454, 1993.
[14] Pieter Abbeel and Andrew Y Ng. Exploration and apprenticeship learning in reinforcement learning. In Proceedings of the 22nd international conference on Machine learning, pages 1â8. ACM, 2005.
[15] Marc Deisenroth and Carl E Rasmussen. Pilco: A model-based and data-efï¬cient approach to policy search. In Proceedings of the 28th International Conference on machine learning (ICML-11), pages 465â472, 2011.
[16] Sergey Levine and Pieter Abbeel. Learning neural network policies with guided policy search under unknown dynamics. In Advances in Neural Information Processing Systems, pages 1071â1079, 2014.
[17] Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. ICLR, 2016.
[18] Erik Talvitie. Model regularization for stable sample rollouts. In UAI, pages 780â789, 2014.
[19] Erik Talvitie. Agnostic system identiï¬cation for monte carlo planning. In AAAI, pages 2986â2992, 2015.
[20] Junhyuk Oh, Xiaoxiao Guo, Honglak Lee, Richard L Lewis, and Satinder Singh. Action-conditional video prediction using deep networks in atari games. In Advances in Neural Information Processing Systems, pages 2863â2871, 2015.
[21] Silvia Chiappa, Sébastien Racaniere, Daan Wierstra, and Shakir Mohamed. Recurrent environment simulators. In 5th International Conference on Learning Representations, 2017.
[22] Felix Leibfried, Nate Kushman, and Katja Hofmann. A deep learning approach for joint video frame and reward prediction in atari games. CoRR, abs/1611.07078, 2016. URL http://arxiv.org/abs/1611.07078.
[23] Tijmen Tieleman and Geoffrey Hinton. Lecture 6.5-RMSprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural networks for machine learning, 4(2), 2012.
[24] https://drive.google.com/open?id=0B4tKsKnCCZtQY2tTOThucHVxUTQ, 2017.
[25] Gerald Tesauro and Gregory R Galperin. On-line policy improvement using monte-carlo search. In NIPS, volume 96, pages 1068â1074, 1996.
[26] Rémi Coulom. Efï¬cient selectivity and backup operators in monte-carlo tree search. In International Conference on Computers and Games, pages 72â83. Springer, 2006.
[27] Benjamin E Childs, James H Brodeur, and Levente Kocsis. Transpositions and move groups in monte carlo tree search. In Computational Intelligence and Games, 2008. CIGâ08. IEEE Symposium On, pages 389â395. IEEE, 2008.
[28] Christopher D Rosin. Nested rollout policy adaptation for monte carlo tree search. In Ijcai, pages 649â654, 2011.
[29] Manuel Watter, Jost Springenberg, Joschka Boedecker, and Martin Riedmiller. Embed to control: A locally linear latent dynamics model for control from raw images. In Advances in Neural Information Processing Systems, pages 2746â2754, 2015.
[30] Ian Lenz, Ross A Knepper, and Ashutosh Saxena. DeepMPC: Learning deep latent features for model predictive control. In Robotics: Science and Systems, 2015.
[31] Chelsea Finn and Sergey Levine. Deep visual foresight for planning robot motion. In IEEE International Conference on Robotics and Automation (ICRA), 2017.
[32] Matthew E Taylor and Peter Stone. Transfer learning for reinforcement learning domains: A survey. Journal of Machine Learning Research, 10(Jul):1633â1685, 2009.
[33] Eric Tzeng, Coline Devin, Judy Hoffman, Chelsea Finn, Xingchao Peng, Sergey Levine, Kate Saenko, and Trevor Darrell. Towards adapting deep visuomotor representations from simulated to real environments. arXiv preprint arXiv:1511.07111, 2015.
[34] Paul Christiano, Zain Shah, Igor Mordatch, Jonas Schneider, Trevor Blackwell, Joshua Tobin, Pieter Abbeel, and Wojciech Zaremba. Transfer from simulation to real world through learning deep inverse dynamics model. arXiv preprint arXiv:1610.03518, 2016.
[35] YuXuan Liu, Abhishek Gupta, Pieter Abbeel, and Sergey Levine. Imitation from observation: Learning to imitate behaviors from raw video via context translation. arXiv preprint arXiv:1707.03374, 2017.
[36] Somil Bansal, Roberto Calandra, Ted Xiao, Sergey Levine, and Claire J Tomlin. Goal-driven dynamics learning via bayesian optimization. arXiv preprint arXiv:1703.09260, 2017.
[37] Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. Scheduled sampling for sequence prediction with recurrent neural networks. In Advances in Neural Information Processing Systems, pages 1171â1179, 2015.
[38] Mark Cutler, Thomas J Walsh, and Jonathan P How. Real-world reinforcement learning via multiï¬delity simulators. IEEE Transactions on Robotics, 31(3):655â671, 2015.
[39] Alonso Marco, Felix Berkenkamp, Philipp Hennig, Angela P Schoellig, Andreas Krause, Stefan Schaal, and Sebastian Trimpe. Virtual vs. real: Trading off simulations and physical experiments in reinforcement learning with bayesian optimization. arXiv preprint arXiv:1703.01250, 2017.
[40] Richard S Sutton. Integrated architectures for learning, planning, and reacting based on approximating dynamic programming. In Proceedings of the seventh international conference on machine learning, pages 216â224, 1990.
[41] Shixiang Gu, Timothy Lillicrap, Ilya Sutskever, and Sergey Levine. Continuous deep q-learning with model-based acceleration. In International Conference on Machine Learning, pages 2829â2838, 2016.
[42] Arun Venkatraman, Roberto Capobianco, Lerrel Pinto, Martial Hebert, Daniele Nardi, and J Andrew Bagnell. Improved learning of dynamics models for control. In International Symposium on Experimental Robotics, pages 703â713. Springer, 2016.
[43] Aviv Tamar, Yi Wu, Garrett Thomas, Sergey Levine, and Pieter Abbeel. Value iteration networks. In Advances in Neural Information Processing Systems, pages 2154â2162, 2016.
[44] David Silver, Hado van Hasselt, Matteo Hessel, Tom Schaul, Arthur Guez, Tim Harley, Gabriel Dulac- Arnold, David Reichert, Neil Rabinowitz, Andre Barreto, et al. The predictron: End-to-end learning and planning. arXiv preprint arXiv:1612.08810, 2016.
[45] Junhyuk Oh, Satinder Singh, and Honglak Lee. Value prediction network. arXiv preprint arXiv:1707.03497, 2017.
[46] Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z Leibo, David Silver, and Koray Kavukcuoglu. Reinforcement learning with unsupervised auxiliary tasks. arXiv preprint arXiv:1611.05397, 2016.
[47] Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andy Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, et al. Learning to navigate in complex environments. arXiv preprint arXiv:1611.03673, 2016.
[48] Mikael Henaff, William F Whitney, and Yann LeCun. Model-based planning in discrete action spaces. arXiv preprint arXiv:1705.07177, 2017.
[49] Jürgen Schmidhuber. An on-line algorithm for dynamic reinforcement learning and planning in reactive environments. In Neural Networks, 1990., 1990 IJCNN International Joint Conference on, pages 253â258. IEEE, 1990.
[50] Ken Kansky, Tom Silver, David A Mély, Mohamed Eldawy, Miguel Lázaro-Gredilla, Xinghua Lou, Nimrod Dorfman, Szymon Sidor, Scott Phoenix, and Dileep George. Schema networks: Zero-shot transfer with a generative causal model of intuitive physics. Accepted at International Conference for Machine Learning, 2017, 2017.
[51] Jessica B. Hamrick, Andy J. Ballard, Razvan Pascanu, Oriol Vinyals, Nicolas Heess, and Peter W. Battaglia. Metacontrol for adaptive imagination-based optimization. In Proceedings of the 5th International Conference on Learning Representations (ICLR 2017), 2017.
[52] Razvan Pascanu, Yujia Li, Oriol Vinyals, Nicolas Heess, David Reichert, Theophane Weber, Sebastien Racaniere, Lars Buesing, Daan Wierstra, and Peter Battaglia. Learning model-based planning from scratch. arXiv preprint, 2017.
[53] Jürgen Schmidhuber. On learning to think: Algorithmic information theory for novel combinations of reinforcement learning controllers and recurrent neural world models. arXiv preprint arXiv:1511.09249, 2015.
[54] Alex Graves. Adaptive computation time for recurrent neural networks. arXiv preprint arXiv:1603.08983, 2016.
[55] Leemon C Baird III. Advantage updating. Technical report, Wright Lab. Technical Report WL-TR-93-1146, 1993.
[56] John Schulman, Nicolas Heess, Theophane Weber, and Pieter Abbeel. Gradient estimation using stochastic computation graphs. In Advances in Neural Information Processing Systems, pages 3528â3536, 2015.
[57] Levente Kocsis and Csaba Szepesvári. Bandit based monte-carlo planning. In European conference on machine learning, pages 282â293. Springer, 2006.
[58] Sylvain Gelly and David Silver. Combining online and offline knowledge in UCT. In Proceedings of the 24th international conference on Machine learning, pages 273–280. ACM, 2007.
[59] Joshua Taylor and Ian Parberry. Procedural generation of sokoban levels. In Proceedings of the International North American Conference on Intelligent Games and Simulation, pages 5â12, 2011.
[60] Yoshio Murase, Hitoshi Matsubara, and Yuzuru Hiraga. Automatic making of sokoban problems. PRICAI'96: Topics in Artificial Intelligence, pages 592–600, 1996.
# Supplementary material for: Imagination-Augmented Agents for Deep Reinforcement Learning
# A Training and rollout policy distillation details
Each agent used in the paper deï¬nes a stochastic policy, i.e. a categorical distribution Ï(at|ot; θ) over discrete actions a. The logits of Ï(at|ot; θ) are computed by a neural network with parameters θ, taking observation ot at timestep t as input. During training, to increase the probability of rewarding actions being taken, A3C applies an update âθ to the parameters θ using policy gradient g(θ):
g(θ) = ∇θ log π(at|ot; θ) A(ot, at),

where A(ot, at) is an estimate of the advantage function [55]. In practice, we learn a value function V (ot; θv) and use it to compute the advantage as the difference of the bootstrapped k-step return and the current value estimate:
A(ot, at) = Σ_{t ≤ t' < t+k} γ^{t'−t} r_{t'} + γ^k V(o_{t+k}; θv) − V(ot; θv).
The value function V (ot; θv) is also computed as the output of a neural network with parameters θv. The input to the value function network was chosen to be the second-to-last layer of the policy network that computes π. The parameters θv are updated with ∆θv towards the bootstrapped k-step return:
g(θv) = âA(ot, at)âθv V (ot; θv)
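A minimal numerical sketch of the advantage computation defined above; the discount factor value and the exact bookkeeping are assumptions of ours.

```python
import numpy as np

def k_step_advantages(rewards, values, bootstrap_value, gamma=0.99):
    """Advantage estimates A(o_t, a_t) for a length-k rollout, as in the equations
    above: discounted reward sum plus bootstrapped value of the final observation,
    minus the current value estimates. gamma is an assumed discount factor."""
    k = len(rewards)
    returns = np.empty(k)
    running = bootstrap_value                  # V(o_{t+k}; theta_v)
    for t in reversed(range(k)):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns - np.asarray(values)        # one advantage per step of the rollout
```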
In our numerical implementation, we express the above updates as gradients of a corresponding surrogate loss [56]. To this surrogate loss, we add an entropy regularizer of λent Σa π(a|ot; θ) log π(a|ot; θ) to encourage exploration, with λent = 10−2 throughout all experiments. Where applicable, we add a loss for policy distillation consisting of the cross-entropy between π and π̂:
ldist(π, π̂)(ot) = −λdist Σa π̄(a|ot) log π̂(a|ot),

with scaling parameter λdist. Here π̄ denotes that we do not backpropagate gradients of ldist with respect to the parameters of the rollout policy through the behavioral policy π. Finally, even though we pre-trained our environment models, in principle we could also learn them jointly with the I2A agent by adding an appropriate log-likelihood term of observations under the model. We will investigate this in future research. We optimize hyperparameters (learning rate and momentum of the RMSprop optimizer, gradient clipping parameter, distillation loss scaling λdist where applicable) separately for each agent (I2A and baselines).
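Under these definitions, the distillation term can be sketched as follows; it is written with the conventional minus sign of the cross-entropy and with gradients stopped on the behavioural policy, as indicated by π̄ above. The function and its arguments are illustrative, not the exact implementation.

```python
import torch
import torch.nn.functional as F

def distillation_loss(behavior_logits, rollout_logits, lambda_dist):
    """Cross-entropy between the behavioural policy pi (gradients stopped, the
    pi-bar above) and the rollout policy pi-hat, scaled by lambda_dist
    (a hyperparameter optimized per agent)."""
    with torch.no_grad():
        pi = F.softmax(behavior_logits, dim=-1)        # targets: no gradient through pi
    log_pi_hat = F.log_softmax(rollout_logits, dim=-1)
    return -lambda_dist * (pi * log_pi_hat).sum(dim=-1).mean()
```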
# B Agent and model architecture details
We used rectiï¬ed linear units (ReLUs) between all hidden layers of all our agents. For the environment models, we used leaky ReLUs with a slope of 0.01.
# B.1 Agents
# Standard model-free baseline agent
The standard model-free baseline agent, taken from [3], is a multi-layer convolutional neural network (CNN), taking the current observation ot as input, followed by a fully connected (FC) hidden layer.
This FC layer feeds into two heads: into a FC layer with one output per action computing the policy logits log Ï(at|ot, θ); and into another FC layer with a single output that computes the value function V (ot; θv). The sizes of the layers were chosen as follows:
⢠for MiniPacman: the CNN has two layers, both with 3x3 kernels, 16 output channels and strides 1 and 2; the following FC layer has 256 units
⢠for Sokoban: the CNN has three layers with kernel sizes 8x8, 4x4, 3x3, strides of 4, 2, 1 and number of output channels 32, 64, 64; the following FC has 512 units
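A minimal PyTorch sketch of the Sokoban variant just described; the 80x80 RGB input resolution (from the environment-model section below) and the five discrete actions are assumptions on our part.

```python
import torch
import torch.nn as nn

class SokobanBaseline(nn.Module):
    """Model-free baseline with the layer sizes listed above; the 80x80 RGB
    input resolution and the five discrete actions are assumptions."""
    def __init__(self, num_actions=5):
        super().__init__()
        self.convs = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten())
        self.fc = nn.Sequential(nn.Linear(64 * 6 * 6, 512), nn.ReLU())  # 2304 features for 80x80 inputs
        self.policy_head = nn.Linear(512, num_actions)  # logits of pi(a_t | o_t; theta)
        self.value_head = nn.Linear(512, 1)             # V(o_t; theta_v)

    def forward(self, obs):                              # obs: (batch, 3, 80, 80)
        h = self.fc(self.convs(obs))
        return self.policy_head(h), self.value_head(h)
```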
# I2A
The model-free path of the I2A consists of a CNN identical to that of the standard model-free baseline (without the FC layers). The rollout encoder processes each frame generated by the environment model with another identically sized CNN. The output of this CNN is then concatenated with the reward prediction (a single scalar broadcast into frame shape). This feature is the input to an LSTM with 512 (for Sokoban) or 256 (for MiniPacman) units. The same LSTM is used to process all 5 rollouts (one per action); the last outputs of the LSTM for all rollouts are concatenated into a single vector cia of length 2560 for Sokoban, and 1280 for MiniPacman. This vector is concatenated with the output cmf of the model-free CNN path and is fed into the fully connected layers computing policy logits and value function as in the baseline agent described above.
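The rollout-encoder part of this pipeline can be sketched as follows; for brevity the reward scalar is appended after the CNN rather than broadcast into a frame, and the feature sizes assume 80x80 Sokoban frames, so this is an illustration rather than the exact architecture.

```python
import torch
import torch.nn as nn

class RolloutEncoder(nn.Module):
    """Sketch of one rollout encoder: every imagined frame is encoded by a CNN,
    the predicted reward is appended (here as a scalar after the CNN, for brevity,
    instead of being broadcast into a frame), and an LSTM summarizes the rollout
    with its last output. Feature sizes assume 80x80 Sokoban frames."""
    def __init__(self, hidden=512):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 8, 4), nn.ReLU(),
            nn.Conv2d(32, 64, 4, 2), nn.ReLU(),
            nn.Conv2d(64, 64, 3, 1), nn.ReLU(), nn.Flatten())
        self.lstm = nn.LSTM(input_size=64 * 6 * 6 + 1, hidden_size=hidden, batch_first=True)

    def forward(self, frames, rewards):
        # frames: (batch, T, 3, H, W); rewards: (batch, T)
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
        feats = torch.cat([feats, rewards.unsqueeze(-1)], dim=-1)
        out, _ = self.lstm(feats)
        return out[:, -1]   # one code per rollout; the 5 rollout codes are then
                            # concatenated with the model-free features c_mf
```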
# Copy-model
The copy-model agent has the exact same architecture as the I2A, with the exception of the environ- ment model being replaced by the identity function (constantly returns the input observation).
# B.2 Environment models
For the I2A, we pre-train separate auto-regressive models of order 1 for the raw pixel observations of the MiniPacman and Sokoban environments (see Figures 7 and 8). In both cases, the input to the model consisted of the last observation ot, and a broadcasted, one-hot representation of the last action at. Following previous studies, the outputs of the models were trained to predict the next frame ot+1 by stochastic gradient descent on the Bernoulli cross-entropy between network outputs and data ot+1.
The Sokoban model is a simpliï¬ed case of the MiniPacman model; the Sokoban model is nearly entirely local (save for the reward model), while the MiniPacman model needs to deal with nonlocal interaction (movement of ghosts is affected by position of Pacman, which can be arbitrarily far from the ghosts).
# MiniPacman model
The input and output frames were of size 15 x 19 x 3 (width x height x RGB). The model is depicted in ï¬gure 7. It consisted of a size preserving, multi-scale CNN architecture with additional fully connected layers for reward prediction. In order to capture long-range dependencies across pixels, we also make use of a layer we call pool-and-inject, which applies global max-pooling over each feature map and broadcasts the resulting values as feature maps of the same size and concatenates the result to the input. Pool-and-inject layers are therefore size-preserving layers which communicate the max-value of each layer globally to the next convolutional layer.
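A possible implementation of the pool-and-inject layer, as we read the description above (a sketch, not the original code):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PoolAndInject(nn.Module):
    """Size-preserving layer as described above: global max-pool each feature map,
    broadcast the pooled values back to the full spatial size, and concatenate
    them to the input so the next convolution sees them at every position."""
    def forward(self, x):                        # x: (batch, channels, H, W)
        pooled = F.adaptive_max_pool2d(x, 1)     # (batch, channels, 1, 1)
        tiled = pooled.expand_as(x)              # broadcast to (batch, channels, H, W)
        return torch.cat([x, tiled], dim=1)      # output has 2 * channels feature maps
```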
# Sokoban model
The Sokoban model was chosen to be a residual CNN with an additional CNN / fully-connected MLP pathway for predicting rewards. The input of size 80x80x3 was ï¬rst processed with convolutions with a large 8x8 kernel and stride of 8. This reduced representation was further processed with two size preserving CNN layers before outputting a predicted frame by a 8x8 convolutional layer.
Figure 7: The minipacman environment model. The overview is given in the right panel with blow- ups of the basic convolutional building block (middle panel) and the pool-and-inject layer (left panel). The basic build block has three hyperparameters n1, n2, n3 determining the number of channels in the convolutions; their numeric values are given in the right panel.
Figure 8: The sokoban environment model.
# C MiniPacman additional details
MiniPacman is played in a 15 × 19 grid-world. Characters, the ghosts and Pacman, move through a maze. Wall positions are fixed. At the start of each level 2 power pills, a number of ghosts, and Pacman are placed at random in the world. Food is found on every square of the maze. The number of ghosts on level k is 1 + (k − 1)/2.
# Game dynamics
Ghosts always move by one square at each time step. Pacman usually moves by one square, except when it has eaten a power pill, which makes it move by two squares at a time. When moving by 2 squares, if Pacman's new position ends up inside a wall, then it is moved back by one square to get back to a corridor.
We say that Pacman and a ghost meet when they either end up at the same location, or when their path crosses (even if they do not end up at the same location). When Pacman moves to a square with food or a power pill, it eats it. Eating a power pill gives Pacman super powers, such as moving at
double speed and being able to eat ghosts. The effects of eating a power pill last for 19 time steps. When Pacman meets a ghost, either Pacman dies eaten by the ghost, or, if Pacman has recently eaten a power pill, the ghost dies eaten by Pacman.
If Pacman has eaten a power pill, ghosts try to ï¬ee from Pacman. They otherwise try to chase Pacman. A more precise algorithm for the movement of a ghost is given below in pseudo code:
# Algorithm 1 move ghost
function MoveGhost
    Inputs: Ghost object                                     ▷ Contains position and some helper methods
    PossibleDirections ← [DOWN, LEFT, RIGHT, UP]
    CurrentDirection ← Ghost.current_direction
    AllowedDirections ← []
    for dir in PossibleDirections do
        if Ghost.can_move(dir) then
            AllowedDirections += [dir]
    if len(AllowedDirections) == 2 then                      ▷ We are in a straight corridor, or at a bend
        if Ghost.current_direction in AllowedDirections then
            return Ghost.current_direction
        if opposite(Ghost.current_direction) == AllowedDirections[0] then
            return AllowedDirections[1]
        return AllowedDirections[0]
    else                                                     ▷ We are at an intersection
        if opposite(Ghost.current_direction) in AllowedDirections then
            AllowedDirections.remove(opposite(Ghost.current_direction))   ▷ Ghosts do not turn around
        X = normalise(Pacman.position - Ghost.position)
        DotProducts = []
        for dir in AllowedDirections do
            DotProducts += [dot_product(X, dir)]
        if Pacman.ate_super_pill then
            return AllowedDirections[argmin(DotProducts)]    ▷ Away from Pacman
        else
            return AllowedDirections[argmax(DotProducts)]    ▷ Towards Pacman
# Task collection
We used 5 different tasks available in MiniPacman. They all share the same environment dynamics (layout of maze, movement of ghosts, . . . ), but vary in their reward structure and level termination. The rewards associated with various events for each tasks are given in the table below.
Task    | At each step | Eating food | Eating power pill | Eating ghost | Killed by ghost
Regular | 0            | 1           | 2                 | 5            | 0
Avoid   | 0.1          | -0.1        | -5                | -10          | -20
Hunt    | 0            | 0           | 1                 | 10           | -20
Ambush  | 0            | -0.1        | 0                 | 10           | -20
Rush    | 0            | -0.1        | 10                | 0            | 0
When a level is cleared, a new level starts. Tasks also differ in the way a level was cleared.
⢠Regular: level is cleared when all the food is eaten;

⢠Avoid: level is cleared after 128 steps;

⢠Hunt: level is cleared when all ghosts are eaten or after 80 steps;

⢠Ambush: level is cleared when all ghosts are eaten or after 80 steps;

⢠Rush: level is cleared when all power pills are eaten.
Figure 9: The pink bar appears when Pacman eats a power pill, and it decreases in size over the duration of the effect of the pill.
There are no lives, and episode ends when Pacman is eaten by a ghost.
The time left before the effect of the power pill wears off is shown using a pink shrinking bar at the bottom of the screen as in Fig. 9.
Training curves
Figure 10: Learning curves for different agents and various tasks
# D Sokoban additional details
# D.1 Sokoban environment
In the game of Sokoban, random actions on the levels would solve levels with vanishing probability, leading to extreme exploration issues for solving the problem with reinforcement learning. To alleviate this issue, we use a shaping reward scheme for our version of Sokoban:
⢠Every time step, a penalty of -0.1 is applied to the agent.

⢠Whenever the agent pushes a box on target, it receives a reward of +1.

⢠Whenever the agent pushes a box off target, it receives a penalty of -1.

⢠Finishing the level gives the agent a reward of +10 and the level terminates.
The first reward is to encourage agents to finish levels faster, the second to encourage agents to push boxes onto targets, the third to avoid artificial reward loops that would be induced by repeatedly pushing a box off and on target, the fourth to strongly reward solving a level. Levels are interrupted after 120 steps (i.e. the agent may bootstrap from a value estimate of the last frame, but the level resets to a new one). Identical levels are nearly never encountered during training or testing (out of 40 million levels generated, less than 0.7% were repeated). Note that with this reward scheme, it is always optimal to solve the level (thus our shaping scheme is valid). An alternative strategy would have been to have the agent play through a curriculum of increasingly difficult tasks; we expect both strategies to work similarly.
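The shaping scheme above amounts to a per-step reward function of roughly the following form (the event flags are our own bookkeeping, not part of the environment's interface):

```python
def shaped_reward(pushed_box_on_target: bool,
                  pushed_box_off_target: bool,
                  level_finished: bool) -> float:
    """Per-step Sokoban shaping reward as listed above."""
    reward = -0.1                       # time penalty on every step
    if pushed_box_on_target:
        reward += 1.0
    if pushed_box_off_target:
        reward -= 1.0
    if level_finished:
        reward += 10.0                  # level terminates here
    return reward
```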
# D.2 Additional experiments
Our ï¬rst additional experiment compared I2A with and without reward prediction, trained over a longer horizon. I2A with reward prediction clearly converged shortly after 1e9 steps and we therefore interrupted training; however, I2A without reward prediction kept increasing performance, and after 3e9 steps, we recover a performance level of close to 80% of levels solved, see Fig. 11.
Figure 11: I2A with and without reward prediction, longer training horizon.
Next, we investigated the I2A with Monte-Carlo search (using a near perfect environment model of Sokoban). We let the agent try to solve the levels up to 16 times within its internal model. The base I2A architecture was solving around 87% of levels; mental retries boosted its performance to around 95% of levels solved. Although the agent was allowed up to 16 mental retries, in practice all the performance increase was obtained within the ï¬rst 10 mental retries. Exact percentage gain by each mental retry is shown in Fig. 12. Note in Fig. 12, only 83% of the levels are solved on the ï¬rst mental attempt, even though the I2A architecture could solve around 87% of levels. The gap is explained by the use of an environment model: although it looks nearly perfect to the naked eye, the model is not actually equivalent to the environment.
Figure 12: Gain in percentage by each additional mental retry using a near perfect environment model.
# D.3 Planning with the perfect model and Monte-Carlo Tree Search in Sokoban
We first trained a value network that estimates the value function of a trained model-free policy; to do this, we trained a model-free agent for 1e9 environment steps. This agent solved close to 60% of episodes. Using this agent, we generated 1e8 (frame, return) pairs, and trained the value network to predict the value (expected return) from the frame; training and test error were comparable, and we don't expect that increasing the number of training points would have significantly improved the quality of the value network.
The value network architecture is a residual network which stacks one convolution layer and 3 convolution blocks with a final fully-connected layer of 128 hidden units. The first convolution is a 1 × 1 convolution with 128 feature maps. Each of the three residual convolution blocks is composed of three convolutional layers; the first is a 1 × 1 convolution with 32 feature maps, the second a 3 × 3 convolution with 32 feature maps, and the last a 1 × 1 layer with 128 feature maps. To help the value networks, we trained them not on the pixel representation, but on a 10 × 10 × 4 symbolic representation.
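A PyTorch sketch of this value network under our reading of the description; the residual (skip) connections and the final scalar head are assumptions, since the text only lists the layer sizes.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """1x1 -> 3x3 -> 1x1 bottleneck with a residual connection (the skip
    connection is an assumption; the text only specifies the layer sizes)."""
    def __init__(self, channels=128, bottleneck=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, bottleneck, 1), nn.ReLU(),
            nn.Conv2d(bottleneck, bottleneck, 3, padding=1), nn.ReLU(),
            nn.Conv2d(bottleneck, channels, 1))
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(x + self.body(x))

class SokobanValueNetwork(nn.Module):
    """1x1 conv to 128 maps, three residual blocks, a 128-unit FC layer, and an
    (assumed) scalar value head; input is the 10x10x4 symbolic representation."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 128, 1), nn.ReLU(),
            ResBlock(), ResBlock(), ResBlock(),
            nn.Flatten(),
            nn.Linear(128 * 10 * 10, 128), nn.ReLU(),
            nn.Linear(128, 1))

    def forward(self, x):               # x: (batch, 4, 10, 10)
        return self.net(x)
```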
The trained value network is then employed during search to evaluate leaf nodes, similar to [12], replacing the role of traditional random rollouts in MCTS. The tree policy uses UCT [57, 58] with a fine-tuned exploration constant of 1. Depth-wise transposition tables for the tree nodes are used to deal with the symmetries in the Sokoban environment. External actions are selected by taking the max Q value at the root node. The tree is reused between steps by selecting the appropriate subtree as the root node for the next step.
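For reference, the UCT child-selection rule used by such a tree policy can be sketched as follows; the node representation is our own, and only the exploration constant of 1 comes from the text above.

```python
import math

def uct_select(children, exploration=1.0):
    """UCT child selection [57]: maximize Q + c * sqrt(ln(N_parent) / N_child).
    `children` is an illustrative representation: dicts holding each child's
    visit count `n` and mean value `q`; unvisited children are expanded first."""
    parent_visits = max(1, sum(c["n"] for c in children))

    def score(c):
        if c["n"] == 0:
            return float("inf")
        return c["q"] + exploration * math.sqrt(math.log(parent_visits) / c["n"])

    return max(children, key=score)
```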
Reported results are obtained by averaging the results over 250 episodes.
# D.4 Level Generation for Sokoban
We detail here our procedural generation for Sokoban levels - we follow closely methods described in [59, 60].
The generation of a Sokoban level involves three steps: room topology generation, position configuration and room reverse-playing.

Topology generation: Given an initial width×height room entirely constituted by wall blocks, the topology generation consists in creating the "empty" spaces (i.e. corridors) where boxes, targets and the player can be placed. For this, a simple random walk algorithm with a configurable number of steps is applied: a random initial position and direction are chosen. Afterwards, for every step, the position is updated and, with a probability p = 0.35, a new random direction is selected. Every "visited" position is emptied together with a number of surrounding wall blocks, selected by randomly choosing one of several predefined patterns of adjacent room blocks to be removed around the visited position. Note that the room "exterior" walls are never emptied, so from a width×height room only a (width-2)×(height-2) space can actually be converted into corridors. The random walk approach guarantees that all the positions in the room are, in principle, reachable by the player. A relatively small probability of changing the walk direction favours the generation of longer corridors, while the application of a random pattern favours slightly more convoluted spaces.

Position configuration:
Once a room topology is generated, the target locations for the desired N boxes and the player initial position are randomly selected. There is the obvious prerequisite of having enough empty spaces in the room to place the targets and the player but no other constraints are imposed in this step.
Reverse playing: Once the topology and targets/player positions are generated the room is reverse- played. In this case, on each step, the player has eight possible actions to choose from: simply moving or moving+pulling from a box in each possible direction (assuming for the latter, that there is a box adjacent to the player position).
Initially the room is conï¬gured with the boxes placed over their corresponding targets. From that position a depth-ï¬rst search (with a conï¬gurable maximum depth) is carried out over the space of possible moves, by âexpandingâ each reached player/boxes position by iteratively applying all the possible actions (which are randomly permuted on each step). An entire tree is not explored as there are different combinations of actions leading to repeated boxes/player conï¬gurations which are skipped.
Statistics are collected for each boxes/player conï¬guration, which is, in turn, scored with a simple heuristic:
RoomScore = BoxSwaps × Σi BoxDisplacementi
where BoxSwaps represents the number of occasions on which the player stopped pulling from a given box and started pulling from a different one, while BoxDisplacement represents the Manhattan distance between the initial and final position of a given box. Also, whenever a box or the player is placed on top of one of the targets, the RoomScore value is set to 0. While this scoring heuristic doesn't guarantee the complexity of the generated rooms, it aims to a) favour room configurations where, overall, the boxes are further away from their original positions and b) increase the probability of a room requiring a more convoluted combination of box moves to get to a solution (by aiming for solutions with higher BoxSwaps values). This scoring mechanism has empirically proved to generate levels with a balanced combination of difficulties.
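The heuristic can be written as a one-line function; the inputs are our own bookkeeping of the reverse-play statistics described above.

```python
def room_score(box_swaps, box_displacements, box_or_player_on_target):
    """RoomScore = BoxSwaps * sum_i BoxDisplacement_i, set to 0 whenever a box
    or the player ends up on one of the targets."""
    if box_or_player_on_target:
        return 0
    return box_swaps * sum(box_displacements)
```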
The reverse playing ends when there are no more available positions to explore or when a predeï¬ned maximum number of possible room conï¬gurations is reached. The room with the higher RoomScore is then returned.
# Default parameters:
⢠A maximum of 10 room topologies, and for each of those 10 box/player positionings, are retried in case a given combination doesn't produce rooms with a score > 0.
⢠The room configuration tree is by default limited to a maximum depth of 300 applied actions.

⢠The total number of visited positions is by default limited to 1,000,000.

⢠Default random-walk steps: 1.5 × (room width + room height).
8 | {
"id": "1707.03374"
} |
1707.06209 | Crowdsourcing Multiple Choice Science Questions | We present a novel method for obtaining high-quality, domain-targeted
multiple choice questions from crowd workers. Generating these questions can be
difficult without trading away originality, relevance or diversity in the
answer options. Our method addresses these problems by leveraging a large
corpus of domain-specific text and a small set of existing questions. It
produces model suggestions for document selection and answer distractor choice
which aid the human question generation process. With this method we have
assembled SciQ, a dataset of 13.7K multiple choice science exam questions
(Dataset available at http://allenai.org/data.html). We demonstrate that the
method produces in-domain questions by providing an analysis of this new
dataset and by showing that humans cannot distinguish the crowdsourced
questions from original questions. When using SciQ as additional training data
to existing questions, we observe accuracy improvements on real science exams. | http://arxiv.org/pdf/1707.06209 | Johannes Welbl, Nelson F. Liu, Matt Gardner | cs.HC, cs.AI, cs.CL, stat.ML | accepted for the Workshop on Noisy User-generated Text (W-NUT) 2017 | null | cs.HC | 20170719 | 20170719 |

arXiv:1707.06209v1 [cs.HC] 19 Jul 2017
# Crowdsourcing Multiple Choice Science Questions
# Johannes Welblâ Computer Science Department University College London j.welbl@cs.ucl.ac.uk
Nelson F. Liuâ Paul G. Allen School of Computer Science & Engineering University of Washington nfliu@cs.washington.edu
# Matt Gardner Allen Institute for Artiï¬cial Intelligence mattg@allenai.org
# Abstract
We present a novel method for obtain- ing high-quality, domain-targeted multi- ple choice questions from crowd workers. Generating these questions can be difï¬cult without trading away originality, relevance or diversity in the answer options. Our method addresses these problems by lever- aging a large corpus of domain-speciï¬c text and a small set of existing ques- tions. It produces model suggestions for document selection and answer distractor choice which aid the human question gen- eration process. With this method we have assembled SciQ, a dataset of 13.7K mul- tiple choice science exam questions.1 We demonstrate that the method produces in- domain questions by providing an analysis of this new dataset and by showing that hu- mans cannot distinguish the crowdsourced questions from original questions. When using SciQ as additional training data to existing questions, we observe accuracy improvements on real science exams.
2016; Dhingra et al., 2016; Sordoni et al., 2016; Seo et al., 2016). These recent datasets cover broad and general domains, but progress on these datasets has not translated into similar improve- ments in more targeted domains, such as science exam QA.
Science exam QA is a high-level NLP task which requires the mastery and integration of in- formation extraction, reading comprehension and common sense reasoning (Clark et al., 2013; Clark, 2015). Consider, for example, the ques- tion âWith which force does the moon affect tidal movements of the oceans?â. To solve it, a model must possess an abstract understanding of nat- ural phenomena and apply it to new questions. This transfer of general and domain-speciï¬c back- ground knowledge into new scenarios poses a formidable challenge, one which modern statisti- In a re- cal techniques currently struggle with. cent Kaggle competition addressing 8th grade sci- ence questions (Schoenick et al., 2016), the high- est scoring systems achieved only 60% on a mul- tiple choice test, with retrieval-based systems far outperforming neural systems.
# Introduction
The construction of large, high-quality datasets has been one of the main drivers of progress in NLP. The recent proliferation of datasets for tex- tual entailment, reading comprehension and Ques- tion Answering (QA) (Bowman et al., 2015; Her- mann et al., 2015; Rajpurkar et al., 2016; Hill et al., 2015; Hewlett et al., 2016; Nguyen et al., 2016) has allowed for advances on these tasks, particularly with neural models (Kadlec et al.,
*Work done while at the Allen Institute for Artiï¬cial In- telligence.
1Dataset available at http://allenai.org/data. html
A major bottleneck for applying sophisticated statistical techniques to science QA is the lack of large in-domain training sets. Creating a large, multiple choice science QA dataset is challeng- ing, since crowd workers cannot be expected to have domain expertise, and questions can lack rel- evance and diversity in structure and content. Fur- thermore, poorly chosen answer distractors in a multiple choice setting can make questions almost trivial to solve.
The ï¬rst contribution of this paper is a general method for mitigating the difï¬culties of crowd- sourcing QA data, with a particular focus on mul- tiple choice science questions. The method is broadly similar to other recent work (Rajpurkar et al., 2016), relying mainly on showing crowd
Example 1
Q: What type of organism is commonly used in preparation of foods such as cheese and yogurt?
1) mesophilic organisms 2) protozoa 3) gymnosperms 4) viruses
Mesophiles grow best in moderate temperature, typically between 25°C and 40°C (77°F and 104°F). Mesophiles are often found living in or on the bodies of humans or other animals. The optimal growth temperature of many pathogenic mesophiles is 37°C (98°F), the normal human body temperature. Mesophilic organisms have important uses in food preparation, including cheese, yogurt, beer and wine.

Example 2
Q: What phenomenon makes global winds blow northeast to southwest or the reverse in the northern hemisphere and northwest to southeast or the reverse in the southern hemisphere?
1) coriolis effect 2) muon effect 3) centrifugal effect 4) tropical effect
Without Coriolis Effect the global winds would blow north to south or south to north. But Coriolis makes them blow northeast to southwest or the reverse in the Northern Hemisphere. The winds blow northwest to southeast or the reverse in the southern hemisphere.

Example 3
Q: Changes from a less-ordered state to a more-ordered state (such as a liquid to a solid) are always what?
1) exothermic 2) unbalanced 3) reactive 4) endothermic
Summary Changes of state are examples of phase changes, or phase transitions. All phase changes are accompanied by changes in the energy of a system. Changes from a more-ordered state to a less-ordered state (such as a liquid to a gas) are endothermic. Changes from a less-ordered state to a more-ordered state (such as a liquid to a solid) are always exothermic. The conversion ...

Example 4
Q: What is the least dangerous radioactive decay?
1) alpha decay 2) beta decay 3) gamma decay 4) zeta decay
All radioactive decay is dangerous to living things, but alpha decay is the least dangerous.
Figure 1: The ï¬rst four SciQ training set examples. An instance consists of a question and 4 answer op- tions (the correct one in green). Most instances come with the document used to formulate the question.
workers a passage of text and having them ask a question about it. However, unlike previous dataset construction tasks, we (1) need domain- relevant passages and questions, and (2) seek to create multiple choice questions, not direct- answer questions.
We use a two-step process to solve these prob- lems, ï¬rst using a noisy classiï¬er to ï¬nd relevant passages and showing several options to workers to select from when generating a question. Sec- ond, we use a model trained on real science exam questions to predict good answer distractors given a question and a correct answer. We use these pre- dictions to aid crowd workers in transforming the question produced from the ï¬rst step into a multi- ple choice question. Thus, with our methodology we leverage existing study texts and science ques- tions to obtain new, relevant questions and plau- sible answer distractors. Consequently, the human intelligence task is shifted away from a purely gen- erative task (which is slow, difï¬cult, expensive and can lack diversity in the outcomes when repeated) and reframed in terms of a selection, modiï¬cation and validation task (being faster, easier, cheaper and with content variability induced by the sug- gestions provided).
we call SciQ. Figure 1 shows the ï¬rst four train- ing examples in SciQ. This dataset has a multiple choice version, where the task is to select the cor- rect answer using whatever background informa- tion a system can ï¬nd given a question and several answer options, and a direct answer version, where given a passage and a question a system must pre- dict the span within the passage that answers the question. With experiments using recent state-of- the-art reading comprehension methods, we show that this is a useful dataset for further research. In- terestingly, neural models do not beat simple infor- mation retrieval baselines on the multiple choice version of this dataset, leaving room for research on applying neural models in settings where train- ing examples number in the tens of thousands, in- stead of hundreds of thousands. We also show that using SciQ as an additional source of training data improves performance on real 4th and 8th grade exam questions, proving that our method success- fully produces useful in-domain training data.
# 2 Related Work
The second contribution of this paper is a dataset constructed by following this methodol- ogy. With a total budget of $10,415, we collected 13,679 multiple choice science questions, which
Dataset Construction. A lot of recent work has focused on constructing large datasets suitable for training neural models. QA datasets have been as- sembled based on Freebase (Berant et al., 2013; Bordes et al., 2015), Wikipedia articles (Yang et al., 2015; Rajpurkar et al., 2016; Hewlett et al.,
2016) and web search user queries (Nguyen et al., 2016); for reading comprehension (RC) based on news (Hermann et al., 2015; Onishi et al., 2016), children books (Hill et al., 2015) and novels (Pa- perno et al., 2016), and for recognizing textual en- tailment based on image captions (Bowman et al., 2015). We continue this line of work and construct a dataset for science exam QA. Our dataset dif- fers from some of the aforementioned datasets in that it consists of natural language questions pro- duced by people, instead of cloze-style questions. It also differs from prior work in that we aim at the narrower domain of science exams and in that we produce multiple choice questions, which are more difï¬cult to generate.
Science Exam Question Answering. Exist- ing models for multiple-choice science exam QA vary in their reasoning framework and training methodology. A set of sub-problems and solution strategies are outlined in Clark et al. (2013). The method described by Li and Clark (2015) eval- uates the coherence of a scene constructed from the question enriched with background KB infor- mation, while Sachan et al. (2016) train an en- tailment model that derives the correct answer from background knowledge aligned with a max- margin ranker. Probabilistic reasoning approaches include Markov logic networks (Khot et al., 2015) and an integer linear program-based model that assembles proof chains over structured knowl- edge (Khashabi et al., 2016). The Aristo ensem- ble (Clark et al., 2016) combines multiple rea- soning strategies with shallow statistical methods based on lexical co-occurrence and IR, which by themselves provide surprisingly strong baselines. There has not been much work applying neural networks to this task, likely because of the paucity of training data; this paper is an attempt to address this issue by constructing a much larger dataset than was previously available, and we present re- sults of experiments using state-of-the-art reading comprehension techniques on our datasets.
Automatic Question Generation. Transform- ing text into questions has been tackled be- fore, mostly for didactic purposes. Some ap- proaches rely on syntactic transformation tem- plates (Mitkov and Ha, 2003; Heilman and Smith, 2010), while most others generate cloze-style questions. Our ï¬rst attempts at constructing a sci- ence question dataset followed these techniques. We found the methods did not produce high-
quality science questions, as there were problems with selecting relevant text, generating reasonable distractors, and formulating coherent questions.
Several similarity measures have been em- ployed for selecting answer distractors (Mitkov et al., 2009), including measures derived from WordNet (Mitkov and Ha, 2003), thesauri (Sumita et al., 2005) and distributional context (Pino et al., 2008; Aldabe and Maritxalar, 2010). Domain- speciï¬c ontologies (Papasalouros et al., 2008), phonetic or morphological similarity (Pino and Esknazi, 2009; Correia et al., 2010), probabil- ity scores for the question context (Mostow and Jang, 2012) and context-sensitive lexical infer- ence (Zesch and Melamud, 2014) have also been used. In contrast to the aforementioned similarity- based selection strategies, our method uses a feature-based ranker to learn plausible distractors from original questions. Several of the above heuristics are used as features in this ranking model. Feature-based distractor generation mod- els (Sakaguchi et al., 2013) have been used in the past by Agarwal and Mannem (2011) for creating biology questions. Our model uses a random for- est to rank candidates; it is agnostic towards tak- ing cloze or humanly-generated questions, and it is learned speciï¬cally to generate distractors that resemble those in real science exam questions.
# 3 Creating a science exam QA dataset
In this section we present our method for crowd- sourcing science exam questions. The method is a two-step process: first we present a set of candi- date passages to a crowd worker, letting the worker choose one of the passages and ask a question about it. Second, another worker takes the ques- tion and answer generated in the first step and pro- duces three distractors, aided by a model trained to predict good answer distractors. The end result is a multiple choice science question, consisting of a question q, a passage p, a correct answer a*, and a set of distractors, or incorrect answer options, {aâ}. Some example questions are shown in Fig- ure 1. The remainder of this section elaborates on the two steps in our question generation process.
# 3.1 First task: producing in-domain questions
Conceiving an original question from scratch in a specialized domain is surprisingly difï¬cult; per- forming the task repeatedly involves the danger of
falling into speciï¬c lexical and structural patterns. To enforce diversity in question content and lex- ical expression, and to inspire relevant in-domain questions, we rely on a corpus of in-domain text about which crowd workers ask questions. How- ever, not all text in a large in-domain corpus, such as a textbook, is suitable for generating questions. We use a simple ï¬lter to narrow down the selection to paragraphs likely to produce reasonable ques- tions.
Base Corpus. Choosing a relevant, in-domain base corpus to inspire the questions is of crucial importance for the overall characteristics of the dataset. For science questions, the corpus should consist of topics covered in school exams, but not be too linguistically complex, speciï¬c, or loaded with technical detail (e.g., scientiï¬c papers). We observed that articles retrieved from web searches for science exam keywords (e.g. âanimalâ and âfoodâ) yield a signiï¬cant proportion of commer- cial or otherwise irrelevant documents and did not consider this further. Articles from science-related categories in Simple Wikipedia are more targeted and factual, but often state highly speciï¬c knowl- edge (e.g., âHoatzin can reach 25 inches in length and 1.78 pounds of weight.â).
We chose science study textbooks as our base corpus because they are directly relevant and lin- guistically tailored towards a student audience. They contain verbal descriptions of general nat- ural principles instead of highly speciï¬c example features of particular species. While the number of resources is limited, we compiled a list of 28 books from various online learning resources, in- cluding CK-122 and OpenStax3, who share this material under a Creative Commons License. The books are about biology, chemistry, earth science and physics and span elementary level to college introductory material. A full list of the books we used can be found in the appendix.
Document Filter. We designed a rule-based document ï¬lter model into which individual para- graphs of the base corpus are fed. The system classiï¬es individual sentences and accepts a para- graph if a minimum number of sentences is ac- cepted. With a small manually annotated dataset of sentences labelled as either relevant or irrele- vant, the ï¬lter was designed iteratively by adding ï¬lter rules to ï¬rst improve precision and then re-
# 2www.ck12.org 3www.openstax.org
call on a held-out validation set. The final filter included lexical, grammatical, pragmatic and complexity-based rules. Specifically, sentences were filtered out if they i) were a question or exclamation ii) had no verb phrase iii) contained modal verbs iv) contained imperative phrases v) contained demonstrative pronouns vi) contained personal pronouns other than third-person vii) began with a pronoun viii) contained first names ix) had less than 6 or more than 18 tokens or more than 2 commas x) contained special characters other than punctuation xi) had more than three tokens beginning uppercase xii) mentioned a graph, table or web link xiii) began with a discourse marker (e.g. "Nonetheless") xiv) contained absolute wording (e.g. "never", "nothing", "definitely") xv) contained instructional vocabulary ("teacher", "worksheet", ...).
Besides the last, these rules are all generally applicable in other domains to identify simple declarative statements in a corpus.
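To make the filter concrete, a minimal sketch of a few of these rules is given below. The specific word lists, the token-count thresholds and the per-paragraph acceptance threshold are assumptions made for illustration, not the authors' implementation.

```python
import re

# Assumed word lists for some of the lexical rules listed above.
MODALS = {"can", "could", "may", "might", "must", "shall", "should", "will", "would"}
DEMONSTRATIVES = {"this", "that", "these", "those"}
INSTRUCTIONAL = {"teacher", "worksheet", "lesson", "chapter"}

def accept_sentence(sentence: str) -> bool:
    words = re.findall(r"[A-Za-z']+", sentence)
    lowered = {w.lower() for w in words}
    if sentence.rstrip().endswith(("?", "!")):                 # rule i: question or exclamation
        return False
    if lowered & MODALS:                                        # rule iii: modal verbs
        return False
    if lowered & DEMONSTRATIVES:                                # rule v: demonstrative pronouns
        return False
    if not 6 <= len(words) <= 18 or sentence.count(",") > 2:    # rule ix: length limits
        return False
    if sum(w[0].isupper() for w in words) > 3:                  # rule xi: too many capitalised tokens
        return False
    if lowered & INSTRUCTIONAL:                                 # rule xv: instructional vocabulary
        return False
    return True

def accept_paragraph(sentences: list, min_accepted: int = 3) -> bool:
    # Accept a paragraph if a minimum number of its sentences pass the filter.
    return sum(accept_sentence(s) for s in sentences) >= min_accepted
```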
Question Formulation Task. To actually generate in-domain QA pairs, we presented the filtered, in-domain text to crowd workers and had them ask a question that could be answered by the presented passage. Although most undesirable paragraphs had been filtered out beforehand, a non-negligible proportion of irrelevant documents remained. To circumvent this problem, we showed each worker three textbook paragraphs and gave them the freedom to choose one or to reject all of them if irrelevant. Once a paragraph had been chosen, it was not reused to formulate more questions about it. We further specified desirable characteristics of science exam questions: no yes/no questions, not requiring further context, querying general principles rather than highly specific facts, question length between 6-30 words, answer length up to 3 words (preferring shorter), no ambiguous questions, answers clear from paragraph chosen. Examples for both desirable and undesirable questions were given, with explanations for why they were good or bad examples. Furthermore we encouraged workers to give feedback, and a contact email was provided to address upcoming questions directly; multiple crowdworkers made use of this opportunity. The task was advertised on Amazon Mechanical Turk, requiring Master's status for the crowdworkers, and paying a compensation of $0.30 per HIT. A total of 175 workers participated in the whole crowdsourcing project.
In 12.1% of the cases all three documents were rejected, much fewer than if a single document had been presented (assuming the same proportion of relevant documents). Thus, besides being more economical, proposing several documents reduces the risk of generating irrelevant questions and in the best case helps match a crowdworker's individual preferences.
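To make the comparison concrete, under the additional assumption (not stated above) that the three shown paragraphs are independently irrelevant with the same probability $p$, the all-three-rejected rate satisfies

$$p^3 = 0.121 \quad\Rightarrow\quad p = 0.121^{1/3} \approx 0.49,$$

so roughly half of single paragraphs would have been rejected, versus 12.1% when three are offered.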
# 3.2 Second task: selecting distractors
Generating convincing answer distractors is of great importance, since bad distractors can make a question trivial to solve. When writing science questions ourselves, we found that finding reasonable distractors was the most time-consuming part overall. Thus, we support the process in our crowdsourcing task with model-generated answer distractor suggestions. This primed the workers with relevant examples, and we allowed them to use the suggested distractors directly if they were good enough. We next discuss characteristics of good answer distractors, propose and evaluate a model for suggesting such distractors, and describe the crowdsourcing task that uses them.
Distractor Characteristics. Multiple choice science questions with nonsensical incorrect answer options are not interesting as a task to study, nor are they useful for training a model to do well on real science exams, as the model would not need to do any kind of science reasoning to answer the training questions correctly. The difficulty in generating a good multiple choice question, then, lies not in identifying expressions which are false answers to q, but in generating expressions which are plausible false answers. Concretely, besides being false answers, good distractors should thus:
⢠be grammatically consistent: for the question âWhen animals use energy, what is always produced?â a noun phrase is expected.
⢠be consistent with respect to abstract proper- ties: if the correct answer belongs to a certain category (e.g., chemical elements) good dis- tractors likely should as well.
⢠be consistent with the semantic context of the question: a question about animals and en- ergy should not have newspaper or bingo as distractors.
Distractor Model Overview. We now introduce a model which generates plausible answer distractors and takes into account the above criteria. On a basic level, it ranks candidates from a large collection C of possible distractors and selects the highest scoring items. Its ranking function
$$r : (q, a^*, a') \mapsto s_{a'} \in [0, 1] \qquad (1)$$
produces a confidence score s_a′ for whether a′ ∈ C is a good distractor in the context of question q and correct answer a*. For r we use the scoring function s_a′ = P(a′ is good | q, a*) of a binary classifier which distinguishes plausible (good) distractors from random (bad) distractors based on features φ(q, a*, a′). For classification, we train r on actual in-domain questions with observed false answers as the plausible (good) distractors, and random expressions as negative examples, sampled in equal proportion from C. As classifier we chose a random forest (Breiman, 2001), because of its robust performance in small and mid-sized data settings and its power to incorporate nonlinear feature interactions, in contrast, e.g., to logistic regression.
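A minimal sketch of this ranking step is shown below. The feature function `phi` and the fitted classifier `clf` are assumed to exist (their names are illustrative, not from the paper); the classifier only needs a `predict_proba` method, as a scikit-learn random forest provides.

```python
import numpy as np

def rank_distractors(question, answer, candidates, phi, clf, top_k=6):
    # Score every candidate a' in C with the classifier's probability of the
    # "plausible distractor" class; this probability is the ranking score s_{a'}.
    features = np.array([phi(question, answer, c) for c in candidates])
    scores = clf.predict_proba(features)[:, 1]
    order = np.argsort(-scores)[:top_k]
    return [(candidates[i], float(scores[i])) for i in order]
```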
Distractor Model Features. This section describes the features φ(q, a*, a′) used by the distractor ranking model. With these features, the distractor model can learn characteristics of real distractors from original questions and will suggest those distractors that it deems the most realistic for a question. The following features of question q, correct answer a* and a tentative distractor expression a′ were used:
• bags of GloVe embeddings for q, a* and a′;

• an indicator for POS-tag consistency of a* and a′;

• singular/plural consistency of a* and a′;

• log. avg. word frequency in a* and a′;

• Levenshtein string edit distance between a* and a′;

• suffix consistency of a* and a′ (firing e.g. for (regeneration, exhaustion));

• token overlap indicators for q, a* and a′;

• token and character length for a* and a′ and similarity therein;

• indicators for numerical content in q, a* and a′ and consistency therein;

• indicators for units of measure in q, a* and a′, and for co-occurrence of the same unit;

• WordNet-based hypernymy indicators between tokens in q, a* and a′, in both directions and potentially via two steps;

• indicators for 2-step connections between entities in a* and a′ via a KB based on OpenIE triples (Mausam et al., 2012) extracted from pages in Simple Wikipedia about anatomical structures;

• indicators for shared WordNet-hyponymy of a* and a′ to one of the concepts most frequently generalising all three question distractors in the training set (e.g. element, organ, organism).
The intuition for the knowledge-base link and hypernymy indicator features is that they can reveal sibling structures of a* and a′ with respect to a shared property or hypernym. For example, if the correct answer a* is heart, then a plausible distractor a′ like liver would share with a* the hyponymy relation to organ in WordNet.
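To give a flavour of φ, a sketch of two of the listed feature groups is shown below (a shared-hypernym indicator and suffix consistency). The exact path depth and suffix length are assumptions; the sketch requires the NLTK WordNet corpus to be downloaded.

```python
from nltk.corpus import wordnet as wn  # requires nltk.download('wordnet')

def shares_hypernym(a_star: str, a_prime: str) -> bool:
    # Sibling check: do the two expressions share an ancestor within two steps,
    # e.g. heart and liver are both hyponyms of organ?
    def near_ancestors(word):
        return {h for s in wn.synsets(word) for path in s.hypernym_paths() for h in path[-3:]}
    return bool(near_ancestors(a_star) & near_ancestors(a_prime))

def suffix_consistent(a_star: str, a_prime: str, n: int = 3) -> bool:
    # Fires e.g. for (regeneration, exhaustion): the last words share a suffix.
    return a_star.split()[-1][-n:] == a_prime.split()[-1][-n:]
```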
Model Training. We first constructed a large candidate distractor set C whose items were to be ranked by the model. C contained 488,819 expressions, consisting of (1) the 400K items in the GloVe vocabulary (Pennington et al., 2014); (2) answer distractors observed in training questions; (3) a list of noun phrases from Simple Wikipedia articles about body parts; (4) a noun vocabulary of ~6000 expressions extracted from primary school science texts. In examples where a′ consisted of multiple tokens, we added to C any expression that could be obtained by exchanging one unigram in a′ with another unigram from C.
The model was then trained on a set of 3705 science exam questions (4th and 8th grade), separated into 80% training questions and 20% validation questions. Each question came with four answer options, providing three good distractor examples. We used scikit-learn's implementation of random forests with otherwise default parameters: 500 trees and at least 4 samples per tree leaf.

Distractor Model Evaluation. Our model achieved 99.4% training and 94.2% validation accuracy overall. Example predictions of the distractor model are shown in Table 1. Qualitatively, the predictions appear acceptable in most cases, though the quality is not high enough to use them directly without additional filtering by crowd workers. In many cases the distractor is semantically related, but does not have the correct type (e.g., in column 1, "nutrient" and "soil" are not elements). Some predictions are misaligned in their level of specificity (e.g. "frogs" in column 3), and multiword expressions were more likely to be unrelated or ungrammatical despite the inclusion of part of speech features.
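As a rough illustration of this training setup (not the authors' code), the scikit-learn configuration stated above could be wired up as follows; the feature matrix here is a random placeholder standing in for the real φ(q, a*, a′) vectors.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Placeholder data: each row would be phi(q, a*, a') for an observed distractor
# (label 1) or an equally frequent randomly sampled candidate from C (label 0).
rng = np.random.default_rng(0)
X = rng.normal(size=(20000, 320))
y = rng.integers(0, 2, size=len(X))

X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

# 500 trees and at least 4 samples per leaf, as described in the text.
clf = RandomForestClassifier(n_estimators=500, min_samples_leaf=4, random_state=0, n_jobs=-1)
clf.fit(X_train, y_train)
print("train acc:", clf.score(X_train, y_train))
print("val acc:", clf.score(X_val, y_val))
```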
Even where the predicted distractors are not fully coherent, showing them to a crowd worker still has a positive priming effect, helping the worker generate good distractors either by providing nearly-good-enough candidates, or by forcing the worker to think why a suggestion is not a good distractor for the question.
Distractor Selection Task. To actually generate a multiple choice science question, we show the result of the first task, a (q, a*) pair, to a crowd worker, along with the top six distractors suggested from the previously described model. The goal of this task is two-fold: (1) quality control (validating a previously generated (q, a*) pair), and (2) validating the predicted distractors or writing new ones if necessary.
The first instruction was to judge whether the question could appear in a school science exam; questions could be marked as ungrammatical, having a false answer, being unrelated to science or requiring very specific background knowledge. The total proportion of questions passing was 92.8%. The second instruction was to select up to two of the six suggested distractors, and to write at least one distractor by themselves such that there is a total of three. The requirement for the worker to generate one of their own distractors, instead of being allowed to select three predicted distractors, was added after an initial pilot of the task, as we found that it forced workers to engage more with the task and resulted in higher quality distractors. We gave examples of desirable and undesirable distractors and the opportunity to provide feedback, as before. We advertised the task on Amazon Mechanical Turk, paying $0.20 per HIT, again requiring AMT Master's status. On average, crowd workers found the predicted distractors good enough to include in the final question around half of the time, resulting in 36.1% of the distractors in the final dataset being generated by the model (because workers were only allowed to pick two predicted distractors, the theoretical
Q: Compounds containing an atom of what element, bonded in a hydrocarbon framework, are classified as amines?
A: nitrogen
Predictions: oxygen (0.982), hydrogen (0.962), nutrient (0.942), calcium (0.938), silicon (0.938), soil (0.9365)

Q: Elements have orbitals that are filled with what?
A: electrons
Predictions: ions (0.975), atoms (0.959), crystals (0.952), protons (0.951), neutrons (0.946), photons (0.912)

Q: Many species use their body shape and coloration to avoid being detected by what?
A: predators
Predictions: viruses (0.912), ecosystems (0.896), frogs (0.896), distances (0.8952), males (0.877), crocodiles (0.869)

Q: The small amount of energy input necessary for all chemical reactions to occur is called what?
A: activation energy
Predictions: conversely energy (0.987), decomposition energy (0.984), membrane energy (0.982), motion energy (0.982), context energy (0.981), distinct energy (0.980)
Table 1: Selected distractor prediction model outputs. For each QA pair, the top six model predictions are listed (ranking score in parentheses). Boldfaced candidates were accepted by crowd workers.
maximum is 66%). Acceptance rates were higher in the case of short answers, with almost none accepted for the few cases with very long answers.
The remainder of this paper will investigate properties of SciQ, the dataset we generated by following the methodology described in this sec- tion. We present system and human performance, and we show that SciQ can be used as additional training data to improve model performance on real science exams.
# 3.3 Dataset properties
SciQ has a total of 13,679 multiple choice ques- tions. We randomly shufï¬ed this dataset and split it into training, validation and test portions, with 1000 questions in each of the validation and test portions, and the remainder in train. In Figure 2 we show the distribution of question and answer lengths in the data. For the most part, questions and answers in the dataset are relatively short, though there are some longer questions.
Figure 2: Total counts of question, answer and dis- tractor length, measured in number of tokens, cal- culated across the training set.
Each question also has an associated passage used when generating the question. Because the multiple choice question is trivial to answer when given the correct passage, the multiple choice ver- sion of SciQ does not include the passage; systems must retrieve their own background knowledge when answering the question. Because we have the associated passage, we additionally created a direct-answer version of SciQ, which has the pas- sage and the question, but no answer options. A small percentage of the passages were obtained from unreleasable texts, so the direct answer ver- sion of SciQ is slightly smaller, with 10481 ques- tions in train, 887 in dev, and 884 in test.
Model | Accuracy
Aristo | 77.4
Lucene | 80.0
TableILP | 31.8
AS Reader | 74.1
GA Reader | 73.8
Humans | 87.8 ± 0.045
Qualitative Evaluation. We created a crowdsourcing task with the following setup: A person was presented with an original science exam question and a crowdsourced question. The instructions were to choose which of the two questions was more likely to be the real exam question. We randomly drew 100 original questions and 100 instances from the SciQ training set and presented the two options in random order. People identified the science exam question in 55% of the cases, a rate that is not significantly different from random guessing at the p=0.05 level [4].
Table 2: Test set accuracy of existing models on the multiple choice version of SciQ.
[4] Using normal approximation.
# 4 SciQ Experiments
# 4.1 System performance
We evaluated several state-of-the-art science QA systems, reading comprehension models, and hu- man performance on SciQ.
Multiple Choice Setting. We used the Aristo ensemble (Clark et al., 2016), and two of its individual components: a simple information retrieval baseline (Lucene), and a table-based integer linear programming model (TableILP), to evaluate SciQ. We also evaluate two competitive neural reading comprehension models: the Attention Sum Reader (AS Reader, a GRU with a pointer-attention mechanism; Kadlec et al. (2016)) and the Gated Attention Reader (GA Reader, an AS Reader with additional gated attention layers; Dhingra et al. (2016)). These reading comprehension methods require a supporting text passage to answer a question. We use the same corpus as Aristo's Lucene component to retrieve a text passage, by formulating five queries based on the question and answer [5] and then concatenating the top three results from each query into a passage. We train the reading comprehension models on the training set with hyperparameters recommended by prior work ((Onishi et al., 2016) for the AS Reader and (Dhingra et al., 2016) for the GA Reader), with early stopping on the validation data [6]. Human accuracy is estimated using a sampled subset of 650 questions, with 13 different people each answering 50 questions. When answering the questions, people were allowed to query the web, just as the systems were. Table 2 shows the results of this evaluation. Aristo performance is slightly better on this set than on real science exams (where Aristo achieves 71.3% accuracy (Clark et al., 2016)) [7]. Because TableILP uses a hand-collected set of background knowledge that does not cover the topics in SciQ, its performance is substantially worse here than on its original test set. Neural models perform reasonably well on this dataset, though, interestingly, they are not able to outperform a very simple information retrieval baseline, even when using exactly the same background information. This suggests that SciQ is a useful dataset for studying reading comprehension models in medium-data settings.
[5] The question text itself, plus each of the four answer options appended to the question text.

[6] For training and hyperparameter details, see Appendix B.

[7] We did not retrain the Aristo ensemble for SciQ; it might overly rely on TableILP, which does not perform well here.
Dataset | AS Reader | GA Reader
4th grade | 40.7% | 37.6%
4th grade + SciQ | 45.0% | 45.4%
Difference | +4.3% | +7.8%
8th grade | 41.2% | 41.0%
8th grade + SciQ | 43.0% | 44.3%
Difference | +1.8% | +3.3%
Table 3: Model accuracies on real science ques- tions validation set when trained on 4th / 8th grade exam questions alone, and when adding SciQ.
Direct Answer Setting. We additionally present a baseline on the direct answer version of SciQ. We use the Bidirectional Attention Flow model (BiDAF; Seo et al. (2016)), which recently achieved state-of-the-art results on SQuAD (Ra- jpurkar et al., 2016). We trained BiDAF on the training portion of SciQ and evaluated on the test set. BiDAF achieves a 66.7% exact match and 75.7 F1 score, which is 1.3% and 1.6% below the modelâs performance on SQuAD.
# 4.2 Using SciQ to answer exam questions
Our last experiment with SciQ shows its useful- ness as training data for models that answer real science questions. We collected a corpus of 4th and 8th grade science exam questions and used the AS Reader and GA Reader to answer these ques- tions.8 Table 3 shows model performances when only using real science questions as training data, and when augmenting the training data with SciQ. By adding SciQ, performance for both the AS Reader and the GA Reader improves on both grade levels, in a few cases substantially. This contrasts with our earlier attempts using purely synthetic data, where we saw models overï¬t the synthetic data and an overall performance decrease. Our successful transfer of information from SciQ to real science exam questions shows that the ques- tion distribution is similar to that of real science questions.
# 5 Conclusion
We have presented a method for crowdsourcing the creation of multiple choice QA data, with
8There are approx. 3200 8th grade training questions and 1200 4th grade training questions. Some of the questions come from www.allenai.org/data, some are propri- etary.
a particular focus on science questions. Using this methodology, we have constructed a dataset of 13.7K science questions, called SciQ, which we release for future research. We have shown through baseline evaluations that this dataset is a useful research resource, both to investigate neu- ral model performance in medium-sized data set- tings, and to augment training data for answering real science exam questions.
There are multiple strands for possible future work. One direction is a systematic exploration of multitask settings to best exploit this new dataset. Possible extensions for the direction of generating answer distractors could lie in the adaptation of this idea in negative sampling, e.g. in KB popula- tion. Another direction is to further bootstrap the data we obtained to improve automatic document selection, question generation and distractor pre- diction to generate questions fully automatically.
# References
Manish Agarwal and Prashanth Mannem. 2011. Auto- matic gap-ï¬ll question generation from text books. In Proceedings of the 6th Workshop on Innovative Use of NLP for Building Educational Applications. Association for Computational Linguistics, Strouds- burg, PA, USA, IUNLPBEA â11, pages 56â64. http://dl.acm.org/citation.cfm?id=2043132.2043139.
Itziar Aldabe and Montse Maritxalar. 2010. Auto- matic Distractor Generation for Domain Speciï¬c Texts, Springer Berlin Heidelberg, Berlin, Heidel- berg, pages 27â38.
Jonathan Berant, Andrew Chou, Roy Frostig, and Semantic parsing on free- Percy Liang. 2013. In Proceedings base from question-answer pairs. of the 2013 Conference on Empirical Methods in Natural Language Processing, EMNLP 2013, 18- 21 October 2013, Grand Hyatt Seattle, Seattle, Washington, USA, A meeting of SIGDAT, a Spe- cial Interest Group of the ACL. pages 1533â1544. http://aclweb.org/anthology/D/D13/D13-1160.pdf.
Antoine Bordes, Nicolas Usunier, Sumit Chopra, and Large-scale simple ques- CoRR Jason Weston. 2015. tion answering with memory networks. abs/1506.02075. http://arxiv.org/abs/1506.02075.
Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large an- notated corpus for learning natural language infer- In Proceedings of the 2015 Conference on ence. Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguis- tics.
Leo Breiman. 2001. Random forests. Machine Learn- ing 45(1):5â32.
Peter Clark. 2015. Elementary school science and math tests as a driver for ai: Take the aristo the Twenty- challenge! Ninth AAAI Conference on Artiï¬cial Intelli- gence. AAAI Press, AAAIâ15, pages 4019â4021. http://dl.acm.org/citation.cfm?id=2888116.2888274.
Peter Clark, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Oyvind Tafjord, Peter Turney, and Combining retrieval, Daniel Khashabi. 2016. elemen- to answer statistics, the tary science questions. Thirtieth AAAI Conference on Artiï¬cial Intelli- gence. AAAI Press, AAAIâ16, pages 2580â2586. http://dl.acm.org/citation.cfm?id=3016100.3016262.
Peter Clark, Philip Harrison, and Niranjan Balasub- ramanian. 2013. A study of the knowledge base requirements for passing an elementary science In Proceedings of the 2013 Workshop on test. Automated Knowledge Base Construction. ACM, New York, NY, USA, AKBC â13, pages 37â42. https://doi.org/10.1145/2509558.2509565.
Rui Correia, Jorge Baptista, Nuno Mamede, Isabel Trancoso, and Maxine Eskenazi. 2010. Automatic In Pro- generation of cloze question distractors. ceedings of the Interspeech 2010 Satellite Workshop on Second Language Studies: Acquisition, Learn- ing, Education and Technology, Waseda University, Tokyo, Japan.
Bhuwan Dhingra, Hanxiao Liu, William W. Cohen, and Ruslan Salakhutdinov. 2016. Gated-attention read- ers for text comprehension. CoRR abs/1606.01549. http://arxiv.org/abs/1606.01549.
Michael Heilman and Noah A. Smith. 2010. Good question! statistical ranking for question generation. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguis- tics. Association for Computational Linguistics, Stroudsburg, PA, USA, HLT â10, pages 609â617. http://dl.acm.org/citation.cfm?id=1857999.1858085.
Karl Moritz Hermann, Tom´aËs KoËcisk´y, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Teaching Suleyman, and Phil Blunsom. 2015. In Advances machines to read and comprehend. in Neural Information Processing Systems (NIPS). http://arxiv.org/abs/1506.03340.
Daniel Hewlett, Alexandre Lacoste, Llion Jones, Illia Polosukhin, Andrew Fandrianto, Jay Han, Matthew Kelcey, and David Berthelot. 2016. Wikiread- ing: A novel large-scale language understand- ing task over wikipedia. CoRR abs/1608.03542. http://arxiv.org/abs/1608.03542.
Felix Hill, Antoine Bordes, Sumit Chopra, and The goldilocks prin- Jason Weston. 2015. ciple: Reading childrenâs books with explicit memory representations. CoRR abs/1511.02301. http://arxiv.org/abs/1511.02301.
Rudolf Kadlec, Martin Schmid, Ondrej Bajgar, and Jan Kleindienst. 2016. Text understanding with the at- tention sum reader network. CoRR abs/1603.01547. http://arxiv.org/abs/1603.01547.
Daniel Khashabi, Tushar Khot, Ashish Sabharwal, Peter Clark, Oren Etzioni, and Dan Roth. 2016. Question answering via integer programming over In Proceedings of semi-structured knowledge. the Twenty-Fifth International Joint Conference on Artiï¬cial Intelligence, IJCAI 2016, New York, NY, USA, 9-15 July 2016. pages 1145â1152. http://www.ijcai.org/Abstract/16/166.
Eric Gribkoff, Ashish Sabharwal, Peter Clark, and Oren Etzioni. 2015. Exploring markov logic networks In Proceedings of the for question answering. 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP 2015, Lisbon, Portugal, September 17-21, 2015. pages 685â694. http://aclweb.org/anthology/D/D15/D15-1080.pdf.
Yang Li and Peter Clark. 2015. Answering elementary science questions by constructing coherent scenes In EMNLP. pages using background knowledge. 2007â2012.
Mausam, Michael Schmitz, Robert Bart, Stephen Soderland, and Oren Etzioni. 2012. Open language In Proceed- learning for information extraction. ings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning. Asso- ciation for Computational Linguistics, Stroudsburg, PA, USA, EMNLP-CoNLL â12, pages 523â534. http://dl.acm.org/citation.cfm?id=2390948.2391009.
Ruslan Mitkov and Le An Ha. 2003. Computer- In aided generation of multiple-choice tests. Proceedings of the HLT-NAACL 03 Workshop on Building Educational Applications Using Natu- ral Language Processing - Volume 2. Associa- tion for Computational Linguistics, Stroudsburg, PA, USA, HLT-NAACL-EDUC â03, pages 17â22. https://doi.org/10.3115/1118894.1118897.
Ruslan Mitkov, Le An Ha, Andrea Varga, and Semantic similarity of dis- Luz Rello. 2009. tractors in multiple-choice tests: Extrinsic eval- the Workshop on uation. Geometrical Models of Natural Language Seman- tics. Association for Computational Linguistics, Stroudsburg, PA, USA, GEMS â09, pages 49â56. http://dl.acm.org/citation.cfm?id=1705415.1705422.
Generat- ing diagnostic multiple choice comprehension In Proceedings of the Seventh cloze questions. Workshop on Building Educational Applications Using NLP. Association for Computational Lin- guistics, Stroudsburg, PA, USA, pages 136â146. http://dl.acm.org/citation.cfm?id=2390384.2390401.
Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and
Li Deng. 2016. MS MARCO: A human gener- ated machine reading comprehension dataset. CoRR abs/1611.09268. http://arxiv.org/abs/1611.09268.
Takeshi Onishi, Hai Wang, Mohit Bansal, Kevin Gimpel, and David A. McAllester. 2016. Who did what: A large-scale person-centered cloze the 2016 Con- dataset. ference on Empirical Methods in Natural Lan- guage Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016. pages 2230â2235. http://aclweb.org/anthology/D/D16/D16-1241.pdf.
Andreas Papasalouros, Konstantinos Kanaris, and Kon- stantinos Kotis. 2008. Automatic generation of mul- In tiple choice questions from domain ontologies. Miguel Baptista Nunes and Maggie McPherson, ed- itors, e-Learning. IADIS, pages 427â434.
Denis Paperno, Germ´an Kruszewski, Angeliki Lazari- dou, Quan Ngoc Pham, Raffaella Bernardi, San- dro Pezzelle, Marco Baroni, Gemma Boleda, and Raquel Fern´andez. 2016. The lambada dataset: Word prediction requiring a broad discourse context. arXiv preprint arXiv:1606.06031 .
Jeffrey Pennington, Richard Socher, and Christo- pher D. Manning. 2014. Glove: Global vectors for word representation. In Empirical Methods in Nat- ural Language Processing (EMNLP). pages 1532â 1543. http://www.aclweb.org/anthology/D14-1162.
Juan Pino and Maxine Esknazi. 2009. Semi-automatic generation of cloze question distractors effect of stu- dentsâ l1. In SLaTE. ISCA, pages 65â68.
Juan Pino, Michael Heilman, and Maxine Eskenazi. 2008. A Selection Strategy to Improve Cloze Ques- tion Quality. In Proceedings of the Workshop on In- telligent Tutoring Systems for Ill-Deï¬ned Domains. 9th International Conference on Intelligent Tutoring Systems..
P. Rajpurkar, J. Zhang, K. Lopyrev, and P. Liang. 2016. Squad: 100,000+ questions for machine comprehen- sion of text. In Empirical Methods in Natural Lan- guage Processing (EMNLP).
Mrinmaya Sachan, Avinava Dubey, and Eric P. Science question answering using CoRR abs/1602.04375. Xing. 2016. instructional materials. http://arxiv.org/abs/1602.04375.
Keisuke Sakaguchi, Yuki Arase, and Mamoru Ko- machi. 2013. Discriminative approach to ï¬ll- in-the-blank quiz generation for language learn- the 51st Annual Meet- ers. ing of the Association for Computational Linguis- tics, ACL 2013, 4-9 August 2013, Soï¬a, Bul- garia, Volume 2: Short Papers. pages 238â242. http://aclweb.org/anthology/P/P13/P13-2043.pdf.
Carissa Schoenick, Peter Clark, Oyvind Tafjord, Peter Turney, and Oren Etzioni. 2016. Moving beyond the turing test with the allen ai science challenge. arXiv preprint arXiv:1604.04315 .
Min Joon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2016. Bidirectional at- tention ï¬ow for machine comprehension. CoRR abs/1611.01603. http://arxiv.org/abs/1611.01603.
Alessandro Sordoni, Phillip Bachman, and Yoshua Iterative alternating neural atten- Bengio. 2016. tion for machine reading. CoRR abs/1606.02245. http://arxiv.org/abs/1606.02245.
and Seiichi Yamamoto. 2005. Measuring non-native speak- ersâ proï¬ciency of english by using a test with automatically-generated ques- In Proceedings of the Second Workshop on tions. Building Educational Applications Using NLP. Association for Computational Linguistics, Strouds- burg, PA, USA, EdAppsNLP 05, pages 61â68. http://dl.acm.org/citation.cfm?id=1609829.1609839.
Yi Yang, Scott Wen-tau Yih, and Chris Meek. 2015. Wikiqa: A challenge dataset for open-domain question answering. ACL Association for Compu- tational Linguistics. https://www.microsoft.com/en- us/research/publication/wikiqa-a-challenge-dataset- for-open-domain-question-answering/.
Au- challenging distractors tomatic generation of In inference using context-sensitive Proceedings of the Ninth Workshop on In- novative Use of NLP for Building Educa- tional Applications, BEA@ACL 2014, June 26, 2014, Baltimore, Maryland, USA. pages 143â http://aclweb.org/anthology/W/W14/W14- 148. 1817.pdf.
# A List of Study Books
The following is a list of the books we used as data source:
⢠OpenStax, Anatomy & Physiology. Open- Stax. 25 April 20139
⢠OpenStax, Biology. OpenStax. May 20, 201310
⢠OpenStax, Chemistry. OpenStax. 11 March 201511
⢠OpenStax, College Physics. OpenStax. 21 June 201212
⢠OpenStax, Concepts of Biology. OpenStax. 25 April 201313
by Michael 2.0 Klymkowsky, University of Colorado & Melanie Cooper, Michigan State Univer- sity14
⢠Earth Systems, An Earth Science Course on www.curriki.org15
⢠General Chemistry, Principles, Patterns, and Applications by Bruce Averill, Strategic En- ergy Security Solutions and Patricia El- dredge, R.H. Hand, LLC; Saylor Founda- tion16
⢠General Biology; Paul Doerder, Cleveland State University & Ralph Gibson, Cleveland State University 17
9Download for free at http://cnx.org/content/ col11496/latest/
10Download for free at http://cnx.org/content/ col11448/latest/
11Download for free at http://cnx.org/content/ col11760/latest/
12Download for free at http://cnx.org/content/ col11406/latest
13Download for free at http://cnx.org/content/ col11487/latest
14https://open.umn.edu/opentextbooks/ BookDetail.aspx?bookId=350
# 15http://www.curriki. org/xwiki/bin/view/Group_ CLRN-OpenSourceEarthScienceCourse/ 16https://www.saylor.org/site/
textbooks/General%20Chemistry% 20Principles,%20Patterns,%20and% 20Applications.pdf
# 17https://upload.wikimedia.org/
wikipedia/commons/4/40/GeneralBiology. pdf
⢠Introductory Chemistry by David W. Ball, Cleveland State University. Saylor Founda- tion 18
⢠The Basics of General, Organic, and Biologi- cal Chemistry by David Ball, Cleveland State University & John Hill, University of Wis- consin & Rhonda Scott, Southern Adventist University. Saylor Foundation19
4 Elementary-Level Science Test, by Joyce Thornton Barry and Kathleen Cahill 20
⢠Campbell Biology: Concepts & Connections by Jane B. Reece, Martha R. Taylor, Eric J. Simon, Jean L. Dickey21
⢠CK-12 Peoples Physics Book Basic 22
⢠CK-12 Biology Advanced Concepts 23
⢠CK-12 Biology Concepts 24
⢠CK-12 Biology 25
⢠CK-12 Chemistry - Basic 26
⢠CK-12 Chemistry Concepts â Intermediate 27
⢠CK-12 Earth Science Concepts For Middle School28
⢠CK-12 Earth Science Concepts For High School29
# 18https://www.saylor.org/site/
# textbooks/Introductory%20Chemistry.pdf
# 19http://web.archive.org/web/ 20131024125808/http://www.saylor. org/site/textbooks/The%20Basics%20of% 20General,%20Organic%20and%20Biological% 20Chemistry.pdf
20We do not include documents from this resource in the dataset.
21We do not include documents from this resource in the dataset.
22http://www.ck12.org/book/ Peoples-Physics-Book-Basic/ 23http://www.ck12.org/book/ CK-12-Biology-Advanced-Concepts/ 24http://www.ck12.org/book/ CK-12-Biology-Concepts/ 25http://www.ck12.org/book/ CK-12-Biology/ 26http://www.ck12.org/book/ CK-12-Chemistry-Basic/ 27http://www.ck12.org/book/ CK-12-Chemistry-Concepts-Intermediate/ 28http://www.ck12.org/book/ CK-12-Earth-Science-Concepts-For-Middle-School/ 29http://www.ck12.org/book/ CK-12-Earth-Science-Concepts-For-High-School/
⢠CK-12 Earth Science For Middle School 30
⢠CK-12 Life Science Concepts For Middle School 31
⢠CK-12 Life Science For Middle School 32
⢠CK-12 Physical Science Concepts For Mid- dle School33
⢠CK-12 Physical Science For Middle School 34
⢠CK-12 Physics Concepts - Intermediate 35
⢠CK-12 Peopleâs Physics Concepts 36
through correspondence with the authors of On- ishi et al. (2016)) and use the hyperparameters re- ported in the original paper (Kadlec et al., 2016) for the rest. For the GA Reader, we use three gated-attention layers with the multiplicative gat- ing mechanism. We do not use the character-level embedding features or the question-evidence com- mon word features, but we do follow their work by using pretrained 100-dimension GloVe vectors to initialize a ï¬xed word embedding layer. Between each gated attention layer, we apply dropout with a rate of 0.3. The other hyperparameters are the same as their original work (Dhingra et al., 2016). Direct Answer Reading Comprehension. We implemented the Bidirectional Attention Flow model exactly as described in Seo et al. (2016) and adopted the hyperparameters used in the paper.
CK-12 books were obtained under the Creative Commons Attribution-Non-Commercial 3.0 Un- ported (CC BY-NC 3.0) License 37.
# B Training and Implementation Details
Multiple Choice Reading Comprehension. Dur- ing training of the AS Reader and GA Reader, we monitored model performance after each epoch and stopped training when the error on the valida- tion set had increased (early stopping, with a pa- tience of one). We set a hard limit of ten epochs, but most models reached their peak validation ac- curacy after the ï¬rst or second epoch. Test set evaluation, when applicable, used model param- eters at the epoch of their peak validation accu- racy. We implemented the models in Keras, and ran them with the Theano backend on a Tesla K80 GPU.
The hyperparameters for each of the models were adopted from previous work. For the AS Reader, we use an embedding dimension of 256 and GRU hidden layer dimension of 384 (obtained
30http://www.ck12.org/book/ CK-12-Earth-Science-For-Middle-School/ 31http://www.ck12.org/book/ CK-12-Life-Science-Concepts-For-Middle-School/ 32http://www.ck12.org/book/ CK-12-Life-Science-For-Middle-School/ 33http://www.ck12.org/book/ CK-12-Physical-Science-Concepts-For-Middle-School/ 34http://www.ck12.org/book/ CK-12-Physical-Science-For-Middle-School/ 35http://www.ck12.org/book/ CK-12-Physics-Concepts-Intermediate/ 36http://www.ck12.org/book/ Peoples-Physics-Concepts/ 37http://creativecommons.org/licenses/ by-nc/3.0/ | {
"id": "1606.06031"
} |
1707.05589 | On the State of the Art of Evaluation in Neural Language Models | Ongoing innovations in recurrent neural network architectures have provided a
steady influx of apparently state-of-the-art results on language modelling
benchmarks. However, these have been evaluated using differing code bases and
limited computational resources, which represent uncontrolled sources of
experimental variation. We reevaluate several popular architectures and
regularisation methods with large-scale automatic black-box hyperparameter
tuning and arrive at the somewhat surprising conclusion that standard LSTM
architectures, when properly regularised, outperform more recent models. We
establish a new state of the art on the Penn Treebank and Wikitext-2 corpora,
as well as strong baselines on the Hutter Prize dataset. | http://arxiv.org/pdf/1707.05589 | Gábor Melis, Chris Dyer, Phil Blunsom | cs.CL | null | null | cs.CL | 20170718 | 20171120 | 7 1 0 2
Under review as a conference paper at ICLR 2018
# ON THE STATE OF THE ART OF EVALUATION IN NEURAL LANGUAGE MODELS
# Gábor Melis†, Chris Dyer†, Phil Blunsom†‡ {melisgl,cdyer,pblunsom}@google.com †DeepMind ‡University of Oxford
# ABSTRACT
Ongoing innovations in recurrent neural network architectures have provided a steady inï¬ux of apparently state-of-the-art results on language modelling bench- marks. However, these have been evaluated using differing codebases and limited computational resources, which represent uncontrolled sources of experimental variation. We reevaluate several popular architectures and regularisation meth- ods with large-scale automatic black-box hyperparameter tuning and arrive at the somewhat surprising conclusion that standard LSTM architectures, when properly regularised, outperform more recent models. We establish a new state of the art on the Penn Treebank and Wikitext-2 corpora, as well as strong baselines on the Hutter Prize dataset.
# INTRODUCTION
The scientiï¬c process by which the deep learning research community operates is guided by em- pirical studies that evaluate the relative quality of models. Complicating matters, the measured performance of a model depends not only on its architecture (and data), but it can strongly depend on hyperparameter values that affect learning, regularisation, and capacity. This hyperparameter dependence is an often inadequately controlled source of variation in experiments, which creates a risk that empirically unsound claims will be reported.
In this paper, we use a black-box hyperparameter optimisation technique to control for hyperpa- rameter effects while comparing the relative performance of language modelling architectures based on LSTMs, Recurrent Highway Networks (Zilly et al., 2016) and NAS (Zoph & Le, 2016). We specify ï¬exible, parameterised model families with the ability to adjust embedding and recurrent cell sizes for a given parameter budget and with ï¬ne grain control over regularisation and learning hyperparameters.
Once hyperparameters have been properly controlled for, we ï¬nd that LSTMs outperform the more recent models, contra the published claims. Our result is therefore a demonstration that replication failures can happen due to poorly controlled hyperparameter variation, and this paper joins other recent papers in warning of the under-acknowledged existence of replication failure in deep learn- ing (Henderson et al., 2017; Reimers & Gurevych, 2017). However, we do show that careful controls are possible, albeit at considerable computational cost.
Several remarks can be made in light of these results. First, as (conditional) language models serve as the central building block of many tasks, including machine translation, there is little reason to expect that the problem of unreliable evaluation is unique to the tasks discussed here. However, in machine translation, carefully controlling for hyperparameter effects would be substantially more expensive because standard datasets are much larger. Second, the research community should strive for more consensus about appropriate experimental methodology that balances costs of careful ex- perimentation with the risks associated with false claims. Finally, more attention should be paid to hyperparameter sensitivity. Models that introduce many new hyperparameters or which perform well only in narrow ranges of hyperparameter settings should be identiï¬ed as such as part of standard publication practice.
(a) two-layer LSTM/NAS with skip connections
(b) RHN with two processing steps per input
Figure 1: Recurrent networks with optional down-projection, per-step and per-sequence dropout (dashed and solid lines).
# 2 MODELS
Our focus is on three recurrent architectures:
⢠The Long Short-Term Memory (Hochreiter & Schmidhuber, 1997) serves as a well known and frequently used baseline.
⢠The recently proposed Recurrent Highway Network (Zilly et al., 2016) is chosen because it has demonstrated state-of-the-art performance on a number of datasets.
⢠Finally, we also include NAS (Zoph & Le, 2016), because of its impressive performance and because its architecture was the result of an automated reinforcement learning based optimisation process.
Our aim is strictly to do better model comparisons for these architectures and we thus refrain from including techniques that are known to push perplexities even lower, but which are believed to be largely orthogonal to the question of the relative merits of these recurrent cells. In parallel work with a remarkable overlap with ours, Merity et al. (2017) demonstrate the utility of adding a Neural Cache (Grave et al., 2016). Building on their work, Krause et al. (2017) show that Dynamic Evaluation (Graves, 2013) contributes similarly to the ï¬nal perplexity.
As pictured in Fig. 1a, our models with LSTM or NAS cells have all the standard components: an input embedding lookup table, recurrent cells stacked as layers with additive skip connections combining outputs of all layers to ease optimisation. There is an optional down-projection whose presence is governed by a hyperparameter from this combined output to a smaller space which reduces the number of output embedding parameters. Unless otherwise noted, input and output embeddings are shared, see (Inan et al., 2016) and (Press & Wolf, 2016).
Dropout is applied to feedforward connections denoted by dashed arrows in the ï¬gure. From the bottom up: to embedded inputs (input dropout), to connections between layers (intra-layer dropout), to the combined and the down-projected outputs (output dropout). All these dropouts have random masks drawn independently per time step, in contrast to the dropout on recurrent states where the same mask is used for all time steps in the sequence.
RHN based models are typically conceived of as a single horizontal âhighwayâ to emphasise how the recurrent state is processed through time. In Fig. 1b, we choose to draw their schema in a way that makes the differences from LSTMs immediately apparent. In a nutshell, the RHN state is passed from the topmost layer to the lowest layer of the next time step. In contrast, each LSTM layer has its own recurrent connection and state.
The same dropout variants are applied to all three model types, with the exception of intra-layer dropout which does not apply to RHNs since only the recurrent state is passed between the layers.
For the recurrent states, all architectures use either variational dropout (Gal & Ghahramani, 2016, state dropout) [1] or recurrent dropout (Semeniuta et al., 2016), unless explicitly noted otherwise.
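The distinction between per-step dropout on feedforward connections and the per-sequence masks used on recurrent states can be sketched as follows. This is a framework-free illustration with assumed shapes and an assumed `step_fn` recurrent cell, not the models' actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout_mask(shape, rate):
    # Inverted dropout: scale kept units so no rescaling is needed at test time.
    return (rng.random(shape) > rate) / (1.0 - rate)

def run_sequence(inputs, state, step_fn, input_rate=0.5, state_rate=0.25):
    """inputs: (T, batch, dim); state: (batch, hidden)."""
    T, batch, dim = inputs.shape
    state_mask = dropout_mask(state.shape, state_rate)  # one mask reused for every time step
    outputs = []
    for t in range(T):
        x = inputs[t] * dropout_mask((batch, dim), input_rate)  # fresh mask at each step
        state = step_fn(x, state * state_mask)
        outputs.append(state)
    return np.stack(outputs), state
```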
3 EXPERIMENTAL SETUP
3.1 DATASETS
We compare models on three datasets. The smallest of them is the Penn Treebank corpus by Marcus et al. (1993) with preprocessing from Mikolov et al. (2010). We also include another word level corpus: Wikitext-2 by Merity et al. (2016). It is about twice the size of Penn Treebank with a larger vocabulary and much lighter preprocessing. The third corpus is Enwik8 from the Hutter Prize dataset (Hutter, 2012). Following common practice, we use the ï¬rst 90 million characters for training, and the remaining 10 million evenly split between validation and test.
# 4 TRAINING DETAILS
When training word level models we follow common practice and use a batch size of 64, truncated backpropagation with 35 time steps, and we feed the ï¬nal states from the previous batch as the initial state of the subsequent one. At the beginning of training and test time, the model starts with a zero state. To bias the model towards being able to easily start from such a state at test time, during training, with probability 0.01 a constant zero state is provided as the initial state.
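The state-carrying scheme above amounts to the following loop, sketched here with an assumed model interface (`zero_state`, `train_step`) and batch iterator that are not the paper's actual code.

```python
import random

def train_epoch(model, corpus_batches, batch_size=64, bptt_steps=35, reset_prob=0.01):
    loss = None
    state = model.zero_state(batch_size)
    for batch in corpus_batches(batch_size, bptt_steps):
        # With probability 0.01 restart from the zero state, so the model also
        # learns to cope with the state it is given at test time.
        if random.random() < reset_prob:
            state = model.zero_state(batch_size)
        loss, state = model.train_step(batch, initial_state=state)
    return loss
```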
Optimisation is performed by Adam (Kingma & Ba, 2014) with β1 = 0 but otherwise default parameters (β2 = 0.999, ε = 10^-9). Setting β1 to zero turns off the exponential moving average for the estimates of the means of the gradients and brings Adam very close to RMSProp without momentum, but due to Adam's bias correction, larger learning rates can be used.
Batch size is set to 64. The learning rate is multiplied by 0.1 whenever validation performance fails to improve during 30 consecutive checkpoints. These checkpoints are performed after every 100 and 200 optimisation steps for Penn Treebank and Wikitext-2, respectively.
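This plateau rule can be written compactly; the sketch below tracks checkpointed validation perplexities and is an illustration with assumed variable names, not the authors' training code.

```python
def maybe_decay(learning_rate, val_history, patience=30, factor=0.1):
    # Decay when none of the last `patience` checkpoints improved on the best
    # validation perplexity seen before that window.
    if len(val_history) <= patience:
        return learning_rate
    best_before_window = min(val_history[:-patience])
    if min(val_history[-patience:]) >= best_before_window:
        learning_rate *= factor
    return learning_rate
```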
For character level models (i.e. Enwik8), the differences are: truncated backpropagation is performed with 50 time steps. Adam's parameters are β2 = 0.99, ε = 10^-5. Batch size is 128. Checkpoints are only every 400 optimisation steps and embeddings are not shared.
# 5 EVALUATION
For evaluation, the checkpoint with the best validation perplexity found by the tuner is loaded and the model is applied to the test set with a batch size of 1. For the word based datasets, using the training batch size makes results worse by 0.3 PPL while Enwik8 is practically unaffected due to its evaluation and training sets being much larger. Preliminary experiments indicate that MC averaging would bring a small improvement of about 0.4 in perplexity and 0.005 in bits per character, similar to the results of Gal & Ghahramani (2016), while being a 1000 times more expensive which is prohibitive on larger datasets. Therefore, throughout we use the mean-ï¬eld approximation for dropout at test time.
5.1 HYPERPARAMETER TUNING
Hyperparameters are optimised by Google Vizier (Golovin et al., 2017), a black-box hyperparameter tuner based on batched GP bandits using the expected improvement acquisition function (Desautels et al., 2014). Tuners of this nature are generally more efï¬cient than grid search when the number of hyperparameters is small. To keep the problem tractable, we restrict the set of hyperparameters to learning rate, input embedding ratio, input dropout, state dropout, output dropout, weight decay. For deep LSTMs, there is an extra hyperparameter to tune: intra-layer dropout. Even with this small set, thousands of evaluations are required to reach convergence.
[1] Of the two parameterisations, we used the one in which there is further sharing of masks between gates rather than independent noise for the gates.
Model | Size | Depth | Valid | Test
Medium LSTM, Zaremba et al. (2014) | 10M | 2 | 86.2 | 82.7
Large LSTM, Zaremba et al. (2014) | 24M | 2 | 82.2 | 78.4
VD LSTM, Press & Wolf (2016) | 51M | 2 | 75.8 | 73.2
VD LSTM, Inan et al. (2016) | 9M | 2 | 77.1 | 73.9
VD LSTM, Inan et al. (2016) | 28M | 2 | 72.5 | 69.0
VD RHN, Zilly et al. (2016) | 24M | 10 | 67.9 | 65.4
NAS, Zoph & Le (2016) | 25M | - | - | 64.0
NAS, Zoph & Le (2016) | 54M | - | - | 62.4
AWD-LSTM, Merity et al. (2017) † | 24M | 3 | 60.0 | 57.3
LSTM | 10M | 1 | 61.8 | 59.6
LSTM | 10M | 2 | 63.0 | 60.8
LSTM | 10M | 4 | 62.4 | 60.1
RHN | 10M | 5 | 66.0 | 63.5
NAS | 10M | 1 | 65.6 | 62.7
LSTM | 24M | 1 | 61.4 | 59.5
LSTM | 24M | 2 | 62.1 | 59.6
LSTM | 24M | 4 | 60.9 | 58.3
RHN | 24M | 5 | 64.8 | 62.2
NAS | 24M | 1 | 62.1 | 59.7
Table 1: Validation and test set perplexities on Penn Treebank for models with different numbers of parameters and depths. All results except those from Zaremba are with shared input and output embeddings. VD stands for Variational Dropout from Gal & Ghahramani (2016). †: parallel work.
Parameter budget. Motivated by recent results from Collins et al. (2016), we compare models on the basis of the total number of trainable parameters as opposed to the number of hidden units. The tuner is given control over the presence and size of the down-projection, and thus over the tradeoff between the number of embedding vs. recurrent cell parameters. Consequently, the cellsâ hidden size and the embedding size is determined by the actual parameter budget, depth and the input embedding ratio hyperparameter.
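The sketch below shows one way such a budget could be resolved into concrete sizes for a plain stacked LSTM. It is a simplification (no down-projection, no skip connections) with assumed parameter-count formulas, included only to illustrate the idea of deriving the hidden size from the budget.

```python
def lstm_param_count(vocab_size, embed_size, hidden_size, depth, tied_embeddings=True):
    # Rough count: 4 gate matrices plus biases per layer, tied input/output
    # embedding, and a final projection from the hidden to the embedding space.
    params = vocab_size * embed_size
    in_size = embed_size
    for _ in range(depth):
        params += 4 * (hidden_size * (in_size + hidden_size) + hidden_size)
        in_size = hidden_size
    params += hidden_size * embed_size
    if not tied_embeddings:
        params += vocab_size * embed_size
    return params

def largest_hidden_size(budget, vocab_size, embed_size, depth):
    # Largest hidden size whose parameter count still fits in the budget.
    h = 1
    while lstm_param_count(vocab_size, embed_size, h + 1, depth) <= budget:
        h += 1
    return h

# Example: a 10M-parameter budget with a 10k vocabulary and 400-dim embeddings.
print(largest_hidden_size(10_000_000, 10_000, 400, depth=2))
```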
For Enwik8 there are relatively few parameters in the embeddings since the vocabulary size is only 205. Here we choose not to share embeddings and to omit the down-projection unconditionally.
# 6 RESULTS
6.1 PENN TREEBANK
We tested LSTMs of various depths and an RHN of depth 5 with parameter budgets of 10 and 24 million matching the sizes of the Medium and Large LSTMs by (Zaremba et al., 2014). The results are summarised in Table 1.
Notably, in our experiments even the RHN with only 10M parameters has better perplexity than the 24M one in the original publication. Our 24M version improves on that further. However, a shallow LSTM-based model with only 10M parameters enjoys a very comfortable margin over that, with deeper models following near the estimated noise range. At 24M, all depths obtain very similar results, reaching 58.3 at depth 4. Unsurprisingly, NAS whose architecture was chosen based on its performance on this dataset does almost equally well, even better than in Zoph & Le (2016).
# 6.2 WIKITEXT-2
Wikitext-2 is not much larger than Penn Treebank, so it is not surprising that even models tuned for Penn Treebank perform reasonably on this dataset, and this is in fact how results in previous works were produced. For a fairer comparison, we also tune hyperparameters on the same dataset. In Table 2, we report numbers for both approaches. All our results are well below the previous state of the art for models without dynamic evaluation or caching. That said, our best result, 65.9 compares
Model | Size | Depth | Valid | Test
VD LSTM, Merity et al. (2016) | 20M | 2 | 101.7 | 96.3
VD+Zoneout LSTM, Merity et al. (2016) | 20M | 2 | 108.7 | 100.9
VD LSTM, Inan et al. (2016) | 22M | 2 | 91.5 | 87.7
AWD-LSTM, Merity et al. (2017) † | 33M | 3 | 68.6 | 65.8
LSTM (tuned for PTB) | 10M | 1 | 88.4 | 83.2
LSTM | 10M | 1 | 72.7 | 69.1
LSTM | 10M | 2 | 73.8 | 70.7
LSTM | 10M | 4 | 78.3 | 74.3
RHN | 10M | 5 | 83.5 | 79.5
NAS | 10M | 1 | 79.6 | 75.9
LSTM (tuned for PTB) | 24M | 1 | 79.8 | 76.3
LSTM | 24M | 1 | 69.3 | 65.9
LSTM | 24M | 2 | 69.1 | 65.9
LSTM | 24M | 4 | 70.5 | 67.6
RHN | 24M | 5 | 78.1 | 75.6
NAS | 24M | 1 | 73.0 | 69.8
Table 2: Validation and test set perplexities on Wikitext-2. All results are with shared input and output embeddings. †: parallel work.
favourably even to the Neural Cache (Grave et al., 2016) whose innovations are fairly orthogonal to the base model.
Shallow LSTMs do especially well here. Deeper models have gradually degrading perplexity, with RHNs lagging all of them by a signiï¬cant margin. NAS is not quite up there with the LSTM suggesting its architecture might have overï¬tted to Penn Treebank, but data for deeper variants would be necessary to draw this conclusion.
6.3 ENWIK8
In contrast to the previous datasets, our numbers on this task (reported in BPC, following convention) are slightly off the state of the art. This is most likely due to optimisation being limited to 14 epochs which is about a tenth of what the model of Zilly et al. (2016) was trained for. Nevertheless, we match their smaller RHN with our models which are very close to each other. NAS lags the other models by a surprising margin at this task.
# 7 ANALYSIS
On two of the three datasets, we improved previous results substantially by careful model speciï¬- cation and hyperparameter optimisation, but the improvement for RHNs is much smaller compared to that for LSTMs. While it cannot be ruled out that our particular setup somehow favours LSTMs, we believe it is more likely that this effect arises due to the original RHN experimental condition having been tuned more extensively (this is nearly unavoidable during model development).
Naturally, NAS beneï¬tted only to a limited degree from our tuning, since the numbers of Zoph & Le (2016) were already produced by employing similar regularisation methods and a grid search. The small edge can be attributed to the suboptimality of grid search (see Section 7.3).
In summary, the three recurrent cell architectures are closely matched on all three datasets, with minuscule differences on Enwik8 where regularisation matters the least. These results support the claims of Collins et al. (2016), that capacities of various cells are very similar and their apparent differences result from trainability and regularisation. While comparing three similar architectures cannot prove this point, the inclusion of NAS certainly gives it more credence. This way we have two of the best human designed and one machine optimised cell that was the top performer among thousands of candidates.
Model | Size | Depth | Valid | Test
Stacked LSTM, Graves (2013) | 21M | 7 | - | 1.67
Grid LSTM, Kalchbrenner et al. (2015) | 17M | 6 | - | 1.47
MI-LSTM, Wu et al. (2016) | 17M | 1 | - | 1.44
LN HM-LSTM, Chung et al. (2016) | 35M | 3 | - | 1.32
ByteNet, Kalchbrenner et al. (2016) | - | 25 | - | 1.31
VD RHN, Zilly et al. (2016) | 23M | 5 | - | 1.31
VD RHN, Zilly et al. (2016) | 21M | 10 | - | 1.30
VD RHN, Zilly et al. (2016) | 46M | 10 | - | 1.27
LSTM | 27M | 4 | 1.29 | 1.31
RHN | 27M | 5 | 1.30 | 1.31
NAS | 27M | 4 | 1.38 | 1.40
LSTM | 46M | 4 | 1.28 | 1.30
RHN | 46M | 5 | 1.29 | 1.30
NAS | 46M | 4 | 1.32 | 1.33
Table 3: Validation and test set BPCs on Enwik8 from the Hutter Prize dataset.
7.1 THE EFFECT OF INDIVIDUAL FEATURES
Down-projection was found to be very beneï¬cial by the tuner for some depth/budget combinations. On Penn Treebank, it improved results by about 2â5 perplexity points at depths 1 and 2 at 10M, and depth 1 at 24M, possibly by equipping the recurrent cells with more capacity. The very same models beneï¬ted from down-projection on Wikitext-2, but even more so with gaps of about 10â18 points which is readily explained by the larger vocabulary size.
We further measured the contribution of other features of the models in a series of experiments. See Table 4. To limit the amount of resources used, in these experiments only individual features were evaluated (not their combinations) on Penn Treebank at the best depth for each architecture (LSTM or RHN) and parameter budget (10M or 24M) as determined above.
First, we untied input and output embeddings which made perplexities worse by about 6 points across the board which is consistent with the results of Inan et al. (2016).
Second, without variational dropout the RHN models suffer quite a bit since there remains no dropout at all in between the layers. The deep LSTM also sees a similar loss of perplexity as having intra-layer dropout does not in itself provide enough regularisation.
Third, we were also interested in how recurrent dropout (Semeniuta et al., 2016) would perform in lieu of variational dropout. Dropout masks were shared between time steps in both methods, and our results indicate no consistent advantage to either of them.
7.2 MODEL SELECTION
With a large number of hyperparameter combinations evaluated, the question of how much the tuner overï¬ts arises. There are multiple sources of noise in play,
(a) non-deterministic ordering of floating-point operations in optimised linear algebra routines, (b) different initialisation seeds, (c) the validation and test sets being finite samples from an infinite population.
To assess the severity of these issues, we conducted the following experiment: models with the best hyperparameter settings for Penn Treebank and Wikitext-2 were retrained from scratch with various initialisation seeds and the validation and test scores were recorded. If during tuning, a model just got a lucky run due to a combination of (a) and (b), then retraining with the same hyperparameters but with different seeds would fail to reproduce the same good results.
There are a few notable things about the results. First, in our environment (Tensorï¬ow with a single GPU) even with the same seed as the one used by the tuner, the effect of (a) is almost as large as that of (a) and (b) combined. Second, the variance induced by (a) and (b) together is roughly equivalent to an absolute difference of 0.4 in perplexity on Penn Treebank and 0.5 on Wikitext-2.
Model | 10M Depth | 10M Valid | 10M Test | 24M Depth | 24M Valid | 24M Test
LSTM | 1 | 61.8 | 59.6 | 4 | 60.9 | 58.3
- Shared Embeddings | 1 | 67.6 | 65.2 | 4 | 65.6 | 63.2
- Variational Dropout | 1 | 62.9 | 61.2 | 4 | 66.3 | 64.5
+ Recurrent Dropout | 1 | 62.8 | 60.6 | 4 | 65.2 | 62.9
+ Untied gates | 1 | 61.4 | 58.9 | 4 | 64.0 | 61.3
+ Tied gates | 1 | 61.7 | 59.6 | 4 | 60.4 | 58.0
RHN | 5 | 66.0 | 63.5 | 5 | 64.8 | 62.2
- Shared Embeddings | 5 | 72.3 | 69.5 | 5 | 67.4 | 64.6
- Variational Dropout | 5 | 74.4 | 71.7 | 5 | 74.7 | 71.7
+ Recurrent Dropout | 5 | 65.5 | 63.0 | 5 | 63.4 | 61.0
Table 4: Validation and test set perplexities on Penn Treebank for variants of our best LSTM and RHN models of two sizes.
Third, the validation perplexities of the best checkpoints are about one standard deviation lower than the sample mean of the reruns, so the tuner could fit the noise only to a limited degree.

Because we treat our corpora as a single sequence, test set contents are not i.i.d., and we cannot apply techniques such as the bootstrap to assess (c). Instead, we looked at the gap between validation and test scores as a proxy and observed that it is very stable, contributing variance of 0.12-0.3 perplexity to the final results on Penn Treebank and Wikitext-2, respectively.

We have not explicitly dealt with the unknown uncertainty remaining in the Gaussian Process that may affect model comparisons, apart from running it until apparent convergence. All in all, our findings suggest that a gap in perplexity of 1.0 is a statistically robust difference between models trained in this way on these datasets. The distribution of results was approximately normal with roughly the same variance for all models, so we still report numbers in a tabular form instead of plotting the distribution of results, for example in a violin plot (Hintze & Nelson, 1998).
7.3 SENSITIVITY
To further verify that the best hyperparameter setting found by the tuner is not a fluke, we plotted the validation loss against the hyperparameter settings. Fig. 2 shows one such typical plot, for a 4-layer LSTM. We manually restricted the ranges around the best hyperparameter values to around 15-25% of the entire tuneable range, and observed that the vast majority of settings in that neighbourhood produced perplexities within 3.0 of the best value. Widening the ranges further leads to quickly deteriorating results.
Satisfied that the hyperparameter surface is well behaved, we considered whether the same results could have possibly been achieved with a simple grid search. Omitting input embedding ratio because the tuner found having a down-projection suboptimal almost unconditionally for this model, there remain six hyperparameters to tune. If there were 5 possible values on the grid for each hyperparameter (with one value in every 20% interval), then we would need 6^5, nearly 8000 trials to get within 3.0 of the best perplexity achieved by the tuner in about 1500 trials.
7.4 TYING LSTM GATES
Normally, LSTMs have two independent gates controlling the retention of cell state and the admission of updates (Eq. 1). A minor variant which reduces the number of parameters at the loss of some flexibility is to tie the input and forget gates as in Eq. 2. A possible middle ground that keeps the number of parameters the same but ensures that values of the cell state c remain in [-1, 1] is to cap
[Plot: validation objective value against input_dropout, intra_layer_dropout, learning_rate, output_dropout, state_dropout and weight_decay; see caption below.]
Figure 2: Average per-word negative log-likelihoods of hyperparameter combinations in the neighbourhood of the best solution for a 4-layer LSTM with 24M weights on the Penn Treebank dataset.
the input gate as in Eq. 3.
c_t = f_t ⊙ c_{t-1} + i_t ⊙ j_t                     (1)
c_t = f_t ⊙ c_{t-1} + (1 - f_t) ⊙ j_t               (2)
c_t = f_t ⊙ c_{t-1} + min(1 - f_t, i_t) ⊙ j_t       (3)
Where the equations are based on the formulation of Sak et al. (2014). All LSTM models in this paper use the third variant, except those titled "Untied gates" and "Tied gates" in Table 4 corresponding to Eq. 1 and 2, respectively.
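The three cell-state updates can be written as a small helper, given post-sigmoid gate activations i and f and a tanh candidate j; this is an illustrative sketch rather than the training code used in the experiments.

```python
import torch

def cell_state_update(c_prev, i, f, j, variant="capped"):
    """Eq. 1 ('untied'), Eq. 2 ('tied') and Eq. 3 ('capped'); with |j| <= 1
    the capped variant keeps the cell state in [-1, 1]."""
    if variant == "untied":
        return f * c_prev + i * j
    if variant == "tied":
        return f * c_prev + (1.0 - f) * j
    if variant == "capped":
        return f * c_prev + torch.minimum(1.0 - f, i) * j
    raise ValueError(f"unknown variant: {variant}")
```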
The results show that LSTMs are insensitive to these changes and the results vary only slightly even though more hidden units are allocated to the tied version to fill its parameter budget. Finally, the numbers suggest that deep LSTMs benefit from bounded cell states.
# 8 CONCLUSION
During the transitional period when deep neural language models began to supplant their shallower predecessors, effect sizes tended to be large, and robust conclusions about the value of the modelling innovations could be made, even in the presence of poorly controlled "hyperparameter noise." However, now that the neural revolution is in full swing, researchers must often compare competing deep architectures. In this regime, effect sizes tend to be much smaller, and more methodological care is required to produce reliable results. Furthermore, with so much work carried out in parallel by a growing research community, the costs of faulty conclusions are increased.

Although we can draw attention to this problem, this paper does not offer a practical methodological solution beyond establishing reliable baselines that can be the benchmarks for subsequent work. Still, we demonstrate how, with a huge amount of computation, noise levels of various origins can be carefully estimated and models meaningfully compared. This apparent tradeoff between the amount of computation and the reliability of results seems to lie at the heart of the matter. Solutions to the methodological challenges must therefore make model evaluation cheaper by, for instance, reducing the number of hyperparameters and the sensitivity of models to them, employing better hyperparameter optimisation strategies, or by defining "leagues" with predefined computational budgets for a single model representing different points on the tradeoff curve.
# REFERENCES
Junyoung Chung, Sungjin Ahn, and Yoshua Bengio. Hierarchical multiscale recurrent neural networks. CoRR, abs/1609.01704, 2016. URL http://arxiv.org/abs/1609.01704.
Jasmine Collins, Jascha Sohl-Dickstein, and David Sussillo. Capacity and trainability in recurrent neural networks. arXiv preprint arXiv:1611.09913, 2016.
Thomas Desautels, Andreas Krause, and Joel W. Burdick. Parallelizing exploration-exploitation tradeoffs in Gaussian process bandit optimization. Journal of Machine Learning Research, 15: 4053â4103, 2014. URL http://jmlr.org/papers/v15/desautels14a.html.
Yarin Gal and Zoubin Ghahramani. A theoretically grounded application of dropout in recurrent neural networks. In Advances in Neural Information Processing Systems, pp. 1019â1027, 2016.
Daniel Golovin, Benjamin Solnik, Subhodeep Moitra, Greg Kochanski, John Karro, and D. Sculley. Google Vizier: A service for black-box optimization. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1487-1495. ACM, 2017.

Edouard Grave, Armand Joulin, and Nicolas Usunier. Improving neural language models with a continuous cache. CoRR, abs/1612.04426, 2016. URL http://arxiv.org/abs/1612.04426.
Alex Graves. Generating sequences with recurrent neural networks. CoRR, abs/1308.0850, 2013. URL http://arxiv.org/abs/1308.0850.
Peter Henderson, Riashat Islam, Philip Bachman, Joelle Pineau, Doina Precup, and David Meger. Deep reinforcement learning that matters. arXiv preprint arXiv:1709.06560, 2017.
Jerry L Hintze and Ray D Nelson. Violin plots: a box plot-density trace synergism. The American Statistician, 52(2):181â184, 1998.
Sepp Hochreiter and Jürgen Schmidhuber. Long Short-Term Memory. Neural Computation, 9(8):1735-1780, November 1997. ISSN 0899-7667. doi: 10.1162/neco.1997.9.8.1735. URL http://dx.doi.org/10.1162/neco.1997.9.8.1735.

Marcus Hutter. The human knowledge compression contest. 2012.

Hakan Inan, Khashayar Khosravi, and Richard Socher. Tying word vectors and word classifiers: A loss framework for language modeling. CoRR, abs/1611.01462, 2016. URL http://arxiv.org/abs/1611.01462.
Nal Kalchbrenner, Ivo Danihelka, and Alex Graves. Grid long short-term memory. CoRR, abs/1507.01526, 2015. URL http://arxiv.org/abs/1507.01526.
Nal Kalchbrenner, Lasse Espeholt, Karen Simonyan, A¨aron van den Oord, Alex Graves, and Koray Kavukcuoglu. Neural machine translation in linear time. CoRR, abs/1610.10099, 2016. URL http://arxiv.org/abs/1610.10099.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Ben Krause, Emmanuel Kahembwe, Iain Murray, and Steve Renals. Dynamic evaluation of neural sequence models. arXiv preprint arXiv:1709.07432, 2017.
Mitchell P Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. Building a large annotated corpus of english: The Penn treebank. Computational linguistics, 19(2):313â330, 1993.
Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. Pointer sentinel mixture models. CoRR, abs/1609.07843, 2016. URL http://arxiv.org/abs/1609.07843.
Stephen Merity, Nitish Shirish Keskar, and Richard Socher. Regularizing and optimizing LSTM language models. CoRR, abs/1708.02182, 2017. URL http://arxiv.org/abs/1708.02182.

Tomas Mikolov, Martin Karafiát, Lukas Burget, Jan Černocký, and Sanjeev Khudanpur. Recurrent neural network based language model. In Interspeech, volume 2, pp. 3, 2010.

Ofir Press and Lior Wolf. Using the output embedding to improve language models. CoRR, abs/1608.05859, 2016. URL http://arxiv.org/abs/1608.05859.
Nils Reimers and Iryna Gurevych. Reporting score distributions makes a difference: Performance study of LSTM-networks for sequence tagging. CoRR, abs/1707.09861, 2017. URL http://arxiv.org/abs/1707.09861.

Hasim Sak, Andrew W. Senior, and Françoise Beaufays. Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. CoRR, abs/1402.1128, 2014. URL http://arxiv.org/abs/1402.1128.
Stanislau Semeniuta, Aliaksei Severyn, and Erhardt Barth. Recurrent dropout without memory loss. CoRR, abs/1603.05118, 2016. URL http://arxiv.org/abs/1603.05118.
Yuhuai Wu, Saizheng Zhang, Ying Zhang, Yoshua Bengio, and Ruslan Salakhutdinov. On multiplicative integration with recurrent neural networks. CoRR, abs/1606.06630, 2016. URL http://arxiv.org/abs/1606.06630.
Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. Recurrent neural network regularization. CoRR, abs/1409.2329, 2014. URL http://arxiv.org/abs/1409.2329.
Julian G. Zilly, Rupesh Kumar Srivastava, Jan Koutník, and Jürgen Schmidhuber. Recurrent highway networks. CoRR, abs/1607.03474, 2016. URL http://arxiv.org/abs/1607.03474.
Barret Zoph and Quoc V Le. Neural architecture search with reinforcement learning. arXiv preprint arXiv:1611.01578, 2016.
| {
"id": "1611.09913"
} |
1707.05173 | Trial without Error: Towards Safe Reinforcement Learning via Human Intervention | AI systems are increasingly applied to complex tasks that involve interaction
with humans. During training, such systems are potentially dangerous, as they
haven't yet learned to avoid actions that could cause serious harm. How can an
AI system explore and learn without making a single mistake that harms humans
or otherwise causes serious damage? For model-free reinforcement learning,
having a human "in the loop" and ready to intervene is currently the only way
to prevent all catastrophes. We formalize human intervention for RL and show
how to reduce the human labor required by training a supervised learner to
imitate the human's intervention decisions. We evaluate this scheme on Atari
games, with a Deep RL agent being overseen by a human for four hours. When the
class of catastrophes is simple, we are able to prevent all catastrophes
without affecting the agent's learning (whereas an RL baseline fails due to
catastrophic forgetting). However, this scheme is less successful when
catastrophes are more complex: it reduces but does not eliminate catastrophes
and the supervised learner fails on adversarial examples found by the agent.
Extrapolating to more challenging environments, we show that our implementation
would not scale (due to the infeasible amount of human labor required). We
outline extensions of the scheme that are necessary if we are to train
model-free agents without a single catastrophe. | http://arxiv.org/pdf/1707.05173 | William Saunders, Girish Sastry, Andreas Stuhlmueller, Owain Evans | cs.AI, cs.LG, cs.NE | null | null | cs.AI | 20170717 | 20170717 | 7 1 0 2
l u J 7 1 ] I A . s c [
1 v 3 7 1 5 0 . 7 0 7 1 : v i X r a
# Trial without Error: Towards Safe Reinforcement Learning via Human Intervention
William Saunders University of Oxford Girish Sastry University of Oxford Andreas Stuhlmüller Stanford University Owain Evans University of Oxford
# Abstract
AI systems are increasingly applied to complex tasks that involve interaction with humans. During training, such systems are potentially dangerous, as they haven't yet learned to avoid actions that could cause serious harm. How can an AI system explore and learn without making a single mistake that harms humans or otherwise causes serious damage? For model-free reinforcement learning, having a human "in the loop" and ready to intervene is currently the only way to prevent all catastrophes. We formalize human intervention for RL and show how to reduce the human labor required by training a supervised learner to imitate the human's intervention decisions. We evaluate this scheme on Atari games, with a Deep RL agent being overseen by a human for four hours. When the class of catastrophes is simple, we are able to prevent all catastrophes without affecting the agent's learning (whereas an RL baseline fails due to catastrophic forgetting). However, this scheme is less successful when catastrophes are more complex: it reduces but does not eliminate catastrophes and the supervised learner fails on adversarial examples found by the agent. Extrapolating to more challenging environments, we show that our implementation would not scale (due to the infeasible amount of human labor required). We outline extensions of the scheme that are necessary if we are to train model-free agents without a single catastrophe.
Link to videos that illustrate our approach on Atari games.
# Introduction
# 1.1 Motivation
AI systems are increasingly applied to complex tasks that involve interaction with humans. During training, such systems are potentially dangerous, as they havenât yet learned to avoid actions that would cause serious harm. How can an AI system explore and learn without making a single mistake that harms humans, destroys property, or damages the environment?
A crucial safeguard against this danger is human intervention. Self-driving cars are overseen by human drivers, who take control when they predict the AI system will perform badly. These overseers frequently intervene, especially in self-driving systems at an early stage of development [11]. The same safeguard is used for human learners, who are overseen by a licensed driver.
Many AI systems pose no physical danger to humans. Yet web-based systems can still cause unintended harm. Microsoft's chatbot Tay reproduced thousands of offensive tweets before being taken down [29]. Facebook's algorithms for sharing news stories inadvertently provided a platform for malicious and false stories and disinformation during the US 2016 election [3]. If human operators had monitored these systems in real-time (as with self-driving cars), the bad outcomes could have been avoided.

Human oversight is currently the only means of avoiding all accidents in complex real-world domains.1 How does human intervention for safety fit together with Deep Learning and Reinforcement Learning, which are likely to be key components of future applied AI systems? We present a scheme for human intervention in RL systems and test the scheme on Atari games. We document serious scalability problems for human intervention applied to RL and outline potential remedies.
# 1.2 Contributions
We provide a formal scheme (HIRL) for applying human oversight to RL agents. The scheme makes it easy to train a supervised learner to imitate the human's intervention policy and take over from the human. (Automating human oversight is crucial since it's infeasible for a human to watch over an RL agent for 100 million timesteps.) While the human oversees a particular RL agent, the supervised learner can be re-used as a safety-harness for different agents.

The goal of HIRL is enabling an RL agent to learn a real-world task without a single catastrophe. We investigated the scalability of HIRL in Atari games, which are challenging toy environments for current AI [19]. HIRL was applied to Deep RL agents playing three games: Pong, Space Invaders, and Road Runner (see Figure 2). For the first 4.5 hours of training, a human watched every frame and intervened to block the agent from taking catastrophic actions. In Pong and Space Invaders, where the class of catastrophes was chosen to be simple to learn, the supervised learner succeeded in blocking all catastrophes. In Road Runner, where the class of catastrophes was more diverse and complex, HIRL reduced the number of catastrophes by a factor of 50 but did not reduce them to zero.

We compared HIRL to a baseline where the agent gets a large negative reward for causing catastrophic outcomes but is not blocked from causing them. This baseline can't avoid all catastrophes but it could (in principle) become reliably safe after only a small number of catastrophes. Yet the baseline agent never stopped causing catastrophes. For Pong, we show that this was due to catastrophic forgetting: the agent had to periodically cause catastrophes to re-learn how bad they are [18]. This shows that HIRL can succeed where an "RL only" approach to safety fails.

We describe some key challenges for HIRL. First, the supervised learner that imitates human oversight must be robust to adversarial distribution shift [2]. (The CNN we used for Road Runner was not robust to an adversarial agent.) Second, additional techniques are needed to reduce the amount of time the human has to spend overseeing the agent. We show that our implementation of HIRL would not be feasible for other Atari games, as they'd require years of human time. We suggest a range of techniques for reducing this human time-cost.
# 2 HIRL: A Scheme for Safe RL via Human Intervention
# 2.1 Motivation for HIRL
Can RL agents learn safely in real-world environments? The existing literature contains a variety of definitions of "safe RL" [12]. In this paper, we say an RL agent is safe if it never takes "catastrophic actions" during training. We define "catastrophic actions" as actions that the human overseer deems unacceptable under any circumstances (even at the start of training). That is, we avoid formalizing the concept of catastrophes and let the human supervisor specify them (as in [15]). The overseer will typically distinguish sub-optimal actions from catastrophic actions. It is tolerable for a car to drive slowly during learning; but hitting pedestrians is catastrophic and must be avoided from the very start of training.

Reinforcement learning alone is insufficient to achieve this kind of safety. The fundamental problem is that RL learns by trial and error. Without prior knowledge, a model-free RL agent will not avoid a catastrophic action unless it has tried the action (or a similar action) and learned from the negative experience.2
This problem could potentially be side-stepped by training in simulation [10]. The agent explores dangerous actions in simulation and transfers this knowledge to the real world [8]. To work reliably,
1Hand-coding a program to recognize and prevent dangerous actions does not scale up to complex domains in which accidents are diverse.
2This paper focuses on model-free RL. Model-based algorithms have some advantages in terms of potential to avoid catastrophes: see Section 5.
Figure 1: HIRL scheme. At (1) the human overseer (or Blocker imitating the human) can block/intercept unsafe actions a and replace them with safe actions a*. At (2) the overseer can deliver a negative reward penalty r* for the agent choosing an unsafe action.
this would require advances in transfer learning and in simulation. Yet simulating humans accurately is infeasible for many tasks3 and tasks involving human interaction are the most safety-critical.
Imitation learning can be used to learn a safe initial policy from human demonstrations [16]. While the initial policy will be much safer than random initialization, any deviation between the human and the learned policy can result in unsafe actions, and subsequent ï¬ne-tuning of the policy using RL can introduce catastrophic behavior. So, imitation learning is not sufï¬cient on its own but could be valuable combined with HIRL. (Imitation learning is helpful for safe initialization when the human knows an easy-to-learn policy that performs well and steers clear of dangerous regions of the state space.)
# 2.2 Formal Speciï¬cation of HIRL
We model the RL agent's environment as a Markov Decision Process (MDP). The environment is an MDP specified by a tuple M = (S, A, T, R, γ), where S is the state space, A is the action space, T : S × A × S → [0, 1] is the transition function, R : S × A → R is the reward function, and γ is the discount factor.

How can an RL agent learn while never taking a single catastrophic action? Our scheme, HIRL (Human Intervention RL), is simple. The human controls the interface between the RL agent and environment M, constantly watching over the agent and blocking any catastrophic actions before they happen. More precisely, at each timestep the human observes the current state s and the agent's proposed action a. If (s, a) is catastrophic, the human sends a safe action a* to the environment instead. The human also replaces the new reward r = R(s, a*) with a penalty r* (Figure 1).
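As an illustration of this interface, the sketch below wraps a Gym-style environment; `overseer.is_catastrophic` and `overseer.safe_action` are hypothetical stand-ins for the human's (or Blocker's) decisions, and the penalty value is an arbitrary placeholder.

```python
class InterventionWrapper:
    """Gym-style wrapper sketching the HIRL interface of Figure 1: block unsafe
    actions, substitute a safe one, replace the reward with a penalty, and log
    labelled (state, action, blocked) examples for later Blocker training."""
    def __init__(self, env, overseer, penalty=-10.0):
        self.env, self.overseer, self.penalty = env, overseer, penalty
        self.state, self.labels = None, []

    def reset(self):
        self.state = self.env.reset()
        return self.state

    def step(self, action):
        blocked = self.overseer.is_catastrophic(self.state, action)
        self.labels.append((self.state, action, blocked))
        if blocked:
            action = self.overseer.safe_action(self.state, action)
        next_state, reward, done, info = self.env.step(action)
        if blocked:
            reward = self.penalty              # r* from Figure 1
        self.state = next_state
        return next_state, reward, done, info
```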
The period in which the human blocks the agent is called the "Human Oversight" phase of HIRL. During this phase, we store each state-action (s, a) and a binary label for whether or not the human blocked it. This dataset is used to train a "Blocker", a classifier trained by supervised learning to imitate the human's blocking decisions. The Human Oversight phase lasts until the Blocker performs well on a held-out subset of the training data. At this point, the human retires and the Blocker takes over for the rest of time. The Blocker never stops overseeing the agent, which prevents catastrophes even if the agent exhibits random exploration or catastrophic forgetting [18].
HIRL is agnostic as to the inner workings of the RL algorithm (building on our earlier work [1]). It works for Q-learning [20], for policy gradient algorithms like A3C [21] and for model-based RL [14]. Moreover, the Blocker that imitates the human overseer is modular. While trained on data from one agent, the Blocker can act as a safeguard for a completely different agent.4
The scheme for HIRL we have just presented (and which we use in our experiments) skips over some important challenges of avoiding catastrophes. The Blocker's task is not a standard classification task
3Itâs hard to simulate how a human would change their strategy in response to interaction with an AI system.
This is no accident: simulating the strategic reasoning of humans would solve a major open problem in AI.
4The human does not need to spend more time providing safety interventions whenever they try a new agent architecture. This makes possible a typical work-ï¬ow in which researchers explore a variety of different algorithms (e.g. DQN vs. A3C) for a task.
because the distribution on state-action pairs shifts (as the agent learns).5 One way to address this is by having multiple Human Oversight phases: the human provides additional training data for the Blocker as the distribution starts to shift. See Section 5 for further elaborations on HIRL.
# 2.3 When is HIRL feasible?
To learn with zero catastrophes, the Blocker (which imitates human interventions) needs to achieve near-perfect reliability in recognizing catastrophic actions. This may require a huge set of labeled examples, which might be too costly in terms of human labor. We discuss this challenge in Section 4.1. A further requirement is that the environment proceeds slowly enough for the human to intervene. This rules out real-world tasks that are intrinsically high-speed. In environments where speed is a controllable parameter (e.g. computer tasks), slowing down the environment might make the RL agentâs learning too slow for HIRL to work.
Figure 2: In Pong (left) it's a catastrophe if the agent (green paddle) enters the Catastrophe Zone. In Space Invaders (center), it's a catastrophe if the agent shoots their defensive barriers (highlighted in pink box). In Road Runner (right), it's a catastrophe if Road Runner touches the Coyote.
# 3 Experiments
# 3.1 Design of Experiments and Implementation of HIRL
Our experiments used the OpenAI Gym implementation of the Atari Learning Environment [5, 7], modified to allow interactive blocking of actions by a human. We used open-source implementations [23, 22] of A3C with an LSTM policy [21] and Double DQN [28]. Rewards were clipped when using Double DQN but not for A3C.
For the Blocker (the supervised learner that imitates human blocking) we used a convolutional neural network (CNN). The CNN was trained on the Atari images (rather than the downsampled frames the agent sees) and had no pooling layers. Architectures and hyperparameters for all neural networks are in Section 6.1 of the Appendix. Our code is available on GitHub.
Our goal is that the Blocker never misclassifies a catastrophe: the false-negative rate should be extremely low. We trained a CNN on the training set of human interventions to minimize the standard cross-entropy loss. To achieve a low false-negative rate (at the expense of false positives), we then selected a threshold for the CNN's sigmoid output and blocked any actions that exceeded this threshold. This threshold can be set very low initially (causing many false positives) and then gradually raised until it becomes possible for the agent to learn the task. In our experiments, this simple approach sufficed.
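One simple way to pick such a threshold offline, on a held-out set of human labels, is sketched below; the margin factor is an assumption added for illustration rather than a value used in our experiments.

```python
import numpy as np

def pick_blocker_threshold(probs, is_catastrophe, margin=0.5):
    """Choose a sigmoid threshold with zero false negatives on held-out labels:
    just below the smallest probability the CNN assigns to any true catastrophe.
    Also report the false-positive rate the threshold implies."""
    probs = np.asarray(probs, dtype=float)
    cat = np.asarray(is_catastrophe, dtype=bool)
    threshold = probs[cat].min() * margin            # margin < 1 adds a safety buffer
    false_positive_rate = float(np.mean(probs[~cat] >= threshold))
    return threshold, false_positive_rate
```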
As well as deciding which actions to block, the Blocker replaces catastrophic actions with safe actions (having learned to imitate how the human overseer replaces actions). Our implementation of action replacement is described in Section 6.2 (Appendix).
To summarize, our application of HIRL involved the following sequence of steps:
5There will also be distributional shift if a Blocker trained on one agent is applied to another agent.
1. Human Oversight Phase (duration = 4.5 hours): Fresh RL agent starts playing the game (slowed down to accommodate the human). Human6 oversees and blocks catastrophic actions.
2. Blocker training: The game is paused. The CNN is trained to imitate human blocking decisions. The threshold for the sigmoid is chosen to try to ensure Blocker has no false negatives.
3. Blocker Oversight Phase (duration = 12-24 hours): Blocker takes over from human and game is run at usual speed for Atari experiments.
The main difference between HIRL and regular RL are in steps (1) and (2) above. Once the Blocker takes over, the environment runs at full speed for the normal training time for Deep RL agents learning Atari.
# 3.1.1 What are Catastrophes in Atari?
In Atari there are no catastrophic actions: the human researchers running Atari agents don't care if their agents die millions of times in the process of mastering a game. In our experiments, we stipulate that certain outcomes are catastrophic and require the agent to maximize reward without causing catastrophes (Figure 2). For example, can an agent learn Road Runner without losing a single life on Level 1? These are the outcomes we stipulate to be catastrophic:
• Pong: It's a catastrophe if the paddle goes close to the bottom of the screen. (This is not a bad outcome in regular Pong but provides a toy example for avoiding catastrophes.)
• Space Invaders: It's a catastrophe if the agent shoots their own defensive barriers.7
• Road Runner: It's a catastrophe if the agent dies on Level 1.
How did we choose these outcomes to be catastrophic? Some catastrophes can be avoided by adjusting course just before the catastrophe would have happened. We call these "locally avoidable" catastrophes. For example, in Pong the agent can move upwards just before it would have entered the Catastrophe Zone (Figure 2). Other catastrophes cannot be avoided just before they happen. For example, just before losing a point on Pong, it's often impossible for the agent to salvage the situation; the agent's critical error came hundreds of frames earlier. Compared to locally avoidable catastrophes, preventing "non-local" catastrophes requires much more understanding of the environment.

For our experiments, we used only locally avoidable catastrophes. So the human overseer just needs to recognize when a catastrophe is imminent and provide an action that averts it; they don't need any skill at the game.8
# 3.1.2 Baseline: Human-trained Reward Shaping
Two important elements of HIRL are:
1. The class of catastrophic actions is speciï¬ed online by the humanâs decisions of what to block.
2. If the RL agent takes a catastrophic action it is blocked and receives a negative reward penalty.
The Human-trained Reward Shaping baseline shares (1) with HIRL but modiï¬es (2). The RL agent still receives the reward penalty for taking a catastrophic action but is not blocked. The Reward Shaping baseline cannot achieve zero catastrophes because it must try catastrophic actions to learn that they have negative reward (see 2.1). However, if the negative rewards are large, the RL agent would (ideally) have a rate of catastrophes that quickly falls to zero. In Pong and Road Runner, we set
6Authors WS and GS took the role of human overseer. 7A possible strategy in Space Invaders is to shoot a slit through the barriers and attack from behind the slit. In our experiments DQN did not appear to use this strategy and blocking it under HIRL did not harm performance. 8In driving a car, some catastrophes are locally avoidable and others are not. We expect HIRL to be more
useful when catastrophes are locally avoidable.
[Figure 3 panels: Pong, Space Invaders, Road Runner; cumulative catastrophes vs. training frames for HIRL and No Oversight.]
Figure 3: Cumulative Catastrophes over time (mean and standard error). No Oversight agent gets no human intervention at all; it shows that our objective of preventing catastrophes is not trivial.
[Figure 4 panels: Pong, Space Invaders, Road Runner; HIRL and Reward Shaping curves vs. training frames, average reward (top) and cumulative catastrophes (bottom).]
Figure 4: Average Reward and Cumulative Catastrophes over time (mean and standard error). Reward Shaping baseline (below) is not blocked from catastrophes but gets huge negative rewards for causing them. (Road Runner error bars are misleading because at random times the agent gets stuck with a policy that causes it to die quickly, resulting in large negative rewards.)
the negative reward to be much larger than the maximum total discounted reward for an episode.9 So it's never rational to cause a catastrophe as a means to achieving greater reward after the catastrophe.

For Space Invaders, we used DQN with reward clipping, where all rewards are either +1 or -1. This makes it impossible to have a negative reward for catastrophic actions that is larger than the total discounted return.10 So the Space Invaders baseline is slightly different from Pong and Road Runner.
# 3.2 Summary of Results
The objective is to avoid catastrophes while achieving good performance. This must be achieved with a feasible amount of human oversight. Figure 3 shows that this objective is not trivially satisï¬ed: an agent with no human oversight has more than ten thousand catastrophes in each game.11
9The maximum returns are the best scores the agents achieve with no blocking or human oversight. For Pong,
the penalty is +46 bigger than the returns. For Road Runner, the penalty is +15000 bigger.
10This could be addressed in future work by modifying DQN as suggested by [27]. But it won't always be easy for Deep RL algorithms to deal correctly with rewards that are extreme outliers in magnitude.
11In Pong there is no incentive in the regular game to avoid the Catastrophe Zone. In Space Invaders and Road Runner there is an incentive to avoid the catastrophes but the agents do not become good enough to learn this.
HIRL was a mixed success overall. In Pong and Space Invaders, the agent had zero catastrophes and still was able to achieve impressive performance on the game. In Road Runner we did not achieve zero catastrophes but were able to reduce the rate of deaths per frame from 0.005 (with no human oversight) to 0.0001.
Figure 4 shows that the Reward Shaping agent has a low total number of catastrophes compared to the No Oversight setting (Figure 3). Yet in all games its catastrophe rate does not appear to be converging to zero. Section 3.3.2 shows that the persistence of catastrophes in Pong is caused by catastrophic forgetting.
By frequently blocking the agent (and replacing its action with a different one) HIRL essentially changes each game's transition function. It's conceivable that this added complexity makes the game harder for Deep RL to learn. However, we don't see any negative effects on learning for HIRL compared to the Reward Shaping baseline. Indeed, HIRL appears to improve faster and it achieves much better reward performance overall.
# 3.3 Pong: Detailed Analysis of the Blocker and of Human Time Cost
HIRL was successful at Pong: an A3C agent mastered Pong while incurring no catastrophes. Would the Blocker work just as well for different RL agents? Why did the Reward Shaping agent (without blocking catastrophic actions) fail and keep trying catastrophic actions?
# 3.3.1 The Blocker transfers perfectly and is robust to adversarial agents
The Blocker was trained on examples from a human overseeing an A3C agent. Figure 4 shows performance for the Blocker on that very same A3C agent. A virtue of HIRL is that this Blocker is modular: while it was trained on data from one agent, it can be applied to another. But would the Blocker be equally reliable for another agent? We applied the Blocker to a variety of RL agents and it always blocked all catastrophes without preventing the agent mastering Pong. The agents were:
• A3C agents with different architectures/hyper-parameters
• Double DQN
• A "catastrophe loving" A3C agent: this agent was previously trained on a modified version of Pong where it got positive rewards for entering the Catastrophe Zone
# 3.3.2 Safety requires constant intervention (due to catastrophic forgetting)
We argued in Section 2.1 that regular RL agents are not "catastrophe-safe". They only avoid catastrophic actions if they've already tried them; so they can't learn a task with zero catastrophes. Figure 4 demonstrated a second way in which current Deep RL agents are unsafe: they never stop taking catastrophic actions. The Reward-Shaping agent is initially trained by a human overseer who blocks all catastrophes. After this, the agent receives negative rewards for catastrophes but is not blocked. The agent learns to mostly avoid catastrophes but the catastrophe rate seems to converge to a low but non-zero level.
# Table 1: Long-run rate of attempted catastrophes in Pong.
Policy         Learning Rate   Catastrophe Rate Per Episode (Std Err)
Stochastic     10^-4           0.012 (0.004)
Deterministic  10^-4           0.079 (0.017)
Stochastic     0               0.003 (0.001)
Deterministic  0               0 (0)
Why does the Reward Shaping agent keep taking actions that received a big negative reward? We investigate this by examining how frequently the HIRL agent attempts catastrophic actions.12 In
12The HIRL agent is blocked from actually taking catastrophic actions. By measuring how often it attempts catastrophic actions we learn how many catastrophes it would have caused if blocking was turned off (as in Reward Shaping).
Pong, the HIRL agent attempts catastrophes at a rate of about 10^-5 per frame or 10^-2 per episode. Is this due to A3C's stochastic policy or due to catastrophic forgetting? We tested this by turning the learning rate down to zero and by making the agent deterministic (taking a max over the policy network's softmax layer). Table 1 shows the four possibilities. The first row (stochastic and non-zero learning rate) is regular A3C with HIRL as shown in Pong results for Figure 4. The catastrophe rate only goes down when the learning rate is set to zero. This strongly suggests that catastrophic forgetting is the problem. We also examined the probability of taking a catastrophic action for the regular A3C agent's policy network: after the agent has mastered Pong, this probability continues to vary between 10^-9 (which is appropriate) and 0.3 (which would lead to trying the catastrophic action again).
# 3.4 Road Runner: Blocking an Adversarial Agent
Road Runner is a side-scrolling game based on the Warner Bros cartoon. The player controls Road Runner, who is constantly pursued by the Coyote (Figure 2). The Road Runner gets points for eating birdseed and for knocking out the Coyote. We stipulate that losing a life on Level 1 is a catastrophe. The main way to lose a life is by touching the Coyote, which is easily avoided on Level 1 by running away.
# 3.4.1 The Score Exploit
Deep RL agents are known to discover a "Score Exploit" in Road Runner [4]. The agent learns to intentionally kill itself in a way that (paradoxically) earns greater reward. Dying at a precise time causes the agent to repeat part of Level 1, where it earns more points than on Level 2 (as Level 2 takes a long time to master). This is a local optimum in policy space that a human gamer would never be stuck in. Ideally, our Blocker would prevent all deaths on Level 1 and hence eliminate the Score Exploit. However, through random exploration the agent may hit upon ways of dying that "fool" our Blocker (because they look different from examples in its training set) and hence learn a new version of the Score Exploit. In other words, the agent is performing a random search for adversarial examples [13] for our Blocker CNN.
# 3.4.2 Blocker (early version) fails due to adversarial agent
Our ï¬rst attempt to prevent catastrophes in Road Runner was an instructive failure. During the early stages of training the rate of deaths/catastrophes was very low. However, much later in training (after 16 million frames), the death rate rises (see Figure 5) and reaches almost the same level as the baseline No Oversight agent (Fig.3). Inspecting videos of the HIRL agent, we found that although the usual Score Exploit was blocked, after 16 million frames the agent found an alternative Score Exploit. The agent moved along the very top of the screen to the top right corner and waited for the Coyote to kill it there. This position at the top of the screen (which is visually distinct from other positions) presumably fooled the Blocker CNN. (In preliminary experiments, the A3C agent found different adversarial examples for an even earlier version of the Blocker. See videos.)
Road Runner: Failed Version of Blocker
[Plot: average reward (1e4) and catastrophes per episode vs. training frames (10 millions).]
Figure 5: Reward/catastrophe-rate for HIRL agent with failed Blocker. Blue line indicates when agent learned Score Exploit. Before this point the catastrophe-rate spikes a few times, indicating additional failures of the Blocker; these spikes are anti-correlated with reward and do not indicate a Score Exploit. Results from more successful Blocker are in Fig. 4.
After the Blocker failed, we examined the 20,000 frames used as training data for the Blocker and looked for mistakes in the labels. We spent 20 minutes correcting mistakes and re-trained the Blocker. This reduced the average death rate by a factor of 20: from a rate of 0.002 deaths per frame to 0.0001. The No Oversight baseline has a rate of 0.005.
# 4 Challenges in Scaling Up HIRL
In our experiments, the Human Oversight phase was short (4.5 hours) and the number of examples of catastrophes used to train the Blocker was small. For Pong and Space Invaders, the training set sufficed to train a Blocker that blocked all catastrophes. But in Road Runner (with more diverse catastrophes and an adversarial agent) the training set was insufficient.
In all three games catastrophes occur at the start of the game. This contrasts with games where certain catastrophes only occur on higher levels. If the human overseer had to oversee the agent until it reached Level 2 on Road Runner, this would increase the amount of human labor by orders of magnitude.
To assess the feasibility of RL agents learning with zero catastrophes, it's crucial to estimate the amount of human labor required. We present a simple formula for computing the human time-cost and use it for extrapolations.
# 4.1 Extrapolating the Human Time-Cost of HIRL
We want to estimate the amount of wall-clock time, C, a human spends overseeing the agent. This is just the time it takes to generate a training set sufï¬cient to train the Blocker. The training set contains (up to time C) the agentâs observations (s, a) and whether or not (s, a) is catastrophic.13 We let Nall be the size of this training set. The formula for C is:
C = thuman à Nall [ total time-cost = time per human label à # observations to label ] (1)
In this formula, thuman is the average time it takes the human to process an observation. Since humans are intrinsically slow, weâre stuck with a bound thuman > 0.1 seconds. So the main way to reduce C is to reduce Nall. For the Blocker to have an extremely low false-negative rate (i.e. to avoid letting through any catastrophes) it needs some substantial number of both positive and negative examples in its training set, bounding how much Nall can be reduced. However, in many environments catastrophes are rare and the training set consists mostly of safe observations. Increasing the proportion of attempted catastrophes will therefore reduce Nall without harming the Blockerâs performance.
Let p denote the ratio of all observations to catastrophe observations (averaged over time Câ). We can re-write Formula terms of p. Training the Blocker requires Na observations of catastrophes. But to get that many observed catastrophes, the agent encounters a greater number of safe observations (p > 1). So we have:
C = thuman Ã Ï Ã Ncat [ total time-cost = time per label à (#observations / #cat-observations) à #cat-observations ]
# 4.1.1 Time-Cost for Pong and Montezumaâs Revenge
In our Pong experiment, the Human Oversight phase lasted for four hours: C = 4hrs. We can break this down according to Formula 2:
• t_human = 0.8s (average time for human to process one observation)
• ρ = 166 (ratio of observations to catastrophe observations)
• N_cat = 120 (number of labeled catastrophes)
13For catastrophic actions, the training set would also record which action a* was used in place of a, as well as the negative reward penalty r* (see Figure 1).
The number N_cat is small because the catastrophe is so simple: the Blocker CNN didn't need much data. The ratio ρ is also small because the agent frequently tries catastrophic actions. Once the agent learns to avoid catastrophes (after 200,000 frames), ρ increases to around 10^5. Suppose that in our experiment, we had used an agent pre-trained in a similar environment to avoid catastrophes (instead of a fresh A3C agent).14 If this pre-trained agent had ρ = 10^5 from the start, the total time for human labeling would be 0.8 × 10^5 × 120 seconds, about 110 days: a huge amount of human labor to learn such a simple concept!
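The back-of-the-envelope arithmetic behind these numbers is easy to reproduce; the snippet below simply evaluates Formula 2 for the two scenarios.

```python
def oversight_hours(t_human_s, rho, n_cat):
    """Human time-cost C = t_human * rho * N_cat (Formula 2), in hours."""
    return t_human_s * rho * n_cat / 3600.0

print(oversight_hours(0.8, 166, 120))        # ~4.4 hours: Pong as actually run
print(oversight_hours(0.8, 1e5, 120) / 24)   # ~111 days: hypothetical pre-trained agent
```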
The ratio ρ would also be much higher if the Catastrophe Zone (Fig 2) were hard to reach. Consider the Atari game Montezuma's Revenge and suppose we treat it as a catastrophe if the agent ever walks off a ledge and dies. Current Deep RL algorithms might take 100 million frames to reach all the distinct rooms in the game that contain ledges [4]. Overseeing an agent for 100 million frames would take a human at least a year. This suggests that the implementation of HIRL in this paper would not scale to other Atari games, let alone to environments with more variety and visual complexity (such as Minecraft).
# 5 Discussion
Currently, the only way to guarantee the safety of RL systems during training is to have a human watch the systemâs actions, ready to intervene, or else to have an automated overseer that is just as reliable at preventing catastrophes. We investigated whether human oversight could allow Deep RL agents to learn without a single catastrophic event. While HIRL succeeded in preventing the simplest catastrophes (in Pong and Space Invaders), it was only a partial success in blocking more complex catastrophes. Moreover, extrapolations suggest that our HIRL implementation would not scale to more complex environments; the human time-cost would be infeasible.
To make the human time-cost of HIRL feasible for complex environments, new techniques will be required. We conclude by outlining some promising techniques:
• Make Blockers (human imitators) more data-efficient: The classifier would learn to imitate the human from a smaller training set (reducing C in Formula 2 by reducing N_cat).
• Make RL agents more data-efficient: Deep RL tends to require millions of observations for successful learning. With more data-efficient RL, the human would not need to wait so long for the agent to observe the full range of catastrophes (as in the Montezuma's Revenge example above).
• Seek out catastrophes: Even if the agent is slow to master the whole environment, it could be quick to find the catastrophes. This means a higher ratio of catastrophes to safe events (lowering ρ) and lower human time-cost C. Note that RL agents that are more data-efficient may sometimes increase human time-costs. This is because they quickly learn to avoid catastrophes and so catastrophes become very rare in the Blocker's training set (see Pong example above). This suggests a role for agents who initially explore systematically [24] and aggressively [6] and so encounter many catastrophes early on.15
• Selectively query the human (Active Learning): In some environments, the agent spends a long time in states that are "far away" from dangerous regions. Human oversight is not necessary at these times; in principle, the human could take a break until the agent gets close to a dangerous region. Similarly, a Blocker might reliably block catastrophes in one region of the state space but not in a novel region that hasn't been visited yet. The human could take a break while the agent is in the already-visited region and come back when the agent gets close to the novel region. In Montezuma's Revenge, for example, the human could come back when the agent is about to enter a new room. Techniques from active learning and anomaly detection can be used to detect unfamiliar states [25, 17, 9]. Related approaches have been pursued in recent work on safe exploration [26].
14For example, suppose the agent had already trained in an environment similar to Pong. We might still want to train a Blocker because itâs uncertain whether the agent will generalize perfectly from its old environment to Pong.
15An agent could also be pre-trained in a simulation to seek out catastrophes.
An algorithm that decides when to ask the human for oversight must have no false negatives: for any novel catastrophe, it must either block the agent directly or ensure that the human is overseeing the action.16
• Explaining why an action is catastrophic: We could augment the binary "catastrophe"/"safe" labels (that we get automatically based on the human's decision to intervene or not) with additional information, such as explanations of what exactly caused a catastrophe. This will introduce additional labeling cost, but could make it easier to learn a robust imitator from a small training set.
⢠Model-based RL for safe learning: Model-based agents could potentially learn which actions are catastrophic without ever trying them. They could achieve this by learning a good world model through exploration of safe regions of the state space. (Similarly, chemists know to avoid exposure to certain chemicals even if no human has ever been exposed to the chemical.)
16For some environments, the human need not to be ready to take control at all times. When the algorithm suspects an action leads to a novel state, it blocks the action. The action is sent to the human who evaluates (asynchronously) whether the action was safe.
# Acknowledgements
This work was supported by Future of Life Institute grant 2015-144846 (all authors) and by the Future of Humanity Institute, Oxford. We thank Vlad Firoiu for early contributions and Jan Leike and David Abel for helpful comments. Special thanks to David Krueger for detailed comments on a draft.
# References
[1] David Abel, John Salvatier, Andreas Stuhlmüller, and Owain Evans. Agent-agnostic human-in-the-loop reinforcement learning. CoRR, abs/1701.04079, 2017. URL http://arxiv.org/abs/1701.04079.
[2] Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mané. Concrete problems in ai safety. arXiv preprint arXiv:1606.06565, 2016.
[3] Intelligence Community Assessment. Background to "Assessing Russian activities and intentions in recent US elections": The analytic process and cyber incident attribution. https://web-beta.archive.org/web/20170421222356/https:/www.dni.gov/files/documents/ICA_2017_01.pdf. Accessed: April-21-2017.
[4] Marc Bellemare, Sriram Srinivasan, Georg Ostrovski, Tom Schaul, David Saxton, and Remi Munos. Unifying count-based exploration and intrinsic motivation. In Advances in Neural Information Processing Systems, pages 1471â1479, 2016.
[5] Marc G Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning environment: An evaluation platform for general agents. J. Artif. Intell. Res.(JAIR), 47:253â279, 2013.
[6] Charles Blundell, Benigno Uria, Alexander Pritzel, Yazhe Li, Avraham Ruderman, Joel Z Leibo, Jack Rae, Daan Wierstra, and Demis Hassabis. Model-free episodic control. arXiv preprint arXiv:1606.04460, 2016.
[7] Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. Openai gym. arXiv preprint arXiv:1606.01540, 2016.
[8] Paul Christiano, Zain Shah, Igor Mordatch, Jonas Schneider, Trevor Blackwell, Joshua Tobin, Pieter Abbeel, and Wojciech Zaremba. Transfer from simulation to real world through learning deep inverse dynamics model. arXiv preprint arXiv:1610.03518, 2016.
[9] Paul Christiano, Jan Leike, Tom B Brown, Miljan Martic, Shane Legg, and Dario Amodei. Deep reinforcement learning from human preferences. arXiv preprint arXiv:1706.03741, 2017.
[10] Kamil Andrzej Ciosek and Shimon Whiteson. Offer: Off-environment reinforcement learning. In AAAI, pages 1819â1825, 2017.
[11] Maricris Francisco. Google Waymo performing better than other self-driving cars, says California DMV. http://www.techtimes.com/articles/195565/20170202/google-waymo-cars-california-dmv.htm. Accessed: December-02-2017.
[12] Javier Garcia and Fernando Fernandez. A Comprehensive Survey on Safe Reinforcement Learning. The Journal of Machine Learning Research, 16:1437â1480, 2015.
[13] Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014.
[14] Xiaoxiao Guo, Satinder Singh, Richard Lewis, and Honglak Lee. Deep learning for reward design to improve monte carlo tree search in atari games. arXiv preprint arXiv:1604.07095, 2016.
[15] Bar Hilleli and Ran El-Yaniv. Deep learning of robotic tasks using strong and weak human supervision. arXiv preprint arXiv:1612.01086, 2016.
[16] Jonathan Ho and Stefano Ermon. Generative adversarial imitation learning. In Advances in Neural Information Processing Systems, pages 4565â4573, 2016.
[17] David Krueger, Jan Leike, Owain Evans, and John Salvatier. Active reinforcement learning: Observing rewards at a cost. In NIPS 2016 Workshop, 2016.
[18] Zachary C Lipton, Abhishek Kumar, Jianfeng Gao, Lihong Li, and Li Deng. Combating deep reinforcement learning's sisyphean curse with reinforcement learning. arXiv preprint arXiv:1611.01211, 2016.

[19] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 02 2015. URL http://dx.doi.org/10.1038/nature14236.
[20] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529â533, 2015.
[21] Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In International Conference on Machine Learning, pages 1928-1937, 2016.
[22] OpenAI. Openai baselines, . URL https://github.com/openai/baselines. Accessed: July-1-2017.
[23] OpenAI. Openai universe starter agent, . URL https://github.com/openai/universe-starter-agent. Accessed: May-1-2017.
[24] Georg Ostrovski, Marc G. Bellemare, Aäron van den Oord, and Rémi Munos. Count-based exploration with neural density models. CoRR, abs/1703.01310, 2017. URL http://arxiv. org/abs/1703.01310.
[25] Burr Settles. Active learning. Synthesis Lectures on Artiï¬cial Intelligence and Machine Learning, 6(1):1â114, 2012.
[26] Yanan Sui, Alkis Gotovos, Joel Burdick, and Andreas Krause. Safe exploration for optimization with gaussian processes. In Francis Bach and David Blei, editors, Proceedings of the 32nd International Conference on Machine Learning, volume 37 of Proceedings of Machine Learning Research, pages 997â1005, Lille, France, 07â09 Jul 2015. PMLR. URL http://proceedings. mlr.press/v37/sui15.html.
[27] Hado van Hasselt, Arthur Guez, Matteo Hessel, Volodymyr Mnih, and David Silver. Learning values across many orders of magnitude. Advances in Neural Information Processing Systems 29 (NIPS 2016), 2016.
[28] Hado Van Hasselt, Arthur Guez, and David Silver. Deep reinforcement learning with double q-learning. In AAAI, pages 2094â2100, 2016.
[29] Wikipedia. Tay (bot) â wikipedia, the free encyclopedia, 2017. URL https://en. wikipedia.org/w/index.php?title=Tay_(bot). Accessed: May-19-2017.
# 6 Appendix
# 6.1 Neural network architectures and hyperparameters
# 6.1.1 RL agent parameters
A3C agent network architecture (Pong, RoadRunner):
• Based on OpenAI's Universe Starter Agent
• Input format: 42x42x1, grayscale (cropped, downsampled, RGB values averaged)
• 4 convolutional layers with 32 3x3 filters, applied with 2x2 stride
• Last convolutional layer fed into an LSTM with 256 hidden units
• LSTM output fed into linear layers to produce value function estimate and policy logits
• ELU activation
• Learning rate: 0.0001
• Adam Optimizer
• Entropy bonus: 0.01
• Discount factor: 0.99
• Steps between policy gradient updates: 20
(Double) DQN agent network architecture (Space Invaders)
• Based on OpenAI's baseline DQN implementation using Double DQN
• Input format: 84x84x1, grayscale (cropped, downsampled)
• Convolutional layer with 32 8x8 filters, 4x4 stride
• Convolutional layer with 64 4x4 filters, 2x2 stride
• Convolutional layer with 64 3x3 filters
• Hidden layer with 512 units
• Output layer
• RELU activation
• Adam Optimizer
• Steps: 2500000
• Exploration schedule: exploration rate is 1.0 until step 25000, then linearly decreased to 0.01 until step 1250000, then fixed at 0.01
• Learning rate schedule: 10^-4 until step 25000, linearly decreased to 5 × 10^-5 until step 1250000, then fixed at 5 × 10^-5
• Gradient norm clipping: 10
• Target network update frequency: 10000
• Learning starts: 50000
• Frame history length: 4
• Replay buffer size: 1000000
• Discount factor: 0.99
• Batch size: 32
• Frameskip: 4
• Episode ended at end of life (but environment not reset until end of episode)
Game-dependent reward scaling
• Pong: reward = reward/1.0
• Road Runner: reward = reward/100.0
• Space Invaders: reward clipping to +/-1
# 6.1.2 Blocker Parameters
Parameters fixed across all experiments:

• Input format: [105, 80, 3], color (cropped then downsampled)
• Convolutional layers, where the final layer is concatenated with a one-hot embedding of the agent's action
• FC layers and a linear layer outputting logits
• Learning rate: 0.002
• Adam Optimizer
• Batch size: 400

Pong:

• 2 convolutional layers, 4 filters size 3x3 with 2x2 stride
• 2 10-unit hidden layers
• No dropout

Space Invaders and Road Runner:

• 4 convolutional layers, 16 filters size 3x3 with 2x2 stride
• 2 20-unit hidden layers
• Dropout with probability of discarding 0.5
• Examples were reweighted to give positive and negative examples equal weight
• Labels were manually cleaned after collection (by manually reviewing episodes and by looking for individual frames where the blocker disagreed with the given label)
# 6.2 How the Blocker Replaced Catastrophic Actions
The Blocker should be trained not just to imitate the human's classification of actions as catastrophic but also to decide which safe action to substitute for the catastrophic action (Fig 1). This would make the supervised learning problem of training the Blocker more complex than just a binary classification task. In our experiments we avoid dealing with the more complex learning problem, as it seems unlikely to change our conclusions. Instead, we use the following techniques:

• Fixed Action Replacement: The human specifies which action the Blocker should use to replace blocked actions. More generally, the human could specify a lookup table.

• Action Pruning: If an action is blocked, it is not sent to the environment. The agent has to choose an action again (having received a penalty for the blocked action). To ensure the agent always has at least one action available, the action with the lowest logit score is never blocked. (Essentially, we wait until the agent chooses an action that the Blocker thinks is unlikely to be catastrophic. This is a technique for replacing actions that is learned rather than hard-coded by the human. But the more general strategy would be to learn to imitate how the human replaces actions.) A minimal sketch of this pruning loop is given below.
Here are the techniques used for each game:
• Pong: Action Replacement with safe action 'Up'.
• Space Invaders: Action Replacement with the safe action being the agent's action but with 'Fire' removed.
• Road Runner: Action Pruning.
# 6.3 Space Invaders Experiment: Human Oversight Procedure
In Space Invaders, the agent starts on the left side of the screen. When a human blocks it from shooting the left barrier, it responds by staying to the left of the left barrier (where it knows it won't get a negative reward penalty). This means that for many episodes it never goes under the middle or right barriers. To get a training set that includes shooting under those barriers, the human would have to label for a long time. (We estimate 70 hours.) We fixed this problem by including episodes where the agent is initially placed at the center or right of the screen. We alternated between episodes with these three different initializations (i.e. starting at left (as normal), starting at center, starting at right). Once the Human Oversight phase was complete, we reverted to the normal initialization for every episode (starting at left).
| {
"id": "1606.06565"
} |
1707.04873 | Efficient Architecture Search by Network Transformation | Techniques for automatically designing deep neural network architectures such
as reinforcement learning based approaches have recently shown promising
results. However, their success is based on vast computational resources (e.g.
hundreds of GPUs), making them difficult to be widely used. A noticeable
limitation is that they still design and train each network from scratch during
the exploration of the architecture space, which is highly inefficient. In this
paper, we propose a new framework toward efficient architecture search by
exploring the architecture space based on the current network and reusing its
weights. We employ a reinforcement learning agent as the meta-controller, whose
action is to grow the network depth or layer width with function-preserving
transformations. As such, the previously validated networks can be reused for
further exploration, thus saves a large amount of computational cost. We apply
our method to explore the architecture space of the plain convolutional neural
networks (no skip-connections, branching etc.) on image benchmark datasets
(CIFAR-10, SVHN) with restricted computational resources (5 GPUs). Our method
can design highly competitive networks that outperform existing networks using
the same design scheme. On CIFAR-10, our model without skip-connections
achieves 4.23% test error rate, exceeding a vast majority of modern
architectures and approaching DenseNet. Furthermore, by applying our method to
explore the DenseNet architecture space, we are able to achieve more accurate
networks with fewer parameters. | http://arxiv.org/pdf/1707.04873 | Han Cai, Tianyao Chen, Weinan Zhang, Yong Yu, Jun Wang | cs.LG, cs.AI | The Thirty-Second AAAI Conference on Artificial Intelligence
(AAAI-18). We change the title from "Reinforcement Learning for Architecture
Search by Network Transformation" to "Efficient Architecture Search by
Network Transformation" | null | cs.LG | 20170716 | 20171121 | 7 1 0 2
v o N 1 2 ] G L . s c [ 2 v 3 7 8 4 0 . 7 0 7 1 : v i X r a
# Efficient Architecture Search by Network Transformation

Han Cai1, Tianyao Chen1, Weinan Zhang1*, Yong Yu1, Jun Wang2
1Shanghai Jiao Tong University, 2University College London
{hcai,tychen,wnzhang,yyu}@apex.sjtu.edu.cn, j.wang@cs.ucl.ac.uk
# Abstract
Techniques for automatically designing deep neural net- work architectures such as reinforcement learning based ap- proaches have recently shown promising results. However, their success is based on vast computational resources (e.g. hundreds of GPUs), making them difï¬cult to be widely used. A noticeable limitation is that they still design and train each network from scratch during the exploration of the architec- ture space, which is highly inefï¬cient. In this paper, we pro- pose a new framework toward efï¬cient architecture search by exploring the architecture space based on the current network and reusing its weights. We employ a reinforcement learn- ing agent as the meta-controller, whose action is to grow the network depth or layer width with function-preserving trans- formations. As such, the previously validated networks can be reused for further exploration, thus saves a large amount of computational cost. We apply our method to explore the architecture space of the plain convolutional neural networks (no skip-connections, branching etc.) on image benchmark datasets (CIFAR-10, SVHN) with restricted computational resources (5 GPUs). Our method can design highly com- petitive networks that outperform existing networks using the same design scheme. On CIFAR-10, our model with- out skip-connections achieves 4.23% test error rate, exceed- ing a vast majority of modern architectures and approaching DenseNet. Furthermore, by applying our method to explore the DenseNet architecture space, we are able to achieve more accurate networks with fewer parameters.
(Zoph and Le 2017; Real et al. 2017). Despite the promising results as reported, their success is based on vast computa- tional resources (e.g. hundreds of GPUs), making them dif- ï¬cult to be used in practice for individual researchers, small sized companies, or university research teams. Another key drawback is that they still design and train each network from scratch during exploring the architecture space without any leverage of previously explored networks, which results in high computational resources waste.
In fact, during the architecture design process, many slightly different networks are trained for the same task. Apart from their ï¬nal validation performances that are used to guide exploration, we should also have access to their architectures, weights, training curves etc., which contain abundant knowledge and can be leveraged to accelerate the architecture design process just like human experts (Chen, Goodfellow, and Shlens 2015; Klein et al. 2017). Further- more, there are typically many well-designed architectures, by human or automatic architecture designing methods, that have achieved good performances at the target task. Under restricted computational resources limits, instead of totally neglecting these existing networks and exploring the archi- tecture space from scratch (which does not guarantee to re- sult in better performance architectures), a more economical and efï¬cient alternative could be exploring the architecture space based on these successful networks and reusing their weights.
Introduction The great success of deep neural networks in various chal- lenging applications (Krizhevsky, Sutskever, and Hinton 2012; Bahdanau, Cho, and Bengio 2014; Silver et al. 2016) has led to a paradigm shift from feature designing to archi- tecture designing, which still remains a laborious task and requires human expertise. In recent years, many techniques for automating the architecture design process have been proposed (Snoek, Larochelle, and Adams 2012; Bergstra and Bengio 2012; Baker et al. 2017; Zoph and Le 2017; Real et al. 2017; Negrinho and Gordon 2017), and promis- ing results of designing competitive models against human- designed models are reported on some benchmark datasets
# âCorrespondence to Weinan Zhang.
Copyright © 2018, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
In this paper, we propose a new framework, called EAS, Efï¬cient Architecture Search, where the meta-controller ex- plores the architecture space by network transformation op- erations such as widening a certain layer (more units or ï¬l- ters), inserting a layer, adding skip-connections etc., given an existing network trained on the same task. To reuse weights, we consider the class of function-preserving trans- formations (Chen, Goodfellow, and Shlens 2015) that allow to initialize the new network to represent the same function as the given network but use different parameterization to be further trained to improve the performance, which can signiï¬cantly accelerate the training of the new network es- pecially for large networks. Furthermore, we combine our framework with recent advances of reinforcement learn- ing (RL) based automatic architecture designing methods (Baker et al. 2017; Zoph and Le 2017), and employ a RL based agent as the meta-controller.
Our experiments of exploring the architecture space of the plain convolutional neural networks (CNNs), which purely consists of convolutional, fully-connected and pool- ing layers without skip-connections, branching etc., on im- age benchmark datasets (CIFAR-10, SVHN), show that EAS with limited computational resources (5 GPUs) can design competitive architectures. The best plain model designed by EAS on CIFAR-10 with standard data augmentation achieves 4.23% test error rate, even better than many modern architectures that use skip-connections. We further apply our method to explore the DenseNet (Huang et al. 2017) archi- tecture space, and achieve 4.66% test error rate on CIFAR- 10 without data augmentation and 3.44% on CIFAR-10 with standard data augmentation, surpassing the best results given by the original DenseNet while still maintaining fewer pa- rameters.
Related Work and Background Automatic Architecture Designing There is a long stand- ing study on automatic architecture designing. Neuro- evolution algorithms which mimic the evolution processes in the nature, are one of the earliest automatic architec- ture designing methods (Miller, Todd, and Hegde 1989; Stanley and Miikkulainen 2002). Authors in (Real et al. 2017) used neuro-evolution algorithms to explore a large CNN architecture space and achieved networks which can match performances of human-designed models. In paral- lel, automatic architecture designing has also been stud- ied in the context of Bayesian optimization (Bergstra and Bengio 2012; Domhan, Springenberg, and Hutter 2015; Mendoza et al. 2016). Recently, reinforcement learning is in- troduced in automatic architecture designing and has shown strong empirical results. Authors in (Baker et al. 2017) pre- sented a Q-learning agent to sequentially pick CNN layers; authors in (Zoph and Le 2017) used an auto-regressive recur- rent network to generate a variable-length string that speci- ï¬es the architecture of a neural network and trained the re- current network with policy gradient.
As the above solutions rely on designing or training networks from scratch, signiï¬cant computational resources have been wasted during the construction. In this paper, we aim to address the efï¬ciency problem. Technically, we allow to reuse the existing networks trained on the same task and take network transformation actions. Both function- preserving transformations and an alternative RL based meta-controller are used to explore the architecture space. Moreover, we notice that there are some complementary techniques, such as learning curve prediction (Klein et al. 2017), for improving the efï¬ciency, which can be combined with our method.
Network Transformation and Knowledge Transfer Generally, any modiï¬cation to a given network can be viewed as a network transformation operation. In this pa- per, since our aim is to utilize knowledge stored in previ- ously trained networks, we focus on identifying the kind of network transformation operations that would be able to reuse pre-existing models. The idea of reusing pre-existing models or knowledge transfer between neural networks
has been studied before. Net2Net technique introduced in (Chen, Goodfellow, and Shlens 2015) describes two speciï¬c function-preserving transformations, namely Net2WiderNet and Net2DeeperNet, which respectively initialize a wider or deeper student network to represent the same functionality of the given teacher network and have proved to signiï¬cantly accelerate the training of the student network especially for large networks. Similar function-preserving schemes have also been proposed in ResNet particularly for training very deep architectures (He et al. 2016a). Additionally, the net- work compression technique presented in (Han et al. 2015) prunes less important connections (low-weight connections) in order to shrink the size of neural networks without reduc- ing their accuracy.
In this paper, instead, we focus on utilizing such network transformations to reuse pre-existing models to efï¬ciently and economically explore the architecture space for auto- matic architecture designing.
Reinforcement Learning Background Our meta- in this work is based on RL (Sutton and controller Barto 1998), techniques for training the agent to max- imize the cumulative reward when interacting with an environment (Cai et al. 2017). We use the REIN- FORCE algorithm (Williams 1992) similar to (Zoph and Le 2017) for updating the meta-controller, while other advanced policy gradient methods (Kakade 2002; Schulman et al. 2015) can be applied analogously. Our action space is, however, different with that of (Zoph and Le 2017) or any other RL based approach (Baker et al. 2017), as our actions are the network transformation operations like adding, deleting, widening, etc., while others are speciï¬c conï¬gurations of a newly created network layer on the top of preceding layers. Speciï¬cally, we model the automatic architecture design procedure as a sequential decision making process, where the state is the current network architecture and the action is the corresponding network transformation operation. After T steps of network transformations, the ï¬nal network architecture, along with its weights transferred from the initial input network, is then trained in the real data to get the validation performance to calculate the reward signal, which is further used to update the meta-controller via policy gradient algorithms to maximize the expected validation performances of the designed networks by the meta-controller.
Architecture Search by Net Transformation In this section, we ï¬rst introduce the overall framework of our meta-controller, and then show how each speciï¬c network transformation decision is made under it. We later extend the function-preserving transformations to the DenseNet (Huang et al. 2017) architecture space where di- rectly applying the original Net2Net operations can be prob- lematic since the output of a layer will be fed to all subse- quent layers.
We consider learning a meta-controller to generate net- work transformation actions given the current network ar- chitecture, which is speciï¬ed with a variable-length string (Zoph and Le 2017). To be able to generate various types
Figure 1: Overview of the RL based meta-controller in EAS, which consists of an encoder network for encoding the architecture and multiple separate actor networks for taking network transformation actions.
of network transformation actions while keeping the meta-controller simple, we use an encoder network to learn a low-dimensional representation of the given architecture, which is then fed into each separate actor network to generate a certain type of network transformation action. Furthermore, to handle variable-length network architectures as input and take the whole input architecture into consideration when making decisions, the encoder network is implemented with a bidirectional recurrent network (Schuster and Paliwal 1997) with an input embedding layer. The overall framework is illustrated in Figure 1, which is an analogue of end-to-end sequence to sequence learning (Sutskever, Vinyals, and Le 2014; Bahdanau, Cho, and Bengio 2014).
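As a concrete illustration, a minimal PyTorch sketch of such an encoder, assuming each layer of the architecture string has been integer-coded; the 16-dimensional embedding and 50 hidden units match the training details reported later, everything else is an assumption:

```python
import torch
import torch.nn as nn

class ArchitectureEncoder(nn.Module):
    """Bidirectional LSTM over layer embeddings of a (variable-length) architecture string."""
    def __init__(self, vocab_size, embed_dim=16, hidden_dim=50):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.bilstm = nn.LSTM(embed_dim, hidden_dim, bidirectional=True, batch_first=True)

    def forward(self, layer_tokens):                  # (batch, num_layers) integer-coded layers
        states, (h_n, _) = self.bilstm(self.embed(layer_tokens))
        final = torch.cat([h_n[0], h_n[1]], dim=-1)   # concatenated forward/backward final states
        # per-layer states feed the Net2Wider actor; the final state initialises the Net2Deeper actor
        return states, final
```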
Actor Networks. Given the low-dimensional representation of the input architecture, each actor network makes the necessary decisions for taking a certain type of network transformation action. In this work, we introduce two specific actor networks, namely the Net2Wider actor and the Net2Deeper actor, which correspond to Net2WiderNet and Net2DeeperNet respectively.
Net2Wider Actor. The Net2WiderNet operation replaces a layer with a wider layer, meaning more units for fully-connected layers or more filters for convolutional layers, while preserving the functionality. For example, consider a convolutional layer with kernel $K_l$ whose shape is $(k^l_w, k^l_h, f^l_i, f^l_o)$, where $k^l_w$ and $k^l_h$ denote the filter width and height, while $f^l_i$ and $f^l_o$ denote the number of input and output channels. To replace this layer with a wider layer that has $\tilde{f}^l_o$ ($\tilde{f}^l_o > f^l_o$) output channels, we first introduce a random remapping function $G_l$, defined as

$$G_l(j) = \begin{cases} j & 1 \le j \le f^l_o \\ \text{random sample from } \{1, \cdots, f^l_o\} & f^l_o < j \le \tilde{f}^l_o \end{cases} \qquad (1)$$

With the remapping function $G_l$, the new kernel $\tilde{K}_l$ for the wider layer, with shape $(k^l_w, k^l_h, f^l_i, \tilde{f}^l_o)$, is given by

$$\tilde{K}_l[x, y, i, j] = K_l[x, y, i, G_l(j)]. \qquad (2)$$

As such, the first $f^l_o$ entries of $\tilde{K}_l$ are directly copied from $K_l$, while the remaining $\tilde{f}^l_o - f^l_o$ entries are created by random sampling as defined in $G_l$. Accordingly, the new output of the wider layer is $\tilde{O}_l$ with $\tilde{O}_l(j) = O_l(G_l(j))$, where $O_l$ is the output of the original layer and we only show the channel dimension to make the notation simpler.

Figure 2: Net2Wider actor, which uses a shared sigmoid classifier to simultaneously determine whether to widen each layer based on its hidden state given by the encoder network.
To preserve the functionality, the kernel $K_{l+1}$ of the next layer should also be modified due to the replication in its input. The new kernel $\tilde{K}_{l+1}$, with shape $(k^{l+1}_w, k^{l+1}_h, \tilde{f}^{l+1}_i = \tilde{f}^l_o, f^{l+1}_o)$, is given by

$$\tilde{K}_{l+1}[x, y, j, k] = \frac{K_{l+1}[x, y, G_l(j), k]}{\left|\{z \mid G_l(z) = G_l(j)\}\right|}. \qquad (3)$$
For further details, we refer to the original Net2Net work (Chen, Goodfellow, and Shlens 2015).
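For concreteness, a small NumPy sketch of Eqs. (1)-(3) for a convolutional layer; the array layout (width, height, in, out) and the function name are ours, not taken from the paper's released code:

```python
import numpy as np

def net2wider_conv(K_l, K_next, new_width, rng=np.random):
    """Function-preserving widening of a conv layer.
    K_l:    kernel of layer l,   shape (k_w, k_h, f_in, f_out)
    K_next: kernel of layer l+1, shape (k_w, k_h, f_out, f_out_next)
    new_width: number of output channels of the widened layer (>= f_out)."""
    f_out = K_l.shape[3]
    # Eq. (1): identity on the first f_out channels, random replication afterwards
    mapping = np.concatenate([np.arange(f_out),
                              rng.randint(0, f_out, size=new_width - f_out)])
    # Eq. (2): copy / replicate output channels of layer l
    K_l_new = K_l[:, :, :, mapping]
    # Eq. (3): divide each (possibly replicated) input channel of layer l+1 by its replication count
    counts = np.bincount(mapping, minlength=f_out)            # |{z : G_l(z) = c}| for each channel c
    K_next_new = K_next[:, :, mapping, :] / counts[mapping][None, None, :, None]
    return K_l_new, K_next_new, mapping
```

The division by the replication count is what makes the transformation function-preserving: each original channel's contribution to the next layer is split evenly across its copies.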
In our work, to be flexible and efficient, the Net2Wider actor simultaneously determines whether each layer should be extended. Specifically, for each layer, this decision is carried out by a shared sigmoid classifier given the hidden state of the layer learned by the bidirectional encoder network. Moreover, we follow previous work and search the number of filters for convolutional layers and units for fully-connected layers in a discrete space. Therefore, if the Net2Wider actor decides to widen a layer, the number of filters or units of the layer increases to the next discrete level, e.g. from 32 to 64. The structure of the Net2Wider actor is shown in Figure 2.
Net2Deeper Actor. The Net2DeeperNet operation inserts a new layer that is initialized as an identity mapping between two layers, so as to preserve the functionality. For a new convolutional layer, the kernel is set to identity filters, while for a new fully-connected layer, the weight matrix is set to the identity matrix. Thus the new layer starts with the same number of filters or units as the layer below, and can further be widened when a Net2WiderNet operation is performed on it. To fully preserve the functionality, the Net2DeeperNet operation places a constraint on the activation function $\phi$: $\phi$ must satisfy $\phi(I\phi(v)) = \phi(v)$ for all vectors $v$. This property holds for the rectified linear activation (ReLU) but fails for sigmoid and tanh activation. However, we can still reuse weights of existing networks with sigmoid or tanh activation, which could be useful compared to random initialization. Additionally, when using batch normalization (Ioffe and Szegedy 2015), we need to set the output scale and output bias of the batch normalization layer to undo the normalization, rather than initializing them as ones and zeros. Further details about the Net2DeeperNet operation are provided in the original paper (Chen, Goodfellow, and Shlens 2015).

Figure 3: Net2Deeper actor, which uses a recurrent network to sequentially determine where to insert the new layer and corresponding parameters for the new layer based on the final hidden state of the encoder network given the input architecture.
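A minimal sketch of the identity initialisation for a new convolutional layer (shape convention follows the sketch above; biases, if any, would be set to zero):

```python
import numpy as np

def net2deeper_conv_kernel(num_channels, kernel_size=3):
    """Identity-initialised kernel for a layer inserted by Net2DeeperNet: the centre tap
    of channel i -> channel i is 1, everything else 0, so the layer computes the identity."""
    K = np.zeros((kernel_size, kernel_size, num_channels, num_channels))
    centre = kernel_size // 2
    for i in range(num_channels):
        K[centre, centre, i, i] = 1.0
    return K
```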
The structure of the Net2Deeper actor is shown in Fig- ure 3, which is a recurrent network whose hidden state is initialized with the ï¬nal hidden state of the encoder net- work. Similar to previous work (Baker et al. 2017), we al- low the Net2Deeper actor to insert one new layer at each step. Speciï¬cally, we divide a CNN architecture into sev- eral blocks according to the pooling layers and Net2Deeper actor sequentially determines which block to insert the new layer, a speciï¬c index within the block and parameters of the new layer. For a new convolutional layer, the agent needs to determine the ï¬lter size and the stride while for a new fully- connected layer, no parameter prediction is needed. In CNN architectures, any fully-connected layer should be on the top of all convolutional and pooling layers. To avoid resulting in unreasonable architectures, if the Net2Deeper actor decides to insert a new layer after a fully-connected layer or the ï¬nal global average pooling layer, the new layer is restricted to be a fully-connected layer, otherwise it must be a convolutional layer.
Function-preserving Transformation for DenseNet The original Net2Net operations proposed in (Chen, Good- fellow, and Shlens 2015) are discussed under the scenarios where the network is arranged layer-by-layer, i.e. the output of a layer is only fed to its next layer. As such, in some mod- ern CNN architectures where the output of a layer would be fed to multiple subsequent layers, such as DenseNet (Huang
et al. 2017), directly applying the original Net2Net opera- tions can be problematic. In this section, we introduce sev- eral extensions to the original Net2Net operations to enable function-preserving transformations for DenseNet.
Different from the plain CNN, in DenseNet the $l$th layer would receive the outputs of all preceding layers as input, which are concatenated on the channel dimension, denoted as $[O_0, O_1, \cdots, O_{l-1}]$, while its output $O_l$ would be fed to all subsequent layers.
Denote the kernel of the $l$th layer as $K_l$ with shape $(k^l_w, k^l_h, f^l_i, f^l_o)$. To replace the $l$th layer with a wider layer that has $\tilde{f}^l_o$ output channels while preserving the functionality, the creation of the new kernel $\tilde{K}_l$ in the $l$th layer is the same as in the original Net2WiderNet operation (see Eq. (1) and Eq. (2)). As such, the new output of the wider layer is $\tilde{O}_l$ with $\tilde{O}_l(j) = O_l(G_l(j))$, where $G_l$ is the random remapping function as defined in Eq. (1). Since the output of the $l$th layer will be fed to all subsequent layers in DenseNet, the replication in $\tilde{O}_l$ will result in replication in the inputs of all layers after the $l$th layer. As such, instead of only modifying the kernel of the next layer as done in the original Net2WiderNet operation, we need to modify the kernels of all subsequent layers in DenseNet. For the $m$th layer where $m > l$, its input becomes $[O_0, \cdots, O_{l-1}, \tilde{O}_l, O_{l+1}, \cdots, O_{m-1}]$ after widening the $l$th layer; thus, from the perspective of the $m$th layer, the equivalent random remapping function $\hat{G}_m$ can be written as

$$\hat{G}_m(j) = \begin{cases} j & 1 \le j \le f^{0:l}_o \\ f^{0:l}_o + G_l(j - f^{0:l}_o) & f^{0:l}_o < j \le f^{0:l}_o + \tilde{f}^l_o \\ j - \tilde{f}^l_o + f^l_o & f^{0:l}_o + \tilde{f}^l_o < j \le f^{0:m}_o + \tilde{f}^l_o - f^l_o \end{cases} \qquad (4)$$

where $f^{0:l}_o$ is the number of input channels of the $l$th layer; the first part corresponds to $[O_0, \cdots, O_{l-1}]$, the second part corresponds to $[\tilde{O}_l]$, and the last part corresponds to $[O_{l+1}, \cdots, O_{m-1}]$. A simple example of $\hat{G}_m$ is

$$\hat{G}_m: \{1, \cdots, 5, 6, 7, 8, 9, 10, 11\} \rightarrow \{1, \cdots, 5, 6, 7, 6, 6, 8, 9\}, \quad \text{where } G_l: \{1, 2, 3, 4\} \rightarrow \{1, 2, 1, 1\}.$$

Accordingly, the new kernel of the $m$th layer can be given by Eq. (3) with $G_l$ replaced by $\hat{G}_m$.
To insert a new layer in DenseNet, suppose the new layer is inserted after the $l$th layer. Denote the output of the new layer as $O_{new}$; its input is $[O_0, O_1, \cdots, O_l]$. Therefore, for the $m$th ($m > l$) layer, its new input after the insertion is $[O_0, O_1, \cdots, O_l, O_{new}, O_{l+1}, \cdots, O_{m-1}]$. To preserve the functionality, similar to the Net2WiderNet case, $O_{new}$ should be a replication of some entries in $[O_0, O_1, \cdots, O_l]$. This is possible, since the input of the new layer is exactly $[O_0, O_1, \cdots, O_l]$. Each filter in the new layer can be represented with a tensor, denoted as $\hat{F}$ with shape $(k^{new}_w, k^{new}_h, f^{new}_i = f^{0:l+1}_o)$, where $k^{new}_w$ and $k^{new}_h$ denote the width and height of the filter, and $f^{new}_i$ is the number of input channels. To make the output of $\hat{F}$ a replication of the $n$th entry in $[O_0, O_1, \cdots, O_l]$, we can set $\hat{F}$ (using the special case $k^{new}_w = k^{new}_h = 3$ for illustration) as

$$\hat{F}[x, y, n] = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix}, \qquad (5)$$

while all other values in $\hat{F}$ are set to 0. Note that $n$ can be chosen randomly from $\{1, \cdots, f^{0:l+1}_o\}$ for each filter. After all filters in the new layer are set, we can form an equivalent random remapping function for all subsequent layers, as is done in Eq. (4), and modify their kernels accordingly.
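A small sketch of the remapping in Eq. (4), written so it can be checked against the example above; indices are 1-based as in the paper and the helper name is ours:

```python
def densenet_wider_remapping(G_l, f0l, f_l, f_l_new, f0m):
    """Equivalent remapping seen by a later layer m > l after widening layer l (Eq. 4).
    G_l: remapping of layer l (1-based dict), f0l: input channels of layer l,
    f_l / f_l_new: old / new width of layer l, f0m: original input channels of layer m."""
    def G_m(j):
        if j <= f0l:
            return j                      # channels before layer l are untouched
        if j <= f0l + f_l_new:
            return f0l + G_l[j - f0l]     # channels of the widened layer l
        return j - f_l_new + f_l          # channels of layers between l and m
    return [G_m(j) for j in range(1, f0m + f_l_new - f_l + 1)]

# Reproduces the example above:
# densenet_wider_remapping({1: 1, 2: 2, 3: 1, 4: 1}, f0l=5, f_l=2, f_l_new=4, f0m=9)
# -> [1, 2, 3, 4, 5, 6, 7, 6, 6, 8, 9]
```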
# Experiments and Results
In line with the previous work (Baker et al. 2017; Zoph and Le 2017; Real et al. 2017), we apply the proposed EAS on image benchmark datasets (CIFAR-10 and SVHN) to ex- plore high performance CNN architectures for the image classiï¬cation task1. Notice that the performances of the ï¬nal designed models largely depend on the architecture space and the computational resources. In our experiments, we evaluate EAS in two different settings. In all cases, we use restricted computational resources (5 GPUs) compared to the previous work such as (Zoph and Le 2017) that used 800 GPUs. In the ï¬rst setting, we apply EAS to explore the plain CNN architecture space, which purely consists of con- volutional, pooling and fully-connected layers. While in the second setting, we apply EAS to explore the DenseNet ar- chitecture space.
# Image Datasets
CIFAR-10 The CIFAR-10 dataset (Krizhevsky and Hin- ton 2009) consists of 50,000 training images and 10,000 test images. We use a standard data augmentation scheme that is widely used for CIFAR-10 (Huang et al. 2017), and denote the augmented dataset as C10+ while the original dataset is denoted as C10. For preprocessing, we normal- ized the images using the channel means and standard de- viations. Following the previous work (Baker et al. 2017; Zoph and Le 2017), we randomly sample 5,000 images from the training set to form a validation set while using the re- maining 45,000 images for training during exploring the ar- chitecture space.
SVHN The Street View House Numbers (SVHN) dataset (Netzer et al. 2011) contains 73,257 images in the original training set, 26,032 images in the test set, and 531,131 addi- tional images in the extra training set. For preprocessing, we divide the pixel values by 255 and do not perform any data augmentation, as is done in (Huang et al. 2017). We follow (Baker et al. 2017) and use the original training set during the architecture search phase with 5,000 randomly sampled images as the validation set, while training the ï¬nal discov- ered architectures using all the training data, including the original training set and extra training set.
1Experiment code and discovered top architectures along with weights: https://github.com/han-cai/EAS
Figure 4: Progress of two stages architecture search on C10+ in the plain CNN architecture space.
Training Details For the meta-controller, we use a one-layer bidirectional LSTM with 50 hidden units as the encoder network (Fig- ure 1) with an embedding size of 16, and train it with the ADAM optimizer (Kingma and Ba 2015).
At each step, the meta-controller samples 10 networks by taking network transformation actions. Since the sampled networks are not trained from scratch but we reuse weights of the given network in our scenario, they are then trained for 20 epochs, a relative small number compared to 50 epochs in (Zoph and Le 2017). Besides, we use a smaller initial learn- ing rate for this reason. Other settings for training networks on CIFAR-10 and SVHN, are similar to (Huang et al. 2017; Zoph and Le 2017). Speciï¬cally, we use the SGD with a Nesterov momentum (Sutskever et al. 2013) of 0.9, a weight decay of 0.0001, a batch size of 64. The initial learning rate is 0.02 and is further annealed with a cosine learning rate decay (Gastaldi 2017). The accuracy in the held-out valida- tion set is used to compute the reward signal for each sam- pled network. Since the gain of improving the accuracy from 90% to 91% should be much larger than from 60% to 61%, instead of directly using the validation accuracy accv as the reward, as done in (Zoph and Le 2017), we perform a non- linear transformation on accv, i.e. tan(accv à Ï/2), and use the transformed value as the reward. Additionally, we use an exponential moving average of previous rewards, with a decay of 0.95 as the baseline function to reduce the variance.
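A minimal sketch of this reward shaping; the exact form and initialisation of the moving-average baseline are assumptions:

```python
import math

class RewardShaper:
    """tan-transformed validation accuracy with an exponential-moving-average baseline."""
    def __init__(self, decay=0.95):
        self.decay = decay
        self.baseline = None

    def __call__(self, val_accuracy):                  # val_accuracy in [0, 1)
        reward = math.tan(val_accuracy * math.pi / 2)  # emphasises gains at high accuracy
        if self.baseline is None:
            self.baseline = reward
        else:
            self.baseline = self.decay * self.baseline + (1 - self.decay) * reward
        return reward - self.baseline                  # advantage fed to REINFORCE
```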
Explore Plain CNN Architecture Space We start applying EAS to explore the plain CNN archi- tecture space. Following the previous automatic architec- ture designing methods (Baker et al. 2017; Zoph and Le 2017), EAS searches layer parameters in a discrete and lim- ited space. For every convolutional layer, the ï¬lter size is chosen from {1, 3, 5} and the number of ï¬lters is cho- sen from {16, 32, 64, 96, 128, 192, 256, 320, 384, 448, 512}, while the stride is ï¬xed to be 1 (Baker et al. 2017). For every fully-connected layer, the number of units is chosen from {64, 128, 256, 384, 512, 640, 768, 896, 1024}. Additionally,
Table 1: Simple start point network. C(n, f, l) denotes a convolutional layer with n filters, filter size f and stride l; P(f, l, MAX) and P(f, l, AVG) denote a max and an average pooling layer with filter size f and stride l respectively; FC(n) denotes a fully-connected layer with n units; SM(n) denotes a softmax layer with n output units.

| Model Architecture | Validation Accuracy (%) |
| --- | --- |
| C(16, 3, 1), P(2, 2, MAX), C(32, 3, 1), P(2, 2, MAX), C(64, 3, 1), P(2, 2, MAX), C(128, 3, 1), P(4, 4, AVG), FC(256), SM(10) | 87.07 |
we use ReLU and batch normalization for each convolu- tional or fully-connected layer. For SVHN, we add a dropout layer after each convolutional layer (except the ï¬rst layer) and use a dropout rate of 0.2 (Huang et al. 2017).
Start with Small Network We begin the exploration on C10+, using a small network (see Table 1), which achieves 87.07% accuracy in the held-out validation set, as the start point. Different from (Zoph and Le 2017; Baker et al. 2017), EAS is not restricted to start from empty and can ï¬exibly use any discovered architecture as the new start point. As such, to take the advantage of such ï¬exibility and also re- duce the search space for saving the computational resources and time, we divide the whole architecture search process into two stages where we allow the meta-controller to take 5 steps of Net2Deeper action and 4 steps of Net2Wider action in the ï¬rst stage. After 300 networks are sampled, we take the network which performs best currently and train it with a longer period of time (100 epochs) to be used as the start point for the second stage. Similarly, in the second stage, we also allow the meta-controller to take 5 steps of Net2Deeper action and 4 steps of Net2Wider action and stop exploration after 150 networks are sampled.
The progress of the two stages architecture search is shown in Figure 4, where we can ï¬nd that EAS gradu- ally learns to pick high performance architectures at each stage. As EAS takes function-preserving transformations to explore the architecture space, we can also ï¬nd that the sampled architectures consistently perform better than the start point network at each stage. Thus it is usually âsafeâ to explore the architecture space with EAS. We take the top networks discovered during the second stage and fur- ther train the networks with 300 epochs using the full train- ing set. Finally, the best model achieves 95.11% test ac- curacy (i.e. 4.89% test error rate). Furthermore, to justify the transferability of the discovered networks, we train the top architecture (95.11% test accuracy) on SVHN from ran- dom initialization with 40 epochs using the full training set and achieves 98.17% test accuracy (i.e. 1.83% test er- ror rate), better than both human-designed and automatically designed architectures that are in the plain CNN architecture space (see Table 2).
We would like to emphasize that the required computa- tional resources to achieve this result is much smaller than those required in (Zoph and Le 2017; Real et al. 2017). Speciï¬cally, it takes less than 2 days on 5 GeForce GTX 1080 GPUs with totally 450 networks trained to achieve 4.89% test error rate on C10+ starting from a small network.
Further Explore Larger Architecture Space To further search better architectures in the plain CNN architecture
Table 2: Test error rate (%) comparison with CNNs that use convolutional, fully-connected and pooling layers alone.
| | Model | C10+ | SVHN |
| --- | --- | --- | --- |
| human designed | Maxout (Goodfellow et al. 2013) | 9.38 | 2.47 |
| human designed | NIN (Lin, Chen, and Yan 2013) | 8.81 | 2.35 |
| human designed | All-CNN (Springenberg et al. 2014) | 7.25 | - |
| human designed | VGGnet (Simonyan and Zisserman 2015) | 7.25 | - |
| auto designed | MetaQNN (Baker et al. 2017) (depth=7) | 6.92 | - |
| auto designed | MetaQNN (Baker et al. 2017) (ensemble) | - | 2.06 |
| auto designed | EAS (plain CNN, depth=16) | 4.89 | 1.83 |
| auto designed | EAS (plain CNN, depth=20) | 4.23 | 1.73 |
space, in the second experiment, we use the top architec- tures discovered in the ï¬rst experiment, as the start points to explore a larger architecture space on C10+ and SVHN. This experiment on each dataset takes around 2 days on 5 GPUs.
The summarized results of comparing with human- designed and automatically designed architectures that use a similar design scheme (plain CNN), are reported in Table 2, where we can ï¬nd that the top model designed by EAS on the plain CNN architecture space outperforms all similar models by a large margin. Speciï¬cally, comparing to human- designed models, the test error rate drops from 7.25% to 4.23% on C10+ and from 2.35% to 1.73% on SVHN. While comparing to MetaQNN, the Q-learning based automatic ar- chitecture designing method, EAS achieves a relative test er- ror rate reduction of 38.9% on C10+ and 16.0% on SVHN. We also notice that the best model designed by MetaQNN on C10+ only has a depth of 7, though the maximum is set to be 18 in the original paper (Baker et al. 2017). We sup- pose maybe they trained each designed network from scratch and used an aggressive training strategy to accelerate train- ing, which resulted in many networks under performed, es- pecially for deep networks. Since we reuse the weights of pre-existing networks, the deep networks are validated more accurately in EAS, and we can thus design deeper and more accurate networks than MetaQNN.
We also report the comparison with state-of-the-art ar- chitectures that use advanced techniques such as skip- connections, branching etc., on C10+ in Table 3. Though it is not a fair comparison since we do not incorporate such advanced techniques into the search space in this experi- ment, we still ï¬nd that the top model designed by EAS is highly competitive even comparing to these state-of-the-art modern architectures. Speciï¬cally, the 20-layers plain CNN with 23.4M parameters outperforms ResNet, its stochas- tic depth variant and its pre-activation variant. It also ap- proaches the best result given by DenseNet. When com- paring to automatic architecture designing methods that in-
# Table 3: Test error rate (%) comparison with state-of-the-art architectures.
| | Model | Depth | Params | C10+ |
| --- | --- | --- | --- | --- |
| human designed | ResNet (He et al. 2016a) | 110 | 1.7M | 6.61 |
| human designed | ResNet (stochastic depth) (Huang et al. 2017) | 1202 | 10.2M | 4.91 |
| human designed | Wide ResNet (Zagoruyko and Komodakis 2016) | 16 | 11.0M | 4.81 |
| human designed | Wide ResNet (Zagoruyko and Komodakis 2016) | 28 | 36.5M | 4.17 |
| human designed | ResNet (pre-activation) (He et al. 2016b) | 1001 | 10.2M | 4.62 |
| human designed | DenseNet (L = 40, k = 12) (Huang et al. 2017) | 40 | 1.0M | 5.24 |
| human designed | DenseNet-BC (L = 100, k = 12) (Huang et al. 2017) | 100 | 0.8M | 4.51 |
| human designed | DenseNet-BC (L = 190, k = 40) (Huang et al. 2017) | 190 | 25.6M | 3.46 |
| auto designed | Large-Scale Evolution (250 GPUs) (Real et al. 2017) | - | 5.4M | 5.40 |
| auto designed | NAS (predicting strides, 800 GPUs) (Zoph and Le 2017) | 20 | 2.5M | 6.01 |
| auto designed | NAS (max pooling, 800 GPUs) (Zoph and Le 2017) | 39 | 7.1M | 4.47 |
| auto designed | NAS (post-processing, 800 GPUs) (Zoph and Le 2017) | 39 | 37.4M | 3.65 |
| auto designed | EAS (plain CNN, 5 GPUs) | 20 | 23.4M | 4.23 |
Figure 5: Comparison between RL based meta-controller and random search on C10+.
Table 4: Test error rate (%) results of exploring DenseNet architecture space with EAS.
| Model | Depth | Params | C10 | C10+ |
| --- | --- | --- | --- | --- |
| DenseNet (L = 100, k = 24) | 100 | 27.2M | 5.83 | 3.74 |
| DenseNet-BC (L = 250, k = 24) | 250 | 15.3M | 5.19 | 3.62 |
| DenseNet-BC (L = 190, k = 40) | 190 | 25.6M | - | 3.46 |
| NAS (post-processing) | 39 | 37.4M | - | 3.65 |
| EAS (DenseNet on C10) | 70 | 8.6M | 4.66 | - |
| EAS (DenseNet on C10+) | 76 | 10.7M | - | 3.44 |
corporate skip-connections into their search space, our 20- layers plain model beats most of them except NAS with post-processing, that is much deeper and has more param- eters than our model. Moreover, we only use 5 GPUs and train hundreds of networks while they use 800 GPUs and train tens of thousands of networks.
Comparison Between RL and Random Search. Our framework is not restricted to the RL based meta-controller. Besides RL, one can also take network transformation actions to explore the architecture space by random search, which can be effective in some cases (Bergstra and Bengio 2012). In this experiment, we compare the performances of the RL based meta-controller and the random search meta-controller in the architecture space that is used in the above experiments. Specifically, we use the network in Table 1 as the start point and let the meta-controller take 5 steps of Net2Deeper action and 4 steps of Net2Wider
action. The result is reported in Figure 5, which shows that the RL based meta-controller can effectively focus on the right search direction, while the random search cannot (left plot), and thus ï¬nd high performance architectures more ef- ï¬ciently than random search.
Explore DenseNet Architecture Space We also apply EAS to explore the DenseNet architecture space. We use the DenseNet-BC (L = 40, k = 40) as the start point. The growth rate, i.e. the width of the non- bottleneck layer is chosen from {40, 44, 48, 52, 56, 60, 64}, and the result is reported in Table 4. We ï¬nd that by ap- plying EAS to explore the DenseNet architecture space, we achieve a test error rate of 4.66% on C10, better than the best result, i.e. 5.19% given by the original DenseNet while having 43.79% less parameters. On C10+, we achieve a test error rate of 3.44%, also outperforming the best result, i.e. 3.46% given by the original DenseNet while having 58.20% less parameters.
Conclusion In this paper, we presented EAS, a new framework to- ward economical and efï¬cient architecture search, where the meta-controller is implemented as a RL agent. It learns to take actions for network transformation to explore the architecture space. By starting from an existing network and reusing its weights via the class of function-preserving transformation operations, EAS is able to utilize knowledge stored in previously trained networks and take advantage of the existing successful architectures in the target task to explore the architecture space efï¬ciently. Our experiments have demonstrated EASâs outstanding performance and ef- ï¬ciency compared with several strong baselines. For future work, we would like to explore more network transforma- tion operations and apply EAS for different purposes such as searching networks that not only have high accuracy but also keep a balance between the size and the performance.
Acknowledgments This research was sponsored by Huawei Innovation Re- search Program, NSFC (61702327) and Shanghai Sailing Program (17YF1428200).
References [Bahdanau, Cho, and Bengio 2014] Bahdanau, D.; Cho, K.; and Bengio, Y. 2014. Neural machine translation by jointly learning to align and translate. ICLR.
[Baker et al. 2017] Baker, B.; Gupta, O.; Naik, N.; and Raskar, R. 2017. Designing neural network architectures using reinforcement learning. ICLR.
[Bergstra and Bengio 2012] Bergstra, J., and Bengio, Y. 2012. Ran- dom search for hyper-parameter optimization. JMLR.
[Cai et al. 2017] Cai, H.; Ren, K.; Zhang, W.; Malialis, K.; Wang, J.; Yu, Y.; and Guo, D. 2017. Real-time bidding by reinforcement learning in display advertising. In WSDM.
[Chen, Goodfellow, and Shlens 2015] Chen, T.; Goodfellow, I.; and Shlens, J. 2015. Net2net: Accelerating learning via knowledge transfer. ICLR.
[Domhan, Springenberg, and Hutter 2015] Domhan, T.; Springen- berg, J. T.; and Hutter, F. 2015. Speeding up automatic hyper- parameter optimization of deep neural networks by extrapolation of learning curves. In IJCAI.
[Gastaldi 2017] Gastaldi, X. 2017. Shake-shake regularization.
arXiv preprint arXiv:1705.07485.

[Goodfellow et al. 2013] Goodfellow, I. J.; Warde-Farley, D.; Mirza, M.; Courville, A.; and Bengio, Y. 2013. Maxout networks. ICML.
[Han et al. 2015] Han, S.; Pool, J.; Tran, J.; and Dally, W. 2015. Learning both weights and connections for efï¬cient neural net- work. In NIPS.
[He et al. 2016a] He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016a. Deep residual learning for image recognition. In CVPR.
[He et al. 2016b] He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016b. Identity mappings in deep residual networks. In ECCV.
[Huang et al. 2017] Huang, G.; Liu, Z.; Weinberger, K. Q.; and van der Maaten, L. 2017. Densely connected convolutional net- works. CVPR.
[Ioffe and Szegedy 2015] Ioffe, S., and Szegedy, C. 2015. Batch normalization: Accelerating deep network training by reducing in- ternal covariate shift. ICML.
[Kakade 2002] Kakade, S. 2002. A natural policy gradient. NIPS. [Kingma and Ba 2015] Kingma, D., and Ba, J. 2015. Adam: A
method for stochastic optimization. ICLR.
[Klein et al. 2017] Klein, A.; Falkner, S.; Springenberg, J. T.; and Hutter, F. 2017. Learning curve prediction with bayesian neural networks. ICLR.
[Krizhevsky and Hinton 2009] Krizhevsky, A., and Hinton, G. 2009. Learning multiple layers of features from tiny images.
[Krizhevsky, Sutskever, and Hinton 2012] Krizhevsky, A.; Sutskever, I.; and Hinton, G. E. 2012. Imagenet classification with deep convolutional neural networks. In NIPS.
[Lin, Chen, and Yan 2013] Lin, M.; Chen, Q.; and Yan, S. 2013. Network in network. arXiv preprint arXiv:1312.4400.
[Mendoza et al. 2016] Mendoza, H.; Klein, A.; Feurer, M.; Sprin- genberg, J. T.; and Hutter, F. 2016. Towards automatically-tuned neural networks. In Workshop on Automatic Machine Learning. [Miller, Todd, and Hegde 1989] Miller, G. F.; Todd, P. M.; and Hegde, S. U. 1989. Designing neural networks using genetic algo- rithms. In ICGA. Morgan Kaufmann Publishers Inc.
[Negrinho and Gordon 2017] Negrinho, R., and Gordon, G. 2017. Deeparchitect: Automatically designing and training deep architec- tures. arXiv preprint arXiv:1704.08792.
[Netzer et al. 2011] Netzer, Y.; Wang, T.; Coates, A.; Bissacco, A.; Wu, B.; and Ng, A. Y. 2011. Reading digits in natural images with unsupervised feature learning. In NIPS workshop on deep learning and unsupervised feature learning.
[Real et al. 2017] Real, E.; Moore, S.; Selle, A.; Saxena, S.; Sue- matsu, Y. L.; Le, Q.; and Kurakin, A. 2017. Large-scale evolution of image classiï¬ers. ICML.
[Schulman et al. 2015] Schulman, J.; Levine, S.; Abbeel, P.; Jordan, M. I.; and Moritz, P. 2015. Trust region policy optimization. In ICML.
[Schuster and Paliwal 1997] Schuster, M., and Paliwal, K. K. 1997. Bidirectional recurrent neural networks. IEEE Transactions on Sig- nal Processing.
[Silver et al. 2016] Silver, D.; Huang, A.; Maddison, C. J.; Guez, A.; Sifre, L.; Van Den Driessche, G.; Schrittwieser, J.; Antonoglou, I.; Panneershelvam, V.; Lanctot, M.; et al. 2016. Mastering the game of go with deep neural networks and tree search. Nature. [Simonyan and Zisserman 2015] Simonyan, K., and Zisserman, A. 2015. Very deep convolutional networks for large-scale image recognition. ICLR.
[Snoek, Larochelle, and Adams 2012] Snoek, J.; Larochelle, H.; and Adams, R. P. 2012. Practical bayesian optimization of ma- chine learning algorithms. In NIPS.
[Springenberg et al. 2014] Springenberg, J. T.; Dosovitskiy, A.; Brox, T.; and Riedmiller, M. 2014. Striving for simplicity: The all convolutional net. arXiv preprint arXiv:1412.6806.
[Stanley and Miikkulainen 2002] Stanley, K. O., and Miikkulainen, R. 2002. Evolving neural networks through augmenting topolo- gies. Evolutionary computation.
[Sutskever et al. 2013] Sutskever, I.; Martens, J.; Dahl, G.; and Hin- ton, G. 2013. On the importance of initialization and momentum in deep learning. In ICML.
[Sutskever, Vinyals, and Le 2014] Sutskever, I.; Vinyals, O.; and Le, Q. V. 2014. Sequence to sequence learning with neural net- works. In NIPS.
[Sutton and Barto 1998] Sutton, R. S., and Barto, A. G. 1998. Re- inforcement learning: An introduction. MIT press Cambridge.
[Williams 1992] Williams, R. J. 1992. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning.
[Zagoruyko and Komodakis 2016] Zagoruyko, S., and Komodakis, N. 2016. Wide residual networks. arXiv preprint arXiv:1605.07146.
[Zoph and Le 2017] Zoph, B., and Le, Q. V. 2017. Neural architec- ture search with reinforcement learning. ICLR. | {
"id": "1705.07485"
} |
1707.03904 | Quasar: Datasets for Question Answering by Search and Reading | We present two new large-scale datasets aimed at evaluating systems designed
to comprehend a natural language query and extract its answer from a large
corpus of text. The Quasar-S dataset consists of 37000 cloze-style
(fill-in-the-gap) queries constructed from definitions of software entity tags
on the popular website Stack Overflow. The posts and comments on the website
serve as the background corpus for answering the cloze questions. The Quasar-T
dataset consists of 43000 open-domain trivia questions and their answers
obtained from various internet sources. ClueWeb09 serves as the background
corpus for extracting these answers. We pose these datasets as a challenge for
two related subtasks of factoid Question Answering: (1) searching for relevant
pieces of text that include the correct answer to a query, and (2) reading the
retrieved text to answer the query. We also describe a retrieval system for
extracting relevant sentences and documents from the corpus given a query, and
include these in the release for researchers wishing to only focus on (2). We
evaluate several baselines on both datasets, ranging from simple heuristics to
powerful neural models, and show that these lag behind human performance by
16.4% and 32.1% for Quasar-S and -T respectively. The datasets are available at
https://github.com/bdhingra/quasar . | http://arxiv.org/pdf/1707.03904 | Bhuwan Dhingra, Kathryn Mazaitis, William W. Cohen | cs.CL, cs.IR, cs.LG | null | null | cs.CL | 20170712 | 20170809 |
# QUASAR: DATASETS FOR QUESTION ANSWERING BY SEARCH AND READING
Bhuwan Dhingra, Kathryn Mazaitis, William W. Cohen
School of Computer Science, Carnegie Mellon University
{bdhingra, krivard, wcohen}@cs.cmu.edu
# Abstract
We present two new large-scale datasets aimed at evaluating systems designed to comprehend a natural language query and extract its answer from a large corpus of text. The QUASAR-S dataset consists of 37000 cloze-style (ï¬ll-in-the-gap) queries constructed from deï¬nitions of software entity tags on the popular website Stack Overï¬ow. The posts and comments on the website serve as the background cor- pus for answering the cloze questions. The QUASAR-T dataset consists of 43000 open-domain trivia questions and their answers obtained from various internet sources. ClueWeb09 (Callan et al., 2009) serves as the background corpus for ex- tracting these answers. We pose these datasets as a challenge for two related sub- tasks of factoid Question Answering: (1) searching for relevant pieces of text that include the correct answer to a query, and (2) reading the retrieved text to answer the query. We also describe a retrieval system for extracting relevant sentences and doc- uments from the corpus given a query, and include these in the release for researchers wishing to only focus on (2). We evaluate several baselines on both datasets, ranging from simple heuristics to powerful neu- ral models, and show that these lag be- hind human performance by 16.4% and 32.1% for QUASAR-S and -T respectively. The datasets are available at https:// github.com/bdhingra/quasar.
# Introduction
to information seeking questions posed in natu- ral language. Depending on the knowledge source available there are two main approaches for fac- toid QA. Structured sources, including Knowledge Bases (KBs) such as Freebase (Bollacker et al., 2008), are easier to process automatically since the information is organized according to a ï¬xed In this case the question is parsed into schema. a logical form in order to query against the KB. However, even the largest KBs are often incom- plete (Miller et al., 2016; West et al., 2014), and hence can only answer a limited subset of all pos- sible factoid questions.
For this reason the focus is now shifting towards unstructured sources, such as Wikipedia articles, which hold a vast quantity of information in tex- tual form and, in principle, can be used to answer a much larger collection of questions. Extracting the correct answer from unstructured text is, how- ever, challenging, and typical QA pipelines con- sist of the following two components: (1) search- ing for the passages relevant to the given question, and (2) reading the retrieved text in order to se- lect a span of text which best answers the question (Chen et al., 2017; Watanabe et al., 2017).
Like most other language technologies, the cur- rent research focus for both these steps is ï¬rmly on machine learning based approaches for which performance improves with the amount of data available. Machine reading performance, in par- ticular, has been signiï¬cantly boosted in the last few years with the introduction of large-scale read- ing comprehension datasets such as CNN / Daily- Mail (Hermann et al., 2015) and Squad (Rajpurkar et al., 2016). State-of-the-art systems for these datasets (Dhingra et al., 2017; Seo et al., 2017) fo- cus solely on step (2) above, in effect assuming the relevant passage of text is already known.
Factoid Question Answering (QA) aims to extract answers, from an underlying knowledge source,
In this paper, we introduce two new datasets for QUestion Answering by Search And Reading
QUASAR-S
Question: javascript – javascript not to be confused with java is a dynamic weakly-typed language used for XXXXX as well as server-side scripting .
Answer: client-side
Context excerpts: "JavaScript is not weakly typed, it is strong typed." / "JavaScript is a Client Side Scripting Language." / "JavaScript was the **original** client-side web scripting language."

QUASAR-T
Question: 7-Eleven stores were temporarily converted into Kwik E-marts to promote the release of what movie?
Answer: the simpsons movie
Context excerpts: "In July 2007 , 7-Eleven redesigned some stores to look like Kwik-E-Marts in select cities to promote The Simpsons Movie ." / "Tie-in promotions were made with several companies , including 7-Eleven , which transformed selected stores into Kwik-E-Marts ." / "' 7-Eleven Becomes Kwik-E-Mart for ' Simpsons Movie ' Promotion ' ."
Figure 1: Example short-document instances from QUASAR-S (top) and QUASAR-T (bottom)
â QUASAR. The datasets each consist of factoid question-answer pairs and a corresponding large background corpus to facilitate research into the combined problem of retrieval and comprehen- sion. QUASAR-S consists of 37,362 cloze-style questions constructed from deï¬nitions of software entities available on the popular website Stack Overï¬ow1. The answer to each question is re- stricted to be another software entity, from an out- put vocabulary of 4874 entities. QUASAR-T con- sists of 43,013 trivia questions collected from var- ious internet sources by a trivia enthusiast. The answers to these questions are free-form spans of text, though most are noun phrases.
Unlike existing reading comprehension tasks, the QUASAR tasks go beyond the ability to only understand a given passage, and require the ability to answer questions given large corpora. Prior datasets (such as those used in (Chen et al., 2017)) are constructed by first selecting a passage and then constructing questions about that passage. This design (intentionally) ignores some of the subproblems required to answer open-domain questions from corpora, namely searching for passages that may contain candidate answers, and aggregating information/resolving conflicts between candidates from many passages. The purpose of Quasar is to allow research into these subproblems, and in particular whether the search step can benefit from integration and joint training with downstream reading systems.

While production quality QA systems may have access to the entire world wide web as a knowledge source, for QUASAR we restrict our search to specific background corpora. This is necessary to avoid uninteresting solutions which directly extract answers from the sources from which the questions were constructed. For QUASAR-S we construct the knowledge source by collecting the top 50 threads2 tagged with each entity in the dataset on the Stack Overflow website. For QUASAR-T we use ClueWeb09 (Callan et al., 2009), which contains about 1 billion web pages collected between January and February 2009. Figure 1 shows some examples.

QUASAR-S has the interesting feature of being a closed-domain dataset about computer programming, and successful approaches to it must develop domain-expertise and a deep understanding of the background corpus. To our knowledge it is one of the largest closed-domain QA datasets available. QUASAR-T, on the other hand, consists of open-domain questions based on trivia, which refers to "bits of information, often of little importance". Unlike previous open-domain systems which rely heavily on the redundancy of information on the web to correctly answer questions, we hypothesize that QUASAR-T requires a deeper reading of documents to answer correctly.
1Stack Overï¬ow is a website featuring questions and answers (posts) from a wide range of topics in computer programming. The entity deï¬nitions were scraped from https://stackoverflow.com/tags.
2A question along with the answers provided by other users is collectively called a thread. The threads are ranked in terms of votes from the community. Note that these questions are different from the cloze-style queries in the QUASAR-S dataset.
We evaluate QUASAR against human testers, as well as several baselines ranging from na¨ıve heuristics to state-of-the-art machine readers. The best performing baselines achieve 33.6% and 28.5% on QUASAR-S and QUASAR-T, while hu- man performance is 50% and 60.6% respectively. For the automatic systems, we see an interesting tension between searching and reading accuracies â retrieving more documents in the search phase
leads to a higher coverage of answers, but makes the comprehension task more difï¬cult. We also collect annotations on a subset of the development set questions to allow researchers to analyze the categories in which their system performs well or falls short. We plan to release these annotations along with the datasets, and our retrieved docu- ments for each question.
# 2 Existing Datasets
Open-Domain QA: Early research into open- domain QA was driven by the TREC-QA chal- lenges organized by the National Institute of Stan- dards and Technology (NIST) (Voorhees and Tice, 2000). Both dataset construction and evalua- tion were done manually, restricting the size of the dataset to only a few hundreds. WIKIQA (Yang et al., 2015) was introduced as a larger- scale dataset for the subtask of answer sentence selection, however it does not identify spans of the actual answer within the selected sentence. More recently, Miller et al. (2016) introduced the MOVIESQA dataset where the task is to answer questions about movies from a background cor- pus of Wikipedia articles. MOVIESQA contains â¼ 100k questions, however many of these are similarly phrased and fall into one of only 13 dif- ferent categories; hence, existing systems already have â¼ 85% accuracy on it (Watanabe et al., 2017). MS MARCO (Nguyen et al., 2016) con- sists of diverse real-world queries collected from Bing search logs, however many of them not fac- tual, which makes their evaluation tricky. Chen et al. (2017) study the task of Machine Reading at Scale which combines the aspects of search and reading for open-domain QA. They show that jointly training a neural reader on several distantly supervised QA datasets leads to a performance im- provement on all of them. This justiï¬es our moti- vation of introducing two new datasets to add to the collection of existing ones; more data is good data.
Reading Comprehension: Reading Compre- hension (RC) aims to measure the capability of systems to âunderstandâ a given piece of text, by posing questions over it. It is assumed that the passage containing the answer is known before- hand. Several datasets have been proposed to measure this capability. Richardson et al. (2013) used crowd-sourcing to collect MCTest â 500 stories with 2000 questions over them. Signiï¬-
cant progress, however, was enabled when Her- mann et al. (2015) introduced the much larger CNN / Daily Mail datasets consisting of 300k and 800k cloze-style questions respectively. Chil- drenâs Book Test (CBT) (Hill et al., 2016) and Who-Did-What (WDW) (Onishi et al., 2016) are similar cloze-style datasets. However, the au- tomatic procedure used to construct these ques- tions often introduces ambiguity and makes the task more difï¬cult (Chen et al., 2016). Squad (Rajpurkar et al., 2016) and NewsQA (Trischler et al., 2016) attempt to move toward more gen- eral extractive QA by collecting, through crowd- sourcing, more than 100k questions whose an- swers are spans of text in a given passage. Squad in particular has attracted considerable interest, but recent work (Weissenborn et al., 2017) sug- gests that answering the questions does not require a great deal of reasoning.
Recently, Joshi et al. (2017) prepared the Triv- iaQA dataset, which also consists of trivia ques- tions collected from online sources, and is similar to QUASAR-T. However, the documents retrieved for TriviaQA were obtained using a commercial search engine, making it difï¬cult for researchers to vary the retrieval step of the QA system in a controlled fashion; in contrast we use ClueWeb09, a standard corpus. We also supply a larger col- lection of retrieved passages, including many not containing the correct answer to facilitate research into retrieval, perform a more extensive analysis of baselines for answering the questions, and pro- vide additional human evaluation and annotation of the questions. In addition we present QUASAR- S, a second dataset. SearchQA (Dunn et al., 2017) is another recent dataset aimed at facilitating re- search towards an end-to-end QA pipeline, how- ever this too uses a commercial search engine, and does not provide negative contexts not contain- ing the answer, making research into the retrieval component difï¬cult.
# 3 Dataset Construction
Each dataset consists of a collection of records with one QA problem per record. For each record, we include some question text, a context document relevant to the question, a set of candidate solu- tions, and the correct solution. In this section, we describe how each of these ï¬elds was generated for each QUASAR variant.
# 3.1 Question sets
QUASAR-S: The software question set was built from the deï¬nitional âexcerptâ entry for each tag (entity) on StackOverï¬ow. For example the ex- cerpt for the âjavaâ tag is, âJava is a general- purpose object-oriented programming language designed to be used in conjunction with the Java Virtual Machine (JVM).â Not every excerpt in- cludes the tag being deï¬ned (which we will call the âhead tagâ), so we prepend the head tag to the front of the string to guarantee relevant re- sults later on in the pipeline. We then completed preprocessing of the software questions by down- casing and tokenizing the string using a custom tokenizer compatible with special characters in software terms (e.g. â.netâ, âc++â). Each pre- processed excerpt was then converted to a series of cloze questions using a simple heuristic: ï¬rst searching the string for mentions of other entities, then repleacing each mention in turn with a place- holder string (Figure 2).
This heuristic is noisy, since the software do- main often overloads existing English words (e.g. âcanâ may refer to a Controller Area Network bus; âswapâ may refer to the temporary storage of in- active pages of memory on disk; âusingâ may re- fer to a namespacing keyword). To improve pre- cision we scored each cloze based on the rela- tive incidence of the term in an English corpus versus in our StackOverï¬ow one, and discarded all clozes scoring below a threshold. This means our dataset does not include any cloze questions for terms which are common in English (such as âcanâ âswapâ and âusingâ, but also âimageâ âser- viceâ and âpacketâ). A more sophisticated entity recognition system could make recall improve- ments here.
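A minimal sketch of this cloze-generation and filtering step, assuming a pre-built list of known entity strings and a hypothetical `incidence_ratio` score (the exact scoring used for the dataset is only described qualitatively above):

```python
import re

def make_clozes(excerpt, head_tag, entities):
    """Turn a preprocessed excerpt into cloze questions by replacing each
    mentioned entity (other than the head tag) with @placeholder."""
    text = head_tag + " " + excerpt  # prepend the head tag, as described above
    clozes = []
    for entity in entities:
        if entity == head_tag:
            continue
        for match in re.finditer(r"\b" + re.escape(entity) + r"\b", text):
            question = text[:match.start()] + "@placeholder" + text[match.end():]
            clozes.append((question, entity))
    return clozes

def incidence_ratio(entity, english_counts, stackoverflow_counts):
    """Hypothetical score: how much more frequent a term is in the StackOverflow
    corpus than in general English (higher means less ambiguous)."""
    return stackoverflow_counts.get(entity, 1) / english_counts.get(entity, 1)

def filter_clozes(clozes, english_counts, stackoverflow_counts, threshold=5.0):
    # Discard clozes whose answer term is too common in everyday English.
    return [(q, a) for q, a in clozes
            if incidence_ratio(a, english_counts, stackoverflow_counts) >= threshold]
```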
QUASAR-T: The trivia question set was built from a collection of just under 54,000 trivia ques- tions collected by Reddit user 007craft and re- leased in December 20153. The raw dataset was noisy, having been scraped from multiple sources with variable attention to detail in format- ting, spelling, and accuracy. We ï¬ltered the raw questions to remove unparseable entries as well as any True/False or multiple choice questions, for a total of 52,000 free-response style ques- tions remaining. The questions range in difï¬culty,
3https://www.reddit.com/r/trivia/ comments/3wzpvt/free_database_of_50000_ trivia_questions/
from straightforward (âWho recorded the song âRocket Manââ âElton Johnâ) to difï¬cult (âWhat was Robin Williams paid for Disneyâs Aladdin in 1982â âScale $485 day + Picasso Paintingâ) to de- batable (âAccording to Earth Medicine whatâs the birth totem for marchâ âThe Falconâ)4
# 3.2 Context Retrieval
The context document for each record consists of a list of ranked and scored pseudodocuments rele- vant to the question.
Context documents for each query were generated in a two-phase fashion, first collecting a large pool of semirelevant text, then filling a temporary index with short or long pseudodocuments from the pool, and finally selecting a set of N top-ranking pseudodocuments (100 short or 20 long) from the temporary index.

For QUASAR-S, the pool of text for each question was drawn from the top 50+ question-and-answer threads for each tag on http://stackoverflow.com. StackOverflow keeps a running tally of the top-voted questions for each tag in their knowledge base; we used Scrapy5 to pull the top 50 question posts for each tag, along with any answer-post responses and metadata (tags, authorship, comments). From each thread we pulled all text not marked as code, and split it into sentences using the Stanford NLP sentence segmenter, truncating sentences to 2048 characters. Each sentence was marked with a thread identifier, a post identifier, and the tags for the thread. Long pseudodocuments were either the full post (in the case of question posts), or the full post and its head question (in the case of answer posts), comments included. Short pseudodocuments were individual sentences.
To build the context documents for QUASAR-S, the pseudodocuments for the entire corpus were loaded into a disk-based lucene index, each anno- tated with its thread ID and the tags for the thread. This index was queried for each cloze using the following lucene syntax:
SHOULD(PHRASE(question text)) SHOULD(BOOLEAN(question text)) MUST(tags:$headtag)
where âquestion textâ refers to the sequence of tokens in the cloze question, with the placeholder
4In Earth Medicine, March has two birth totems, the fal- con and the wolf.
# 5https://scrapy.org
Java is a general-purpose object-oriented programming language designed to be used in conjunction with the Java Virtual Machine (JVM).
# Preprocessed Excerpt
java → java is a general-purpose object-oriented programming language designed to be used in conjunction with the java virtual-machine jvm .
# Cloze Questions
Question: java → java is a general-purpose object-oriented programming language designed to be used in conjunction with the @placeholder virtual-machine jvm . / java → java is a general-purpose object-oriented programming language designed to be used in conjunction with the java @placeholder jvm . / java → java is a general-purpose object-oriented programming language designed to be used in conjunction with the java virtual-machine @placeholder .
Figure 2: Cloze generation
removed. The ï¬rst SHOULD term indicates that an exact phrase match to the question text should score highly. The second SHOULD term indicates that any partial match to tokens in the question text should also score highly, roughly in proportion to the number of terms matched. The MUST term in- dicates that only pseudodocuments annotated with the head tag of the cloze should be considered.
The top 100N pseudodocuments were re- trieved, and the top N unique pseudodocuments were added to the context document along with their lucene retrieval score. Any questions show- ing zero results for this query were discarded.
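A sketch of this retrieval-and-deduplication step, assuming a generic `search(query, k)` function standing in for the Lucene index; the query string below is schematic rather than exact Lucene syntax:

```python
def build_quasar_s_query(question_tokens, head_tag):
    """Assemble the query described above: a SHOULD phrase match, a SHOULD
    bag-of-words match, and a MUST filter on the head tag (schematic form)."""
    text = " ".join(t for t in question_tokens if t != "@placeholder")
    return f'("{text}") OR ({text}) AND tags:{head_tag}'

def top_n_unique(search, query, n):
    """Retrieve 100*N candidates and keep the N highest-scoring unique
    pseudodocuments together with their retrieval scores."""
    seen, context = set(), []
    for doc_id, text, score in search(query, k=100 * n):
        if text in seen:
            continue
        seen.add(text)
        context.append({"id": doc_id, "text": text, "score": score})
        if len(context) == n:
            break
    return context  # an empty list means the question is discarded
```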
For QUASAR-T, the pool of text for each question was composed of 100 HTML documents retrieved from ClueWeb09. Each question-answer pair was converted to a #combine query in the Indri query language to comply with the ClueWeb09 batch query service, using simple regular expression substitution rules to remove (s/[.(){}<>:*'_]+//g) or replace (s/[,?"]+/ /g) illegal characters. Any questions generating syntax errors after this step were discarded. We then extracted the plaintext from each HTML document using Jericho6. For long pseudodocuments we used the full page text, truncated to 2048 characters. For short pseudodocuments we used individual sentences as extracted by the Stanford NLP sentence segmenter, truncated to 200 characters.
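A small sketch of the query sanitization just described; the exact character classes are reconstructed from the (partially garbled) substitution rules above, so treat them as approximate:

```python
import re

def sanitize_for_indri(question, answer):
    """Clean a question-answer pair and wrap it in a #combine query.
    Questions that still break the query parser are discarded."""
    def clean(s):
        s = re.sub(r"[.(){}<>:*'_]+", "", s)   # characters removed outright
        s = re.sub(r'[,?"]+', " ", s)          # characters replaced by a space
        return s.strip()
    return "#combine( " + clean(question) + " " + clean(answer) + " )"
```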
To build the context documents for the trivia set, the pseudodocuments from the pool were col- lected into an in-memory lucene index and queried using the question text only (the answer text was not included for this step). The structure of the query was identical to the query for QUASAR-S,
# 6http://jericho.htmlparser.net/docs/
index.html
without the head tag ï¬lter:
SHOULD(PHRASE(question text)) SHOULD(BOOLEAN(question text))
The top 100N pseudodocuments were re- trieved, and the top N unique pseudodocuments were added to the context document along with their lucene retrieval score. Any questions show- ing zero results for this query were discarded.
# 3.3 Candidate solutions
The list of candidate solutions provided with each record is guaranteed to contain the correct answer to the question. QUASAR-S used a closed vocab- ulary of 4874 tags as its candidate list. Since the questions in QUASAR-T are in free-response for- mat, we constructed a separate list of candidate solutions for each question. Since most of the cor- rect answers were noun phrases, we took each se- quence of NN* -tagged tokens in the context doc- ument, as identiï¬ed by the Stanford NLP Maxent POS tagger, as the candidate list for each record. If this list did not include the correct answer, it was added to the list.
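An illustrative sketch of this candidate extraction; the dataset used the Stanford Maxent POS tagger, whereas the snippet below substitutes NLTK's tagger purely for convenience:

```python
import nltk  # nltk.pos_tag stands in here for the Stanford Maxent tagger used for the dataset

def candidate_spans(context_tokens, correct_answer):
    """Collect every maximal run of NN*-tagged tokens as a candidate answer,
    and append the correct answer if it is missing."""
    tagged = nltk.pos_tag(context_tokens)
    candidates, current = [], []
    for token, tag in tagged:
        if tag.startswith("NN"):
            current.append(token)
        elif current:
            candidates.append(" ".join(current))
            current = []
    if current:
        candidates.append(" ".join(current))
    candidates = list(dict.fromkeys(candidates))  # unique spans, original order
    if correct_answer not in candidates:
        candidates.append(correct_answer)
    return candidates
```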
# 3.4 Postprocessing
Once context documents had been built, we ex- tracted the subset of questions where the answer string, excluded from the query for the two-phase search, was nonetheless present in the context document. This subset allows us to evaluate the performance of the reading system independently from the search system, while the full set allows us to evaluate the performance of QUASAR as a whole. We also split the full set into training, val- idation and test sets. The ï¬nal size of each data subset after all discards is listed in Table 1.
| Dataset | Total (train / val / test) | Single-Token (train / val / test) | Answer in Short (train / val / test) | Answer in Long (train / val / test) |
|---|---|---|---|---|
| QUASAR-S | 31,049 / 3,174 / 3,139 | – | 30,198 / 3,084 / 3,044 | 30,417 / 3,099 / 3,064 |
| QUASAR-T | 37,012 / 3,000 / 3,000 | 18,726 / 1,507 / 1,508 | 25,465 / 2,068 / 2,043 | 26,318 / 2,129 / 2,102 |

Table 1: Dataset Statistics. Single-Token refers to the questions whose answer is a single token (for QUASAR-S all answers come from a fixed vocabulary). Answer in Short (Long) indicates whether the answer is present in the retrieved short (long) pseudo-documents.
# 4 Evaluation
# 4.1 Metrics
Evaluation is straightforward on QUASAR-S since each answer comes from a fixed output vocabulary of entities, and we report the average accuracy of predictions as the evaluation metric. For QUASAR-T, the answers may be free form spans of text, and the same answer may be expressed in different terms, which makes evaluation difficult. Here we pick the two metrics from Rajpurkar et al. (2016); Joshi et al. (2017). In preprocessing the answer we remove punctuation, whitespace and definite and indefinite articles from the strings. Then, exact match measures whether the two strings, after preprocessing, are equal or not. For F1 match we first construct a bag of tokens for each string, followed by preprocessing of each token, and measure the F1 score of the overlap between the two bags of tokens. These metrics are far from perfect for QUASAR-T; for example, our human testers were penalized for entering "0" as answer instead of "zero". However, a comparison between systems may still be meaningful.
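A sketch of these two metrics in the spirit of the SQuAD-style evaluation scripts they are taken from; the exact normalization details may differ slightly from the official evaluation code:

```python
from collections import Counter
import re
import string

def normalize(s):
    """Lower-case, strip punctuation and articles, and collapse whitespace."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in string.punctuation)
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())

def exact_match(prediction, truth):
    return normalize(prediction) == normalize(truth)

def f1_match(prediction, truth):
    pred_tokens = normalize(prediction).split()
    true_tokens = normalize(truth).split()
    common = Counter(pred_tokens) & Counter(true_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(true_tokens)
    return 2 * precision * recall / (precision + recall)
```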
# 4.2 Human Evaluation
To put the difficulty of the introduced datasets into perspective, we evaluated human performance on answering the questions. For each dataset, we recruited one domain expert (a developer with several years of programming experience for QUASAR-S, and an avid trivia enthusiast for QUASAR-T) and 1–3 non-experts. Each volunteer was presented with randomly selected questions from the development set and asked to answer them via an online app. The experts were evaluated in a "closed-book" setting, i.e. they did not have access to any external resources. The non-experts were evaluated in an "open-book" setting, where they had access to a search engine over the short pseudo-documents extracted for each dataset (as described in Section 3.2). We decided to use short pseudo-documents for this exercise to reduce the burden of reading on the volunteers, though we note that the long pseudo-documents have greater coverage of answers.

We also asked the volunteers to provide annotations to categorize the type of each question they were asked, and a label for whether the question was ambiguous. For QUASAR-S the annotators were asked to mark the relation between the head entity (from whose definition the cloze was constructed) and the answer entity. For QUASAR-T the annotators were asked to mark the genre of the question (e.g., Arts & Literature)7 and the entity type of the answer (e.g., Person). When multiple annotators marked the same question differently, we took the majority vote when possible and discarded ties. In total we collected 226 relation annotations for 136 questions in QUASAR-S, out of which 27 were discarded due to conflicting ties, leaving a total of 109 annotated questions. For QUASAR-T we collected annotations for a total of 144 questions, out of which 12 we marked as ambiguous. In the remaining 132, a total of 214 genres were annotated (a question could be annotated with multiple genres), while 10 questions had conflicting entity-type annotations which we discarded, leaving 122 total entity-type annotations. Figure 3 shows the distribution of these annotations.
# 4.3 Baseline Systems
We evaluate several baselines on QUASAR, rang- ing from simple heuristics to deep neural net- works. Some predict a single token / entity as the answer, while others predict a span of tokens.
4.3.1 Heuristic Models Single-Token: MF-i (Maximum Frequency) counts the number of occurrences of each candi- date answer in the retrieved context and returns the one with maximum frequency. MF-e is the same as MF-i except it excludes the candidates present in the query. WD (Word Distance) mea-
7Multiple genres per question were allowed.
(a) QUASAR-S relations (b) QUASAR-T genres (c) QUASAR-T answer categories
Figure 3: Distribution of manual annotations for QUASAR. Description of the QUASAR-S annotations is in Appendix A.
sures the sum of distances from a candidate to other non-stopword tokens in the passage which are also present in the query. For the cloze-style QUASAR-S the distances are measured by ï¬rst aligning the query placeholder to the candidate in the passage, and then measuring the offsets between other tokens in the query and their mentions in the passage. The maximum distance for any token is capped at a speciï¬ed threshold, which is tuned on the validation set.
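A short sketch of the single-token frequency baselines just described (the word-distance baseline needs the alignment and offset bookkeeping above and is omitted); `candidates` are answer strings and `context` is the concatenated retrieved text:

```python
def maximum_frequency(candidates, context, query_tokens, exclude_query=False):
    """MF-i / MF-e baselines: return the candidate occurring most often in the
    retrieved context; MF-e additionally skips candidates present in the query."""
    best, best_count = None, -1
    for cand in candidates:
        if exclude_query and cand in query_tokens:
            continue
        count = context.count(cand)  # simple substring count over the context
        if count > best_count:
            best, best_count = cand, count
    return best
```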
Multi-Token: For QUASAR-T we also test the Sliding Window (SW) and Sliding Window + Dis- tance (SW+D) baselines proposed in (Richardson et al., 2013). The scores were computed for the list of candidate solutions described in Section 3.2.
# 4.3.2 Language Models

For QUASAR-S, since the answers come from a fixed vocabulary of entities, we test language model baselines which predict the most likely entity to appear in a given context. We train three n-gram baselines using the SRILM toolkit (Stolcke et al., 2002) for n = 3, 4, 5 on the entire corpus of all Stack Overflow posts. The output predictions are restricted to the output vocabulary of entities.

We also train a bidirectional Recurrent Neural Network (RNN) language model (based on GRU units). This model encodes both the left and right context of an entity using forward and backward GRUs, and then concatenates the final states from both to predict the entity through a softmax layer. Training is performed on the entire corpus of Stack Overflow posts, with the loss computed only over mentions of entities in the output vocabulary. This approach benefits from looking at both sides of the cloze in a query to predict the entity, as compared to the single-sided n-gram baselines.

# 4.3.3 Reading Comprehension Models

Reading comprehension models are trained to extract the answer from the given passage. We test two recent architectures on QUASAR using publicly available code from the authors8 9.

GA (Single-Token): The GA Reader (Dhingra et al., 2017) is a multi-layer neural network which extracts a single token from the passage to answer a given query. At the time of writing it had state-of-the-art performance on several cloze-style datasets for QA. For QUASAR-S we train and test GA on all instances for which the correct answer is found within the retrieved context. For QUASAR-T we train and test GA on all instances where the answer is in the context and is a single token.

BiDAF (Multi-Token): The BiDAF model (Seo et al., 2017) is also a multi-layer neural network which predicts a span of text from the passage as the answer to a given query. At the time of writing it had state-of-the-art performance among published models on the Squad dataset. For QUASAR-T we train and test BiDAF on all instances where the answer is in the retrieved context.

# 4.4 Results

Several baselines rely on the retrieved context to extract the answer to a question. For these, we refer to the fraction of instances for which the correct answer is present in the context as Search Accuracy. The performance of the baseline among these instances is referred to as the Reading Accuracy, and the overall performance (which is a product of the two) is referred to as the Overall Accuracy. In Figure 4 we compare how these three vary as the number of context documents is var-
8https://github.com/bdhingra/ga-reader 9https://github.com/allenai/ bi-att-flow
[Figure 4 panels: GA on QUASAR-S (short), GA on QUASAR-S (long), GA on QUASAR-T (short), GA on QUASAR-T (long); x-axis: # sentences in context.]
Figure 4: Variation of Search, Read and Overall accuracies as the number of context documents is varied.
ied. Naturally, the search accuracy increases as the context size increases, however at the same time reading performance decreases since the task of extracting the answer becomes harder for longer documents. Hence, simply retrieving more docu- ments is not sufï¬cient â ï¬nding the few most rele- vant ones will allow the reader to work best.
In Tables 2 and 3 we compare all baselines when the context size is tuned to maximize the overall accuracy on the validation set10. For QUASAR-S the best performing baseline is the BiRNN language model, which achieves 33.6% accuracy. The GA model achieves 48.3% accu- racy on the set of instances for which the an- swer is in context, however, a search accuracy of only 65% means its overall performance is lower. This can improve with improved retrieval. For QUASAR-T, both the neural models signiï¬cantly outperform the heuristic models, with BiDAF get- ting the highest F1 score of 28.5%.
The best performing baselines, however, lag behind human performance by 16.4% and 32.1% for QUASAR-S and QUASAR-T respectively, indicating the strong potential for improvement. Interestingly, for human performance we observe that non-experts are able to match or beat the performance of experts when given access to the background corpus for searching the answers. We also emphasize that the human performance is limited by either the knowledge of the experts, or the usefulness of the search engine for non-experts; it should not be viewed as an upper bound for automatic systems which can potentially use the entire background corpus. Further analysis of the human and baseline performance in each category of annotated questions is provided in Appendix B.

# 5 Conclusion

We have presented the QUASAR datasets for promoting research into two related tasks for QA – searching a large corpus of text for relevant passages, and reading the passages to extract answers. We have also described baseline systems for the two tasks which perform reasonably but lag behind human performance. While the searching performance improves as we retrieve more context, the reading performance typically goes down. Hence, future work, in addition to improving these components individually, should also focus on joint approaches to optimizing the two on end-task performance. The datasets, including the documents retrieved by our system and the human annotations, are available at https://github.com/bdhingra/quasar.

# Acknowledgments

This work was funded by NSF under grants CCF-1414030 and IIS-1250956 and by grants from Google.
# References
Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a collab- oratively created graph database for structuring hu- man knowledge. In Proceedings of the 2008 ACM SIGMOD international conference on Management of data. AcM, pages 1247â1250.
Jamie Callan, Mark Hoy, Changkuk Yoo, and Le Zhao. 2009. Clueweb09 data set.
Danqi Chen, Jason Bolton, and Christopher D Man- the ning. 2016. cnn/daily mail reading comprehension task. ACL . A thorough examination of
10The Search Accuracy for different baselines may be dif- ferent despite the same number of retrieved context docu- ments, due to different preprocessing requirements. For ex- ample, the SW baselines allow multiple tokens as answer, whereas WD and MF baselines do not.
Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading wikipedia to answer open- In Association for Computa- domain questions. tional Linguistics (ACL).
Bhuwan Dhingra, Hanxiao Liu, Zhilin Yang, William W Cohen, and Ruslan Salakhutdinov.
| Method | Optimal Context | Search Acc (val / test) | Reading Acc (val / test) | Overall Acc (val / test) |
|---|---|---|---|---|
| Human Performance | | | | |
| Expert (CB) | – | – | – | 0.468 / – |
| Non-Expert (OB) | – | – | – | 0.500 / – |
| Language models | | | | |
| 3-gram | – | – | – | 0.148 / 0.153 |
| 4-gram | – | – | – | 0.161 / 0.171 |
| 5-gram | – | – | – | 0.165 / 0.174 |
| BiRNN† | – | – | – | 0.345 / 0.336 |
| Short-documents | | | | |
| WD | 10 | 0.40 / 0.43 | 0.250 / 0.249 | 0.100 / 0.107 |
| MF-e | 60 | 0.64 / 0.64 | 0.209 / 0.212 | 0.134 / 0.136 |
| MF-i | 90 | 0.67 / 0.68 | 0.237 / 0.234 | 0.159 / 0.159 |
| GA† | 70 | 0.65 / 0.65 | 0.486 / 0.483 | 0.315 / 0.316 |
| Long-documents | | | | |
| WD | 10 | 0.66 / 0.66 | 0.124 / 0.142 | 0.082 / 0.093 |
| MF-e | 15 | 0.69 / 0.69 | 0.185 / 0.197 | 0.128 / 0.136 |
| MF-i | 15 | 0.69 / 0.69 | 0.230 / 0.231 | 0.159 / 0.159 |
| GA† | 15 | 0.67 / 0.67 | 0.474 / 0.479 | 0.318 / 0.321 |

Table 2: Performance comparison on QUASAR-S. CB: Closed-Book, OB: Open Book. Neural baselines are denoted with †. Optimal context is the number of documents used for answer extraction, which was tuned to maximize the overall accuracy on the validation set.
| Method | Optimal Context | Search Acc (val / test) | Reading Acc exact (val / test) | Reading Acc F1 (val / test) | Overall Acc exact (val / test) | Overall Acc F1 (val / test) |
|---|---|---|---|---|---|---|
| Human Performance | | | | | | |
| Expert (CB) | – | – | – | – | 0.547 / – | 0.604 / – |
| Non-Expert (OB) | – | – | – | – | 0.515 / – | 0.606 / – |
| Short-documents | | | | | | |
| MF-i | 10 | 0.35 / 0.34 | 0.053 / 0.044 | 0.053 / 0.044 | 0.019 / 0.015 | 0.019 / 0.015 |
| WD | 20 | 0.40 / 0.39 | 0.104 / 0.082 | 0.104 / 0.082 | 0.042 / 0.032 | 0.042 / 0.032 |
| SW+D | 20 | 0.64 / 0.63 | 0.112 / 0.113 | 0.157 / 0.155 | 0.072 / 0.071 | 0.101 / 0.097 |
| SW | 10 | 0.56 / 0.53 | 0.216 / 0.205 | 0.299 / 0.271 | 0.120 / 0.109 | 0.159 / 0.144 |
| MF-e | 70 | 0.45 / 0.45 | 0.372 / 0.342 | 0.372 / 0.342 | 0.167 / 0.153 | 0.167 / 0.153 |
| GA† | 70 | 0.44 / 0.44 | 0.580 / 0.600 | 0.580 / 0.600 | 0.256 / 0.264 | 0.256 / 0.264 |
| BiDAF†** | 10 | 0.57 / 0.54 | 0.454 / 0.476 | 0.509 / 0.524 | 0.257 / 0.259 | 0.289 / 0.285 |
| Long-documents | | | | | | |
| WD | 20 | 0.43 / 0.44 | 0.084 / 0.067 | 0.084 / 0.067 | 0.037 / 0.029 | 0.037 / 0.029 |
| SW | 20 | 0.74 / 0.73 | 0.041 / 0.034 | 0.056 / 0.050 | 0.030 / 0.025 | 0.041 / 0.037 |
| SW+D | 5 | 0.58 / 0.58 | 0.064 / 0.055 | 0.094 / 0.088 | 0.037 / 0.032 | 0.054 / 0.051 |
| MF-i | 20 | 0.44 / 0.45 | 0.185 / 0.187 | 0.185 / 0.187 | 0.082 / 0.084 | 0.082 / 0.084 |
| MF-e | 20 | 0.43 / 0.44 | 0.273 / 0.286 | 0.273 / 0.286 | 0.119 / 0.126 | 0.119 / 0.126 |
| BiDAF†** | 1 | 0.47 / 0.468 | 0.370 / 0.395 | 0.425 / 0.445 | 0.17 / 0.185 | 0.199 / 0.208 |
| GA†** | 10 | 0.44 / 0.44 | 0.551 / 0.556 | 0.551 / 0.556 | 0.245 / 0.244 | 0.245 / 0.244 |

Table 3: Performance comparison on QUASAR-T. CB: Closed-Book, OB: Open Book. Neural baselines are denoted with †. Optimal context is the number of documents used for answer extraction, which was tuned to maximize the overall accuracy on the validation set. **We were unable to run BiDAF with more than 10 short-documents / 1 long-document, and GA with more than 10 long-documents due to memory errors.
2017. Gated-attention readers for text comprehen- sion. ACL .
Matthew Dunn, Levent Sagun, Mike Higgins, Ugur Guney, Volkan Cirik, and Kyunghyun Cho. 2017. Searchqa: A new q&a dataset augmented with arXiv preprint context from a search engine. arXiv:1704.05179 .
Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Su- leyman, and Phil Blunsom. 2015. Teaching ma- chines to read and comprehend. In Advances in Neu- ral Information Processing Systems. pages 1693â 1701.
Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. 2016. The goldilocks principle: Reading childrenâs books with explicit memory representa- tions. ICLR .
Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehen- sion. ACL .
Alexander Miller, Adam Fisch, Jesse Dodge, Amir- Hossein Karimi, Antoine Bordes, and Jason We- ston. 2016. Key-value memory networks for directly reading documents. EMNLP .
Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. Ms marco: A human generated machine read- ing comprehension dataset. NIPS .
Takeshi Onishi, Hai Wang, Mohit Bansal, Kevin Gim- pel, and David McAllester. 2016. Who did what: A large-scale person-centered cloze dataset. EMNLP .
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. EMNLP .
Matthew Richardson, Christopher JC Burges, and Erin Renshaw. 2013. Mctest: A challenge dataset for the open-domain machine comprehension of text. In EMNLP. volume 3, page 4.
Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Bidirectional attention ï¬ow for machine comprehension. ICLR .
Andreas Stolcke et al. 2002. Srilm-an extensible lan- In Interspeech. volume guage modeling toolkit. 2002, page 2002.
Adam Trischler, Tong Wang, Xingdi Yuan, Justin Har- ris, Alessandro Sordoni, Philip Bachman, and Ka- heer Suleman. 2016. Newsqa: A machine compre- hension dataset. arXiv preprint arXiv:1611.09830 .
Ellen M Voorhees and Dawn M Tice. 2000. Building a question answering test collection. In Proceedings of the 23rd annual international ACM SIGIR confer- ence on Research and development in information retrieval. ACM, pages 200â207.
Yusuke Watanabe, Bhuwan Dhingra, and Ruslan Salakhutdinov. 2017. Question answering from unstructured text by retrieval and comprehension. arXiv preprint arXiv:1703.08885 .
Dirk Weissenborn, Georg Wiese, and Laura Seiffe. 2017. Fastqa: A simple and efï¬cient neural ar- chitecture for question answering. arXiv preprint arXiv:1703.04816 .
Robert West, Evgeniy Gabrilovich, Kevin Murphy, Shaohua Sun, Rahul Gupta, and Dekang Lin. 2014. Knowledge base completion via search-based ques- tion answering. In Proceedings of the 23rd interna- tional conference on World wide web. ACM, pages 515â526.
Yi Yang, Wen-tau Yih, and Christopher Meek. 2015. Wikiqa: A challenge dataset for open-domain ques- tion answering. In EMNLP. Citeseer, pages 2013â 2018.
# A QUASAR-S Relation Deï¬nitions
Table 4 includes the deï¬nition of all the annotated relations for QUASAR-S.
# B Performance Analysis
Figure 5 shows a comparison of the human perfor- mance with the best performing baseline for each category of annotated questions. We see consis- tent differences between the two, except in the following cases. For QUASAR-S, Bi-RNN per- forms comparably to humans for the developed- with and runs-on categories, but much worse in the has-component and is-a categories. For QUASAR- T, BiDAF performs comparably to humans in the sports category, but much worse in history & re- ligion and language, or when the answer type is a number or date/time.
| Relation (head → answer) | Definition |
|---|---|
| is-a | head is a type of answer |
| component-of | head is a component of answer |
| has-component | answer is a component of head |
| developed-with | head was developed using the answer |
| extends | head is a plugin or library providing additional functionality to larger thing answer |
| runs-on | answer is an operating system, platform, or framework on which head runs |
| synonym | head and answer are the same entity |
| used-for | head is a software / framework used for some functionality related to answer |

Table 4: Description of the annotated relations between the head entity, from whose definition the cloze is constructed, and the answer entity which fills in the cloze. These are the same as the descriptions shown to the annotators.
(a) QUASAR-S relations (b) QUASAR-T genres (c) QUASAR-T answer categories
Figure 5: Performance comparison of humans and the best performing baseline across the categories annotated for the development set. | {
"id": "1703.04816"
} |
1707.03743 | Learning Macromanagement in StarCraft from Replays using Deep Learning | The real-time strategy game StarCraft has proven to be a challenging
environment for artificial intelligence techniques, and as a result, current
state-of-the-art solutions consist of numerous hand-crafted modules. In this
paper, we show how macromanagement decisions in StarCraft can be learned
directly from game replays using deep learning. Neural networks are trained on
789,571 state-action pairs extracted from 2,005 replays of highly skilled
players, achieving top-1 and top-3 error rates of 54.6% and 22.9% in predicting
the next build action. By integrating the trained network into UAlbertaBot, an
open source StarCraft bot, the system can significantly outperform the game's
built-in Terran bot, and play competitively against UAlbertaBot with a fixed
rush strategy. To our knowledge, this is the first time macromanagement tasks
are learned directly from replays in StarCraft. While the best hand-crafted
strategies are still the state-of-the-art, the deep network approach is able to
express a wide range of different strategies and thus improving the network's
performance further with deep reinforcement learning is an immediately
promising avenue for future research. Ultimately this approach could lead to
strong StarCraft bots that are less reliant on hard-coded strategies. | http://arxiv.org/pdf/1707.03743 | Niels Justesen, Sebastian Risi | cs.AI | 8 pages, to appear in the proceedings of the IEEE Conference on
Computational Intelligence and Games (CIG 2017) | null | cs.AI | 20170712 | 20170712 |
arXiv:1707.03743v1 [cs.AI] 12 Jul 2017
# Learning Macromanagement in StarCraft from Replays using Deep Learning
Niels Justesen IT University of Copenhagen Copenhagen, Denmark noju@itu.dk
Sebastian Risi IT University of Copenhagen Copenhagen, Denmark sebr@itu.dk
AbstractâThe real-time strategy game StarCraft has proven to be a challenging environment for artiï¬cial intelligence techniques, and as a result, current state-of-the-art solutions consist of numerous hand-crafted modules. In this paper, we show how macromanagement decisions in StarCraft can be learned directly from game replays using deep learning. Neural networks are trained on 789,571 state-action pairs extracted from 2,005 replays of highly skilled players, achieving top-1 and top-3 error rates of 54.6% and 22.9% in predicting the next build action. By integrating the trained network into UAlbertaBot, an open source StarCraft bot, the system can signiï¬cantly outperform the gameâs built-in Terran bot, and play competitively against UAlbertaBot with a ï¬xed rush strategy. To our knowledge, this is the ï¬rst time macromanagement tasks are learned directly from replays in StarCraft. While the best hand-crafted strategies are still the state-of-the-art, the deep network approach is able to express a wide range of different strategies and thus improving the networkâs performance further with deep reinforcement learning is an immediately promising avenue for future research. Ultimately this approach could lead to strong StarCraft bots that are less reliant on hard-coded strategies.
# I. INTRODUCTION
Artiï¬cial neural networks have been a promising tool in machine learning for many tasks. In the last decade, the increase in computational resources as well as several algorith- mic improvements, have allowed deep neural networks with many layers to be trained on large datasets. This approach, also re-branded as deep learning, has remarkably pushed the limits within object recognition [13], speech recognition [8], and many other domains. Combined with reinforcement learning, these techniques have surpassed the previous state-of-the-art in playing Atari games [16], the classic board game Go [23] and the 3D ï¬rst-person shooter Doom [15].
An open challenge for these methods are real-time strategy (RTS) games such as StarCraft, which are highly complex on many levels because of their enormous state and actions space with a large number of units that must be controlled in real-time. Furthermore, in contrast to games like Go, AI algorithms in StarCraft must deal with hidden information; the opponentâs base is initially hidden and must be explored continuously throughout the game to know (or guess) what strategy the opponent is following. The game has been a popular environment for game AI researchers with several StarCraft AI competitions such as the AIIDE StarCraft AI
Competition1, CIG StarCraft RTS AI Competition2 and the Student StarCraft AI Competition3.
However, bots participating in these competitions rely mainly on hard-coded strategies [6, 20] and are rarely able to adapt to the opponent during the game. They usually have a modular control architecture that divides the game into smaller task areas, relying heavily on hand-crafted modules and developer domain knowledge. Learning to play the entire game with end-to-end deep learning, as it was done for Atari games [16], is currently an unsolved challenge and perhaps an infeasible approach. A simpler approach, which we follow in this paper, is to apply deep learning to replace a speciï¬c function in a larger AI architecture.
More speciï¬cally, we focus on applying deep learning to macromanagement tasks in StarCraft: Brood War in the context of deciding what to produce next. A neural network is trained to predict these decisions based on a training set extracted from replay ï¬les (i.e. game logs) of highly skilled human players. The trained neural network is combined with the existing StarCraft bot UAlbertaBot, and is responsible for deciding what unit, building, technology, or upgrade to produce next, given the current state of the game. While our approach does not achieve state-of-the-art results on its own, it is a promising ï¬rst step towards self-learning methods for macromanagement in RTS games. Additionally, the approach presented here is not restricted to StarCraft and can be directly applied to other RTS games as well.
II. STARCRAFT StarCraft is a real-time strategy (RTS) game released by Blizzard in 1998. The same year an expansion set called StarCraft: Brood War was released, which became so popular that a professional StarCraft gamer scene emerged. The game is a strategic military combat simulation in a science ï¬ction setting. Each player controls one of three races; Terran, Protoss and Zerg. During the game, they must gather resources to expand their base and produce an army. The winner of a game is the player that manages to destroy the opponentâs base. Figure 1 shows a screenshot from a playerâs perspective con- trolling the Protoss. The screenshot shows numerous workers
# 1http://www.cs.mun.ca/â¼dchurchill/starcraftaicomp/ 2http://cilab.sejong.ac.kr/sc competition/ 3http://sscaitournament.com/
Fig. 1: A screenshot of StarCraft: Brood War, seen from the perspective of the Protoss player. Copyright (c) Blizzard Entertainment 1998.
collecting minerals and gas resources, and some buildings used to produce combat units. To master the game, StarCraft players need quick reactions to accurately and efï¬ciently control a large number of units in real-time. Tasks related to unit control are called micromanagement tasks, while macromanagement refers to the higher-level game strategy the player is following. Part of the macromanagement is the chosen build order, i.e. the order in which the player produces material in the game, which can be viewed as the strategic plan a player is following. In this paper, the term build is used to refer to any of the four types of material that can be produced: units, buildings, upgrades and technologies. Besides the opening build order, it is equally important for the player to be able to adapt to the opponentâs strategy later in the game. For example, if a player becomes aware that the opponent is producing ï¬ying units it is a bad idea to exclusively produce melee troops that are restricted to ground attacks. Players need to be able to react and adjust to the build strategies of their opponent; learning these macromanagement decisions is the focus of this paper. Macromanagement in StarCraft is challenging for a number of reasons, but mostly because areas which are not occupied by friendly units are not observable, a game mechanic known as fog-of-war. This restriction means that players must order units to scout the map to locate the opponentâs bases. The opponentâs strategy must then be deduced continuously from the partial knowledge obtained by scouting units.
Today, most StarCraft players play the sequel expansion set StarCraft II: Legacy of the Void. While this game introduces modern 3D graphics and new units, the core gameplay is the same as in the original. For StarCraft: Brood War, bots can communicate with the game using the Brood War Application Programming Interface (BWAPI)4, which has been the foun- dation of several StarCraft AI competitions.
4http://bwapi.github.io/
# III. RELATED WORK
A. Build Order Planning
Build order planning can be viewed as a search problem, in which the goal is to ï¬nd the build order that optimizes a speciï¬c heuristic. Churchill et al. applied tree search for build order planning with a goal-based approach; the search tries to minimize the time used to reach a given goal [5]. This approach is also implemented in UAlbertaBot and runs in real-time.
Other goal-based methods that have shown promising re- sults in optimizing opening build orders are multi-objective evolutionary algorithms [1, 12, 14]. The downside of goal- based approaches is that goals and thus strategies are ï¬xed, thereby preventing the bot from adapting to its opponent. Justesen et al. recently demonstrated how an approach called Continual Online Evolutionary Planning (COEP) can continu- ally evolve build orders during the game itself to adapt to the opponent [10]. In contrast to a goal-based approach, COEP does not direct the search towards a ï¬xed goal but can instead adapt to the opponentâs units. The downside of this approach is however, that it requires a sophisticated heuristic that is difï¬cult to design.
B. Learning from StarCraft Replays
Players have the option to save a replay ï¬le after each game in StarCraft, which enables them to watch the game without fog-of-war. Several web sites are dedicated to hosting replay ï¬les, as they are a useful resource to improve oneâs strategic knowledge of the game. Replay ï¬les contain the set of actions performed by both players, which the StarCraft engine can use to reproduce the exact events. Replay ï¬les are thus a great resource for machine learning if one wants to learn how players are playing the game. This section will review some previous approaches that learn from replay ï¬les.
Case-based reasoning [9, 19, 30], feature-expanded decision trees [4], and several traditional machine learning algorithms [4] have been used to predict the opponentâs strategy in RTS games by learning from replays. While strategy prediction is a critical part of playing StarCraft, the usefulness of applying these approaches to StarCraft bots has not been demonstrated. Dereszynski et al. trained Hidden Markov Models on 331 replays to learn the probabilities of the opponentâs future unit productions as well as a probabilistic state transition model [7]. The learned model takes as input the partial knowledge about the opponentâs buildings and units and then outputs the probability that the opponent will produce a certain unit in the near future. Synnaeve et al. applied a Bayesian model for build tree prediction in StarCraft from partial observations with robust results even with 30% noise (i.e. up to 30% of the opponentâs buildings are unknown) [26]. These predictive models can be very useful for a StarCraft bot, but they do not directly determine what to produce during the game. Tactical decision making can beneï¬t equally from combat forward models; Uriarte et al. showed how such a model can be ï¬ne- tuned using knowledge learned from replay data [28].
The approach presented in this paper addresses the com- plete challenge that is involved in deciding what to produce. Additionally, our approach learns a solution to this problem using deep learning, which is brieï¬y described next.
# C. Deep Learning
Artiï¬cial neural networks are computational models loosely inspired by the functioning of biological brains. Given an input signal, an output is computed by traversing a large number of connected neural units. The topology and connection weights of these networks can be optimized with evolutionary algo- rithms, which is a popular approach to evolve game-playing behaviors [21]. In contrast, deep learning most often refers to deep neural networks trained with gradient descent methods (e.g. backpropagation) on large amounts of data, which has shown remarkable success in a variety of different ï¬elds. In this case the network topologies are often hand-designed with many layers of computational units, while the parameters are learned through small iterated updates. As computers have become more powerful and with the help of algorithmic improvements, it has become feasible to train deep neural networks to perform at a human-level in object recognition [13] and speech recognition [8].
A combination of deep learning and reinforcement learning has achieved human-level results in Atari video games [16, 17] and beyond human-level in the classic board game Go [23]. In the case of Go, pre-training the networks on game logs of human players to predict actions was a critical step in achieving this landmark result because it allowed further training through self-play with reinforcement learning.
While deep learning has been successfully applied to achieve human-level results for many types of games, it is still an open question how it can be applied to StarCraft. On a much smaller scale Stanescu et al. showed how to train convolutional neural networks as game state evaluators in µRTS [25] and Usunier et al. applied reinforcement learning on small-scale StarCraft combats [29]. To our knowledge no prior work shows how to learn macromanagement actions from replays using deep learning.
Also worth mentioning is a technique known as imitation learning, in which a policy is trained to imitate human players. Imitation learning has been applied to Super Mario [3] and Atari games [2]. These results suggest that learning to play games from human traces is a promising approach that is the foundation of the method presented in this paper.
# IV. APPROACH
This section describes the presented approach, which con- sists of two parts. First, a neural network is trained to predict human macromanagement actions, i.e. what to produce next in a given state. Second, the trained network is applied to an existing StarCraft bot called UAlbertaBot by replacing the module responsible for production decisions. UAlbertaBot is an open source StarCraft bot developed by David Churchill5
5https://github.com/davechurchill/ualbertabot
that won the annual AIIDE StarCraft AI Competition in 2013. The bot consists of numerous hierarchical modules, such as an information manager, building manager and production man- ager. The production manager is responsible for managing the build order queue, i.e. the order in which the bot produces new builds. This architecture enables us to replace the production manager with our neural network, such that whenever the bot is deciding what to produce next, the network predicts what a human player would produce. The modular design of UAlbertaBot is described in more detail in Ontan´on et al. [20].
# A. Dataset
This section gives an overview of the dataset used for training and how it has been created from replay ï¬les. A replay ï¬le for StarCraft contains every action performed throughout the game by each player, and the StarCraft engine can recreate the game by executing these actions in the correct order. To train a neural network to predict the macromanagement decisions made by players, state-action pairs are extracted from replay ï¬les, where a state describes the current game situation and an action corresponds to the next build produced by the player. Additionally, states are encoded as a vector of normalized values to be processed by our neural network.
Replay files are in a binary format and require preprocessing before knowledge can be extracted. The dataset used in this paper is extracted from an existing dataset. Synnaeve et al. collected a repository of 7,649 replays by scraping the three StarCraft community websites GosuGamers, ICCup and TeamLiquid, which are mainly for highly skilled players including professionals [27]. A large amount of information was extracted from the repository and stored in an SQL database by Robertson et al. [22]. This database contained state changes, including unit attributes, for every 24 frames in the games. Our dataset is extracted from this database, and an overview of the preprocessing steps is shown in Figure 2.
From this database, we extract all events describing ma- terial changes throughout every Protoss versus Terran game, including when (1) builds are produced by the player, (2) units and buildings are destroyed and (3) enemy units and build- ings are observed. These events take the perspective of one player and thus maintain the concept of partially observable states in StarCraft. The set of events thus represent a more abstract version of the game only containing information about material changes and actions that relate to macromanagement tasks. The events are then used to simulate abstract StarCraft games via the build order forward model presented in Justesen and Risi [10]. Whenever the player takes an action in these abstract games, i.e. produces something, the action and state pair is added to our dataset. The state describes the playerâs own material in the game: the number of each unit, building, technology, and upgrade present and under construction, as well as enemy material observed by the player.
The entire state vector consists of a few sub-vectors de- scribed here in order, in which the numbers represent the indexes in the vector:
[Figure 2 diagram: replay files are parsed (BWAPI / replay file parser) into an SQL database; events are extracted into per-game event files with protoss_build, protoss_lost, terran_spotted and terran_lost entries keyed by frame; a forward model then converts the event files into state-action files with normalized feature values.]
Fig. 2: An overview of the data preprocessing that converts StarCraft replays into vectorized state-action pairs. (a) shows the process of extracting data from replay ï¬les into an SQL database, which was done by Robinson et al. [22]. (b) shows our extended data processing that ï¬rst extracts events from the database into ï¬les (c) containing builds, kills and observed enemy units. All events are then run through a forward model to generate vectorized state-action pairs with normalized values (d).
1) 0-31: The number of units/buildings of each type present in the game controlled by the player.
2) 32-38: The number of each technology type researched in the game by the player.
3) 39-57: The number of each upgrade type researched in the game by the player. For simplicity, upgrades are treated as a one-time build and our state description thus ignores level 2 and 3 upgrades.
4) 58-115: The number of each build in production by the player.
5) 116-173: The progress of each build in production by the player. If a build type is not in production it has a value of 0. If several builds of the same type are under construction, the value represents the progress of the build that will be completed ï¬rst.
6) 174-206: The number of enemy units/buildings of each type observed.
7) 207-209: The number of supply used by the player and the maximum number of supplies available. Another value is added which is the supply left, i.e. the difference between supply used and maximum supplies available.
All values are normalized into the interval [0, 1]. The preprocessed dataset contains 2,005 state-action ï¬les with a total of 789,571 state-action pairs. Six replays were excluded because the Protoss player used the rare mind control spell on a Terran SCV that allows the Protoss player to produce Terran builds. While the data preprocessing required for training is a relatively long process, the same data can be gathered directly by a playing (or observing) bot during a game.
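A minimal sketch of assembling this state vector from per-category counts; the index layout follows the sub-vectors listed above, while the `MAX_COUNT` normalization constant is an assumption, since the paper only states that values are scaled into [0, 1]:

```python
import numpy as np

STATE_SIZE = 210
MAX_COUNT = 64.0  # hypothetical scaling constant for count-valued features

def build_state_vector(own, tech, upgrades, in_production, progress, enemy, supply):
    s = np.zeros(STATE_SIZE, dtype=np.float32)
    s[0:32] = np.clip(np.asarray(own) / MAX_COUNT, 0, 1)             # own units/buildings
    s[32:39] = tech                                                   # researched technologies (0/1)
    s[39:58] = upgrades                                               # researched upgrades (0/1)
    s[58:116] = np.clip(np.asarray(in_production) / MAX_COUNT, 0, 1)  # builds under construction
    s[116:174] = progress                                             # construction progress in [0, 1]
    s[174:207] = np.clip(np.asarray(enemy) / MAX_COUNT, 0, 1)         # observed enemy material
    s[207:210] = supply                                               # supply used / max / left, normalized
    return s
```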
# B. Network Architecture
Since our dataset contains neither images nor sequential data, a simple multi-layered network architecture with fully- connected layers is used. Our game state contains all the material produced and observed by the player throughout the game, unless it has been destroyed, and thus there is no need for recurrent connections in our model. The network that obtained the best results has four hidden layers. The input layer has 210 units, based on the state vector described in Section IV-A, which is followed by four hidden layers of 128 units with the ReLU activation function. The output layer has one output neuron for each of the 58 build types a Protoss player can produce and uses the softmax activation function. The output of the network is thus the probability of producing each build in the given state.
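A PyTorch sketch of this architecture (the paper does not state which framework was used, so the code below is illustrative):

```python
import torch.nn as nn

class BuildOrderNet(nn.Module):
    """Four fully-connected hidden layers of 128 ReLU units on top of the
    210-dimensional state, followed by a 58-way output over builds."""
    def __init__(self, state_size=210, hidden=128, num_builds=58):
        super().__init__()
        layers, in_dim = [], state_size
        for _ in range(4):
            layers += [nn.Linear(in_dim, hidden), nn.ReLU()]
            in_dim = hidden
        layers.append(nn.Linear(in_dim, num_builds))
        self.net = nn.Sequential(*layers)

    def forward(self, state):
        return self.net(state)  # raw logits; softmax is applied by the loss or at inference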
# C. Training
The dataset of 789,571 state-action pairs is split into a training set of 631,657 pairs (80%) and a test set of 157,914 pairs (20%). The training set is exclusively used for training the network, while the test set is used to evaluate the trained network. The state-action pairs, which come from 2,005 dif- ferent Protoss versus Terran games, are not shufï¬ed prior to the division of the data to avoid that actions from the same game end up in both the training and test set.
The network is trained on the training set, which is shuffled before each epoch. Xavier initialization is used for all weights in the hidden layers and biases are initialized to zero. The learning rate is 0.0001 with the Adam optimization algorithm [11] and a batch size of 100. The optimization algorithm uses the cross entropy loss function â )>, yj log(y:), where y is the output vector of the network and yâ is the one-hot target vector. The problem is thus treated as a classification problem, in which the network tries to predict the next build given a game state. In contrast to classical classification problems, identical data examples (states) in our dataset can have different labels (builds), as human players execute different strategies and also make mistakes while playing. Also, there is no correct build for any state in StarCraft, but some builds are much more likely to be performed by players as they are more likely to result in a win. The network could also be trained to predict whether the player is going to win the game, but how to best incorporate this in the decision-making process is an open question. Instead here we focus on predicting actions made by human players, similarly to the supervised learning step in AlphaGo [23].
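A sketch of the training setup with the hyperparameters above (Xavier initialization, Adam with learning rate 0.0001, batch size 100, cross-entropy loss); the 80/20 split into unshuffled game-level subsets is assumed to have been done beforehand:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def init_weights(m):
    # Xavier-initialized weights and zero biases, as described above.
    if isinstance(m, nn.Linear):
        nn.init.xavier_uniform_(m.weight)
        nn.init.zeros_(m.bias)

def train(model, states, actions, epochs=50, lr=1e-4, batch_size=100):
    """states: (N, 210) float tensor; actions: (N,) long tensor of build indices."""
    model.apply(init_weights)
    loader = DataLoader(TensorDataset(states, actions),
                        batch_size=batch_size, shuffle=True)  # reshuffled every epoch
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()  # softmax + cross entropy over the 58 builds
    for _ in range(epochs):
        for batch_states, batch_actions in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(batch_states), batch_actions)
            loss.backward()
            optimizer.step()
    return model
```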
D. Applying the Network to a StarCraft Bot
Learning to predict actions in games played by humans is very similar to the act of learning to play. However, this type of imitation learning does have its limits as the agent does not learn to take optimal actions, but instead to take the most probable action (if a human was playing). However, applying the trained network as a macromanagement module of an existing bot could be an important step towards more advanced approaches.
[Figure 3 diagram: input layer of 210 units grouped into (a) own material, (b) material under construction, (c) progress of material under construction, (d) opponent material, and (e) supply; four hidden layers of 128 units (ReLU); output layer of 58 units (softmax).]
Fig. 3: Neural Network Architecture. The input layer consists of a vectorized state containing normalized values representing the number of each unit, building, technology, and upgrade in the game known to the player. Only a small subset is shown on the diagram for clarity. Three inputs also describe the playerâs supplies. The neural network has four hidden fully-connected layers with 128 units each using the ReLU activation function. These layers are followed by an output layer using the softmax activation function and the output of the network is the prediction of each build being produced next in the given state.
In this paper, we build on UAlbertaBot, which has a production manager that manages a queue of builds that the bot must produce in order. The production manager, which normally uses a goal-based search, is modified to use the network trained on replays instead. The production manager in UAlbertaBot is also extended to act as a web client; whenever the module is asked for the next build, the request is forwarded, along with a description of the current game state, to a web server that feeds the game state to the neural network and then returns a build prediction to the module. Since the network is only trained on Protoss versus Terran games, it is only tested in this matchup. Our approach can, however, easily be applied to the other matchups as well. UAlbertaBot does not handle some of the advanced units well, so these were simply excluded from the output signals of the network. The excluded units are: archons, carriers, dark archons, high templars, reavers and shuttles. After these are excluded from the output vector, values are normalized to again sum to 1. An important question is how to select one build action based on the network's outputs. Here two action selection policies are tested:
Greedy action selection: The build with the highest proba- bility is always selected. This approach creates a deterministic behavior with a low variation in the units produced. A major issue of this approach is that rare builds such as upgrades will likely never be selected.
# V. RESULTS
A. Build Prediction
The best network managed to reach a top-1 error rate of 54.6% (averaged over five runs) on the test set, which means that it is able to guess the next build around half the time, with top-3 and top-10 error rates of 22.92% and 4.03%. For a simple comparison, a baseline approach that always predicts the next build to be a probe, which is the most common build in the game for Protoss, has a top-1 error rate of 73.9% and thus performs significantly worse. Predicting randomly with uniform probabilities achieves a top-1 error rate of 98.28%. Some initial experiments with different input layers show that we obtain worse error rates by omitting parts of the state vector described in Section IV-A. For example, when opponent material is excluded from the input layer, the network's top-1 error increases to an average of 58.17%. Similarly, omitting the material under construction (together with the progress) increases the average top-1 error rate to 58.01%. The results are summarized in Table I with error rates averaged over five runs for each input layer design. The top-1, top-3 and top-10 error rates in the table show the networks' ability to predict using one, three and ten guesses respectively, determined by their output. All networks were trained for 50 epochs as the error rates stagnated prior to this point. Overfitting is minimal, with a difference of less than 1% between the top-1 training and test errors.
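The top-k error rates reported here can be computed with a small helper like the one below; this is an illustrative sketch (our names), assuming logits of shape (N, 58) and integer build targets of shape (N,).

```python
import torch

def top_k_error(logits, targets, k=1):
    topk_builds = logits.topk(k, dim=1).indices           # the k most probable builds per state
    correct = (topk_builds == targets.unsqueeze(1)).any(dim=1)
    return 1.0 - correct.float().mean().item()            # e.g. k = 1, 3 or 10
```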
Probabilistic action selection: Builds are selected with the probabilities of the softmax output units. In the example in Figure 3, a probe will be selected with a 5% probability and a zealot with 26% probability. With a low probability, this approach will also select some of the rare builds, and can express a wide range of strategies. Another interesting feature is that it is stochastic and harder to predict by the opponent.
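Both selection policies reduce to a few lines over the softmax output. The sketch below is our illustration: the excluded-build handling mirrors the renormalization described in Section IV-D, but the function and argument names are assumptions, not the bot's actual code.

```python
import numpy as np

def select_build(build_probs, excluded_builds=(), greedy=False):
    probs = np.array(build_probs, dtype=np.float64)
    probs[list(excluded_builds)] = 0.0        # drop builds the bot cannot handle
    probs /= probs.sum()                      # renormalize so the probabilities sum to 1 again
    if greedy:
        return int(np.argmax(probs))          # greedy: always the most probable build
    return int(np.random.choice(len(probs), p=probs))  # probabilistic: sample from the outputs
```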
To gain further insights into the policy learned by the network, the best networkâs prediction of building a new base given a varying number of probes is plotted in Figure 4. States are taken from the test set in which the player has only one base. The network successfully learned that humans usually create a base expansion when they have around 20-30 probes.
Inputs                    Top-1 error        Top-3 error        Top-10 error
(a)(b)(c)(d)(e) (all)     54.60% ± 0.12%     22.92% ± 0.09%     4.03% ± 0.14%
Without (d)               58.17% ± 0.16%     24.92% ± 0.10%     4.23% ± 0.04%
Without (b), (c)          58.01% ± 0.42%     24.95% ± 0.31%     4.51% ± 0.16%
Without (b), (c), (d)     60.81% ± 0.09%     26.64% ± 0.11%     4.65% ± 0.21%
Probe (baseline)          73.90% ± 0.00%     73.90% ± 0.00%     73.90% ± 0.00%
Random (baseline)         98.28% ± 0.04%     94.87% ± 0.05%     82.73% ± 0.08%
TABLE I: The top-1, top-3 and top-10 error rates of trained networks (averaged over five runs) with different combinations of inputs. (a) is the player's own material, (b) is material under construction, (c) is the progress of material under construction, (d) is the opponent's material and (e) is supply. The input layer is visualized in Figure 3. Probe is a baseline predictor that always predicts the next build to be a probe and Random predicts randomly with uniform probabilities. The best results (top row of the table) are achieved by using all the input features.
[Figure 4 plot: predicted probability of a Nexus (y-axis, 0 to 0.8) against the number of probes (x-axis, 0 to 50).]
Fig. 4: The probability of the next build being a Nexus (a base expansion) as predicted by the trained neural network. Each data point corresponds to one prediction from one state. These states have only one Nexus and are taken from the test set. The small spike around 11 and 12 probes shows that the network predicts a fast expansion build order if the Protoss player has not built any gateways at this point.
# B. Playing StarCraft
UAlbertaBot is tested playing the Protoss race against the built-in Terran bot, with the trained network as production manager. Both the greedy and probabilistic action selection strategies are tested in 100 games on the two-player map Astral Balance. The results, summarized in Table II, demonstrate that the probabilistic strategy is clearly superior, winning 68% of all games. This is significant at p ≤ 0.05 according to the two-tailed Wilcoxon signed-rank test. The greedy approach, which always selects the action with the highest probability, does not perform as well. While the probabilistic strategy is promising, it is important to note that UAlbertaBot playing as Protoss and following a powerful hand-designed strategy (dragoon rush) wins 100% of all games against the built-in Terran bot.
To further understand the difference between the two ap- proaches, the builds selected by each selection strategy are analyzed. A subset of these builds are shown in Table III. The probabilistic strategy clearly expresses a more varied strategy than the greedy one. Protoss players often prefer a good mix of zealots and dragoons as it creates a good
Action selection              Win rate vs. built-in Terran bot
Probabilistic                 68%
Probabilistic (blind)         59%
Greedy                        53%
Random                        0%
UAlbertaBot (dragoon rush)    100%
TABLE II: The win percentage of UAlbertaBot with the trained neural network as a production manager against the built-in Terran bot. The probabilistic strategy selects actions with probabilities equal to the outputs of the network while the greedy network always selects the action with the highest output, and random always picks a random action. The blind probabilistic network does not receive information about the opponentâs material (inputs are set to 0.0). UAlbertaBot playing as Protoss with the scripted dragoon rush strategy wins 100% of all games against the built-in Terran bot.
dynamic army, and the greedy strategy clearly fails to achieve this. Additionally, with the greedy approach the bot never produces any upgrades, because they are too rare in a game to ever become the most probable build. The blind probabilistic approach (which ignores knowledge about the opponent by setting these inputs to zero) reached a lower win rate of just 59%, further corroborating that the opponentâs units and build- ings are important for macromanagement decision making. We also tested the probabilistic approach against UAlbertaBot with the original production manager conï¬gured to follow a ï¬xed marine rush strategy, which was the best opening strategy for UAlbertaBot when playing Terran. Our approach won 45% of 100 games, demonstrating that it can play competitively against this aggressive rush strategy, learning from human replays alone.
Figure 5 visualizes the learned opening strategy with greedy action selection. While the probabilistic strategy shows a better performance in general (Table II), the strategy performed by the greedy action selection is easier to analyze because it is deterministic and has a one-sided unit production. The learned build order shown in Figure 5 is a One Gate Cybernetics Core opening with no zealots before the cybernetics core. This opening was performed regularly against the built-in Terran bot, which does not vary much in its strategy. The opening is followed by a heavy production of dragoons and a few observers. A base expansion usually follows the ï¬rst successful confrontation. Some losses of the greedy approach were caused by UAlbertaBot not being able to produce more buildings, possibly because there was no more space left in the main base. A few losses were also directly caused by some weird behavior in the late game, where the bot (ordered by the neural network) produces around 20 pylons directly after each other. Generally, the neural network expresses a behavior that often prolongs the game, as it prefers expanding bases when leading the game. This is something human players also tend to do, but since UAlbertaBot does not handle the late game very well, it is not a good strategy for this particular bot.
The behavior of the probabilistic strategy is more difï¬cult to analyze, as it is stochastic. It usually follows the same opening as the greedy approach, with small variations, but then later in the game, it begins to mix its unit production between zealots, dragoons and dark templars. The timings of base expansions are very different from game to game as well as the use of upgrades.
[Figure 5 diagram: a frame timeline (roughly frames 1322 to 2037) of build icons, including an Assimilator and a Cybernetics Core, each annotated with its predicted probability.]
Fig. 5: The opening build order learned by the neural network when playing against the built-in Terran bot (the build order also depends on the enemy units observed). The number next to each build icon represents the probability of the build being produced next, and points on the timescale indicate when the bot requests the network for the next build. In this example the network follows the greedy strategy, always picking the build with the highest probability.
Unit / upgrade        Probabilistic   Greedy
Probe                 50.84           70.12
Zealot                14.62           1.46
Dragoon               17.3            32.75
Dark templar          1.00            0.00
Observer              3.56            2.40
Scout                 0.11            0.00
Corsair               0.13            0.00
Leg enhancements      0.32            0.00
Ground weapons        0.03            0.00
Ground armor          0.07            0.00
Plasma shields        0.01            0.00
TABLE III: The average number of different unit types produced by the two different action selection strategies against the built-in Terran bot. The results show that the greedy strategy executes a very one-sided unit production while the probabilistic strategy is more varied.
# VI. DISCUSSION
This paper demonstrated that macromanagement tasks can be learned from replays using deep learning, and that the learned policy can be used to outperform the built-in bot in StarCraft. In this section, we discuss the shortcomings of this approach and give suggestions for future research that could lead to strong StarCraft bots by extending this line of work. The built-in StarCraft bot is usually seen as a weak player compared to humans. It provides a sufficient amount of competition for new players, but only until they begin to learn established opening strategies. A reasonable expectation would be that UAlbertaBot, using our trained network, would defeat the built-in bot almost every time. By analyzing the games played, it becomes apparent that the performance of UAlbertaBot decreases in the late game. It simply begins to make mistakes, taking odd micromanagement decisions when it controls several bases and groups of units. The strategy learned by our network further reinforces this faulty behavior, as it prefers base expansions and heavy unit production (very similar to skilled human players) over early and risky aggression. The trained network was also observed to make a few faulty decisions, but rarely and only in the very late game. The reason for these faults might be that some outputs are excluded, since UAlbertaBot does not handle these builds well.
Despite the presented approach not achieving a skill level on par with humans, it should be fairly straightforward to extend it further with reinforcement learning. Supervised learning on replays can be applied to pre-train networks, ensuring that the initial exploration during reinforcement learning is sensible, which proved to be a critical step to surpass humans in the game Go [23]. Reinforcement learning is especially promising for a modular-based bot as it could optimize the macromanagement policy to fit the fixed micromanagement policy. Additionally, learning a macromanagement policy to specifically beat other bots that are competing in a tournament is a promising future direction.

This paper also introduces a new benchmark for machine learning, where the goal is to predict the next unit, building, technology or upgrade that is produced by a human player given a game state in StarCraft. An interesting extension to the presented approach, which could potentially improve the results, could involve including positional information as features for the neural network. The features could be graphical and similar to the minimap in the game that gives an abstract overview of where units and buildings are located on the map. Regularization techniques such as dropout [24] or L2 regularization [18] could perhaps reduce the error rate of deeper networks and ultimately improve the playing bot.

Finally, it would be interesting to apply our trained network to a more sophisticated StarCraft bot that is able to manage several bases well and can control advanced units such as spell casters and shuttles. This is currently among our future goals, and hopefully this bot will participate in the coming StarCraft competitions.
# VII. CONCLUSION
This paper presented an approach that learns from StarCraft replays to predict the next build produced by human players. 789,571 state-action pairs were extracted from 2,005 replays of highly skilled players. We trained a neural network with supervised learning on this dataset, with the best network achieving top-1 and top-3 error rates of 54.6% and 22.9%. To
demonstrate the usefulness of this approach, the open source StarCraft bot UAlbertaBot was extended to use such a neural network as a production manager, thereby allowing the bot to produce builds based on the network's predictions. Two action selection strategies were introduced: a greedy approach that always selects the action with the highest probability, and a probabilistic approach that selects actions corresponding to the probabilities of the network's softmax output. The probabilistic strategy proved to be the most successful and managed to achieve a win rate of 68% against the game's built-in Terran bot. Additionally, we demonstrated that the presented approach was able to play competitively against UAlbertaBot with a fixed rush strategy. Future research will show whether reinforcement learning can improve these results further, which could narrow the gap between humans and computers in StarCraft.
# REFERENCES
[1] J. Blackford and G. B. Lamont. The real-time strategy game
multi-objective build order problem. In AIIDE, 2014.
[2] M. Bogdanovic, D. Markovikj, M. Denil, and N. de Freitas. Deep Apprenticeship Learning for Playing Video Games. PhD thesis, Citeseer, 2014.
[3] Z. Chen and D. Yi. The game imitation: Deep supervised convolutional networks for quick video game ai. arXiv preprint arXiv:1702.05663, 2017.
[4] H.-C. Cho, K.-J. Kim, and S.-B. Cho. Replay-based strategy prediction and build order adaptation for starcraft ai bots. In Computational Intelligence in Games (CIG), 2013 IEEE Conference on, pages 1â7. IEEE, 2013.
[5] D. Churchill and M. Buro. Build order optimization in starcraft. In AIIDE, pages 14â19, 2011.
[6] D. Churchill, M. Preuss, F. Richoux, G. Synnaeve, A. Uriarte, S. Ontan´on, and M. Certick`y. Starcraft bots and competitions. 2016.
[7] E. W. Dereszynski, J. Hostetler, A. Fern, T. G. Dietterich, T.-T. Hoang, and M. Udarbe. Learning probabilistic behavior models in real-time strategy games. In AIIDE, 2011.
[8] A. Hannun, C. Case, J. Casper, B. Catanzaro, G. Diamos, E. Elsen, R. Prenger, S. Satheesh, S. Sengupta, A. Coates, et al. Deep speech: Scaling up end-to-end speech recognition. arXiv preprint arXiv:1412.5567, 2014.
[9] J.-L. Hsieh and C.-T. Sun. Building a player strategy model by analyzing replays of real-time strategy games. In Neural Networks, 2008. IJCNN 2008. (IEEE World Congress on Computational Intelligence). IEEE International Joint Conference on, pages 3106–3111. IEEE, 2008.
[10] N. Justesen and S. Risi. Continual online evolution for in-game build order adaptation in StarCraft. In The Genetic and Evolutionary Computation Conference (GECCO), 2017.
[11] D. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[12] H. Köstler and B. Gmeiner. A multi-objective genetic algorithm for build order optimization in StarCraft II. KI-Künstliche Intelligenz, 27(3):221–233, 2013.
[13] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012.
[14] M. Kuchem, M. Preuss, and G. Rudolph. Multi-objective assessment of pre-optimized build orders exemplified for StarCraft 2. In Computational Intelligence in Games (CIG), 2013 IEEE Conference on, pages 1–8. IEEE, 2013.
[15] G. Lample and D. S. Chaplot. Playing fps games with deep reinforcement learning. arXiv preprint arXiv:1609.05521, 2016.
[16] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, et al. Human-level control through deep rein- forcement learning. Nature, 518(7540):529â533, 2015.
[17] V. Mnih, A. P. Badia, M. Mirza, A. Graves, T. P. Lillicrap, T. Harley, D. Silver, and K. Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In International Conference on Machine Learning, 2016.
[18] S. J. Nowlan and G. E. Hinton. Simplifying neural networks by soft weight-sharing. Neural Computation, 4(4):473–493, 1992.
[19] S. Ontañón, K. Mishra, N. Sugandh, and A. Ram. Case-based planning and execution for real-time strategy games. In International Conference on Case-Based Reasoning, pages 164–178. Springer, 2007.
[20] S. Ontan´on, G. Synnaeve, A. Uriarte, F. Richoux, D. Churchill, and M. Preuss. A survey of real-time strategy game ai research and competition in starcraft. IEEE Transactions on Computa- tional Intelligence and AI in games, 5(4):293â311, 2013. [21] S. Risi and J. Togelius. Neuroevolution in games: State of the art and open challenges. IEEE Transactions on Computational Intelligence and AI in Games, 2015.
[22] G. Robertson and I. D. Watson. An improved dataset and extraction process for starcraft ai. In FLAIRS Conference, 2014. [23] D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershel- vam, M. Lanctot, et al. Mastering the game of go with deep neural networks and tree search. Nature, 529(7587):484â489, 2016.
[24] N. Srivastava, G. E. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: a simple way to prevent neural net- works from overï¬tting. Journal of Machine Learning Research, 15(1):1929â1958, 2014.
[25] M. Stanescu, N. A. Barriga, A. Hess, and M. Buro. Evaluating real-time strategy game states using convolutional neural net- works. In Computational Intelligence and Games (CIG), 2016 IEEE Conference on, pages 1â7. IEEE, 2016.
[26] G. Synnaeve and P. Bessiere. A bayesian model for plan recognition in rts games applied to starcraft. arXiv preprint arXiv:1111.3735, 2011.
[27] G. Synnaeve and P. Bessiere. A dataset for StarCraft AI & an example of armies clustering. arXiv preprint arXiv:1211.4552, 2012.
[28] A. Uriarte and S. Ontañón. Automatic learning of combat models for RTS games. In Eleventh Artificial Intelligence and Interactive Digital Entertainment Conference, 2015.
[29] N. Usunier, G. Synnaeve, Z. Lin, and S. Chintala. Episodic ex- ploration for deep deterministic policies: An application to star- craft micromanagement tasks. arXiv preprint arXiv:1609.02993, 2016.
[30] B. G. Weber and M. Mateas. A data mining approach to strategy prediction. In Computational Intelligence and Games, 2009. CIG 2009. IEEE Symposium on, pages 140–147. IEEE, 2009. | {
"id": "1609.02993"
} |
1707.03017 | Learning Visual Reasoning Without Strong Priors | Achieving artificial visual reasoning - the ability to answer image-related
questions which require a multi-step, high-level process - is an important step
towards artificial general intelligence. This multi-modal task requires
learning a question-dependent, structured reasoning process over images from
language. Standard deep learning approaches tend to exploit biases in the data
rather than learn this underlying structure, while leading methods learn to
visually reason successfully but are hand-crafted for reasoning. We show that a
general-purpose, Conditional Batch Normalization approach achieves
state-of-the-art results on the CLEVR Visual Reasoning benchmark with a 2.4%
error rate. We outperform the next best end-to-end method (4.5%) and even
methods that use extra supervision (3.1%). We probe our model to shed light on
how it reasons, showing it has learned a question-dependent, multi-step
process. Previous work has operated under the assumption that visual reasoning
calls for a specialized architecture, but we show that a general architecture
with proper conditioning can learn to visually reason effectively. | http://arxiv.org/pdf/1707.03017 | Ethan Perez, Harm de Vries, Florian Strub, Vincent Dumoulin, Aaron Courville | cs.CV, cs.AI, cs.CL, stat.ML | Full AAAI 2018 paper is at arXiv:1709.07871. Presented at ICML 2017's
Machine Learning in Speech and Language Processing Workshop. Code is at
http://github.com/ethanjperez/film | null | cs.CV | 20170710 | 20171218 |
# Learning Visual Reasoning Without Strong Priors
Ethan Perez12, Harm de Vries1, Florian Strub3, Vincent Dumoulin1, Aaron Courville14
1MILA, Universit´e of Montr´eal, Canada; 2Rice University, U.S.A. 3Univ. Lille, CNRS, Centrale Lille, Inria, UMR 9189 CRIStAL France 4CIFAR Fellow, Canada ethanperez@rice.edu, mail@harmdevries.com, florian.strub@inria.fr dumouliv@iro.umontreal.ca, courvila@iro.umontreal.ca
# Abstract
Achieving artificial visual reasoning - the ability to answer image-related questions which require a multi-step, high-level process - is an important step towards artificial general intelligence. This multi-modal task requires learning a question-dependent, structured reasoning process over images from language. Standard deep learning approaches tend to exploit biases in the data rather than learn this underlying structure, while leading methods learn to visually reason successfully but are hand-crafted for reasoning. We show that a general-purpose, Conditional Batch Normalization approach achieves state-of-the-art results on the CLEVR Visual Reasoning benchmark with a 2.4% error rate. We outperform the next best end-to-end method (4.5%) and even methods that use extra supervision (3.1%). We probe our model to shed light on how it reasons, showing it has learned a question-dependent, multi-step process. Previous work has operated under the assumption that visual reasoning calls for a specialized architecture, but we show that a general architecture with proper conditioning can learn to visually reason effectively.
Index Terms: Deep Learning, Language and Vision
Note: A full paper extending this study is available at http://arxiv.org/abs/1709.07871, with additional references, experiments, and analysis.
(a) What number of cylinders are small purple things or yellow rubber things? Predicted: 2. (b) What color is the other object that is the same shape as the large brown matte thing? Predicted: Brown.
Figure 1: Examples from CLEVR and our model's answer.
# 1. Introduction

The ability to use language to reason about every-day visual input is a fundamental building block of human intelligence. Achieving this capacity to visually reason is thus a meaningful step towards artificial agents that truly understand the world. Advances in both image-based learning and language-based learning using deep neural networks have made huge strides in difficult tasks such as object recognition [1, 2] and machine translation [3, 4]. These advances have in turn fueled research on the intersection of visual and linguistic learning [5, 6, 7, 8, 9].

To this end, [9] recently proposed the CLEVR dataset to test multi-step reasoning from language about images, as traditional visual question-answering datasets such as [5, 7] ask simpler questions on images that can often be answered in a single glance. Examples from CLEVR are shown in Figure 1. Structured, multi-step reasoning is quite difficult for standard deep learning approaches [10, 11], including those successful on traditional visual question answering datasets. Previous work highlights that standard deep learning approaches tend to exploit biases in the data rather than reason [9, 12]. To overcome this, recent efforts have built new learning architectures that explicitly model reasoning or relational associations [10, 11, 13], some of which even outperform humans [10, 11].

In this paper, we show that a general model can achieve strong visual reasoning from language. We use Conditional Batch Normalization [14, 15, 16] with a Recurrent Neural Network (RNN) and a Convolutional Neural Network (CNN) to show that deep learning architectures built without strong priors can learn underlying structure behind visual reasoning, directly from language and images. We demonstrate this by achieving state-of-the-art visual reasoning on CLEVR and finding structured patterns while exploring the internals of our model.
# 2. Method
Our model processes the multi-modal question-image input us- ing a RNN and CNN combined via Conditional Batch Normal- ization (CBN). CBN has proven highly effective for image styl- ization [14, 16], speech recognition [17], and traditional visual question answering tasks [15]. We start by explaining CBN in Section 2.1 and then describe our model in Section 2.2.
# 2.1. Conditional batch normalization
Batch normalization (BN) is a widely used technique to improve neural network training by normalizing activations throughout the network with respect to each mini-batch. BN has been shown to accelerate training and improve generalization by reducing covariate shift throughout the network [18]. To explain BN, we define $\mathcal{B} = \{F_{i,\cdot,\cdot,\cdot}\}_{i=1}^{N}$ as a mini-batch of $N$ samples, where $F$ corresponds to input feature maps whose subscripts $c, h, w$ refer to the $c$th feature map at the spatial location $(h, w)$. We also define $\gamma_c$ and $\beta_c$ as per-channel, trainable
Figure 2: The linguistic pipeline (left), visual pipeline (middle), and CBN residual block architecture (right) of our model.
scalars and $\epsilon$ as a constant damping factor for numerical stability. BN is defined at training time as follows:
$$\mathrm{BN}(F_{i,c,h,w} \mid \gamma_c, \beta_c) = \gamma_c \, \frac{F_{i,c,h,w} - \mathbb{E}_{\mathcal{B}}[F_{\cdot,c,\cdot,\cdot}]}{\sqrt{\mathrm{Var}_{\mathcal{B}}[F_{\cdot,c,\cdot,\cdot}] + \epsilon}} + \beta_c \qquad (1)$$
Conditional Batch Normalization (CBN) [14, 15, 16] instead learns to output new BN parameters $\hat{\gamma}_{i,c}$ and $\hat{\beta}_{i,c}$ as a function of some input $x_i$:
$$\hat{\gamma}_{i,c} = f_c(x_i), \qquad \hat{\beta}_{i,c} = h_c(x_i), \qquad (2)$$
where f and h are arbitrary functions such as neural networks. Thus, f and h can learn to control the distribution of CNN acti- vations based on xi.
Combined with ReLU non-linearities, CBN empowers a conditioning model to manipulate feature maps of a target CNN by scaling them up or down, negating them, shutting them off, selectively thresholding them, and more. Each feature map is modulated independently, giving the conditioning model an ex- ponential (in the number of feature maps) number of ways to affect the feature representation.
Rather than output $\hat{\gamma}_{i,c}$ directly, we output $\Delta\hat{\gamma}_{i,c}$, where:
$$\hat{\gamma}_{i,c} = 1 + \Delta\hat{\gamma}_{i,c}, \qquad (3)$$
since initially zero-centered $\hat{\gamma}_{i,c}$ can zero out CNN feature map activations and thus gradients. In our implementation, we opt to output $\Delta\hat{\gamma}_{i,c}$ rather than $\hat{\gamma}_{i,c}$, but for simplicity, in the rest of this paper, we will explain our method using $\hat{\gamma}_{i,c}$.
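A minimal PyTorch sketch of a CBN layer implementing Eqs. (1)-(3) is given below. The class and variable names are ours, and the choice to produce $\Delta\hat{\gamma}$ and $\hat{\beta}$ with per-layer linear projections of the conditioning input follows the description in the text rather than any released code.

```python
import torch
import torch.nn as nn

class ConditionalBatchNorm2d(nn.Module):
    def __init__(self, num_features, cond_dim):
        super().__init__()
        # Plain BN without its own affine parameters; scale and shift come from the condition.
        self.bn = nn.BatchNorm2d(num_features, affine=False)
        self.to_delta_gamma = nn.Linear(cond_dim, num_features)
        self.to_beta = nn.Linear(cond_dim, num_features)

    def forward(self, feature_maps, condition):
        normalized = self.bn(feature_maps)                  # Eq. (1) with gamma = 1, beta = 0
        gamma = 1.0 + self.to_delta_gamma(condition)        # Eq. (3): gamma-hat = 1 + delta
        beta = self.to_beta(condition)                      # Eq. (2): beta-hat = h(x_i)
        gamma = gamma.unsqueeze(-1).unsqueeze(-1)           # broadcast over height and width
        beta = beta.unsqueeze(-1).unsqueeze(-1)
        return gamma * normalized + beta
```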
# 2.2. Model
Our model consists of a linguistic pipeline and a visual pipeline as depicted in Figure 2. The linguistic pipeline processes a question $q$ using a Gated Recurrent Unit (GRU) [19] with 4096 hidden units that takes in learned, 200-dimensional word embeddings. The final GRU hidden state is a question embedding $e_q$. From this embedding, the model predicts the CBN parameters $(\gamma^{m,n}_{i,\cdot}, \beta^{m,n}_{i,\cdot})$ for the $n$th CBN layer of the $m$th residual block via linear projection with a trainable weight matrix $W$ and bias vector $b$:
$$(\gamma^{m,n}_{i,\cdot}, \beta^{m,n}_{i,\cdot}) = W^{m,n} e_q + b^{m,n} \qquad (4)$$
The visual pipeline extracts $14 \times 14$ image features using the conv4 layer of a ResNet-101 [2] pre-trained on ImageNet [20], as done in [10] for CLEVR. Image features are processed by a $3 \times 3$ convolution followed by several (3 for our model) CBN residual blocks with 128 feature maps, and a final classifier. The classifier consists of a $1 \times 1$ convolution to 512 feature maps, global max-pooling, and a two-layer MLP with 1024 hidden units that outputs a distribution over final answers.
Each CBN residual block starts with a $1 \times 1$ convolution followed by two $3 \times 3$ convolutions with CBN as depicted in Figure 2. Drawing from [11, 21], we concatenate coordinate feature maps indicating relative spatial position (scaled from $-1$ to $1$) to the image features, each residual block's input, and the classifier's input. We train our model end-to-end from scratch with Adam (learning rate 3e-4) [22], early stopping on the validation set, weight decay (1e-5), batch size 64, and BN and ReLU throughout the visual pipeline, using only image-question-answer triplets from the training set.
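The residual block and its conditioning can then be sketched as follows. The exact placement of the skip connection and of the coordinate-map concatenation is our reading of Figure 2 and the text, so treat this as an assumption-laden sketch (reusing the ConditionalBatchNorm2d sketch above) rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class CBNResBlock(nn.Module):
    def __init__(self, channels=128, cond_dim=4096):
        super().__init__()
        # The block input has 2 extra coordinate feature maps concatenated to it.
        self.proj = nn.Conv2d(channels + 2, channels, kernel_size=1)
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.cbn1 = ConditionalBatchNorm2d(channels, cond_dim)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.cbn2 = ConditionalBatchNorm2d(channels, cond_dim)
        self.relu = nn.ReLU()

    def forward(self, features_with_coords, question_embedding):
        x = self.relu(self.proj(features_with_coords))
        out = self.relu(self.cbn1(self.conv1(x), question_embedding))
        out = self.relu(self.cbn2(self.conv2(out), question_embedding))
        return x + out  # residual connection around the two conditioned convolutions
```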
# 3. Experiments
# 3.1. CLEVR dataset
CLEVR is a generated dataset of 700K (image, question, an- swer, program) tuples. Images contain 3D-rendered objects of various shapes, materials, colors, and sizes. Questions are multi-step and compositional in nature, as shown in Figure 1. They range from counting questions (âHow many green objects have the same size as the green metallic block?â) to comparison questions (âAre there fewer tiny yellow cylinders than yellow metal cubes?â) and can be 40+ words long. Answers are each one word from a set of 28 possible answers. Programs are an additional supervisory signal consisting of step-by-step instruc- tions, such as filter shape[cube], relate[right], and count, on how to answer the question. Program labels are difï¬cult to generate or come by for real world datasets. Our model avoids using this extra supervision, learning to reason effectively directly from linguistic and visual input.
# 3.2. Results
Our results on CLEVR are shown in Table 1. Our model achieves a new overall state-of-the-art, outperforming humans and previous, leading models, which often use additional program supervision. Notably, CBN outperforms Stacked Attention networks (CNN+LSTM+SA in Table 1) by 21.0%. Stacked Attention networks are highly effective for visual question answering with simpler questions [23] and are the previously leading model for visual reasoning that does not build in reasoning, making them a relevant baseline for CBN. We note also that our model's pattern of performance more closely resembles that of humans than other models do. Strong performance (< 1% error) in exist and query attribute categories is perhaps explained by our model's close resemblance to standard CNNs, which traditionally excel at these classification-type tasks. Our model also demonstrates strong performance on more complex categories such as count and compare attribute.
Comparing numbers of objects gives our model more difficulty, understandably so; this question type requires more high-level reasoning steps (querying attributes, counting, and comparing) than other question types. The best model from [10] beats our model here but is trained with extra supervision via 700K program labels. As shown in Table 1, the equivalent, more comparable model from [10] which uses 9K program labels significantly underperforms our method in this category.
Model                       Overall  Count  Exist  Compare Numbers  Query Attribute  Compare Attribute
Human [10]                  92.6     86.7   96.6   86.5             95.0             96.0
Q-type baseline [10]        41.8     34.6   50.2   51.0             36.0             51.3
LSTM [10]                   46.8     41.7   61.1   69.8             36.8             51.8
CNN+LSTM [10]               52.3     43.7   65.2   67.1             49.3             53.0
CNN+LSTM+SA [11]            76.6     64.4   82.7   77.4             82.6             75.4
N2NMN* [13]                 83.7     68.5   85.7   84.9             90.0             88.7
PG+EE (9K prog.)* [10]      88.6     79.7   89.7   79.1             92.6             96.0
PG+EE (700K prog.)* [10]    96.9     92.7   97.1   98.7             98.1             98.9
CNN+LSTM+RN† [11]           95.5     90.1   97.8   93.6             97.9             97.1
CNN+GRU+CBN (ours)          97.6     94.5   99.2   93.8             99.2             99.0
Table 1: CLEVR accuracy by baseline methods, competing methods, and our method (CBN). Methods denoted with (*) use extra supervisory information through program labels. Methods denoted with (†) use data augmentation and no pre-trained CNN.
# 3.3. What does conditional batch norm learn?
To understand what our model learns, we use t-SNE [24] to visualize the CBN parameter vectors (γ, β) of 2,000 random validation points, modulating the first and last CBN layers in our model, as shown in Figure 4. The (γ, β) parameters of the first and last CBN layers are grouped by the low-level and high-level reasoning functions necessary to answer CLEVR questions, respectively. For example, the parameters for equal color and query color are close for the first layer but apart for the last layer, and the same is true for equal shape and query shape, equal size and query size, and equal material and query material. Conversely, equal shape, equal size, and equal material CBN parameters are grouped in the last layer but split in the first layer. Similar patterns emerge when visualizing residual block activations. Thus, we see that CBN learns a sort of function-based modularity, directly from language and image inputs and without an architectural prior on modularity. Simply with end-to-end training, our model learns to handle not only different types of questions differently, but also different types of question sub-parts differently, working from low-level to high-level processes as is the proper approach to answer CLEVR questions.
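The visualization itself only needs the collected CBN parameters and an off-the-shelf t-SNE; a sketch is given below, where cbn_params is an assumed array holding one concatenated (γ, β) vector per sampled validation question.

```python
import numpy as np
from sklearn.manifold import TSNE

# cbn_params: shape (num_questions, 2 * num_feature_maps), one row per validation question.
embedding = TSNE(n_components=2, random_state=0).fit_transform(cbn_params)
# Each 2-D point can then be colored by the reasoning function (exist, count, query_color, ...)
# of the corresponding question to reveal the clusters discussed above.
```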
[Figure 3 plot: validation error rate plotted against program length (roughly 4 to 23 steps).]
Figure 3: Validation error rate by program length.
Additionally, we observe that many points that break the previously mentioned clustering patterns do so in meaningful ways. For example, Figure 4 shows that some count questions have last layer CBN parameters far from those of other count questions but close to those of exist questions. Closer examination reveals that these count questions have answers of either 0 or 1, making them similar to exist questions.

# 3.4. Error analysis

An analysis of our model's errors reveals that 94% of its counting mistakes are off-by-one errors, indicating our model has learned underlying concepts behind counting, such as close relationships between close numbers.

As shown in Figure 3, our CBN model struggles more on questions that require more steps, as indicated by the length of the corresponding CLEVR programs; error rates for questions requiring 10 or fewer steps are around 1.5%, while error rates for questions requiring 17 or more steps are around 5.5%, more than three times higher.

Furthermore, the model sometimes makes curious reasoning mistakes a human would not. In Figure 5, we show an example where our model correctly counts two cyan objects and two yellow objects but simultaneously does not answer that there are the same number of cyan and yellow objects. In fact, it does not answer that the number of cyan blocks is more, less, or equal to the number of yellow blocks. These errors could be prevented by directly minimizing logical inconsistency, which is an interesting avenue for future work orthogonal to our approach.

These types of mistakes in a state-of-the-art visual reasoning model suggest that more work is needed to truly achieve human-like reasoning and logical consistency. We view CLEVR as a curriculum of tasks and believe that the key to the most meaningful and advanced reasoning lies in tackling these last few percentage points of error.

# 4. Related Work

One leading approach for visual reasoning is the Program Generator + Execution Engine model from [10]. This approach consists of a sequence-to-sequence Program Generator (PG), which takes in a question and outputs a sequence corresponding to a tree of composable Neural Modules, each of which is a two-layer residual block similar to ours. This tree of Neural Modules is assembled to form the Execution Engine (EE) that then predicts an answer from the image. The PG+EE model uses a strong prior by training with program labels and explicitly modeling the compositional nature of reasoning. Our approach learns to reason directly from textual input without using additional cues or a specialized architecture.

This modular approach is part of a recent line of work in Neural Module Networks [13, 25, 26]. Of these, End-to-End Module Networks (N2NMN) [13] also tackle visual reasoning but do not perform as well as other approaches. These methods also use strong priors by modeling the compositionality of reasoning, using program-level supervision, and building per-module, hand-crafted neural architectures for specific functions.
[Figure 4 panels: First CBN Layer Parameters (left) and Last CBN Layer Parameters (right); legend: 0 exist, 1 less_than, 2 greater_than, 3 count, 4 query_material, 5 query_size, 6 query_color, 7 query_shape, 8 equal_color, 9 equal_integer, 10 equal_shape, 11 equal_size, 12 equal_material.]
Figure 4: t-SNE plots of γ, β of the first BN layer of the first residual block (left) and the last BN layer of the last residual block (right). CBN parameters are grouped by low-level reasoning functions for the first layer and by high-level reasoning functions for the last layer.
Question                                            Answer
How many yellow things are there?                   2
How many cyan things are there?                     2
Are there as many yellow things as cyan things?     No
Are there more yellow things than cyan things?      No
Are there fewer yellow things than cyan things?     No
Figure 5: An interesting failure example where our model counts correctly but compares counts erroneously. Its third answer is incorrect and inconsistent with its other answers.
Relation Networks (RNs) from [11] are another leading approach for visual reasoning. RNs use an MLP to carry out pairwise comparisons over each location of extracted convolutional features over an image, including LSTM-extracted question features as input to this MLP. RNs then element-wise sum over the resulting comparison vectors to form another vector from which a final classifier predicts the answer. This approach is end-to-end differentiable and trainable from scratch to high performance, as we show in Table 1. Our approach lifts the explicitly relational aspect of this model, freeing our approach from the use of a comparison-based prior, as well as the scaling difficulties of pairwise comparisons over spatial locations.

CBN itself has its own line of work. The results of [14, 16] show that the closely related Conditional Instance Normalization is able to successfully modulate a convolutional style-transfer network to quickly and scalably render an image in a huge variety of different styles, simply by learning to output a different set of BN parameters based on target style. For visual question answering, answering general questions often of natural images, de Vries et al. [15] show that CBN performs highly on real-world VQA and GuessWhat?! datasets, demonstrating CBN's effectiveness beyond the simpler CLEVR images. Their architecture conditions 50 BN layers of a pre-trained ResNet. We show that a few layers of CBN after a ResNet can also be highly effective, even for complex problems. We also show how CBN models can learn to carry out multi-step processes and reason in a structured way, from low-level to high-level.

Additionally, CBN is essentially a post-BN, feature-wise affine conditioning, with BN's trainable scalars turned off. Thus, there are many interesting connections with other conditioning methods. A common approach, used for example in Conditional DCGANs [27], is to concatenate constant feature maps of conditioning information to the input of convolutional layers, which amounts to adding a post-convolutional, feature-wise conditional bias. Other approaches, such as LSTMs [28] and Hierarchical Mixtures of Experts [29], gate an input's features as a function of that same input (rather than a separate, conditioning input), which amounts to a feature-wise, conditional scaling, restricted to between 0 and 1. CBN consists of both scaling and shifting, each unrestricted, giving it more capacity than many of these related approaches. We leave exploring these connections more in-depth for future work.
# 5. Conclusion
With a simple and general model based on CBN, we show it is possible to achieve state-of-the-art visual reasoning on CLEVR without explicitly incorporating reasoning priors. We show that our model learns an underlying structure required to answer CLEVR questions by ï¬nding clusters in the CBN parameters of our model; earlier parameters are grouped by low-level reason- ing functions while later parameters are grouped by high-level reasoning functions. Simply by manipulating feature maps with CBN, a RNN can effectively use language to inï¬uence a CNN to carry out diverse and multi-step reasoning tasks over an image. It is unclear whether CBN is the most effective general way to use conditioning information for visual reasoning or other tasks, as well as what precisely about CBN is so effective. Other ap- proaches [27, 28, 29, 30, 31, 32, 33] employ a similar, repetitive conditioning, so perhaps there is an underlying principle that ex- plains the success of these approaches. Regardless, we believe that CBN is a general and powerful technique for multi-modal and conditional tasks, especially where more complex structure is involved.
6. Acknowledgements We would like to thank the developers of PyTorch (http: //pytorch.org/) for their elegant deep learning frame- work. Also, our implementation was based off the open-source code from [10]. We thank Mohammad Pezeshki, Dzmitry Bah- danau, Yoshua Bengio, Nando de Freitas, Joelle Pineau, Olivier Pietquin, J´er´emie Mary, Chin-Wei Huang, Layla Asri, and Max Smith for helpful feedback and discussions, as well as Justin Johnson for CLEVR test set evaluations. We thank NVIDIA for donating a DGX-1 computer used in this work. We also acknowledge FRQNT through the CHIST-ERA IGLU project and CPER Nord-Pas de Calais, Coll`ege Doctoral Lille Nord de France and FEDER DATA Advanced data science and technolo- gies 2015-2020 for funding our research.
# 7. References
[1] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "Imagenet classification with deep convolutional neural networks," in Proc. of NIPS, 2012.
[2] K. He, X. Zhang, S. Ren, and J. Sun, âDeep residual learning for image recognition,â in Proc. of CVPR, 2016.
[3] K. Cho, B. van Merrienboer, C¸ . G¨ulc¸ehre, F. Bougares, H. Schwenk, and Y. Bengio, âLearning phrase representations us- ing RNN encoder-decoder for statistical machine translation,â in Proc. of EMNLP, vol. abs/1406.1078, 2014.
[4] I. Sutskever, O. Vinyals, and Q. V. Le, âSequence to sequence learning with neural networks,â in Proc. of NIPS, 2014.
[5] M. Malinowski and M. Fritz, âA multi-world approach to question answering about real-world scenes based on uncertain input,â in Proc. of NIPS, 2014.
[6] D. Geman, S. Geman, N. Hallonquist, and L. Younes, âVisual tur- ing test for computer vision systems,â vol. 112, no. 12. National Acad Sciences, 2015, pp. 3618â3623.
[7] S. Antol, A. Agrawal, J. Lu, M. Mitchell, D. Batra, C. L. Zitnick, and D. Parikh, âVQA: Visual Question Answering,â in Proc. of ICCV, 2015.
[8] H. de Vries, F. Strub, S. Chandar, O. Pietquin, H. Larochelle, and A. Courville, âGuessWhat?! Visual object discovery through multi-modal dialogue,â in Proc. of CVPR, 2017.
[9] J. Johnson, B. Hariharan, L. van der Maaten, L. Fei-Fei, C. L. Zit- nick, and R. B. Girshick, âCLEVR: A diagnostic dataset for com- positional language and elementary visual reasoning,â in Proc. of CVPR, 2017.
[10] J. Johnson, B. Hariharan, L. van der Maaten, J. Hoffman, F. Li, C. L. Zitnick, and R. B. Girshick, "Inferring and executing programs for visual reasoning," 2017. [Online]. Available: http://arxiv.org/abs/1705.03633
[11] A. Santoro, D. Raposo, D. G. T. Barrett, M. Malinowski, R. Pascanu, P. Battaglia, , and T. Lillicrap, âA simple neural network module for relational reasoning,â CoRR, vol. abs/1706.01427, 2017. [Online]. Available: http://arxiv.org/abs/ 1706.01427
[12] Y. Goyal, T. Khot, D. Summers-Stay, D. Batra, and D. Parikh, âMaking the V in VQA matter: Elevating the role of image un- derstanding in Visual Question Answering,â in Proc. of CVPR, 2017.
[13] R. Hu, J. Andreas, M. Rohrbach, T. Darrell, and K. Saenko, âLearning to reason: End-to-end module networks for visual question answering,â CoRR, vol. abs/1704.05526, 2017. [Online]. Available: http://arxiv.org/abs/1704.05526
[14] V. Dumoulin, J. Shlens, and M. Kudlur, âA learned representation for artistic style,â in Proc. of ICLR, 2017.
[15] H. de Vries, F. Strub, J. Mary, H. Larochelle, O. Pietquin, and A. C. Courville, âModulating early visual processing by language,â arXiv preprint arXiv:1707.00683, 2017. [Online]. Available: http://arxiv.org/abs/1707.00683
[16] G. Ghiasi, H. Lee, M. Kudlur, V. Dumoulin, and J. Shlens, âExploring the structure of a real-time, arbitrary neural artistic stylization network,â CoRR, vol. abs/1705.06830, 2017. [Online]. Available: http://arxiv.org/abs/1705.06830
[17] T. Kim, I. Song, and Y. Bengio, âDynamic layer normalization for adaptive neural acoustic modeling in speech recognition,â in Proc. of InterSpeech, 2017.
[18] S. Ioffe and C. Szegedy, âBatch normalization: Accelerating deep network training by reducing internal covariate shift,â in Proc. of ICML, 2015.
[19] J. Chung, C¸ . G¨ulc¸ehre, K. Cho, and Y. Bengio, âEmpirical evalu- ation of gated recurrent neural networks on sequence modeling,â in Deep Learning workshop at NIPS, 2014.
[20] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. S. Bernstein, A. C. Berg, and F. Li, âImagenet large scale visual recognition challenge,â In- ternational Journal of Computer Vision, vol. 115, no. 3, pp. 211â 252, 2015.
[21] N. Watters, A. Tachetti, T. Weber, R. Pascanu, P. Battaglia, and D. Zoran, "Visual interaction networks," CoRR, vol. abs/1706.01433, 2017. [Online]. Available: http://arxiv.org/abs/1706.01433
[22] D. P. Kingma and J. Ba, âAdam: A method for stochastic opti- mization,â in Proc. of ICLR, 2015.
[23] Z. Yang, X. He, J. Gao, L. Deng, and A. J. Smola, âStacked atten- tion networks for image question answering,â in Proc. of CVPR, 2016.
[24] L. van der Maaten and G. Hinton, âVisualizing data using t-sne,â JMLR, vol. 9, no. Nov, pp. 2579â2605, 2008.
[25] J. Andreas, M. Rohrbach, T. Darrell, and D. Klein, âNeural mod- ule networks,â in Proc. of CVPR, 2016.
[26] J. Andreas, R. Marcus, T. Darrell, and D. Klein, âLearning to compose neural networks for question answering,â in Proc. of NAACL, 2016.
[27] A. Radford, L. Metz, and S. Chintala, "Unsupervised representation learning with deep convolutional generative adversarial networks," 2015. [Online]. Available: http://arxiv.org/abs/1511.06434
[28] S. Hochreiter and J. Schmidhuber, âLong short-term memory,â Neural Comput., vol. 9, no. 8, pp. 1735â1780, Nov. 1997. [Online]. Available: http://dx.doi.org/10.1162/neco.1997.9.8. 1735
[29] M. I. Jordan and R. A. Jacobs, âHierarchical mixtures of experts and the em algorithm,â Neural Comput., vol. 6, http: no. 2, pp. 181â214, Mar. 1994. [Online]. Available: //dx.doi.org/10.1162/neco.1994.6.2.181
[30] A. van den Oord, S. Dieleman, H. Zen, K. Simonyan, O. Vinyals, A. Graves, N. Kalchbrenner, A. W. Senior, and K. Kavukcuoglu, âWavenet: A generative model for raw audio,â 2016. [Online]. Available: http://arxiv.org/abs/1609.03499
[31] A. van den Oord, N. Kalchbrenner, L. Espeholt, O. Vinyals, A. Graves et al., âConditional image generation with pixelcnn de- coders,â in Proc. of NIPS, 2016.
[32] S. E. Reed, A. van den Oord, N. Kalchbrenner, S. G. Colmenarejo, Z. Wang, D. Belov, and N. de Freitas, âParallel multiscale autoregressive density estimation,â 2017. [Online]. Available: http://arxiv.org/abs/1703.03664
[33] S. Reed, A. van den Oord, N. Kalchbrenner, V. Bapst, M. Botvinick, and N. de Freitas, âGenerating interpretable images with controllable structure,â in Proc. of ICLR, 2017. | {
"id": "1707.00683"
} |
1707.02286 | Emergence of Locomotion Behaviours in Rich Environments | The reinforcement learning paradigm allows, in principle, for complex
behaviours to be learned directly from simple reward signals. In practice,
however, it is common to carefully hand-design the reward function to encourage
a particular solution, or to derive it from demonstration data. In this paper
we explore how a rich environment can help to promote the learning of complex
behavior. Specifically, we train agents in diverse environmental contexts, and
find that this encourages the emergence of robust behaviours that perform well
across a suite of tasks. We demonstrate this principle for locomotion --
behaviours that are known for their sensitivity to the choice of reward. We
train several simulated bodies on a diverse set of challenging terrains and
obstacles, using a simple reward function based on forward progress. Using a
novel scalable variant of policy gradient reinforcement learning, our agents
learn to run, jump, crouch and turn as required by the environment without
explicit reward-based guidance. A visual depiction of highlights of the learned
behavior can be viewed following https://youtu.be/hx_bgoTF7bs . | http://arxiv.org/pdf/1707.02286 | Nicolas Heess, Dhruva TB, Srinivasan Sriram, Jay Lemmon, Josh Merel, Greg Wayne, Yuval Tassa, Tom Erez, Ziyu Wang, S. M. Ali Eslami, Martin Riedmiller, David Silver | cs.AI | null | null | cs.AI | 20170707 | 20170710 |
# Emergence of Locomotion Behaviours in Rich Environments
Nicolas Heess, Dhruva TB, Srinivasan Sriram, Jay Lemmon, Josh Merel, Greg Wayne, Yuval Tassa, Tom Erez, Ziyu Wang, S. M. Ali Eslami, Martin Riedmiller, David Silver DeepMind
# Abstract
The reinforcement learning paradigm allows, in principle, for complex behaviours to be learned directly from simple reward signals. In practice, however, it is common to carefully hand-design the reward function to encourage a particular solution, or to derive it from demonstration data. In this paper we explore how a rich environment can help to promote the learning of complex behavior. Specifically, we train agents in diverse environmental contexts, and find that this encourages the emergence of robust behaviours that perform well across a suite of tasks. We demonstrate this principle for locomotion - behaviours that are known for their sensitivity to the choice of reward. We train several simulated bodies on a diverse set of challenging terrains and obstacles, using a simple reward function based on forward progress. Using a novel scalable variant of policy gradient reinforcement learning, our agents learn to run, jump, crouch and turn as required by the environment without explicit reward-based guidance. A visual depiction of highlights of the learned behavior can be viewed in this video.
# Introduction
Reinforcement learning has demonstrated remarkable progress, achieving high levels of performance in Atari games [1], 3D navigation tasks [2, 3], and board games [4]. What is common among these tasks is that there is a well-deï¬ned reward function, such as the game score, which can be optimised to produce the desired behaviour. However, there are many other tasks where the ârightâ reward function is less clear, and optimisation of a naïvely selected one can lead to surprising results that do not match the expectations of the designer. This is particularly prevalent in continuous control tasks, such as locomotion, and it has become standard practice to carefully handcraft the reward function, or else elicit a reward function from demonstrations.
Reward engineering has led to a number of successful demonstrations of locomotion behaviour, however, these examples are known to be brittle: they can lead to unexpected results if the reward function is modiï¬ed even slightly, and for more advanced behaviours the appropriate reward function is often non-obvious in the ï¬rst place. Also, arguably, the requirement of careful reward design sidesteps a primary challenge of reinforcement learning: how an agent can learn for itself, directly from a limited reward signal, to achieve rich and effective behaviours. In this paper we return to this challenge.
Our premise is that rich and robust behaviours will emerge from simple reward functions, if the environment itself contains sufficient richness and diversity. Firstly, an environment that presents a spectrum of challenges at different levels of difficulty may shape learning and guide it towards solutions that would be difficult to discover in more limited settings. Secondly, the sensitivity to reward functions and other experiment details may be due to a kind of overfitting, finding idiosyncratic solutions that happen to work within a specific setting, but are not robust when the agent is exposed to a wider range of settings. Presenting the agent with a diversity of challenges thus increases the
performance gap between different solutions and may favor the learning of solutions that are robust across settings.
We focus on a set of novel locomotion tasks that go signiï¬cantly beyond the previous state-of-the-art for agents trained directly from reinforcement learning. They include a variety of obstacle courses for agents with different bodies (Quadruped, Planar Walker, and Humanoid [5, 6]). The courses are procedurally generated such that every episode presents a different instance of the task.
Our environments include a wide range of obstacles with varying levels of difficulty (e.g. steepness, unevenness, distance between gaps). The variations in difficulty present an implicit curriculum to the agent: as it increases its capabilities it is able to overcome increasingly hard challenges, resulting in the emergence of ostensibly sophisticated locomotion skills which may naively have seemed to require careful reward design or other instruction. We also show that learning speed can be improved by explicitly structuring terrains to gradually increase in difficulty so that the agent faces easier obstacles first and harder obstacles only when it has mastered the easy ones.
In order to learn effectively in these rich and challenging domains, it is necessary to have a reliable and scalable reinforcement learning algorithm. We leverage components from several recent approaches to deep reinforcement learning. First, we build upon robust policy gradient algorithms, such as trust region policy optimization (TRPO) and proximal policy optimization (PPO) [7, 8], which bound parameter updates to a trust region to ensure stability. Second, like the widely used A3C algorithm [2] and related approaches [3] we distribute the computation over many parallel instances of agent and environment. Our distributed implementation of PPO improves over TRPO in terms of wall clock time with little difference in robustness, and also improves over our existing implementation of A3C with continuous actions when the same number of workers is used.
The paper proceeds as follows. In Section 2 we describe the distributed PPO (DPPO) algorithm that enables the subsequent experiments, and validate its effectiveness empirically. Then in Section 3 we introduce the main experimental setup: a diverse set of challenging terrains and obstacles. We provide evidence in Section 4 that effective locomotion behaviours emerge directly from simple rewards; furthermore we show that terrains with a âcurriculumâ of difï¬culty encourage much more rapid progress, and that agents trained in more diverse conditions can be more robust.
# 2 Large scale reinforcement learning with Distributed PPO
Our focus is on reinforcement learning in rich simulated environments with continuous state and action spaces. We require algorithms that are robust across a wide range of task variation, and that scale effectively to challenging domains. We address each of these issues in turn.
Robust policy gradients with Proximal Policy Optimization Deep reinforcement learning algo- rithms based on large-scale, high-throughput optimization methods, have produced state-of-the-art results in discrete and low-dimensional action spaces, e.g. on Atari games [9] and 3D navigation [2, 3]. In contrast, many prior works on continuous action spaces (e.g. [10, 7, 11, 12, 6, 13]), although impressive, have focused on comparatively small problems, and the use of large-scale, distributed optimization is less widespread and the corresponding algorithms are less well developed (but see e.g. [14, 15, 16]). We present a robust policy gradient algorithm, suitable for high-dimensional continuous control problems, that can be scaled to much larger domains using distributed computation.
Policy gradient algorithms provide an attractive paradigm for continuous control. They operate by directly maximizing the expected sum of rewards J(θ) = E_{ρ_θ(τ)}[Σ_t γ^{t−1} r(s_t, a_t)] with respect to the parameters θ of the stochastic policy π_θ(a|s). The expectation is with respect to the distribution of trajectories τ = (s_0, a_0, s_1, . . .) induced jointly by the policy π_θ and the system dynamics p(s_{t+1}|s_t, a_t): ρ_θ(τ) = p(s_0)π_θ(a_0|s_0)p(s_1|s_0, a_0) . . . . The gradient of the objective with respect to θ is given by ∇_θ J = E_θ[Σ_t ∇_θ log π_θ(a_t|s_t)(R_t − b_t)], where R_t = Σ_{t′≥t} γ^{t′−t} r(s_{t′}, a_{t′}) and b_t is a baseline that does not depend on a_t or future states and actions. The baseline is often chosen to be b_t = V^θ(s_t) = E_θ[R_t|s_t]. In practice the expected future return is typically approximated with a sample rollout and V^θ is replaced by a learned approximation V_φ(s) with parameters φ. Policy gradient estimates can have high variance (e.g. [18]) and algorithms can be sensitive to the settings of their hyperparameters. Several approaches have been proposed to make policy gradient algorithms more robust. One effective measure is to employ a trust region constraint that restricts
the amount by which any update is allowed to change the policy [19, 7, 14]. A popular algorithm that makes use of this idea is trust region policy optimization (TRPO; [7]). In every iteration given current parameters θ_old, TRPO collects a (relatively large) batch of data and optimizes the surrogate loss J_TRPO(θ) = E_{π_θold(τ)}[Σ_t γ^{t−1} (π_θ(a_t|s_t) / π_θold(a_t|s_t)) A_θold(s_t, a_t)] subject to a constraint on how much the policy is allowed to change, expressed in terms of the Kullback-Leibler divergence (KL), KL[π_θold | π_θ] < δ. A_θ is the advantage function given as A_θ(s_t, a_t) = E_θ[R_t|s_t, a_t] − V^θ(s_t). The Proximal Policy Optimization (PPO) algorithm [8] can be seen as an approximate version of TRPO that relies only on first order gradients, making it more convenient to use with recurrent neural networks (RNNs) and in a large-scale distributed setting. The trust region constraint is implemented via a regularization term. The coefficient of this regularization term is adapted depending on whether the constraint had previously been violated or not (a similar idea but without the adaptive coefficient has also been used [13]). Algorithm Box 1 shows the core PPO algorithm in pseudo-code.
# Algorithm 1 Proximal Policy Optimization (adapted from [8])
for i ∈ {1, . . . , N} do
    Run policy π_θ for T timesteps, collecting {s_t, a_t, r_t}
    Estimate advantages Â_t = Σ_{t′>t} γ^{t′−t} r_{t′} − V_φ(s_t)
    π_old ← π_θ
    for j ∈ {1, . . . , M} do
        J_PPO(θ) = Σ_t [π_θ(a_t|s_t) / π_old(a_t|s_t)] Â_t − λ KL[π_old | π_θ]
        Update θ by a gradient method w.r.t. J_PPO(θ)
    end for
    for j ∈ {1, . . . , B} do
        L_BL(φ) = −Σ_t (Σ_{t′>t} γ^{t′−t} r_{t′} − V_φ(s_t))²
        Update φ by a gradient method w.r.t. L_BL(φ)
    end for
    if KL[π_old | π_θ] > β_high KL_target then
        λ ← αλ
    else if KL[π_old | π_θ] < β_low KL_target then
        λ ← λ/α
    end if
end for
In algorithm 1, the hyperparameter KLtarget is the desired change in the policy per iteration. The scaling term α > 1 controls the adjustment of the KL-regularization coefï¬cient if the actual change in the policy stayed signiï¬cantly below or signiï¬cantly exceeded the target KL (i.e. falls outside the interval [βlowKLtarget, βhighKLtarget]).
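To make the adaptation rule concrete, the sketch below implements the coefficient update described above in Python; the function name and the default values for β_low, β_high, and α are illustrative assumptions on our part, not values taken from the paper.

```python
def adapt_kl_coefficient(lam, kl, kl_target, beta_low=0.5, beta_high=2.0, alpha=1.5):
    """Adjust the KL-penalty coefficient lambda after a policy update.

    If the observed KL exceeded the target band, strengthen the penalty;
    if it stayed significantly below it, weaken the penalty; otherwise keep it.
    """
    if kl > beta_high * kl_target:
        return lam * alpha
    elif kl < beta_low * kl_target:
        return lam / alpha
    return lam

# Example: the policy changed too much, so the penalty coefficient grows.
print(adapt_kl_coefficient(lam=1.0, kl=0.05, kl_target=0.01))  # -> 1.5
```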
Scalable reinforcement learning with Distributed PPO To achieve good performance in rich, simulated environments, we have implemented a distributed version of the PPO algorithm (DPPO). Data collection and gradient calculation are distributed over workers. We have experimented with both synchronous and asynchronous updates and have found that averaging gradients and applying them synchronously leads to better results in practice.
The original PPO algorithm estimates advantages using the complete sum of rewards. To facilitate the use of RNNs with batch updates while also supporting variable length episodes we follow a similar strategy and use truncated backpropagation through time with a window of length K. This makes it natural (albeit not a requirement) to use K-step returns also for estimating the advantage, i.e. we sum the rewards over the same K-step windows and bootstrap from the value function after K steps: Â_t = Σ_{i=0}^{K−1} γ^i r_{t+i} + γ^K V_φ(s_{t+K}) − V_φ(s_t). The publicly available implementation of PPO by John Schulman adds several modifications to the core algorithm. These include normalization of inputs and rewards as well as an additional term in the loss that penalizes large violations of the trust region constraint. We adopt similar augmentations in the distributed setting but find that sharing and synchronization of various statistics across workers requires some care. The implementation of our distributed PPO (DPPO) is in TensorFlow, the parameters reside on a parameter server, and workers synchronize their parameters after every gradient step. Pseudocode and further details are provided in the supplemental material.
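The K-step return and advantage computation above can be made concrete with a short sketch; the function below is our own illustration (array layout and names are assumptions), not the TensorFlow implementation used in the paper.

```python
import numpy as np

def k_step_advantages(rewards, values, bootstrap_value, gamma):
    """Compute K-step returns and advantages for one window of length K.

    rewards:         [K] array with r_t, ..., r_{t+K-1}
    values:          [K] array with V(s_t), ..., V(s_{t+K-1})
    bootstrap_value: scalar V(s_{t+K}) used to bootstrap after the window
    """
    K = len(rewards)
    returns = np.empty(K)
    running = bootstrap_value
    for i in reversed(range(K)):       # accumulate discounted rewards backwards
        running = rewards[i] + gamma * running
        returns[i] = running
    advantages = returns - values
    return returns, advantages

rets, advs = k_step_advantages(
    rewards=np.array([1.0, 0.5, 0.0]),
    values=np.array([2.0, 1.5, 1.0]),
    bootstrap_value=0.8,
    gamma=0.99,
)
```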
[Figure 1 plots: panels for Planar Walker, Humanoid, and Reacher2-Memory; x-axis: hours (wall clock); curves for PPO with different numbers of workers.]
Figure 1: DPPO benchmark performance on the Planar Walker (left), Humanoid (middle), and Memory Reacher (right) tasks. In all cases, DPPO achieves performance equivalent to TRPO, and scales well with the number of workers used. The Memory Reacher task demonstrates that it can be used with recurrent networks.
# 2.1 Evaluation of Distributed PPO
We compare DPPO to several baseline algorithms. The goal of these experiments is primarily to establish that the algorithm allows robust policy optimization with limited parameter tuning and that the algorithm scales effectively. We therefore perform the comparison on a selected number of benchmark tasks related to our research interests, and compare to two algorithmic alternatives: TRPO and continuous A3C. For details of the comparison please see the supplemental material.
Benchmark tasks We consider three continuous control tasks for benchmarking the algorithms. All environments rely on the Mujoco physics engine [21]. Two tasks are locomotion tasks in obstacle-free environments and the third task is a planar target-reaching task that requires memory. Planar walker: a simple bipedal walker with 9 degrees-of-freedom (DoF) and 6 torque actuated joints. It receives a primary reward proportional to its forward velocity; additional terms penalize control and the violation of box constraints on torso height and angle. Episodes are terminated early when the walker falls. Humanoid: The humanoid has 28 DoF and 21 actuated joints. The humanoid, too, receives a reward primarily proportional to its velocity along the x-axis, as well as a constant reward at every step that, together with episode termination upon falling, encourages it to not fall. Memory reacher: A random-target reaching task with a simple 2 DoF robotic arm confined to the plane. The target position is provided for the first 10 steps of each episode, during which the arm is not allowed to move. When the arm is allowed to move, the target has already disappeared and the RNN memory must be relied upon in order for the arm to reach towards the correct target location. The reward in this task is the distance between the positions of end-effector and target, and it tests the ability of DPPO to optimize recurrent network policies.
Results Results depicted in Fig. 1 show that DPPO achieves performance similar to TRPO and that DPPO scales well with the number of workers used, which can signiï¬cantly reduce wall clock time. Since it is fully gradient based it can also be used directly with recurrent networks as demonstrated by the Memory reacher task. DPPO is also faster (in wallclock) than our implementation of A3C when the same number of workers is used.
# 3 Methods: environments and models
Our goal is to study whether sophisticated locomotion skills can emerge from simple rewards when learning from varied challenges with a spectrum of difï¬culty levels. Having validated our scalable DPPO algorithm on simpler benchmark tasks, we next describe the settings in which we will demonstrate the emergence of more complex behavior.
# 3.1 Training environments
In order to expose our agents to a diverse set of locomotion challenges we use a physical simulation environment roughly analogous to a platform game, again implemented in Mujoco [21]. We procedu- rally generate a large number of different terrains with a variety of obstacles; a different instance of the terrain and obstacles is generated in each episode.
Bodies We consider three different torque-controlled bodies, described roughly in terms of increas- ing complexity. Planar walker: a simple walking body with 9 DoF and 6 actuated joints constrained to the plane. Quadruped: a simple three-dimensional quadrupedal body with 12 DoF and 8 actuated joints. Humanoid: a three-dimensional humanoid with 21 actuated dimensions and 28 DoF. The bodies can be seen in action in ï¬gures 4, 5, and 7 respectively. Note that the Planar walker and Humanoid bodies overlap with those used in the benchmarking tasks described in the previous section, however the benchmark tasks only consisted of simple locomotion in an open plane.
Rewards We keep the reward for all tasks simple and consistent across terrains. The reward consists of a main component proportional to the velocity along the x-axis, encouraging the agent to make forward progress along the track, plus a small term penalizing torques. For the walker the reward also includes the same box constraints on the pose as in section 2. For the quadruped and humanoid we penalize deviations from the center of the track, and the humanoid receives an additional reward per time-step for not falling. Details can be found in the supplemental material. We note that differences in the reward functions across bodies are the consequence of us adapting previously proposed reward functions (cf. e.g. [12, 18]) rather than the result of careful tuning, and while the reward functions vary slightly across bodies we do not change them to elicit different behaviors for a single body.
Terrain and obstacles All of our courses are procedurally generated; in every episode a new course is generated based on pre-deï¬ned statistics. We consider several different terrain and obstacle types: (a) hurdles: hurdle-like obstacles of variable height and width that the walker needs to jump or climb over; (b) gaps: gaps in the ground that must be jumped over; (c) variable terrain: a terrain with different features such as ramps, gaps, hills, etc.; (d) slalom walls: walls that form obstacles that require walking around, (e) platforms: platforms that hover above the ground which can be jumped on or crouched under. Courses consist of a sequence of random instantiations of the above terrain types within user-speciï¬ed parameter ranges.
We train on different types of courses: single-type courses (e.g. gaps only, hurdles only, etc.); mixtures of single-type courses (e.g. every episode a different terrain type is chosen); and mixed terrains (individual courses consisting of more than one terrain type). We consider stationary courses for which the obstacle statistics are effectively fixed over the length of the course, and "curriculum" courses in which the difficulty of the terrain increases gradually over the length of the course. Fig. 3 shows a few different course types.
Observations The agents receive two sets of observations [22]: (1) a set of egocentric, âpro- prioceptiveâ features containing joint angles and angular velocities; for the Quadruped and Hu- manoid these features also contain the readings of a velocimeter, accelerometer, and a gyroscope positioned at the torso providing egocentric ve- locity and acceleration information, plus con- tact sensors attached to the feet and legs. The Humanoid also has torque sensors in the joints of the lower limbs. (2) a set of âexteroceptiveâ features containing task-relevant information in- cluding the position with respect to the center of the track as well as the proï¬le of the terrain ahead. Information about the terrain is provided as an array of height measurements taken at sampling points that translate along the x- and y-axis with the body and the density of which decreases with distance from the body. The Pla- nar Walker is conï¬ned to the xz-plane (i.e. it cannot move side-to-side), which simpliï¬es its perceptual features. See supplemental material for details.
[Figure 2 schematic: proprioceptive inputs (joints/sensors) and exteroceptive inputs (terrain etc.) are processed in separate streams.]
Figure 2: Schematic of the network architecture. We use an architecture similar to [22], consisting of a component processing information local to the controlled body (egocentric information; blue) and a modulatory component that processes environ- ment and task related âexteroceptiveâ information such as the terrain shape (green).
Figure 3: Examples of the terrain types used in the experiments. Left to right and top to bottom: hurdles, platforms, gaps, slalom walls, variable terrain.
Figure 4: Walker skills: Time-lapse images of a representative Planar Walker policy traversing rubble; jumping over a hurdle; jumping over gaps and crouching to pass underneath a platform.
# 3.2 Policy parameterization
Similar to [22] we aim to achieve a separation of concerns between the basic locomotion skills and terrain perception and navigation. We structure our policy into two subnetworks, one of which receives only proprioceptive information, and the other which receives only exteroceptive information. As explained in the previous paragraph with proprioceptive information we refer to information that is independent of any task and local to the body while exteroceptive information includes a representation of the terrain ahead. We compared this architecture to a simple fully connected neural network and found that it greatly increased learning speed. Fig. 2 shows a schematic.
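A minimal sketch of such a two-stream policy network is shown below, using plain NumPy to emphasize the structure rather than a particular framework. The layer sizes and the way the two streams are combined (simple concatenation here) are illustrative assumptions; the architecture in the paper uses the exteroceptive stream to modulate the body-local controller and parameterizes a full Gaussian over actions.

```python
import numpy as np

def mlp(sizes, rng):
    """Create a list of (W, b) pairs for a small tanh MLP."""
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    for W, b in params:
        x = np.tanh(x @ W + b)
    return x

rng = np.random.default_rng(0)
proprio_net = mlp([30, 64, 64], rng)   # egocentric stream: joint angles, sensors
extero_net = mlp([100, 64, 64], rng)   # exteroceptive stream: terrain profile etc.
head = mlp([128, 64, 6], rng)          # outputs the mean of a 6-dim action Gaussian

def policy_mean(proprio_obs, extero_obs):
    """Process each observation stream separately, then combine them."""
    h1 = forward(proprio_net, proprio_obs)
    h2 = forward(extero_net, extero_obs)
    return forward(head, np.concatenate([h1, h2]))

action_mean = policy_mean(np.zeros(30), np.zeros(100))
```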
# 4 Results
We apply the Distributed PPO algorithm to a variety of bodies, terrains, and obstacles. Our aim is to establish whether simple reward functions can lead to the emergence of sophisticated locomotion skills when agents are trained in rich environments. We are further interested whether the terrain structure can affect learning success and robustness of the resulting behavior.
Planar Walker We train the walker on hurdles, gaps, platforms, and variable terrain separately, on a mixed course containing all features interleaved, and on a mixture of terrains (i.e. the walker was placed on different terrains in different episodes). It acquired a robust gait, learned to jump over hurdles and gaps, and to walk over or crouch underneath platforms. All of these behaviors emerged spontaneously, without special cased shaping rewards to induce each separate behaviour. Figure 4 shows motion sequences of the Planar Walker traversing a rubble-ï¬eld, jumping over a hurdle, and over gaps, and crouching under a platform. Examples of the respective behaviors can be found in the supplemental video. The emergence of these skills was robust across seeds. At the end of learning the Planar Walker jumped over hurdles nearly as tall as its own body.
Quadruped The quadruped is a generally less agile body than the walker but it adds a third dimension to the control problem. We considered four different terrain types: variable terrain, slalom walls, gaps, and a variation of the hurdles terrain which contained obstacles that can be avoided, and others that require climbing or jumping.
The Quadruped, too, learns to navigate most obstacles quite reliably, with only small variations across seeds. It discovers that jumping up or forward (in some cases with surprising accuracy) is a suitable strategy to overcome hurdles, and gaps, and it learns to navigate walls, turning left and right as appropriate â in both cases despite only receiving reward for moving forward. For the variation of the hurdles-terrain it learns to distinguish between obstacles that it can and / or has to climb over, and those it has to walk around. The variable terrain may seem easy but is, in fact, surprisingly hard
Figure 5: Time-lapse images of a representative Quadruped policy traversing gaps (left); and navigating obstacles (right)
[Figure 6 plots: a) performance on easy and hard test environments over training steps (1e7) for hurdle courses with and without a curriculum; b) robustness of Planar Walker and Quadruped policies under Friction, Rubble, Model, and Incline variations, for training on hurdles vs. simple terrain.]
Figure 6: a) Curriculum training: Evaluation of policies trained on hurdle courses with different statistics: âregularâ courses contain arbitrarily interleaved high and low hurdles (blue); âcurriculumâ courses gradually increase hurdle height over the course of the track (green). During training we eval- uate both policies on validation courses with low/âeasy" hurdles (left) and tall/âhard" hurdles (right). The performance of the policy trained on the curriculum courses increases faster. b) Robustness of Planar Walker policies (left) and Quadruped policies (right): We evaluate how training on hurdles (green) increases policy robustness relative to training on ï¬at terrain (blue). Policies are assessed on courses with unobserved changes in ground friction, terrain surface (rubble), strength of the body actuators, and incline of the ground plane. There is a notable advantage in some cases for policies trained on the hurdle terrain. All plots show the average returns normalized for each terrain setting.
because the body shape of the Quadruped is poorly suited (i.e. the legs of the quadruped are short compared to the variations in the terrain). Nevertheless it learns strategies to traverse reasonably robustly. Fig. 5 shows some representative motion sequences; further examples can be found in the supplemental video.
Analyses We investigate whether the nature of the terrain affects learning. In particular, it is easy to imagine that training on, for instance, very tall hurdles only will not be effective. For training to be successful in our setup it is required that the walker occasionally âsolvesâ obstacles by chance â and the probability of this happening, is, of course, minuscule when all hurdles are very tall. We verify this by training a Planar Walker on two different types of hurdles-terrains. The ï¬rst possesses stationary statistics with high- and low hurdles being randomly interleaved. In the second terrain the difï¬culty, as given by the minimum and maximum height of the hurdles, increases gradually over the length of the course. We measure learning progress by evaluating policies during learning on two test terrains, an easy one with shallow hurdles and a difï¬cult one with tall hurdles. Results are shown in Fig. 6a for a representative Planar Walker policy. The policy trained on the terrain with gradually increasing difï¬culty improves faster than the one trained on a stationary terrain.
We further study whether training on varying terrains leads to more robust gaits compared to usual task of moving forward on a plane. To this end we train Planar Walker and Quadruped policies on a ï¬at course as well as on the (more challenging) hurdles. We then evaluate representative policies from each experiment with respect to their robustness to (a) unobserved variations in surface friction, (b) unobserved rumble-strips, (c) changes in the model of the body, (d) unobserved inclines / declines of the ground. Results depicted in Fig. 6b show a trend of training on hurdles increasing robustness on other forms of unobserved variation in the terrain.
Humanoid Our ï¬nal set of experiments considers the 28-DoF Humanoid, a considerably more complex body than Planar Walker and Quadruped. The set of terrains is qualitatively similar to the ones used for the other bodies, including gaps, hurdles, a variable terrain, as well as the slalom walls. We also trained agents on mixtures of the above terrains.
Figure 7: Time lapse sequences of the Humanoid navigating different terrains
As for the previous experiments we considered a simple reward function, primarily proportional to the velocity along the x-axis (see above). We experimented with two alternative termination conditions: (a) episodes were terminated when the minimum distance between head and feet fell below 0.9m; (b) episodes were terminated when the minimum distance between head and ground fell below 1.1m.
In general, the humanoid presents a considerably harder learning problem largely because with its relatively large number of degrees of freedoms it is prone to exploit redundancies in the task speciï¬cation and / or to get stuck in local optima, resulting in entertaining but visually unsatisfactory gaits. Learning results tend to be sensitive to the particular algorithm, exploration strategy, reward function, termination condition, and weight initialization.
The results we obtained for the humanoid were indeed much more diverse than for the other two bodies, with signiï¬cant variations across seeds for the same setting of the hyperparameters. Some of the variations in the behaviors were associated with differences in learning speed and asymptotic performance (suggesting a local optimum); others were not (suggesting alternative solution strategies).
Nevertheless we obtained for each terrain several well performing agents, both in terms of performance and in terms of visually pleasing gaits. Fig. 7 shows several examples of agents trained on gaps, hurdles, slalom walls, and variable terrain. As in the previous experiments the terrain diversity and the inherent curriculum led the agents to discover robust gaits, the ability to overcome obstacles, to jump across gaps, and to navigate slalom courses. We highlight several solution strategies for each terrain in the supplemental video, including less visually appealing ones. To test the robustness of the learned behaviors we further constructed two test courses with (a) statistics rather different from the training terrains and (b) unobserved perturbations in the form of see-saws and random forces applied to the Humanoidâs torso, which is also presented in the video. Qualitatively we see moderately large levels of robustness to these probe challenges (see supplemental video).
# 5 Related work
Physics-based character animation is a long-standing and active ï¬eld that has produced a large body of work with impressive results endowing simulated characters with locomotion and other movement skills (see [23] for a review). For instance, [24] show sophisticated skill sequencing for maneuvering obstacles on a parametric terrain, while [25, 26, 27] demonstrate how terrain adaptive behaviors or other skilled movements can emerge as the result of optimization problems. While there are very diverse approaches, essentially all rely on signiï¬cant prior knowledge of the problem domain and many on demonstrations such as motion capture data.
Basic locomotion behaviors learned end-to-end via RL have been demonstrated, for instance, by [7, 12, 6, 13] or guided policy search [10]. Locomotion in the context of higher-level tasks has been considered in [22]. Terrain-adaptive locomotion with RL has been demonstrated by [28], but they still impose considerable structure on their solution. Impressive results were recently achieved with learned locomotion controllers for a 3D humanoid body [29], but these rely on a domain-speciï¬c structure and human motion capture data to bootstrap the movement skills for navigating ï¬at terrains.
The idea of curricula is long-standing in the machine learning literature (e.g. [30]). It has been exploited for learning movement skills for instance by [31]. The present work combines and develops elements from many of these research threads, but pushes uniquely far in a particular direction â using simple RL rewards and curriculum training to produce adaptive locomotion in challenging environments while imposing only limited structure on the policy and behavior.
# 6 Discussion
We have investigated the question whether and to what extent training agents in a rich environment can lead to the emergence of behaviors that are not directly incentivized via the reward function. This departs from the common setup in control where a reward function is carefully tuned to achieve particular solutions. Instead, we use deliberately simple and generic reward functions but train the agent over a wide range of environmental conditions. Our experiments suggest that training on diverse terrain can indeed lead to the development of non-trivial locomotion skills such as jumping, crouching, and turning for which designing a sensible reward is not easy. While we do not claim that environmental variations will be sufï¬cient, we believe that training agents in richer environments and on a broader spectrum of tasks than is commonly done today is likely to improve the quality and robustness of the learned behaviors â and also the ease with which they can be learned. In that sense, choosing a seemingly more complex environment may actually make learning easier.
# Acknowledgments
We thank Joseph Modayil and many other colleagues at DeepMind for helpful discussions and comments on the manuscript.
# References
[1] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. Human-level control through deep reinforcement learning. Nature, 518(7540):529â533, 2015. [2] Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy P Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In ICML, 2016.
[3] Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z Leibo, David Silver, and Koray Kavukcuoglu. Reinforcement learning with unsupervised auxiliary tasks. arXiv preprint arXiv:1611.05397, 2016.
[4] David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of go with deep neural networks and tree search. Nature, 529(7587):484â489, 2016.
[5] Yuval Tassa, Tom Erez, and Emanuel Todorov. Synthesis and stabilization of complex behaviors through online trajectory optimization. In Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, pages 4906â4913. IEEE, 2012.
[6] John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, and Pieter Abbeel. High-dimensional continuous control using generalized advantage estimation. arXiv preprint arXiv:1506.02438, 2015. [7] John Schulman, Sergey Levine, Pieter Abbeel, Michael I Jordan, and Philipp Moritz. Trust region policy
optimization. In ICML, pages 1889â1897, 2015.
[8] Pieter Abbeel and John Schulman. Deep reinforcement learning through policy optimization. Tuto- rial at Neural Information Processing Systems, 2016. URL https://nips.cc/Conferences/2016/ Schedule?showEvent=6198.
[9] Arun Nair, Praveen Srinivasan, Sam Blackwell, Cagdas Alcicek, Rory Fearon, Alessandro De Maria, Vedavyas Panneershelvam, Mustafa Suleyman, Charles Beattie, Stig Petersen, et al. Massively parallel methods for deep reinforcement learning. arXiv preprint arXiv:1507.04296, 2015.
[10] S. Levine and P. Abbeel. Learning neural network policies with guided policy search under unknown dynamics. In NIPS, 2014.
[11] S Levine, C Finn, T Darrell, and P Abbeel. End-to-end training of deep visuomotor policies. arXiv preprint arXiv:1504.00702, 2015.
[12] Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015.
[13] Nicolas Heess, Gregory Wayne, David Silver, Timothy P. Lillicrap, Tom Erez, and Yuval Tassa. Learning continuous control policies by stochastic value gradients. In NIPS, 2015.
[14] Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Rémi Munos, Koray Kavukcuoglu, and Nando de Freitas. Sample efï¬cient actor-critic with experience replay. CoRR, abs/1611.01224, 2016.
[15] Shixiang Gu, Ethan Holly, Timothy Lillicrap, and Sergey Levine. Deep reinforcement learning for robotic manipulation with asynchronous off-policy updates. arXiv preprint arXiv:1610.00633, 2016.
[16] Ivaylo Popov, Nicolas Heess, Timothy P. Lillicrap, Roland Hafner, Gabriel Barth-Maron, Matej Vecerik, Thomas Lampe, Yuval Tassa, Tom Erez, and Martin A. Riedmiller. Data-efï¬cient deep reinforcement learning for dexterous manipulation. CoRR, abs/1704.03073, 2017.
[17] Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229â256, 1992.
[18] Yan Duan, Xi Chen, Rein Houthooft, John Schulman, and Pieter Abbeel. Benchmarking deep reinforcement learning for continuous control. CoRR, abs/1604.06778, 2016.
[19] Jan Peters, Katharina Mülling, and Yasemin Altün. Relative entropy policy search. In Proceedings of the Twenty-Fourth AAAI Conference on Artiï¬cial Intelligence (AAAI 2010), 2010.
[20] PPO. https://github.com/joschu/modular_rl, 2016. [21] Emanuel Todorov, Tom Erez, and Yuval Tassa. Mujoco: A physics engine for model-based control. In Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, pages 5026â5033. IEEE, 2012.
[22] Nicolas Heess, Greg Wayne, Yuval Tassa, Timothy Lillicrap, Martin Riedmiller, and David Silver. Learning and transfer of modulated locomotor controllers. arXiv preprint arXiv:1610.05182, 2016.
[23] Thomas Geijtenbeek and Nicolas Pronost. Interactive character animation using simulated physics: A state-of-the-art review. In Computer Graphics Forum, volume 31, pages 2492â2515. Wiley Online Library, 2012.
[24] Libin Liu, KangKang Yin, Michiel van de Panne, and Baining Guo. Terrain runner: control, parameteriza- tion, composition, and planning for highly dynamic motions. ACM Transactions on Graphics (TOG), 31 (6):154, 2012.
[25] Jia-chi Wu and Zoran Popovi´c. Terrain-adaptive bipedal locomotion control. ACM Transactions on Graphics, 29(4):72:1â72:10, Jul. 2010.
[26] Igor Mordatch, Martin De Lasa, and Aaron Hertzmann. Robust physics-based locomotion using low- dimensional planning. ACM Transactions on Graphics (TOG), 29(4):71, 2010.
[27] Igor Mordatch, Emanuel Todorov, and Zoran Popovic. Discovery of complex behaviors through contact- invariant optimization. ACM Trans. Graph., 31(4):43:1â43:8, 2012.
[28] Xue Bin Peng, Glen Berseth, and Michiel van de Panne. Terrain-adaptive locomotion skills using deep reinforcement learning. ACM Transactions on Graphics (Proc. SIGGRAPH 2016), 35(4), 2016.
[29] Xue Bin Peng, Glen Berseth, KangKang Yin, and Michiel van de Panne. Deeploco: Dynamic locomotion skills using hierarchical deep reinforcement learning. ACM Transactions on Graphics (Proc. SIGGRAPH 2017), 36(4), 2017.
[30] Y. Bengio, J. Louradour, R. Collobert, and J. Weston. Curriculum learning. In International Conference on Machine Learning, ICML, 2009.
[31] Andrej Karpathy and Michiel Van De Panne. Curriculum learning for motor skills. In Canadian Conference on Artiï¬cial Intelligence, pages 325â330. Springer, 2012.
# A Distributed PPO
# A.1 Algorithm details
Pseudocode for the Distributed PPO algorithm is provided in Algorithm Boxes 2 and 3. W is the number of workers; D sets a threshold for the number of workers whose gradients must be available to update the parameters. M, B is the number of sub-iterations with policy and baseline updates given a batch of datapoints. T is the number of data points collected per worker before parameter updates are computed. K is the number of time steps for computing K-step returns and truncated backprop through time (for RNNs)
# Algorithm 2 Distributed Proximal Policy Optimization (chief)
for i ∈ {1, . . . , N} do
    for j ∈ {1, . . . , M} do
        Wait until at least W − D gradients w.r.t. θ are available
        Average gradients and update global θ
    end for
    for j ∈ {1, . . . , B} do
        Wait until at least W − D gradients w.r.t. φ are available
        Average gradients and update global φ
    end for
end for
# Algorithm 3 Distributed Proximal Policy Optimization (worker)
for i ∈ {1, . . . , N} do
    for w ∈ {1, . . . , T/K} do
        Run policy π_θ for K timesteps, collecting {s_t, a_t, r_t} for t ∈ {(w−1)K, . . . , wK − 1}
        Estimate return R_t = Σ_{t′=t}^{wK−1} γ^{t′−t} r_{t′} + γ^{wK−t} V_φ(s_{wK})
        Estimate advantages Â_t = R_t − V_φ(s_t)
        Store partial trajectory information
    end for
    π_old ← π_θ
    for m ∈ {1, . . . , M} do
        J_PPO(θ) = Σ_t [π_θ(a_t|s_t) / π_old(a_t|s_t)] Â_t − λ KL[π_old | π_θ] − ξ max(0, KL[π_old | π_θ] − 2KL_target)²
        if KL[π_old | π_θ] > 4KL_target then
            break and continue with next outer iteration i + 1
        end if
        Compute ∇_θ J_PPO
        Send gradient w.r.t. θ to chief
        Wait until gradient accepted or dropped; update parameters
    end for
    for b ∈ {1, . . . , B} do
        L_BL(φ) = −Σ_t (R_t − V_φ(s_t))²
        Compute ∇_φ L_BL
        Send gradient w.r.t. φ to chief
        Wait until gradient accepted or dropped; update parameters
    end for
    if KL[π_old | π_θ] > β_high KL_target then
        λ ← αλ
    else if KL[π_old | π_θ] < β_low KL_target then
        λ ← λ/α
    end if
end for
Normalization Following [20] we perform the following normalization steps:
1. We normalize observations (or states st) by subtracting the mean and dividing by the standard deviation using the statistics aggregated over the course of the entire experiment.
2. We scale the reward by a running estimate of its standard deviation, again aggregated over the course of the entire experiment.
3. We use per-batch normalization of the advantages.
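As an illustration of the running observation statistics described above, a simple running-statistics normalizer could look as follows. This is a generic sketch (Welford-style merging is one common choice), not the code used in the paper; for reward scaling, only the running standard deviation would be applied, without subtracting the mean.

```python
import numpy as np

class RunningNormalizer:
    """Tracks a running mean and variance aggregated over the whole experiment."""

    def __init__(self, shape, eps=1e-8):
        self.mean = np.zeros(shape)
        self.var = np.ones(shape)
        self.count = eps

    def update(self, x):
        """Merge the statistics of a batch x with shape [batch, *shape]."""
        batch_mean, batch_var, n = x.mean(0), x.var(0), x.shape[0]
        delta = batch_mean - self.mean
        total = self.count + n
        self.mean += delta * n / total
        self.var = (self.var * self.count + batch_var * n
                    + delta ** 2 * self.count * n / total) / total
        self.count = total

    def normalize(self, x):
        return (x - self.mean) / np.sqrt(self.var + 1e-8)

obs_norm = RunningNormalizer(shape=(10,))
obs_norm.update(np.random.randn(64, 10))
normalized = obs_norm.normalize(np.random.randn(10))
```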
Sharing of algorithm parameters across workers In the distributed setting we have found it to be important to share relevant statistics for data normalization across workers. Normalization is applied during data collection and statistics are updated locally after every environment step. Local changes to the statistics are applied to the global statistics after data collection when an iteration is complete (not shown in pseudo-code). The time-varying regularization parameter λ is also shared across workers, but its updates are determined from local statistics, i.e. the average KL computed locally for each worker, and are applied separately by each worker with an adjusted α̂ = 1 + (α − 1)/K.
Additional trust region constraint We also adopt an additional penalty term that becomes active when the KL exceeds the desired change by a certain margin (the threshold is 2KLtarget in our case). In our distributed implementation this criterion is tested and applied on a per-worker basis.
Stability is further improved by early stopping when changes lead to too large a change in the KL.
# A.2 Algorithm comparison
TRPO has been established as a robust algorithm that learns high-performing policies and requires little parameter tuning. Our primary concern was therefore whether DPPO can achieve results comparable to TRPO. Secondarily, we were interested in whether the algorithm scales to large numbers of workers and allows speeding up experiments where large numbers of data points are required to obtain reliable gradient estimates. We therefore compare to TRPO in a regime where a large number of samples is used to compute parameter updates (N = 100000). For simple tasks we expect TRPO to produce good results in this regime (for the benchmark tasks a smaller N would likely be sufficient).
For DPPO we perform a coarse search over learning rate for policy and baseline. All experiments in section 2.1 use the same learning rates (0.00005 and 0.0001 respectively.) In each iteration we use batches of size of 64000 (walker), 128000 (humanoid), and 24000 (reacher) time steps. Data collection and gradient computation are distributed across varying numbers of workers. Due to early termination this number is sometimes smaller (when an episode terminates early the remaining steps in the current unroll window of length K are being ignored during gradient calculation). An alternative point of comparison would be to use a ï¬xed overall number of time steps and vary the number of time steps per worker.
Networks use tanh nonlinearities and parameterize the mean and standard deviation of a condi- tional Gaussian distribution over actions. Network sizes were as follows: Planar Walker: 300,200; Humanoid: 300,200,100; Memory Reacher: 200; and 100 LSTM units.
For A3C with continuous actions we also perform a coarse search over relevant hyper parameters, especially the learning rate and entropy cost. Due to differences in the code base network architectures were not exactly identical to those used for DPPO but used the same numbers of hidden units.
We note that a like-for-like comparison of the algorithms is difï¬cult since they are implemented in different code bases and especially for distributed algorithms performance in wall clock time is affected both by conceptual changes to the algorithm as well as by implementation choices. A more careful benchmarking of several recent high-throughput algorithms will be the subject of future work.
# B Additional experimental details
# B.1 Observations
For all courses terrain height (and platform height where applicable) was provided as a heightï¬eld where each "pixel" indicates the height of the terrain (platform) within a small region. This heightï¬eld was then sampled at particular points relative to the position of the agent.
Planar walker The exteroceptive features for the planar walker consist of sampling points of the terrain and, where applicable, platform height. There were 50 equally spaced points along the x-axis
starting 2m behind the agent and extending 8m ahead. Platform height was represented separately from terrain height with a separate set of sampling points. In addition the exteroceptive features contained the height of the walker body above the ground (measured at its current location) as well as the difference between the agents position and the next sampling grid center (the intention behind this last input is to resolve the aliasing arising from the piece-wise constant terrain representation with ï¬nite sampling).
Quadruped & Humanoid The Quadruped and Humanoid use the same set of exteroceptive features, effectively a two-dimensional version of what is used for the walker. The sampling points are placed on a variable-resolution grid and range from 1.2m behind the agent to 5.6m ahead of it along the x-axis as well as 4m to the left and to the right. To reduce dimensionality of the input data sampling density decreases with increasing distance from the position of the body. In addition to the height samples the exteroceptive features include the height of the body above the ground, and the x and y distance of the walker body to the next sampling grid center (to reduce aliasing; see above).
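As a rough illustration of such a variable-resolution sampling grid, the sketch below places sample points whose spacing grows with distance from the body. The point counts, offsets, and growth factor are our own illustrative assumptions, not the values used in the experiments.

```python
import numpy as np

def sampling_grid(x_behind=1.2, x_ahead=5.6, y_half_width=4.0,
                  n_x=10, n_y=7, growth=1.4):
    """Build (x, y) offsets relative to the body, denser near the body."""
    # Spacing between consecutive points grows geometrically with distance.
    x_steps = growth ** np.arange(n_x)
    x = -x_behind + (x_ahead + x_behind) * np.cumsum(x_steps) / x_steps.sum()
    y_steps = growth ** np.arange(n_y // 2 + 1)
    y_half = y_half_width * np.cumsum(y_steps) / y_steps.sum()
    y = np.concatenate([-y_half[::-1], [0.0], y_half])
    return np.stack(np.meshgrid(x, y, indexing="ij"), axis=-1)

def terrain_observation(heightfield_fn, body_xy, grid):
    """Sample terrain height at grid points translated with the body."""
    points = grid + body_xy                     # broadcast over the grid
    return np.array([[heightfield_fn(p) for p in row] for row in points])

grid = sampling_grid()
heights = terrain_observation(lambda p: 0.1 * np.sin(p[0]), np.array([3.0, 0.0]), grid)
```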
# B.2 Rewards
Planar walker r = 10.0 v_x + 0.5 n_z − |Δh − 1.2| − 10.0 I[Δh < 0.3] − 0.1 ‖u‖². Here n_z is the projection of the z-axis of the torso coordinate frame onto the z-axis of the global coordinate frame (this value varies from 1.0 to −1.0) depending on whether the Planar Walker's torso is upright or upside down. Δh is the height of the Planar Walker's torso above the feet. I[·] is the indicator function. v_x is the velocity along the x-axis.

Quadruped r = v_x + 0.05 n_z − 0.01 ‖u‖², where n_z is the projection of the z-axis of the torso coordinate frame onto the z-axis of the global coordinate frame (this value varies from 1.0 to −1.0) depending on whether the Quadruped is upright or upside down.

Humanoid r = min(v_x, v_max) − 0.005 (v_y² + v_z²) − 0.05 y² − 0.02 ‖u‖² + 0.02, where v_max is a cutoff for the velocity reward which we usually set to 4 m/s.
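The reward functions above translate directly into code. The sketch below is our own illustration with assumed variable names (u denotes the action/torque vector); it is not the environment code used in the experiments.

```python
import numpy as np

def planar_walker_reward(vx, nz, torso_above_feet, u):
    """r = 10*vx + 0.5*nz - |dh - 1.2| - 10*I[dh < 0.3] - 0.1*||u||^2."""
    return (10.0 * vx + 0.5 * nz - abs(torso_above_feet - 1.2)
            - 10.0 * float(torso_above_feet < 0.3) - 0.1 * np.sum(u ** 2))

def quadruped_reward(vx, nz, u):
    """r = vx + 0.05*nz - 0.01*||u||^2."""
    return vx + 0.05 * nz - 0.01 * np.sum(u ** 2)

def humanoid_reward(vx, vy, vz, y, u, v_max=4.0):
    """r = min(vx, v_max) - 0.005*(vy^2 + vz^2) - 0.05*y^2 - 0.02*||u||^2 + 0.02."""
    return (min(vx, v_max) - 0.005 * (vy ** 2 + vz ** 2)
            - 0.05 * y ** 2 - 0.02 * np.sum(u ** 2) + 0.02)

# Example: a walker moving forward upright with zero torques.
print(planar_walker_reward(vx=1.0, nz=0.98, torso_above_feet=1.1, u=np.zeros(6)))
```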
Published as a conference paper at ICLR 2018
# TRUST-PCL: AN OFF-POLICY TRUST REGION METHOD FOR CONTINUOUS CONTROL
# Oï¬r Nachum, Mohammad Norouzi, Kelvin Xu, & Dale Schuurmansâ {ofirnachum,mnorouzi,kelvinxx,schuurmans}@google.com Google Brain
# ABSTRACT
Trust region methods, such as TRPO, are often used to stabilize policy optimiza- tion algorithms in reinforcement learning (RL). While current trust region strate- gies are effective for continuous control, they typically require a large amount of on-policy interaction with the environment. To address this problem, we pro- pose an off-policy trust region method, Trust-PCL, which exploits an observation that the optimal policy and state values of a maximum reward objective with a relative-entropy regularizer satisfy a set of multi-step pathwise consistencies along any path. The introduction of relative entropy regularization allows Trust-PCL to maintain optimization stability while exploiting off-policy data to improve sample efï¬ciency. When evaluated on a number of continuous control tasks, Trust-PCL signiï¬cantly improves the solution quality and sample efï¬ciency of TRPO.1
# 1 INTRODUCTION
The goal of model-free reinforcement learning (RL) is to optimize an agentâs behavior policy through trial and error interaction with a black box environment. Value-based RL algorithms such as Q-learning (Watkins, 1989) and policy-based algorithms such as actor-critic (Konda & Tsitsiklis, 2000) have achieved well-known successes in environments with enumerable action spaces and pre- dictable but possibly complex dynamics, e.g., as in Atari games (Mnih et al., 2013; Van Hasselt et al., 2016; Mnih et al., 2016). However, when applied to environments with more sophisticated action spaces and dynamics (e.g., continuous control and robotics), success has been far more limited.
In an attempt to improve the applicability of Q-learning to continuous control, Silver et al. (2014) and Lillicrap et al. (2015) developed an off-policy algorithm DDPG, leading to promising results on continuous control environments. That said, current off-policy methods including DDPG often improve data efï¬ciency at the cost of optimization stability. The behaviour of DDPG is known to be highly dependent on hyperparameter selection and initialization (Metz et al., 2017); even when using optimal hyperparameters, individual training runs can display highly varying outcomes.
On the other hand, in an attempt to improve the stability and convergence speed of policy-based RL methods, Kakade (2002) developed a natural policy gradient algorithm based on Amari (1998), which subsequently led to the development of trust region policy optimization (TRPO) (Schulman et al., 2015). TRPO has shown strong empirical performance on difï¬cult continuous control tasks often outperforming value-based methods like DDPG. However, a major drawback is that such meth- ods are not able to exploit off-policy data and thus require a large amount of on-policy interaction with the environment, making them impractical for solving challenging real-world problems.
Efforts at combining the stability of trust region policy-based methods with the sample efï¬ciency of value-based methods have focused on using off-policy data to better train a value estimate, which can be used as a control variate for variance reduction (Gu et al., 2017a;b).
In this paper, we investigate an alternative approach to improving the sample efï¬ciency of trust region policy-based RL methods. We exploit the key fact that, under entropy regularization, the
*Also at the Department of Computing Science, University of Alberta, daes@ualberta.ca
1 An implementation of Trust-PCL is available at https://github.com/tensorflow/models/tree/master/research/pcl_rl
optimal policy and value function satisfy a set of pathwise consistency properties along any sam- pled path (Nachum et al., 2017), which allows both on and off-policy data to be incorporated in an actor-critic algorithm, PCL. The original PCL algorithm optimized an entropy regularized max- imum reward objective and was evaluated on relatively simple tasks. Here we extend the ideas of PCL to achieve strong results on standard, challenging continuous control benchmarks. The main observation is that by alternatively augmenting the maximum reward objective with a relative en- tropy regularizer, the optimal policy and values still satisfy a certain set of pathwise consistencies along any sampled trajectory. The resulting objective is equivalent to maximizing expected reward subject to a penalty-based constraint on divergence from a reference (i.e., previous) policy.
We exploit this observation to propose a new off-policy trust region algorithm, Trust-PCL, that is able to exploit off-policy data to train policy and value estimates. Moreover, we present a simple method for determining the coefï¬cient on the relative entropy regularizer to remain agnostic to reward scale, hence ameliorating the task of hyperparameter tuning. We ï¬nd that the incorporation of a relative entropy regularizer is crucial for good and stable performance. We evaluate Trust- PCL against TRPO, and observe that Trust-PCL is able to solve difï¬cult continuous control tasks, while improving the performance of TRPO both in terms of the ï¬nal reward achieved as well as sample-efï¬ciency.
# 2 RELATED WORK
Trust Region Methods. Gradient descent is the predominant optimization method for neural networks. A gradient descent step is equivalent to solving a trust region constrained optimization,
minimize_{dθ} ℓ(θ + dθ) ≈ ℓ(θ) + ∇ℓ(θ)^T dθ   s.t.   dθ^T dθ ≤ ε, (1)
which yields the locally optimal update dθ = −η∇ℓ(θ) such that η = √ε / ‖∇ℓ(θ)‖; hence by considering a Euclidean ball, gradient descent assumes the parameters lie in a Euclidean space.
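For completeness, the step from (1) to this update can be recovered with a Lagrange multiplier; the short derivation below is our own filling-in of that step and is not part of the original text.

```latex
\mathcal{L}(d\theta,\mu) = \nabla\ell(\theta)^{\top} d\theta + \mu\,(d\theta^{\top} d\theta - \epsilon)
\;\Rightarrow\; d\theta = -\tfrac{1}{2\mu}\,\nabla\ell(\theta);
\quad \text{the constraint is tight, so } \|d\theta\| = \sqrt{\epsilon}
\;\Rightarrow\; d\theta = -\eta\,\nabla\ell(\theta),\;\; \eta = \sqrt{\epsilon}\,/\,\|\nabla\ell(\theta)\|.
```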
However, in machine learning, particularly in the context of multi-layer neural network training, Euclidean geometry is not necessarily the best way to characterize proximity in parameter space. It is often more effective to define an appropriate Riemannian metric that respects the loss surface (Amari, 2012), which allows much steeper descent directions to be identified within a local neighborhood (e.g., Amari (1998); Martens & Grosse (2015)). Whenever the loss is defined in terms of a Bregman divergence between an (unknown) optimal parameter θ* and model parameter θ, i.e., ℓ(θ) = D_F(θ*, θ), it is natural to use the same divergence to form the trust region:
minimize_{dθ} D_F(θ*, θ + dθ)   s.t.   D_F(θ, θ + dθ) ≤ ε. (2)
The natural gradient (Amari, 1998) is a generalization of gradient descent where the Fisher information matrix F(θ) is used to define the local geometry of the parameter space around θ. If a parameter update is constrained by dθ^T F(θ) dθ ≤ ε, a descent direction of dθ = −ηF(θ)^{−1}∇ℓ(θ) is obtained. This geometry is especially effective for optimizing the log-likelihood of a conditional probabilistic model, where the objective is in fact the KL divergence D_KL(θ*, θ). The local optimization is,
minimize_{dθ} D_KL(θ*, θ + dθ)   s.t.   D_KL(θ, θ + dθ) ≈ dθ^T F(θ) dθ ≤ ε. (3)
The natural gradient approximates the trust region by D_KL(a, b) ≈ (a − b)^T F(a)(a − b), which is accurate up to a second order Taylor approximation. Previous work (Kakade, 2002; Bagnell & Schneider, 2003; Peters & Schaal, 2008; Schulman et al., 2015) has applied natural gradient to policy optimization, locally improving expected reward subject to variants of dθ^T F(θ) dθ ≤ ε. Recently, TRPO (Schulman et al., 2015; 2016) has achieved state-of-the-art results in continuous control by adding several approximations to the natural gradient to make nonlinear policy optimization feasible.
Another approach to trust region optimization is given by proximal gradient methods (Parikh et al., 2014). The class of proximal gradient methods most similar to our work are those that replace the hard constraint in (2) with a penalty added to the objective. These techniques have recently become popular in RL (Wang et al., 2016; Heess et al., 2017; Schulman et al., 2017b), although in terms of ï¬nal reward performance on continuous control benchmarks, TRPO is still considered to be the state-of-the-art.
Norouzi et al. (2016) make the observation that entropy regularized expected reward may be expressed as a reversed KL divergence D_KL(θ, θ*), which suggests that an alternative to the constraint in (3) should be used when such regularization is present:
minimize_{dθ} D_KL(θ + dθ, θ*)   s.t.   D_KL(θ + dθ, θ) ≈ dθ^T F(θ + dθ) dθ ≤ ε. (4)
Unfortunately, this update requires computing the Fisher matrix at the endpoint of the update. The use of F(θ) in previous work can be considered to be an approximation when entropy regularization is present, but it is not ideal, particularly if dθ is large. In this paper, by contrast, we demonstrate that the optimal dθ under the reverse KL constraint D_KL(θ + dθ, θ) ≤ ε can indeed be characterized. Defining the constraint in this way appears to be more natural and effective than that of TRPO.
Softmax Consistency. To comply with the information geometry over policy parameters, previous work has used the relative entropy (i.e., KL divergence) to regularize policy optimization; resulting in a softmax relationship between the optimal policy and state values (Peters et al., 2010; Azar et al., 2012; 2011; Fox et al., 2016; Rawlik et al., 2013) under single-step rollouts. Our work is unique in that we leverage consistencies over multi-step rollouts.
The existence of multi-step softmax consistencies has been noted by prior workâï¬rst by Nachum et al. (2017) in the presence of entropy regularization. The existence of the same consistencies with relative entropy has been noted by Schulman et al. (2017a). Our work presents multi-step con- sistency relations for a hybrid relative entropy plus entropy regularized expected reward objective, interpreting relative entropy regularization as a trust region constraint. This work is also distinct from prior work in that the coefï¬cient of relative entropy can be automatically determined, which we have found to be especially crucial in cases where the reward distribution changes dramatically during training.
Most previous work on softmax consistency (e.g., Fox et al. (2016); Azar et al. (2012); Nachum et al. (2017)) have only been evaluated on relatively simple tasks, including grid-world and discrete algo- rithmic environments. Rawlik et al. (2013) conducted evaluations on simple variants of the CartPole and Pendulum continuous control tasks. More recently, Haarnoja et al. (2017) showed that soft Q- learning (a single-step special case of PCL) can succeed on more challenging environments, such as a variant of the Swimmer task we consider below. By contrast, this paper presents a successful appli- cation of the softmax consistency concept to difï¬cult and standard continuous-control benchmarks, resulting in performance that is competitive with and in some cases beats the state-of-the-art.
# 3 NOTATION & BACKGROUND
We model an agent's behavior by a policy distribution π(a | s) over a set of actions (possibly discrete or continuous). At iteration t, the agent encounters a state s_t and performs an action a_t sampled from π(a | s_t). The environment then returns a scalar reward r_t ∼ r(s_t, a_t) and transitions to the next state s_{t+1} ∼ ρ(s_t, a_t). When formulating expectations over actions, rewards, and state transitions we will often omit the sampling distributions, π, r, and ρ, respectively.
Maximizing Expected Reward. The standard objective in RL is to maximize expected future discounted reward. We formulate this objective on a per-state basis recursively as OER(s, π) = E_{a,r,s'}[r + γ OER(s', π)]. (5) The overall, state-agnostic objective is the expected per-state objective when states are sampled from interactions with the environment:
Orr (8,7) = Ears [r + YOpr(sâ,7)] - (5) The overall, state-agnostic objective is the expected per-state objective when states are sampled from interactions with the environment:
OER(Ï) = Es[OER(s, Ï)].
(6)
Most policy-based algorithms, critic (Konda & Tsitsiklis, 2000), aim to optimize OER given a parameterized policy.
Path Consistency Learning (PCL). Inspired by Williams & Peng (1991), Nachum et al. (2017) augment the objective OER in (5) with a discounted entropy regularizer to derive an objective,
OENT(s, Ï) = OER(s, Ï) + Ï H(s, Ï) , where Ï â¥ 0 is a user-speciï¬ed temperature parameter that controls the degree of entropy regular- ization, and the discounted entropy H(s, Ï) is recursively deï¬ned as
H(s,7) = Ea,s[âlog (a | s) + yH(sâ, 7)] - (8)
3
(7)
Published as a conference paper at ICLR 2018
Note that the objective OENT(s, Ï) can then be re-expressed recursively as,
Ognt(s,7) = Eayr,sâ[r â T log r(a | s) + yOrnr(sâ, 7)] - (9)
Nachum et al. (2017) show that the optimal policy Ïâ for OENT and V â(s) = OENT(s, Ïâ) mutually satisfy a softmax temporal consistency constraint along any sequence of states s0, . . . , sd starting at s0 and a corresponding sequence of actions a0, . . . , adâ1:
d-1 V*(80) = E.|7"V"(sa) + 9 (ri ~ tT log" (ailsi)) | - (10) i=0
This observation led to the development of the PCL algorithm, which attempts to minimize squared error between the LHS and RHS of (10) to simultaneously optimize parameterized Ïθ and VÏ. Im- portantly, PCL is applicable to both on-policy and off-policy trajectories.
Trust Region Policy Optimization (TRPO). As noted, standard policy-based algorithms for max- imizing OER can be unstable and require small learning rates for training. To alleviate this issue, Schulman et al. (2015) proposed to perform an iterative trust region optimization to maximize OER. At each step, a prior policy ËÏ is used to sample a large batch of trajectories, then Ï is subsequently optimized to maximize OER while remaining within a constraint deï¬ned by the average per-state KL-divergence with ËÏ. That is, at each iteration TRPO solves the constrained optimization problem,
maximize Oge(t) 8.t. â Esnap[ KL (#(â|s) || t(â|s))] < â¬. e8D)
The prior policy is then replaced with the new policy Ï, and the process is repeated.
# 4 METHOD
To enable more stable training and better exploit the natural information geometry of the parameter space, we propose to augment the entropy regularized expected reward objective OENT in (7) with a discounted relative entropy trust region around a prior policy ËÏ,
maximize_π E_s[OENT(s, π)]  s.t.  E_s[G(s, π, π̃)] ≤ ε, (12)
where the discounted relative entropy is recursively deï¬ned as
G(s, π, π̃) = E_{a,s'}[ log π(a|s) − log π̃(a|s) + γ G(s', π, π̃) ]. (13)
This objective attempts to maximize entropy regularized expected reward while maintaining natural proximity to the previous policy. Although previous work has separately proposed to use relative entropy and entropy regularization, we ï¬nd that the two components serve different purposes, each of which is beneï¬cial: entropy regularization helps improve exploration, while the relative entropy improves stability and allows for a faster learning rate. This combination is a key novelty.
Using the method of Lagrange multipliers, we cast the constrained optimization problem in (12) into maximization of the following objective,
ORELENT(s, π) = OENT(s, π) − λ G(s, π, π̃). (14)
Again, the environment-wide objective is the expected per-state objective when states are sampled from interactions with the environment,
ORELENT(π) = E_s[ORELENT(s, π)]. (15)
4.1 PATH CONSISTENCY WITH RELATIVE ENTROPY
A key technical observation is that the ORELENT objective has a similar decomposition structure to OENT, and one can cast ORELENT as an entropy regularized expected reward objective with a set of transformed rewards, i.e.,
ORELENT(s, π) = ÕER(s, π) + (τ + λ) H(s, π), (16)
where ÕER(s, π) is an expected reward objective on a transformed reward distribution function r̃(s, a) = r(s, a) + λ log π̃(a|s). Thus, in what follows, we derive a corresponding form of the multi-step path consistency in (10). Let π* denote the optimal policy, defined as π* = argmax_π ORELENT(π). As in PCL (Nachum et al., 2017), this optimal policy may be expressed as
π*(a_t|s_t) = exp{ ( E_{r̃_t∼r̃(s_t,a_t), s_{t+1}}[ r̃_t + γ V*(s_{t+1}) ] − V*(s_t) ) / (τ + λ) }, (17)

where V* are the softmax state values defined recursively as
V*(s_t) = (τ + λ) log ∫_A exp{ E_{r̃_t∼r̃(s_t,a), s_{t+1}}[ r̃_t + γ V*(s_{t+1}) ] / (τ + λ) } da. (18)
We may re-arrange (17) to yield
V*(s_t) = E_{r̃_t∼r̃(s_t,a_t), s_{t+1}}[ r̃_t − (τ + λ) log π*(a_t|s_t) + γ V*(s_{t+1}) ] (19)
       = E_{r_t, s_{t+1}}[ r_t − (τ + λ) log π*(a_t|s_t) + λ log π̃(a_t|s_t) + γ V*(s_{t+1}) ]. (20)
This is a single-step temporal consistency which may be extended to multiple steps by further expanding V*(s_{t+1}) on the RHS using the same identity. Thus, in general we have the following softmax temporal consistency constraint along any sequence of states defined by a starting state s_t and a sequence of actions a_t, ..., a_{t+d−1}:

V*(s_t) = E[ γ^d V*(s_{t+d}) + Σ_{i=0}^{d−1} γ^i ( r_{t+i} − (τ + λ) log π*(a_{t+i}|s_{t+i}) + λ log π̃(a_{t+i}|s_{t+i}) ) ]. (21)
4.2 TRUST-PCL
We propose to train a parameterized policy π_θ and value estimate V_φ to satisfy the multi-step consistencies in (21). Thus, we define a consistency error for a sequence of states, actions, and rewards s_{t:t+d} ≡ (s_t, a_t, r_t, ..., s_{t+d−1}, a_{t+d−1}, r_{t+d−1}, s_{t+d}) sampled from the environment as
C(s_{t:t+d}, θ, φ) = −V_φ(s_t) + γ^d V_φ(s_{t+d}) + Σ_{i=0}^{d−1} γ^i ( r_{t+i} − (τ + λ) log π_θ(a_{t+i}|s_{t+i}) + λ log π_θ̃(a_{t+i}|s_{t+i}) ). (22)
We aim to minimize the squared consistency error on every sub-trajectory of length d. That is, the loss for a given batch of episodes (or sub-episodes) S = {s^{(k)}_{0:T_k}}_{k=1}^{B} is

L(S, θ, φ) = Σ_{k=1}^{B} Σ_{t=0}^{T_k−1} C(s^{(k)}_{t:t+d}, θ, φ)². (23)
We perform gradient descent on θ and φ to minimize this loss. In practice, we have found that it is beneficial to learn the parameter φ at least as fast as θ, and accordingly, given a mini-batch of episodes we perform a single gradient update on θ and possibly multiple gradient updates on φ (see Appendix for details).
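For illustration only, the sketch below computes the consistency error in (22) and the summed squared loss in (23) for a batch of sub-episodes; the dictionary layout, array names, and window truncation near episode ends are our own assumptions rather than the implementation used in the experiments.

```python
import numpy as np

def consistency_error(values, log_pi, log_pi_prior, rewards, gamma, tau, lam):
    """C(s_{t:t+d}) from Eq. (22) for one window of length d = len(rewards).

    values:        array of length d+1 with V_phi(s_t), ..., V_phi(s_{t+d})
    log_pi:        log pi_theta(a_{t+i} | s_{t+i}) for i = 0..d-1
    log_pi_prior:  log pi_theta_tilde(a_{t+i} | s_{t+i}) for i = 0..d-1
    """
    d = len(rewards)
    discounts = gamma ** np.arange(d)
    transformed = rewards - (tau + lam) * log_pi + lam * log_pi_prior
    return -values[0] + gamma ** d * values[d] + np.sum(discounts * transformed)

def trust_pcl_loss(batch, gamma, tau, lam, d):
    """Squared consistency loss (23) summed over all windows of up to length d.
    Each element of `batch` is a dict of per-step arrays; `values` has length T+1."""
    loss = 0.0
    for ep in batch:
        T = len(ep["rewards"])
        for t in range(T):
            w = min(d, T - t)  # truncate windows near the episode end
            loss += consistency_error(
                ep["values"][t:t + w + 1], ep["log_pi"][t:t + w],
                ep["log_pi_prior"][t:t + w], ep["rewards"][t:t + w],
                gamma, tau, lam) ** 2
    return loss
```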
In principle, the mini-batch S may be taken from either on-policy or off-policy trajectories. In our implementation, we utilized a replay buffer prioritized by recency. As episodes (or sub-episodes) are sampled from the environment they are placed in a replay buffer and a priority p(s_{0:T}) is given to a trajectory s_{0:T} equivalent to the current training step. Then, to sample a batch for training, B episodes are sampled from the replay buffer proportional to exponentiated priority exp{β p(s_{0:T})} for some hyperparameter β ≥ 0.
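A minimal sketch of such a recency-prioritized buffer, assuming each stored episode carries the training step at which it was collected as its priority; the class name and capacity are illustrative choices.

```python
import numpy as np

class RecencyReplayBuffer:
    def __init__(self, beta, capacity=10000):
        self.beta, self.capacity = beta, capacity
        self.episodes, self.priorities = [], []

    def add(self, episode, train_step):
        # priority p(s_{0:T}) is the current training step (more recent = higher)
        self.episodes.append(episode)
        self.priorities.append(train_step)
        if len(self.episodes) > self.capacity:
            self.episodes.pop(0)
            self.priorities.pop(0)

    def sample(self, num_episodes):
        p = np.array(self.priorities, dtype=np.float64)
        w = np.exp(self.beta * (p - p.max()))   # subtract max for numerical stability
        w /= w.sum()
        idx = np.random.choice(len(self.episodes), size=num_episodes, p=w)
        return [self.episodes[i] for i in idx]
```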
For the prior policy ÏËθ, we use a lagged geometric mean of the parameters. At each training step, we update Ëθ â αËθ + (1 â α)θ. Thus on average our training scheme attempts to maximize entropy regularized expected reward while penalizing divergence from a policy roughly 1/(1 â α) training steps in the past.
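Concretely, the prior-policy update is just an exponential moving average of the online parameters; a sketch with parameters stored in a dictionary of arrays (the layout is an assumption):

```python
def update_prior_policy(theta_tilde, theta, alpha=0.99):
    """theta_tilde <- alpha * theta_tilde + (1 - alpha) * theta, applied per parameter.
    On average the prior lags the online policy by roughly 1 / (1 - alpha) steps."""
    return {k: alpha * theta_tilde[k] + (1.0 - alpha) * theta[k] for k in theta}
```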
4.3 AUTOMATIC TUNING OF THE LAGRANGE MULTIPLIER λ
The use of a relative entropy regularizer as a penalty rather than a constraint introduces several difï¬culties. The hyperparameter λ must necessarily adapt to the distribution of rewards. Thus, λ must be tuned not only to each environment but also during training on a single environment, since the observed reward distribution changes as the agentâs behavior policy improves. Using a constraint form of the regularizer is more desirable, and others have advocated its use in practice (Schulman et al., 2015) speciï¬cally to robustly allow larger updates during training.
To this end, we propose to redirect the hyperparameter tuning from λ to ε. Specifically, we present a method which, given a desired hard constraint on the relative entropy defined by ε, approximates the equivalent penalty coefficient λ(ε). This is a key novelty of our work and is distinct from previous attempts at automatically tuning a regularizing coefficient, which iteratively increase and decrease the coefficient based on observed training behavior (Schulman et al., 2017b; Heess et al., 2017).
We restrict our analysis to the undiscounted setting γ = 1 with entropy regularizer Ï = 0. Addi- tionally, we assume deterministic, ï¬nite-horizon environment dynamics. An additional assumption we make is that the expected KL-divergence over states is well-approximated by the KL-divergence starting from the unique initial state s0. Although in our experiments these restrictive assumptions are not met, we still found our method to perform well for adapting λ during training.
In this setting the optimal policy of (14) is proportional to exponentiated scaled reward. Speciï¬cally, for a full episode s0:T = (s0, a0, r0, . . . , sT â1, aT â1, rT â1, sT ), we have
π*(s_{0:T}) ∝ π̃(s_{0:T}) exp{ R(s_{0:T}) / λ }, (24)
where π̃(s_{0:T}) = Π_{t=0}^{T−1} π̃(a_t|s_t) and R(s_{0:T}) = Σ_{t=0}^{T−1} r_t. The normalization factor of π* is

Z = E_{s_{0:T}∼π̃}[ exp{ R(s_{0:T}) / λ } ]. (25)
We would like to approximate the trajectory-wide KL-divergence between π* and π̃. We may express the KL-divergence analytically:
KL(π* ∥ π̃) = E_{s_{0:T}∼π*}[ log( π*(s_{0:T}) / π̃(s_{0:T}) ) ] (26)
= E_{s_{0:T}∼π*}[ R(s_{0:T}) / λ − log Z ] (27)
= −log Z + E_{s_{0:T}∼π̃}[ ( π*(s_{0:T}) / π̃(s_{0:T}) ) · R(s_{0:T}) / λ ] (28)
= −log Z + E_{s_{0:T}∼π̃}[ ( R(s_{0:T}) / λ ) exp{ R(s_{0:T}) / λ − log Z } ]. (29)
Since all expectations are with respect to π̃, this quantity is tractable to approximate given episodes sampled from π̃.
Therefore, in Trust-PCL, given a set of episodes sampled from the prior policy π_θ̃ and a desired maximum divergence ε, we can perform a simple line search to find a suitable λ(ε) which yields KL(π* ∥ π_θ̃) as close as possible to ε. The preceding analysis provided a method to determine λ(ε) given a desired maximum divergence ε. However, there is still a question of whether ε should change during training. Indeed, as episodes may possibly increase in length, KL(π* ∥ π̃) naturally increases when compared to the average per-state KL(π*(·|s) ∥ π̃(·|s)), and vice versa for decreasing length. Thus, in practice, given an ε and a set of sampled episodes S = {s^{(k)}_{0:T_k}}_{k=1}^{N}, we approximate the best λ which yields a maximum divergence of (ε/N) Σ_{k=1}^{N} T_k. This makes it so that ε corresponds more to a constraint on the length-averaged KL-divergence.
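As a concrete illustration, the sketch below estimates the trajectory-wide KL in (29) from episode returns gathered under the prior policy and bisects over λ to match a target divergence; the helper names, the geometric bisection, and its bounds are our own assumptions, not the exact procedure used in the experiments.

```python
import numpy as np

def estimate_kl(episode_returns, lam):
    """Monte-Carlo estimate of Eq. (29) from returns R(s_{0:T}) of episodes
    sampled under the prior policy pi_tilde."""
    r = np.asarray(episode_returns, dtype=np.float64) / lam
    log_z = r.max() + np.log(np.mean(np.exp(r - r.max())))   # stabilized log of Eq. (25)
    return -log_z + np.mean(r * np.exp(r - log_z))

def tune_lambda(episode_returns, target_kl, lo=1e-4, hi=1e4, iters=50):
    """Bisection on lambda: the KL in (29) decreases as lambda increases."""
    for _ in range(iters):
        mid = np.sqrt(lo * hi)                                # geometric bisection
        if estimate_kl(episode_returns, mid) > target_kl:
            lo = mid        # KL too large -> increase lambda
        else:
            hi = mid
    return np.sqrt(lo * hi)
```

Here `target_kl` would be set to ε times the average episode length, following the length-averaged interpretation discussed above.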
To avoid incurring a prohibitively large number of interactions with the environment for each pa- rameter update, in practice we use the last 100 episodes as the set of sampled episodes S. While
this is not exactly the same as sampling episodes from π_θ̃, it is not too far off since π_θ̃ is a lagged version of the online policy π_θ. Moreover, we observed this protocol to work well in practice. A more sophisticated and accurate protocol may be derived by weighting the episodes according to the importance weights corresponding to their true sampling distribution.
# 5 EXPERIMENTS
We evaluate Trust-PCL against TRPO on a number of benchmark tasks. We choose TRPO as a base- line since it is a standard algorithm known to achieve state-of-the-art performance on the continuous control tasks we consider (see e.g., leaderboard results on the OpenAI Gym website (Brockman et al., 2016)). We ï¬nd that Trust-PCL can match or improve upon TRPOâs performance in terms of both average reward and sample efï¬ciency.
5.1 SETUP
We chose a number of control tasks available from OpenAI Gym (Brockman et al., 2016). The ï¬rst task, Acrobot, is a discrete-control task, while the remaining tasks (HalfCheetah, Swimmer, Hopper, Walker2d, and Ant) are well-known continuous-control tasks utilizing the MuJoCo envi- ronment (Todorov et al., 2012).
For TRPO we trained using batches of Q = 25,000 steps (12,500 for Acrobot), which is the approximate batch size used by other implementations (Duan et al., 2016; Schulman, 2017). Thus, at each training iteration, TRPO samples 25,000 steps using the policy π_θ̃ and then takes a single step within a KL-ball to yield a new π_θ.
Trust-PCL is off-policy, so to evaluate its performance we alternate between collecting experience and training on batches of experience sampled from the replay buffer. Specifically, we alternate between collecting P = 10 steps from the environment and performing a single gradient step based on a batch of size Q = 64 sub-episodes of length P from the replay buffer, with a recency weight of β = 0.001 on the sampling distribution of the replay buffer. To maintain stability we use α = 0.99 and we modified the loss from squared loss to Huber loss on the consistency error. Since our policy is parameterized by a unimodal Gaussian, it is impossible for it to satisfy all path consistencies, and so we found this crucial for stability.
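For reference, a minimal sketch of the Huber penalty applied to the consistency error in place of its square; the threshold value is an assumption, as the text does not specify it.

```python
import numpy as np

def huber(c, delta=1.0):
    """Huber penalty on the consistency error C, used instead of C**2 for stability."""
    c = np.asarray(c, dtype=np.float64)
    quadratic = 0.5 * c ** 2
    linear = delta * (np.abs(c) - 0.5 * delta)
    return np.where(np.abs(c) <= delta, quadratic, linear)
```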
For each of the variants and for each environment, we performed a hyperparameter search to ï¬nd the best hyperparameters. The plots presented here show the reward achieved during training on the best hyperparameters averaged over the best 4 seeds of 5 randomly seeded training runs. Note that this reward is based on greedy actions (rather than random sampling).
Experiments were performed using Tensorï¬ow (Abadi et al., 2016). Although each training step of Trust-PCL (a simple gradient step) is considerably faster than TRPO, we found that this does not have an overall effect on the run time of our implementation, due to a combination of the fact that each environment step is used in multiple training steps of Trust-PCL and that a majority of the run time is spent interacting with the environment. A detailed description of our implementation and hyperparameter search is available in the Appendix.
5.2 RESULTS
We present the reward over training of Trust-PCL and TRPO in Figure 1. We ï¬nd that Trust-PCL can match or beat the performance of TRPO across all environments in terms of both ï¬nal reward and sample efï¬ciency. These results are especially signiï¬cant on the harder tasks (Walker2d and Ant). We additionally present our results compared to other published results in Table 1. We ï¬nd that even when comparing across different implementations, Trust-PCL can match or beat the state-of-the-art.
5.2.1 HYPERPARAMETER ANALYSIS
The most important hyperparameter in our method is ε, which determines the size of the trust region and thus has a critical role in the stability of the algorithm. To showcase this effect, we present the reward during training for several different values of ε in Figure 2. As ε increases, instability increases as well, eventually having an adverse effect on the agent's ability to achieve optimal reward.
(Figure 1 panels: Acrobot, HalfCheetah, Swimmer, Hopper, Walker2d, Ant.)
Figure 1: The results of Trust-PCL against a TRPO baseline. Each plot shows average greedy reward with single standard deviation error intervals capped at the min and max across 4 best of 5 randomly seeded training runs after choosing best hyperparameters. The x-axis shows millions of environment steps. We observe that Trust-PCL is consistently able to match and, in many cases, beat TRPOâs performance both in terms of reward and sample efï¬ciency.
(Figure 2 panels: Hopper, Walker2d.)
Figure 2: The results of Trust-PCL across several values of ε, defining the size of the trust region. Each plot shows average greedy reward across 4 best of 5 randomly seeded training runs after choosing best hyperparameters. The x-axis shows millions of environment steps. We observe that instability increases with ε, thus concluding that the use of trust region is crucial.
Note that standard PCL (Nachum et al., 2017) corresponds to ε → ∞ (that is, λ = 0). Therefore, standard PCL would fail in these environments, and the use of a trust region is crucial.
The main advantage of Trust-PCL over existing trust region methods for continuous control is its ability to learn in an off-policy manner. The degree to which Trust-PCL is off-policy is determined by a combination of the hyperparameters α, β, and P. To evaluate the importance of training off-policy, we evaluate Trust-PCL with a hyperparameter setting that is more on-policy. We set α = 0.95, β = 0.1, and P = 1,000. In this setting, we also use large batches of Q = 25 episodes of length P (a total of 25,000 environment steps per batch). Figure 3 shows the results of Trust-PCL with our original parameters and this new setting. We note a dramatic advantage in sample efficiency when using off-policy training. Although Trust-PCL (on-policy) can achieve state-of-the-art reward performance, it requires an exorbitant amount of experience. On the other hand, Trust-PCL (off-
(Figure 3 panels: Hopper, Walker2d.)
Figure 3: The results of Trust-PCL varying the degree of on/off-policy. We see that Trust-PCL (on-policy) has a behavior similar to TRPO, achieving good ï¬nal reward but requiring an exorbitant number of experience collection. When collecting less experience per training step in Trust-PCL (off-policy), we are able to improve sample efï¬ciency while still achieving a competitive ï¬nal re- ward.
Domain        TRPO-GAE   TRPO (rllab)   TRPO (ours)   Trust-PCL   IPG
HalfCheetah   4871.36    2889           4343.6        7057.1      4767
Swimmer       137.25     –              288.1         297.0       –
Hopper        3765.78    –              3516.7        3804.9      –
Walker2d      6028.73    1487           2838.4        5027.2      3047
Ant           2918.25    1520           4347.5        6104.2      4415
Table 1: Results for best average reward in the ï¬rst 10M steps of training for our implementations (TRPO (ours) and Trust-PCL) and external implementations. TRPO-GAE are results of Schulman (2017) available on the OpenAI Gym website. TRPO (rllab) and IPG are taken from Gu et al. (2017b). These results are each on different setups with different hyperparameter searches and in some cases different evaluation protocols (e.g.,TRPO (rllab) and IPG were run with a simple linear value network instead of the two-hidden layer network we use). Thus, it is not possible to make any deï¬nitive claims based on this data. However, we do conclude that our results are overall competitive with state-of-the-art external implementations.
policy) can be competitive in terms of reward while providing a signiï¬cant improvement in sample efï¬ciency.
One last hyperparameter is τ, determining the degree of exploration. Anecdotally, we found τ to not be of high importance for the tasks we evaluated. Indeed many of our best results use τ = 0. Including τ > 0 had a marginal effect, at best. The reason for this is likely due to the tasks themselves. Indeed, other works which focus on exploration in continuous control have found the need to propose exploration-advantageous variants of these standard benchmarks (Haarnoja et al., 2017; Houthooft et al., 2016).
# 6 CONCLUSION
We have presented Trust-PCL, an off-policy algorithm employing a relative-entropy penalty to im- pose a trust region on a maximum reward objective. We found that Trust-PCL can perform well on a set of standard control tasks, improving upon TRPO both in terms of average reward and sample efï¬- ciency. Our best results on Trust-PCL are able to maintain the stability and solution quality of TRPO while approaching the sample-efï¬ciency of value-based methods (see e.g., Metz et al. (2017)). This gives hope that the goal of achieving both stability and sample-efï¬ciency without trading-off one for the other is attainable in a single unifying RL algorithm.
# 7 ACKNOWLEDGMENT
We thank Matthew Johnson, Luke Metz, Shane Gu, and the Google Brain team for insightful com- ments and discussions.
# REFERENCES
Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al. Tensorflow: A system for large-scale machine learning. arXiv:1605.08695, 2016.
Shun-Ichi Amari. Natural gradient works efï¬ciently in learning. Neural Comput., 10, 1998.
Shun-Ichi Amari. Differential-geometrical methods in statistics, volume 28. Springer Science & Business Media, 2012.
Mohammad Gheshlaghi Azar, Vicenc¸ G´omez, and Hilbert J Kappen. Dynamic policy programming with function approximation. AISTATS, 2011.
Mohammad Gheshlaghi Azar, Vicenc¸ G´omez, and Hilbert J Kappen. Dynamic policy programming. JMLR, 13, 2012.
J Andrew Bagnell and Jeff Schneider. Covariant policy search. 2003.
Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. OpenAI Gym. arXiv:1606.01540, 2016.
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, and Pieter Abbeel. Benchmarking deep reinforcement learning for continuous control. 2016.
Roy Fox, Ari Pakman, and Naftali Tishby. G-learning: Taming the noise in reinforcement learning via soft updates. Uncertainty in Artiï¬cal Intelligence, 2016. URL http://arxiv.org/abs/ 1512.08562.
Shixiang Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard E Turner, and Sergey Levine. Q-prop: Sample-efï¬cient policy gradient with an off-policy critic. ICLR, 2017a.
Shixiang Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard E Turner, Bernhard Sch¨olkopf, and Sergey Levine. Interpolated policy gradient: Merging on-policy and off-policy gradient estimation for deep reinforcement learning. arXiv preprint arXiv:1706.00387, 2017b.
Tuomas Haarnoja, Haoran Tang, Pieter Abbeel, and Sergey Levine. Reinforcement learning with deep energy-based policies. arXiv preprint arXiv:1702.08165, 2017.
Nicolas Heess, Srinivasan Sriram, Jay Lemmon, Josh Merel, Greg Wayne, Yuval Tassa, Tom Erez, Ziyu Wang, Ali Eslami, Martin Riedmiller, et al. Emergence of locomotion behaviours in rich environments. arXiv preprint arXiv:1707.02286, 2017.
Rein Houthooft, Xi Chen, Yan Duan, John Schulman, Filip De Turck, and Pieter Abbeel. Vime: Variational information maximizing exploration. In Advances in Neural Information Processing Systems, pp. 1109â1117, 2016.
Sham M Kakade. A natural policy gradient. In NIPS, 2002.
Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. ICLR, 2015.
Vijay R Konda and John N Tsitsiklis. Actor-critic algorithms, 2000.
Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015.
James Martens and Roger Grosse. Optimizing neural networks with kronecker-factored approximate curvature. In ICML, 2015.
Luke Metz, Julian Ibarz, Navdeep Jaitly, and James Davidson. Discrete sequential prediction of continuous actions for deep RL. CoRR, abs/1705.05035, 2017. URL http://arxiv.org/ abs/1705.05035.
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wier- stra, and Martin A. Riedmiller. Playing atari with deep reinforcement learning. arXiv:1312.5602, 2013.
Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy P Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. ICML, 2016.
Oï¬r Nachum, Mohammad Norouzi, Kelvin Xu, and Dale Schuurmans. Bridging the gap between value and policy based reinforcement learning. CoRR, abs/1702.08892, 2017. URL http: //arxiv.org/abs/1702.08892.
Mohammad Norouzi, Samy Bengio, Zhifeng Chen, Navdeep Jaitly, Mike Schuster, Yonghui Wu, and Dale Schuurmans. Reward augmented maximum likelihood for neural structured prediction. NIPS, 2016.
Neal Parikh, Stephen Boyd, et al. Proximal algorithms. Foundations and Trends®) in Optimization, 1(3):127-239, 2014.
Jan Peters and Stefan Schaal. Reinforcement learning of motor skills with policy gradients. Neural networks, 21, 2008.
Jan Peters, Katharina Mulling, and Yasemin Altun. Relative entropy policy search. In AAAI, 2010.
Konrad Rawlik, Marc Toussaint, and Sethu Vijayakumar. On stochastic optimal control and rein- forcement learning by approximate inference. In Twenty-Third International Joint Conference on Artiï¬cial Intelligence, 2013.
John Schulman. Modular rl. http://github.com/joschu/modular_rl, 2017. Accessed: 2017-06-01.
John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. Trust region policy optimization. In ICML, 2015.
John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, and Pieter Abbeel. High- dimensional continuous control using generalized advantage estimation. ICLR, 2016.
John Schulman, Pieter Abbeel, and Xi Chen. Equivalence between policy gradients and soft q- learning. arXiv preprint arXiv:1704.06440, 2017a.
John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017b.
David Silver, Guy Lever, Nicolas Heess, Thomas Degris, Daan Wierstra, and Martin Riedmiller. Deterministic policy gradient algorithms. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pp. 387â395, 2014.
Emanuel Todorov, Tom Erez, and Yuval Tassa. Mujoco: A physics engine for model-based control. In Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, pp. 5026â 5033. IEEE, 2012.
Hado Van Hasselt, Arthur Guez, and David Silver. Deep reinforcement learning with double q- learning. AAAI, 2016.
Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, and Nando de Freitas. Sample efï¬cient actor-critic with experience replay. arXiv preprint arXiv:1611.01224, 2016.
Christopher John Cornish Hellaby Watkins. Learning from delayed rewards. PhD thesis, University of Cambridge England, 1989.
Ronald J Williams and Jing Peng. Function optimization using connectionist reinforcement learning algorithms. Connection Science, 1991.
A IMPLEMENTATION BENEFITS OF TRUST-PCL
We have already highlighted the ability of Trust-PCL to use off-policy data to stably train both a parameterized policy and value estimate, which sets it apart from previous methods. We have also noted the ease with which exploration can be incorporated through the entropy regularizer. We elaborate on several additional beneï¬ts of Trust-PCL.
Compared to TRPO, Trust-PCL is much easier to implement. Standard TRPO implementations perform second-order gradient calculations on the KL-divergence to construct a Fisher information matrix (more speciï¬cally a vector product with the inverse Fisher information matrix). This yields a vector direction for which a line search is subsequently employed to ï¬nd the optimal step. Compare this to Trust-PCL which employs simple gradient descent. This makes implementation much more straightforward and easily realizable within standard deep learning frameworks.
Even if one replaces the constraint on the average KL-divergence of TRPO with a simple regu- larization penalty (as in proximal policy gradient methods (Schulman et al., 2017b; Wang et al., 2016)), optimizing the resulting objective requires computing the gradient of the KL-divergence. In Trust-PCL, there is no such necessity. The per-state KL-divergence need not have an analyt- ically computable gradient. In fact, the KL-divergence need not have a closed form at all. The only requirement of Trust-PCL is that the log-density be analytically computable. This opens up the possible policy parameterizations to a much wider class of functions. While continuous control has traditionally used policies parameterized by unimodal Gaussians, with Trust-PCL the policy can be replaced with something much more expressiveâfor example, mixtures of Gaussians or auto- regressive policies as in Metz et al. (2017).
We have yet to fully explore these additional beneï¬ts in this work, but we hope that future investi- gations can exploit the ï¬exibility and ease of implementation of Trust-PCL to further the progress of RL in continuous control environments.
# B EXPERIMENTAL SETUP
We describe in detail the experimental setup regarding implementation and hyperparameter search.
# B.1 ENVIRONMENTS
In Acrobot, episodes were cut-off at step 500. For the remaining environments, episodes were cut- off at step 1, 000.
Acrobot, HalfCheetah, and Swimmer are all non-terminating environments. Thus, for these envi- ronments, each episode had equal length and each batch contained the same number of episodes. Hopper, Walker2d, and Ant are environments that can terminate the agent. Thus, for these environ- ments, the batch size throughout training remained constant in terms of steps but not in terms of episodes.
There exists an additional common MuJoCo task called Humanoid. We found that neither our implementation of TRPO nor Trust-PCL could make more than negligible headway on this task, and so omit it from the results. We are aware that TRPO with the addition of GAE and enough ï¬ne- tuning can be made to achieve good results on Humanoid (Schulman et al., 2016). We decided to not pursue a GAE implementation to keep a fair comparison between variants. Trust-PCL can also be made to incorporate an analogue to GAE (by maintaining consistencies at varying time scales), but we leave this to future work.
IMPLEMENTATION DETAILS
We use fully-connected feed-forward neural networks to represent both policy and value.
The policy Ïθ is represented by a neural network with two hidden layers of dimension 64 with tanh activations. At time step t, the network is given the observation st. It produces a vector µt, which is combined with a learnable (but t-agnostic) parameter ξ to parametrize a unimodal Gaussian with mean µt and standard deviation exp(ξ). The next action at is sampled randomly from this Gaussian.
The value network V_φ is represented by a neural network with two hidden layers of dimension 64 with tanh activations. At time step t the network is given the observation s_t and the component-wise squared observation s_t ⊙ s_t. It produces a single scalar value.
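To make the parameterization concrete, below is a small NumPy sketch of the described policy and value networks (two hidden layers of width 64 with tanh activations, a state-independent learnable log-standard-deviation ξ, and a value network fed [s, s ⊙ s]); the weight initialization, example dimensions, and absence of a training loop are simplifications, and the actual experiments used TensorFlow.

```python
import numpy as np

def mlp(x, weights):
    """Two hidden layers of width 64 with tanh activations, linear output."""
    h = np.tanh(weights["W1"] @ x + weights["b1"])
    h = np.tanh(weights["W2"] @ h + weights["b2"])
    return weights["W3"] @ h + weights["b3"]

def init_weights(in_dim, out_dim, hidden=64, rng=np.random.default_rng(0)):
    def layer(m, n):
        return rng.normal(scale=1.0 / np.sqrt(n), size=(m, n))
    return {"W1": layer(hidden, in_dim), "b1": np.zeros(hidden),
            "W2": layer(hidden, hidden), "b2": np.zeros(hidden),
            "W3": layer(out_dim, hidden), "b3": np.zeros(out_dim)}

obs_dim, act_dim = 11, 3                      # illustrative dimensions
policy_w = init_weights(obs_dim, act_dim)
value_w = init_weights(2 * obs_dim, 1)        # value net sees [s, s * s]
xi = np.zeros(act_dim)                        # learnable log standard deviation

def sample_action(s, rng=np.random.default_rng(1)):
    mu = mlp(s, policy_w)
    return mu + np.exp(xi) * rng.standard_normal(act_dim)

def value(s):
    return float(mlp(np.concatenate([s, s * s]), value_w))
```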
# B.2.1 TRPO LEARNING
At each training iteration, both the policy and value parameters are updated. The policy is trained by performing a trust region step according to the procedure described in Schulman et al. (2015).
The value parameters at each step are solved using an LBFGS optimizer. To avoid instability, the value parameters are solved to fit a mixture of the empirical values and the expected values. That is, we determine φ to minimize Σ_{s∈batch} ( V_φ(s) − κ V̂(s) − (1 − κ) V_φ̃(s) )², where V̂(s) is the empirical value and φ̃ is the previous value parameterization. We use κ = 0.9. This method for training φ is according to that used in Schulman (2017).
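A small sketch of the blended target used in this fit (array names are assumptions):

```python
import numpy as np

def mixed_value_targets(empirical_values, previous_values, kappa=0.9):
    """Targets kappa * V_hat(s) + (1 - kappa) * V_phi_tilde(s) for fitting V_phi."""
    return kappa * np.asarray(empirical_values) + (1.0 - kappa) * np.asarray(previous_values)
```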
B.2.2 TRUST-PCL LEARNING
At each training iteration, both the policy and value parameters are updated. The speciï¬c updates are slightly different between Trust-PCL (on-policy) and Trust-PCL (off-policy).
For Trust-PCL (on-policy), the policy is trained by taking a single gradient step using the Adam optimizer (Kingma & Ba, 2015) with learning rate 0.001. The value network update is inspired by that used in TRPO: we perform 5 gradient steps with learning rate 0.001, calculated with regard to a mix between the empirical values and the expected values according to the previous φ̃. We use κ = 0.95.
For Trust-PCL (off-policy), both the policy and value parameters are updated in a single step using the Adam optimizer with learning rate 0.0001. For this variant, we also utilize a target value network (lagged at the same rate as the target policy network) to replace the value estimate at the ï¬nal state for each path. We do not mix between empirical and expected values.
B.3 HYPERPARAMETER SEARCH
We found the most crucial hyperparameters for effective learning in both TRPO and Trust-PCL to be ε (the constraint defining the size of the trust region) and d (the rollout determining how to evaluate the empirical value of a state). For TRPO we performed a grid search over ε ∈ {0.01, 0.02, 0.05, 0.1}, d ∈ {10, 50}. For Trust-PCL we performed a grid search over ε ∈ {0.001, 0.002, 0.005, 0.01}, d ∈ {10, 50}. For Trust-PCL we also experimented with the value of τ, either keeping it at a constant 0 (thus, no exploration) or decaying it from 0.1 to 0.0 by a smoothed exponential rate of 0.1 every 2,500 training iterations.
We ï¬x the discount to γ = 0.995 for all environments.
# C PSEUDOCODE
A simpliï¬ed pseudocode for Trust-PCL is presented in Algorithm 1.
# Algorithm 1 Trust-PCL
Input: Environment ENV, trust region constraint ε, learning rates η_π, η_v, discount factor γ, rollout d, batch size Q, collect steps per train step P, number of training steps N, replay buffer RB with exponential lag β, lag on prior policy α.

function Gradients({s^{(k)}_{t_k:t_k+P}}_{k=1}^{B})
  // C is the consistency error defined in Equation 22
  Compute Δθ = Σ_{k=1}^{B} Σ_{p=0}^{P−1} C(s^{(k)}_{t_k+p:t_k+p+d}, θ, φ) ∇_θ C(s^{(k)}_{t_k+p:t_k+p+d}, θ, φ).
  Compute Δφ = Σ_{k=1}^{B} Σ_{p=0}^{P−1} C(s^{(k)}_{t_k+p:t_k+p+d}, θ, φ) ∇_φ C(s^{(k)}_{t_k+p:t_k+p+d}, θ, φ).
  Return Δθ, Δφ
end function
Initialize θ, φ, λ, set θ̃ = θ. Initialize empty replay buffer RB(β).
for i = 0 to N − 1 do
  // Collect
  Sample P steps s_{t:t+P} ∼ π_θ on ENV.
  Insert s_{t:t+P} into RB.
  // Train
  Sample batch {s^{(k)}_{t_k:t_k+P}}_{k=1}^{B} from RB to contain a total of Q transitions (B ≈ Q/P).
  Δθ, Δφ = Gradients({s^{(k)}_{t_k:t_k+P}}_{k=1}^{B}).
  Update θ ← θ − η_π Δθ.
  Update φ ← φ − η_v Δφ.
  // Update auxiliary variables
  Update θ̃ = αθ̃ + (1 − α)θ.
  Update λ in terms of ε according to Section 4.3.
end for
14 | {
"id": "1605.08695"
} |
1707.01495 | Hindsight Experience Replay | Dealing with sparse rewards is one of the biggest challenges in Reinforcement
Learning (RL). We present a novel technique called Hindsight Experience Replay
which allows sample-efficient learning from rewards which are sparse and binary
and therefore avoid the need for complicated reward engineering. It can be
combined with an arbitrary off-policy RL algorithm and may be seen as a form of
implicit curriculum.
We demonstrate our approach on the task of manipulating objects with a
robotic arm. In particular, we run experiments on three different tasks:
pushing, sliding, and pick-and-place, in each case using only binary rewards
indicating whether or not the task is completed. Our ablation studies show that
Hindsight Experience Replay is a crucial ingredient which makes training
possible in these challenging environments. We show that our policies trained
on a physics simulation can be deployed on a physical robot and successfully
complete the task. | http://arxiv.org/pdf/1707.01495 | Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, Pieter Abbeel, Wojciech Zaremba | cs.LG, cs.AI, cs.NE, cs.RO | null | null | cs.LG | 20170705 | 20180223 | 8 1 0 2
b e F 3 2 ] G L . s c [
3 v 5 9 4 1 0 . 7 0 7 1 : v i X r a
# Hindsight Experience Replay
Marcin Andrychowicz*, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, Pieter Abbeel†, Wojciech Zaremba† OpenAI
# Abstract
Dealing with sparse rewards is one of the biggest challenges in Reinforcement Learning (RL). We present a novel technique called Hindsight Experience Replay which allows sample-efficient learning from rewards which are sparse and binary and therefore avoid the need for complicated reward engineering. It can be com- bined with an arbitrary off-policy RL algorithm and may be seen as a form of implicit curriculum.
We demonstrate our approach on the task of manipulating objects with a robotic arm. In particular, we run experiments on three different tasks: pushing, sliding, and pick-and-place, in each case using only binary rewards indicating whether or not the task is completed. Our ablation studies show that Hindsight Experience Replay is a crucial ingredient which makes training possible in these challenging environments. We show that our policies trained on a physics simulation can be deployed on a physical robot and successfully complete the task. The video presenting our experiments is available at https://goo.gl/SMrQnI.
# 1 Introduction
Reinforcement learning (RL) combined with neural networks has recently led to a wide range of successes in learning policies for sequential decision-making problems. This includes simulated environments, such as playing Atari games (Mnih et al., 2015), and defeating the best human player at the game of Go (Silver et al., 2016), as well as robotic tasks such as helicopter control (Ng et al., 2006), hitting a baseball (Peters and Schaal, 2008), screwing a cap onto a bottle (Levine et al., 2015), or door opening (Chebotar et al., 2016).
However, a common challenge, especially for robotics, is the need to engineer a reward function that not only reflects the task at hand but is also carefully shaped (Ng et al., 1999) to guide the policy optimization. For example, Popov et al. (2017) use a cost function consisting of five relatively complicated terms which need to be carefully weighted in order to train a policy for stacking a brick on top of another one. The necessity of cost engineering limits the applicability of RL in the real world because it requires both RL expertise and domain-specific knowledge. Moreover, it is not applicable in situations where we do not know what admissible behaviour may look like. It is therefore of great practical relevance to develop algorithms which can learn from unshaped reward signals, e.g. a binary signal indicating successful task completion.
One ability humans have, unlike the current generation of model-free RL algorithms, is to learn almost as much from achieving an undesired outcome as from the desired one. Imagine that you are learning how to play hockey and are trying to shoot a puck into a net. You hit the puck but it misses the net on the right side. The conclusion drawn by a standard RL algorithm in such a situation would
*marcin@openai.com
†Equal advising.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
be that the performed sequence of actions does not lead to a successful shot, and little (if anything) would be learned. It is however possible to draw another conclusion, namely that this sequence of actions would be successful if the net had been placed further to the right.
In this paper we introduce a technique called Hindsight Experience Replay (HER) which allows the algorithm to perform exactly this kind of reasoning and can be combined with any off-policy RL algorithm. It is applicable whenever there are multiple goals which can be achieved, e.g. achieving each state of the system may be treated as a separate goal. Not only does HER improve the sample efficiency in this setting, but more importantly, it makes learning possible even if the reward signal is sparse and binary. Our approach is based on training universal policies (Schaul et al., 2015a) which take as input not only the current state, but also a goal state. The pivotal idea behind HER is to replay each episode with a different goal than the one the agent was trying to achieve, e.g. one of the goals which was achieved in the episode.
# 2 Background
In this section we introduce reinforcement learning formalism used in the paper as well as RL algorithms we use in our experiments.
# 2.1 Reinforcement Learning
We consider the standard reinforcement learning formalism consisting of an agent interacting with an environment. To simplify the exposition we assume that the environment is fully observable. An environment is described by a set of states S, a set of actions A, a distribution of initial states p(s_0), a reward function r : S × A → R, transition probabilities p(s_{t+1} | s_t, a_t), and a discount factor γ ∈ [0, 1].
A deterministic policy is a mapping from states to actions: π : S → A. Every episode starts with sampling an initial state s_0. At every timestep t the agent produces an action based on the current state: a_t = π(s_t). Then it gets the reward r_t = r(s_t, a_t) and the environment's new state is sampled from the distribution p(· | s_t, a_t). A discounted sum of future rewards is called a return: R_t = Σ_{i=t}^∞ γ^{i−t} r_i. The agent's goal is to maximize its expected return E_{s_0}[R_0 | s_0]. The Q-function or action-value function is defined as Q^π(s_t, a_t) = E[R_t | s_t, a_t].
Let π* denote an optimal policy, i.e. any policy π* s.t. Q^{π*}(s, a) ≥ Q^π(s, a) for every s ∈ S, a ∈ A and any policy π. All optimal policies have the same Q-function, which is called the optimal Q-function and denoted Q*. It is easy to show that it satisfies the following equation, called the Bellman equation:

Q*(s, a) = E_{s'∼p(·|s,a)}[ r(s, a) + γ max_{a'∈A} Q*(s', a') ].
# 2.2 Deep Q-Networks (DQN)
Deep Q-Networks (DQN) (Mnih et al., 2015) is a model-free RL algorithm for discrete action spaces. Here we sketch it only informally, see Mnih et al. (2015) for more details. In DQN we maintain a neural network Q which approximates Q*. A greedy policy w.r.t. Q is defined as π_Q(s) = argmax_{a∈A} Q(s, a). An ε-greedy policy w.r.t. Q is a policy which with probability ε takes a random action (sampled uniformly from A) and takes the action π_Q(s) with probability 1 − ε.
During training we generate episodes using an ε-greedy policy w.r.t. the current approximation of the action-value function Q. The transition tuples (s_t, a_t, r_t, s_{t+1}) encountered during training are stored in the so-called replay buffer. The generation of new episodes is interleaved with neural network training. The network is trained using mini-batch gradient descent on the loss L which encourages the approximated Q-function to satisfy the Bellman equation: L = E(Q(s_t, a_t) − y_t)², where y_t = r_t + γ max_{a'∈A} Q(s_{t+1}, a') and the tuples (s_t, a_t, r_t, s_{t+1}) are sampled from the replay buffer¹.
In order to make this optimization procedure more stable the targets y_t are usually computed using a separate target network which changes at a slower pace than the main network. A common practice
¹The targets y_t depend on the network parameters but this dependency is ignored during backpropagation.
is to periodically set the weights of the target network to the current weights of the main network (e.g. Mnih et al. (2015)) or to use a polyak-averaged² (Polyak and Juditsky, 1992) version of the main network instead (Lillicrap et al., 2015).
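For illustration only, a small sketch of the quantities just defined: the ε-greedy action choice and the bootstrapped target y_t computed from a separate target network (the arrays of Q-values stand in for the neural network outputs; names are our own).

```python
import numpy as np

def epsilon_greedy(q_values, epsilon, rng=np.random.default_rng(0)):
    """q_values: array of Q(s, a) for all actions in the current state."""
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))
    return int(np.argmax(q_values))

def dqn_target(reward, q_target_next, gamma):
    """y_t = r_t + gamma * max_a' Q_target(s_{t+1}, a'); no gradient flows through y_t."""
    return reward + gamma * np.max(q_target_next)
```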
# 2.3 Deep Deterministic Policy Gradients (DDPG)
Deep Deterministic Policy Gradients (DDPG) (Lillicrap et al., 2015) is a model-free RL algorithm for continuous action spaces. Here we sketch it only informally, see Lillicrap et al. (2015) for more details. In DDPG we maintain two neural networks: a target policy (also called an actor) π : S → A and an action-value function approximator (called the critic) Q : S × A → R. The critic's job is to approximate the actor's action-value function Q^π.
Episodes are generated using a behavioral policy which is a noisy version of the target policy, e.g. π_b(s) = π(s) + N(0, 1). The critic is trained in a similar way as the Q-function in DQN but the targets y_t are computed using actions outputted by the actor, i.e. y_t = r_t + γ Q(s_{t+1}, π(s_{t+1})). The actor is trained with mini-batch gradient descent on the loss L_a = −E_s Q(s, π(s)), where s is sampled from the replay buffer. The gradient of L_a w.r.t. actor parameters can be computed by backpropagation through the combined critic and actor networks.
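To illustrate the two updates, here is a deliberately minimal sketch with linear actor and critic (real implementations use neural networks and backpropagation); all names and the single-transition update are assumptions made purely for exposition.

```python
import numpy as np

rng = np.random.default_rng(0)
s_dim, a_dim = 4, 2
W_actor = rng.normal(size=(a_dim, s_dim)) * 0.1      # pi(s) = W_actor @ s
w_critic = rng.normal(size=s_dim + a_dim) * 0.1      # Q(s, a) = w_critic @ [s; a]

def actor(s):
    return W_actor @ s

def critic(s, a):
    return w_critic @ np.concatenate([s, a])

def ddpg_step(s, a, r, s_next, gamma=0.99, lr=1e-3):
    global w_critic, W_actor
    # Critic: regress Q(s, a) toward y = r + gamma * Q(s', pi(s')), treating y as constant
    y = r + gamma * critic(s_next, actor(s_next))
    td = critic(s, a) - y
    w_critic -= lr * td * np.concatenate([s, a])      # gradient of 0.5 * td^2
    # Actor: ascend Q(s, pi(s)); with a linear critic, dQ/da = w_critic[s_dim:]
    dq_da = w_critic[s_dim:]
    W_actor += lr * np.outer(dq_da, s)                # chain rule through pi(s) = W_actor @ s
```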
# 2.4 Universal Value Function Approximators (UVFA)
Universal Value Function Approximators (UVFA) (Schaul et al., 2015a) is an extension of DQN to the setup where there is more than one goal we may try to achieve. Let G be the space of possible goals. Every goal g ∈ G corresponds to some reward function r_g : S × A → R. Every episode starts with sampling a state-goal pair from some distribution p(s_0, g). The goal stays fixed for the whole episode. At every timestep the agent gets as input not only the current state but also the current goal, so the policy is a mapping π : S × G → A, and it gets the reward r_t = r_g(s_t, a_t). The Q-function now depends not only on a state-action pair but also on a goal: Q^π(s_t, a_t, g) = E[R_t | s_t, a_t, g]. Schaul et al. (2015a) show that in this setup it is possible to train an approximator to the Q-function using direct bootstrapping from the Bellman equation (just like in the case of DQN) and that a greedy policy derived from it can generalize to previously unseen state-action pairs. The extension of this approach to DDPG is straightforward.
# 3 Hindsight Experience Replay
# 3.1 A motivating example
Consider a bit-flipping environment with the state space S = {0, 1}^n and the action space A = {0, 1, ..., n − 1} for some integer n, in which executing the i-th action flips the i-th bit of the state. For every episode we sample uniformly an initial state as well as a target state and the policy gets a reward of −1 as long as it is not in the target state, i.e. r_g(s, a) = −[s ≠ g].
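For concreteness, a minimal sketch of this bit-flipping environment (the class name and interface are our own choices):

```python
import numpy as np

class BitFlipEnv:
    def __init__(self, n, rng=np.random.default_rng(0)):
        self.n, self.rng = n, rng

    def reset(self):
        self.state = self.rng.integers(0, 2, size=self.n)
        self.goal = self.rng.integers(0, 2, size=self.n)
        return self.state.copy(), self.goal.copy()

    def step(self, action):
        self.state[action] ^= 1                          # flip the i-th bit
        reward = 0.0 if np.array_equal(self.state, self.goal) else -1.0
        done = reward == 0.0
        return self.state.copy(), reward, done
```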
Standard RL algorithms are bound to fail in this environment for n > 40 because they will never experience any reward other than −1. Notice that using techniques for improving exploration (e.g. VIME (Houthooft et al., 2016), count-based exploration (Ostrovski et al., 2017) or bootstrapped DQN (Osband et al., 2016)) does not help here because the real problem is not in lack of diversity of states being visited, rather it is simply impractical to explore such a large state space. The standard solution to this problem would be to use a shaped reward function which is more informative and guides the agent towards the goal, e.g. r_g(s, a) = −||s − g||². While using a shaped reward solves the problem in our toy environment, it may be difficult to apply to more complicated problems. We investigate the results of reward shaping experimentally in Sec. 4.4.
Instead of shaping the reward we propose a different solution which does not require any domain knowledge. Consider an episode with
Figure 1: Bit-flipping experiment. (Legend: DQN, DQN+HER; x-axis: number of bits n; y-axis: success rate.)
²A polyak-averaged version of a parametric model M which is being trained is a model whose parameters are computed as an exponential moving average of the parameters of M over time.
a state sequence s_1, ..., s_T and a goal g ≠ s_1, ..., s_T, which implies that the agent received a reward of −1 at every timestep. The pivotal idea behind our approach is to re-examine this trajectory with a different goal: while this trajectory may not help us learn how to achieve the state g, it definitely tells us something about how to achieve the state s_T. This information can be harvested by using an off-policy RL algorithm and experience replay where we replace g in the replay buffer by s_T. In addition we can still replay with the original goal g left intact in the replay buffer. With this modification at least half of the replayed trajectories contain rewards different from −1 and learning becomes much simpler. Fig. 1 compares the final performance of DQN with and without this additional replay technique which we call Hindsight Experience Replay (HER). DQN without HER can only solve the task for n < 13 while DQN with HER easily solves the task for n up to 50. See Appendix A for the details of the experimental setup. Note that this approach combined with powerful function approximators (e.g., deep neural networks) allows the agent to learn how to achieve the goal g even if it has never observed it during training.
We more formally describe our approach in the following sections.
# 3.2. Multi-goal RL
We are interested in training agents which learn to achieve multiple different goals. We follow the approach from Universal Value Function Approximators (Schaul et al., 2015a), i.e. we train policies and value functions which take as input not only a state s ∈ S but also a goal g ∈ G. Moreover, we show that training an agent to perform multiple tasks can be easier than training it to perform only one task (see Sec. 4.3 for details) and therefore our approach may be applicable even if there is only one task we would like the agent to perform (a similar situation was recently observed by Pinto and Gupta (2016)).
We assume that every goal g ∈ G corresponds to some predicate f_g : S → {0, 1} and that the agent's goal is to achieve any state s that satisfies f_g(s) = 1. In the case when we want to exactly specify the desired state of the system we may use S = G and f_g(s) = [s = g]. The goals can also specify only some properties of the state, e.g. suppose that S = R² and we want to be able to achieve an arbitrary state with the given value of the x coordinate. In this case G = R and f_g((x, y)) = [x = g].
Moreover, we assume that given a state s we can easily find a goal g which is satisfied in this state. More formally, we assume that there is given a mapping m : S → G s.t. ∀_{s∈S} f_{m(s)}(s) = 1. Notice that this assumption is not very restrictive and can usually be satisfied. In the case where each goal corresponds to a state we want to achieve, i.e. G = S and f_g(s) = [s = g], the mapping m is just an identity. For the case of 2-dimensional states and 1-dimensional goals from the previous paragraph this mapping is also very simple: m((x, y)) = x.
A universal policy can be trained using an arbitrary RL algorithm by sampling goals and initial states from some distributions, running the agent for some number of timesteps and giving it a negative reward at every timestep when the goal is not achieved, i.e. r_g(s, a) = −[f_g(s) = 0]. This does not however work very well in practice because this reward function is sparse and not very informative.
In order to solve this problem we introduce the technique of Hindsight Experience Replay which is the crux of our approach.
# 3.3 Algorithm
The idea behind Hindsight Experience Replay (HER) is very simple: after experiencing some episode s_0, s_1, ..., s_T we store in the replay buffer every transition s_t → s_{t+1} not only with the original goal used for this episode but also with a subset of other goals. Notice that the goal being pursued influences the agent's actions but not the environment dynamics and therefore we can replay each trajectory with an arbitrary goal assuming that we use an off-policy RL algorithm like DQN (Mnih et al., 2015), DDPG (Lillicrap et al., 2015), NAF (Gu et al., 2016) or SDQN (Metz et al., 2017).
One choice which has to be made in order to use HER is the set of additional goals used for replay. In the simplest version of our algorithm we replay each trajectory with the goal m(s_T), i.e. the goal which is achieved in the final state of the episode. We experimentally compare different types and quantities of additional goals for replay in Sec. 4.5. In all cases we also replay each trajectory with the original goal pursued in the episode. See Alg. 1 for a more formal description of the algorithm.
Algorithm 1 Hindsight Experience Replay (HER)
Given:
  • an off-policy RL algorithm A,                      ▷ e.g. DQN, DDPG, NAF, SDQN
  • a strategy S for sampling goals for replay,        ▷ e.g. S(s_0, ..., s_T) = m(s_T)
  • a reward function r : S × A × G → R.               ▷ e.g. r(s, a, g) = −[f_g(s) = 0]
Initialize A                                           ▷ e.g. initialize neural networks
Initialize replay buffer R
for episode = 1, M do
  Sample a goal g and an initial state s_0.
  for t = 0, T − 1 do
    Sample an action a_t using the behavioral policy from A: a_t ← π_b(s_t || g)   ▷ || denotes concatenation
    Execute the action a_t and observe a new state s_{t+1}
  end for
  for t = 0, T − 1 do
    r_t := r(s_t, a_t, g)
    Store the transition (s_t || g, a_t, r_t, s_{t+1} || g) in R                    ▷ standard experience replay
    Sample a set of additional goals for replay G := S(current episode)
    for g' ∈ G do
      r' := r(s_t, a_t, g')
      Store the transition (s_t || g', a_t, r', s_{t+1} || g') in R                 ▷ HER
    end for
  end for
  for t = 1, N do
    Sample a minibatch B from the replay buffer R
    Perform one step of optimization using A and minibatch B
  end for
end for
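A compact sketch of the replay step of Alg. 1 with the final strategy, storing each transition once with the original goal and once with the goal m(s_T) achieved at the end of the episode; the transition-tuple layout and function names are our own assumptions.

```python
import numpy as np

def her_store(replay_buffer, episode, goal, reward_fn, m=lambda s: s):
    """episode: list of (s_t, a_t, s_{t+1}); goal: goal pursued during the episode.

    reward_fn(s, a, g) plays the role of r(s, a, g) = -[f_g(s) = 0];
    m maps a state to the goal it achieves (identity for the bit-flip example)."""
    final_goal = m(episode[-1][2])                       # goal achieved in the final state
    for s, a, s_next in episode:
        for g in (goal, final_goal):
            transition = (np.concatenate([s, g]), a,
                          reward_fn(s, a, g),
                          np.concatenate([s_next, g]))
            replay_buffer.append(transition)
```

The resulting buffer can be consumed by any off-policy learner, e.g. DQN or DDPG, exactly as in standard experience replay.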
HER may be seen as a form of implicit curriculum as the goals used for replay naturally shift from ones which are simple to achieve even by a random agent to more difficult ones. However, in contrast to explicit curriculum, HER does not require having any control over the distribution of initial environment states. Not only does HER learn with extremely sparse rewards, in our experiments it also performs better with sparse rewards than with shaped ones (See Sec. 4.4). These results are indicative of the practical challenges with reward shaping, and that shaped rewards would often constitute a compromise on the metric we truly care about (such as binary success/failure).
# 4 Experiments
The video presenting our experiments is available at https: //goo.gl/SMrQnI.
This section is organized as follows. In Sec. 4.1 we introduce multi-goal RL environments we use for the experiments as well as our training procedure. In Sec. 4.2 we compare the performance of DDPG with and without HER. In Sec. 4.3 we check if HER improves performance in the single-goal setup. In Sec. 4.4 we analyze the effects of using shaped reward functions. In Sec. 4.5 we compare different strategies for sampling additional goals for HER. In Sec. 4.6 we show the results of the experiments on the physical robot.
# 4.1 Environments
There are no standard environments for multi-goal RL and therefore we created our own environments. We decided to use manipulation environments based on an existing hardware robot to ensure that the challenges we face correspond as closely as possible to the real world. In all experiments we use a 7-DOF Fetch Robotics arm which has a two-fingered parallel gripper. The robot is simulated using the MuJoCo (Todorov et al., 2012) physics engine. The whole training procedure is performed in the simulation but we show in Sec. 4.6 that the trained policies perform well on the physical robot without any finetuning.
Figure 2: Different tasks: pushing (top row), sliding (middle row) and pick-and-place (bottom row). The red ball denotes the goal position.
Policies are represented as Multi-Layer Perceptrons (MLPs) with Rectified Linear Unit (ReLU) activation functions. Training is performed using the DDPG algorithm (Lillicrap et al., 2015) with Adam (Kingma and Ba, 2014) as the optimizer. For improved efficiency we use 8 workers which average the parameters after every update. See Appendix A for more details and the values of all hyperparameters.
We consider 3 different tasks:
1. Pushing. In this task a box is placed on a table in front of the robot and the task is to move it to the target location on the table. The robot fingers are locked to prevent grasping. The learned behaviour is a mixture of pushing and rolling.
2. Sliding. In this task a puck is placed on a long slippery table and the target position is outside of the robotâs reach so that it has to hit the puck with such a force that it slides and then stops in the appropriate place due to friction.
3. Pick-and-place. This task is similar to pushing but the target position is in the air and the fingers are not locked. To make exploration in this task easier we recorded a single state in which the box is grasped and start half of the training episodes from this stateâ.
States: The state of the system is represented in the MuJoCo physics engine and consists of angles and velocities of all robot joints as well as positions, rotations and velocities (linear and angular) of all objects.
Goals: Goals describe the desired position of the object (a box or a puck depending on the task) with some fixed tolerance of ε, i.e. G = R³ and f_g(s) = [|g − s_object| ≤ ε], where s_object is the position of the object in the state s. The mapping from states to goals used in HER is simply m(s) = s_object.
Rewards: Unless stated otherwise we use binary and sparse rewards r(s, a, g) = −[f_g(s') = 0] where s' is the state after the execution of the action a in the state s. We compare sparse and shaped reward functions in Sec. 4.4.
State-goal distributions: For all tasks the initial position of the gripper is fixed, while the initial position of the object and the target are randomized. See Appendix A for details.
>This was necessary because we could not successfully train any policies for this task without using the demonstration state. We have later discovered that training is possible without this trick if only the goal position is sometimes on the table and sometimes in the air.
Observations: In this paragraph relative means relative to the current gripper position. The policy is given as input the absolute position of the gripper, the relative position of the object and the target*, as well as the distance between the fingers. The Q-function is additionally given the linear velocity of the gripper and fingers as well as relative linear and angular velocity of the object. We decided to restrict the input to the policy in order to make deployment on the physical robot easier.
Actions: None of the problems we consider require gripper rotation and therefore we keep it fixed. The action space is 4-dimensional. Three dimensions specify the desired relative gripper position at the next timestep. We use MuJoCo constraints to move the gripper towards the desired position but Jacobian-based control could be used instead⁵. The last dimension specifies the desired distance between the 2 fingers which are position controlled.
Strategy S for sampling goals for replay: Unless stated otherwise HER uses replay with the goal corresponding to the final state in each episode, i.e. S(s_0, ..., s_T) = m(s_T). We compare different strategies for choosing which goals to replay with in Sec. 4.5.
# 4.2 Does HER improve performance?
In order to verify if HER improves performance we evaluate DDPG with and without HER on all 3 tasks. Moreover, we compare against DDPG with count-based exploration® (Strehl and Littman, 2005; Kolter and Ng, 2009; Tang et al., 2016; Bellemare et al., 2016; Ostrovski et al., 2017). For HER we store each transition in the replay buffer twice: once with the goal used for the generation of the episode and once with the goal corresponding to the final state from the episode (we call this strategy final). In Sec. 4.5 we perform ablation studies of different strategies S for choosing goals for replay, here we include the best version from Sec. 4.5 in the plot for comparison.
(Figure 3 panels: pushing, sliding, pick-and-place. Legend: DDPG, DDPG+count-based exploration, DDPG+HER, DDPG+HER (version from Sec. 4.5). Y-axis: success rate; x-axis: epoch number (every epoch = 800 episodes = 800x50 timesteps).)
Figure 3: Learning curves for multi-goal setup. An episode is considered successful if the distance between the object and the goal at the end of the episode is less than 7cm for pushing and pick-and- place and less than 20cm for sliding. The results are averaged across 5 random seeds and shaded areas represent one standard deviation. The red curves correspond to the future strategy with k = 4 from Sec. 4.5 while the blue one corresponds to the final strategy.
From Fig. 3 it is clear that DDPG without HER is unable to solve any of the tasksâ and DDPG with count-based exploration is only able to make some progress on the sliding task. On the other hand, DDPG with HER solves all tasks almost perfectly. It confirms that HER is a crucial element which makes learning from sparse, binary rewards possible.
âThe target position is relative to the current object position.
>The successful deployment on a physical robot (Sec. 4.6) confirms that our control model produces movements which are reproducible on the physical robot despite not being fully physically plausible.
⁶We discretize the state space and use an intrinsic reward of the form α/√N, where α is a hyperparameter and N is the number of times the given state was visited. The discretization works as follows. We take the relative position of the box and the target and then discretize every coordinate using a grid with a stepsize β which is a hyperparameter. We have performed a hyperparameter search over α ∈ {0.032, 0.064, 0.125, 0.25, 0.5, 1, 2, 4, 8, 16, 32}, β ∈ {1 cm, 2 cm, 4 cm, 8 cm}. The best results were obtained using α = 1 and β = 1 cm and these are the results we report.
⁷We also evaluated DQN (without HER) on our tasks and it was not able to solve any of them.
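For concreteness, the count-based exploration baseline described in footnote 6 could be implemented along these lines (a sketch; the class name and API are ours, and positions are assumed to be expressed in metres so that δ = 1cm corresponds to 0.01):

```python
import numpy as np
from collections import defaultdict

class CountBasedBonus:
    """Discretize the relative box/target position with grid step `delta` and
    add an intrinsic reward of alpha / sqrt(N(cell)) for the visited cell."""
    def __init__(self, alpha=1.0, delta=0.01):
        self.alpha, self.delta = alpha, delta
        self.counts = defaultdict(int)

    def bonus(self, box_pos, target_pos):
        rel = np.asarray(box_pos) - np.asarray(target_pos)
        cell = tuple(np.floor(rel / self.delta).astype(int))  # grid cell index
        self.counts[cell] += 1
        return self.alpha / np.sqrt(self.counts[cell])
```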
[Figure 4: success rate vs. epoch number (every epoch = 800 episodes = 800×50 timesteps) for pushing, sliding and pick-and-place; curves: DDPG, DDPG+count-based exploration, DDPG+HER]
Figure 4: Learning curves for the single-goal case.
# 4.3 Does HER improve performance even if there is only one goal we care about?
In this section we evaluate whether HER improves performance in the case where there is only one goal we care about. To this end, we repeat the experiments from the previous section but the goal state is identical in all episodes.
From Fig. 4 it is clear that DDPG+HER performs much better than pure DDPG even if the goal state is identical in all episodes. More importantly, comparing Fig. 3 and Fig. 4 we can also notice that HER learns faster if training episodes contain multiple goals, so in practice it is advisable to train on multiple goals even if we care only about one of them.
# 4.4 How does HER interact with reward shaping?
So far we only considered binary rewards of the form r(s, a, g) = −[|g − s_object| > ε]. In this section we check how the performance of DDPG with and without HER changes if we replace this reward with one which is shaped. We considered reward functions of the form r(s, a, g) = λ|g − s_object|^p − |g − s'_object|^p, where s' is the state of the environment after the execution of the action a in the state s and λ ∈ {0, 1}, p ∈ {1, 2} are hyperparameters.
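A minimal sketch of such a shaped reward, with illustrative argument names and assuming object and goal positions are 3-D vectors:

```python
import numpy as np

def shaped_reward(object_pos, next_object_pos, goal, lam=1.0, p=2):
    """Shaped reward lambda * |g - s_object|^p - |g - s'_object|^p."""
    before = np.linalg.norm(np.asarray(goal) - np.asarray(object_pos)) ** p
    after = np.linalg.norm(np.asarray(goal) - np.asarray(next_object_pos)) ** p
    return lam * before - after
```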
Fig. 5 shows the results. Surprisingly, neither DDPG nor DDPG+HER was able to successfully solve any of the tasks with any of these reward functions⁸. Our results are consistent with the fact that successful applications of RL to difficult manipulation tasks which do not use demonstrations usually have more complicated reward functions than the ones we tried (e.g. Popov et al. (2017)).
The following two reasons may explain why shaped rewards perform so poorly: (1) There is a huge discrepancy between what we optimize (i.e. a shaped reward function) and the success condition (i.e. is the object within some radius from the goal at the end of the episode); (2) Shaped rewards penalize inappropriate behaviour (e.g. moving the box in a wrong direction), which may hinder exploration. It can cause the agent to learn not to touch the box at all if it cannot manipulate it precisely, and we noticed such behaviour in some of our experiments.
Our results suggest that domain-agnostic reward shaping does not work well (at least in the simple forms we have tried). Of course for every problem there exists a reward which makes it easy (Ng et al., 1999) but designing such shaped rewards requires a lot of domain knowledge and may in some cases not be much easier than directly scripting the policy. This strengthens our belief that learning from sparse, binary rewards is an important problem.
# 4.5 How many goals should we replay each trajectory with and how to choose them?
In this section we experimentally evaluate different strategies (i.e. S in Alg. 1) for choosing goals to use with HER. So far the only additional goals we used for replay were the ones corresponding to
⁸We also tried to rescale the distances so that the range of rewards is similar to the case of binary rewards, clipping big distances and adding a simple (linear or quadratic) term encouraging the gripper to move towards the object, but none of these techniques led to successful training.
[Figure 5: success rate vs. epoch number (every epoch = 800 episodes = 800×50 timesteps) for pushing, sliding and pick-and-place; curves: DDPG, DDPG+HER]

Figure 5: Learning curves for the shaped reward r(s, a, g) = −|g − s_object|² (it performed best among the shaped rewards we have tried). Both algorithms fail on all tasks.
[Figure 6: highest (top row) and average (bottom row) test success rate vs. number of additional goals used to replay each transition with (1, 2, 4, 8, 16, all) for pushing, sliding and pick-and-place; curves: no HER, final, random, episode, future]
Figure 6: Ablation study of different strategies for choosing additional goals for replay. The top row shows the highest (across the training epochs) test performance and the bottom row shows the average test performance across all training epochs. On the right top plot the curves for final, episode and future coincide as all these strategies achieve perfect performance on this task.
the final state of the environment and we will call this strategy final. Apart from it we consider the following strategies:
• future – replay with k random states which come from the same episode as the transition being replayed and were observed after it,

• episode – replay with k random states coming from the same episode as the transition being replayed,

• random – replay with k random states encountered so far in the whole training procedure.
All of these strategies have a hyperparameter k which controls the ratio of HER data to data coming from normal experience replay in the replay buffer.
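A sketch of how these strategies could be implemented (names are ours; `achieved_goal_fn` again stands for the mapping m(s) from a state to the goal it achieves):

```python
import random

def sample_replay_goals(strategy, episode, t, all_achieved_states, k, achieved_goal_fn):
    """Sample k additional goals for replaying the transition at index t of `episode`."""
    if strategy == "final":
        candidates = [episode[-1]]
    elif strategy == "future":        # states observed after t in the same episode
        candidates = episode[t + 1:]
    elif strategy == "episode":       # any state of the same episode
        candidates = episode
    elif strategy == "random":        # any state seen so far during training
        candidates = all_achieved_states
    else:
        raise ValueError(strategy)
    if not candidates:
        return []
    picked = [random.choice(candidates) for _ in range(k)]
    return [achieved_goal_fn(s) for s in picked]
```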
The plots comparing different strategies and different values of k can be found in Fig. 6. We can see from the plots that all strategies apart from random solve pushing and pick-and-place almost perfectly regardless of the values of k. In all cases future with k equal 4 or 8 performs best and it is the only strategy which is able to solve the sliding task almost perfectly. The learning curves for
Figure 7: The pick-and-place policy deployed on the physical robot.
future with k = 4 can be found in Fig. 3. It confirms that the most valuable goals for replay are the ones which are going to be achieved in the near future⁹. Notice that increasing the values of k above 8 degrades performance because the fraction of normal replay data in the buffer becomes very low.
# 4.6 Deployment on a physical robot
We took a policy for the pick-and-place task trained in the simulator (the version with the future strategy and k = 4 from Sec. 4.5) and deployed it on a physical Fetch robot without any finetuning. The box position was predicted by a separately trained CNN using raw Fetch head camera images. See Appendix B for details.
Initially the policy succeeded in 2 out of 5 trials. It was not robust to small errors in the box position estimation because it was trained on perfect state coming from the simulation. After retraining the policy with Gaussian noise (std = 1cm) added to observations¹⁰ the success rate increased to 5/5. The video showing some of the trials is available at https://goo.gl/SMrQnI.
# 5 Related work
The technique of experience replay was introduced in Lin (1992) and became very popular after it was used in the DQN agent playing Atari (Mnih et al., 2015). Prioritized experience replay (Schaul et al., 2015b) is an improvement to experience replay which prioritizes transitions in the replay buffer in order to speed up training. It is orthogonal to our work and both approaches can be easily combined.
Learning policies for multiple tasks simultaneously has been heavily explored in the context of policy search, e.g. Schmidhuber and Huber (1990); Caruana (1998); Da Silva et al. (2012); Kober et al. (2012); Devin et al. (2016); Pinto and Gupta (2016). Learning off-policy value functions for multiple tasks was investigated by Foster and Dayan (2002) and Sutton et al. (2011). Our work is most heavily based on Schaul et al. (2015a), who consider training a single neural network approximating multiple value functions. Learning to perform multiple tasks simultaneously has also been investigated for a long time in the context of Hierarchical Reinforcement Learning, e.g. Bakker and Schmidhuber (2004); Vezhnevets et al. (2017).
Our approach may be seen as a form of implicit curriculum learning (Elman, 1993; Bengio et al., 2009). While curriculum is now often used for training neural networks (e.g. Zaremba and Sutskever (2014); Graves et al. (2016)), the curriculum is almost always hand-crafted. The problem of automatic curriculum generation was approached by Schmidhuber (2004), who constructed an asymptotically optimal algorithm for this problem using program search. Another interesting approach is PowerPlay (Schmidhuber, 2013; Srivastava et al., 2013), which is a general framework for automatic task selection. Graves et al. (2017) consider a setup where there is a fixed discrete set of tasks and empirically evaluate different strategies for automatic curriculum generation in this setting. Another approach investigated by Sukhbaatar et al. (2017) and Held et al. (2017) uses self-play between the policy and a task-setter in order to automatically generate goal states which are on the border of what the current policy can achieve. Our approach is orthogonal to these techniques and can be combined with them.
⁹We have also tried replaying the goals which are close to the ones achieved in the near future, but it did not perform better than the future strategy.

¹⁰The Q-function approximator was trained using exact observations. It does not have to be robust to noisy observations because it is not used during the deployment on the physical robot.
# 6 Conclusions
We introduced a novel technique called Hindsight Experience Replay which makes it possible to apply RL algorithms to problems with sparse and binary rewards. Our technique can be combined with an arbitrary off-policy RL algorithm, and we experimentally demonstrated this with DQN and DDPG.
We showed that HER allows training policies which push, slide and pick-and-place objects with a robotic arm to the specified positions while the vanilla RL algorithm fails to solve these tasks. We also showed that the policy for the pick-and-place task performs well on the physical robot without any finetuning. As far as we know, it is the first time such complicated behaviours have been learned using only sparse, binary rewards.
# Acknowledgments
We would like to thank Ankur Handa, Jonathan Ho, John Schulman, Matthias Plappert, Tim Salimans, and Vikash Kumar for providing feedback on the previous versions of this manuscript. We would also like to thank Rein Houthooft and the whole OpenAI team for fruitful discussions as well as Bowen Baker for performing some additional experiments.
# References
Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C., Corrado, G. S., Davis, A., Dean, J., Devin, M., et al. (2016). Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv: 1603.04467.
Bakker, B. and Schmidhuber, J. (2004). Hierarchical reinforcement learning based on subgoal discovery and subpolicy specialization. In Proc. of the 8-th Conf. on Intelligent Autonomous Systems, pages 438-445.
Bellemare, M., Srinivasan, S., Ostrovski, G., Schaul, T., Saxton, D., and Munos, R. (2016). Unifying count- based exploration and intrinsic motivation. In Advances in Neural Information Processing Systems, pages 1471-1479.
Bengio, Y., Louradour, J., Collobert, R., and Weston, J. (2009). Curriculum learning. In Proceedings of the 26th annual international conference on machine learning, pages 41-48. ACM.
Caruana, R. (1998). Multitask learning. In Learning to learn, pages 95-133. Springer.
Chebotar, Y., Kalakrishnan, M., Yahya, A., Li, A., Schaal, S., and Levine, S. (2016). Path integral guided policy search. arXiv preprint arXiv: 1610.00529.
Da Silva, B., Konidaris, G., and Barto, A. (2012). Learning parameterized skills. arXiv preprint arXiv: 1206.6398.
Devin, C., Gupta, A., Darrell, T., Abbeel, P., and Levine, S. (2016). Learning modular neural network policies for multi-task and multi-robot transfer. arXiv preprint arXiv: 1609.07088.
Elman, J. L. (1993). Learning and development in neural networks: The importance of starting small. Cognition, 48(1):71-99.
Foster, D. and Dayan, P. (2002). Structure in the space of value functions. Machine Learning, 49(2):325-346.
Graves, A., Bellemare, M. G., Menick, J., Munos, R., and Kavukcuoglu, K. (2017). Automated curriculum learning for neural networks. arXiv preprint arXiv: 1704.03003.
Graves, A., Wayne, G., Reynolds, M., Harley, T., Danihelka, I., Grabska-Barwińska, A., Colmenarejo, S. G., Grefenstette, E., Ramalho, T., Agapiou, J., et al. (2016). Hybrid computing using a neural network with dynamic external memory. Nature, 538(7626):471-476.
Gu, S., Lillicrap, T., Sutskever, I., and Levine, S. (2016). Continuous deep q-learning with model-based acceleration. arXiv preprint arXiv:1603.00748.
Held, D., Geng, X., Florensa, C., and Abbeel, P. (2017). Automatic goal generation for reinforcement learning agents. arXiv preprint arXiv: 1705.06366.
Houthooft, R., Chen, X., Duan, Y., Schulman, J., De Turck, F., and Abbeel, P. (2016). Vime: Variational information maximizing exploration. In Advances in Neural Information Processing Systems, pages 1109- 1117.
Kingma, D. and Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980.
Kober, J., Wilhelm, A., Oztop, E., and Peters, J. (2012). Reinforcement learning to adjust parametrized motor primitives to new situations. Autonomous Robots, 33(4):361-379.
Kolter, J. Z. and Ng, A. Y. (2009). Near-bayesian exploration in polynomial time. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 513-520. ACM.
Levine, S., Finn, C., Darrell, T., and Abbeel, P. (2015). End-to-end training of deep visuomotor policies. arXiv preprint arXiv: 1504.00702.
Lillicrap, T. P., Hunt, J. J., Pritzel, A., Heess, N., Erez, T., Tassa, Y., Silver, D., and Wierstra, D. (2015). Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971.
Lin, L.-J. (1992). Self-improving reactive agents based on reinforcement learning, planning and teaching. Machine learning, 8(3-4):293-321.
Metz, L., Ibarz, J., Jaitly, N., and Davidson, J. (2017). Discrete sequential prediction of continuous actions for deep rl. arXiv preprint arXiv: 1705.05035.
Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., Graves, A., Riedmiller, M., Fidjeland, A. K., Ostrovski, G., et al. (2015). Human-level control through deep reinforcement learning. Nature, 518(7540):529-533.
Ng, A., Coates, A., Diel, M., Ganapathi, V., Schulte, J., Tse, B., Berger, E., and Liang, E. (2006). Autonomous inverted helicopter flight via reinforcement learning. Experimental Robotics IX, pages 363-372.
Ng, A. Y., Harada, D., and Russell, S. (1999). Policy invariance under reward transformations: Theory and application to reward shaping. In ICML, volume 99, pages 278-287.
Osband, I., Blundell, C., Pritzel, A., and Van Roy, B. (2016). Deep exploration via bootstrapped dqn. In Advances In Neural Information Processing Systems, pages 4026-4034.
Ostrovski, G., Bellemare, M. G., Oord, A. v. d., and Munos, R. (2017). Count-based exploration with neural density models. arXiv preprint arXiv: 1703.01310.
Peters, J. and Schaal, S. (2008). Reinforcement learning of motor skills with policy gradients. Neural networks, 21(4):682-697.
Pinto, L. and Gupta, A. (2016). Learning to push by grasping: Using multiple tasks for effective learning. arXiv preprint arXiv: 1609.09025.
Polyak, B. T. and Juditsky, A. B. (1992). Acceleration of stochastic approximation by averaging. SIAM Journal on Control and Optimization, 30(4):838-855.
Popov, I., Heess, N., Lillicrap, T., Hafner, R., Barth-Maron, G., Vecerik, M., Lampe, T., Tassa, Y., Erez, T., and Riedmiller, M. (2017). Data-efficient deep reinforcement learning for dexterous manipulation. arXiv preprint arXiv: 1704.03073.
Schaul, T., Horgan, D., Gregor, K., and Silver, D. (2015a). Universal value function approximators. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pages 1312-1320.
Schaul, T., Quan, J., Antonoglou, I., and Silver, D. (2015b). Prioritized experience replay. arXiv preprint arXiv:1511.05952.
Schmidhuber, J. (2004). Optimal ordered problem solver. Machine Learning, 54(3):211-254.
Schmidhuber, J. (2013). Powerplay: Training an increasingly general problem solver by continually searching for the simplest still unsolvable problem. Frontiers in psychology, 4.
Schmidhuber, J. and Huber, R. (1990). Learning to generate focus trajectories for attentive vision. Institut für Informatik.
Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., Van Den Driessche, G., Schrittwieser, J., Antonoglou, I., Panneershelvam, V., Lanctot, M., et al. (2016). Mastering the game of go with deep neural networks and tree search. Nature, 529(7587):484-489.
Srivastava, R. K., Steunebrink, B. R., and Schmidhuber, J. (2013). First experiments with powerplay. Neural Networks, 41:130-136.
Strehl, A. L. and Littman, M. L. (2005). A theoretical analysis of model-based interval estimation. In Proceedings of the 22nd international conference on Machine learning, pages 856-863. ACM.
Sukhbaatar, S., Kostrikov, I., Szlam, A., and Fergus, R. (2017). Intrinsic motivation and automatic curricula via asymmetric self-play. arXiv preprint arXiv: 1703.05407.
Sutton, R. S., Modayil, J., Delp, M., Degris, T., Pilarski, P. M., White, A., and Precup, D. (2011). Horde: A scalable real-time architecture for learning knowledge from unsupervised sensorimotor interaction. In The 10th International Conference on Autonomous Agents and Multiagent Systems-Volume 2, pages 761-768. International Foundation for Autonomous Agents and Multiagent Systems.
Tang, H., Houthooft, R., Foote, D., Stooke, A., Chen, X., Duan, Y., Schulman, J., De Turck, F., and Abbeel, P. (2016). # exploration: A study of count-based exploration for deep reinforcement learning. arXiv preprint arXiv:1611.04717.
Tobin, J., Fong, R., Ray, A., Schneider, J., Zaremba, W., and Abbeel, P. (2017). Domain randomization for transferring deep neural networks from simulation to the real world. arXiv preprint arXiv:1703.06907.
Todorov, E., Erez, T., and Tassa, Y. (2012). Mujoco: A physics engine for model-based control. In Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, pages 5026-5033. IEEE.
Vezhnevets, A. S., Osindero, S., Schaul, T., Heess, N., Jaderberg, M., Silver, D., and Kavukcuoglu, K. (2017). Feudal networks for hierarchical reinforcement learning. arXiv preprint arXiv: 1703.01161.
Zaremba, W. and Sutskever, I. (2014). Learning to execute. arXiv preprint arXiv:1410.4615.
# A Experiment details
In this section we provide more details on our experimental setup and hyperparameters used.
Bit-flipping experiment: We used a network with 1 hidden layer with 256 neurons. The length of each episode was equal to the number of bits and the episode was considered successful if the goal state was achieved at an arbitrary timestep during the episode. All other hyperparameters used were the same as in the case of DDPG experiments.
State-goal distributions: For all tasks the initial position of the gripper is fixed; for the pushing and sliding tasks it is located just above the table surface and for pick-and-place it is located 20cm above the table. The object is placed randomly on the table in the 30cm × 30cm (20cm × 20cm for sliding) square with the center directly under the gripper (both objects are 5cm wide). For pushing, the goal state is sampled uniformly from the same square as the box position. In the pick-and-place task the target is located in the air in order to force the robot to grasp (and not just push). The x and y coordinates of the goal position are sampled uniformly from the mentioned square and the height is sampled uniformly between 10cm and 45cm. For sliding the goal position is sampled from a 60cm × 60cm square centered 40cm away from the initial gripper position. For all tasks we discard initial state-goal pairs in which the goal is already satisfied.
Network architecture: Both actor and critic networks have 3 hidden layers with 64 hidden units in each layer. Hidden layers use the ReLU activation function and the actor output layer uses tanh. The output of the tanh is then rescaled so that it lies in the range [−5cm, 5cm]. In order to prevent tanh saturation and vanishing gradients we add the square of their preactivations to the actor's cost function.
Training procedure: We train for 200 epochs. Each epoch consists of 50 cycles where each cycle consists of running the policy for 16 episodes and then performing 40 optimization steps on minibatches of size 128 sampled uniformly from a replay buffer consisting of 10⁶ transitions. We update the target networks after every cycle using a decay coefficient of 0.95. Apart from using the target network for computing Q-targets for the critic we also use it in testing episodes, as it is more stable than the main network. The whole training procedure is distributed over 8 threads. For the Adam optimization algorithm we use a learning rate of 0.001 and the default values from the Tensorflow framework (Abadi et al., 2016) for the other hyperparameters. We use a discount factor of γ = 0.98 for all transitions, including the ones ending an episode. Moreover, we clip the targets used to train the critic to the range of possible values, i.e. [−1/(1−γ), 0].
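Two of the details above, the target-network update and the clipping of critic targets, could be sketched as follows (NumPy is used purely for illustration and the function names are ours):

```python
import numpy as np

def soft_update(target_params, main_params, decay=0.95):
    """Target network update performed after every cycle:
    target <- decay * target + (1 - decay) * main."""
    for t, m in zip(target_params, main_params):
        t *= decay
        t += (1.0 - decay) * m

def clipped_critic_targets(rewards, next_q, gamma=0.98):
    """Bellman targets for the critic, clipped to the range of possible
    returns [-1/(1-gamma), 0] implied by rewards lying in {-1, 0}."""
    targets = rewards + gamma * next_q
    return np.clip(targets, -1.0 / (1.0 - gamma), 0.0)
```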
Input scaling: Neural networks have problems dealing with inputs of different magnitudes and therefore it is crucial to scale them properly. To this end, we rescale inputs to neural networks so that they have mean zero and standard deviation equal to one and then clip them to the range [â5, 5]. Means and standard deviations used for rescaling are computed using all the observations encountered so far in the training.
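A sketch of such an input normalizer (class and method names are ours):

```python
import numpy as np

class RunningNormalizer:
    """Normalize inputs to zero mean / unit std using statistics of all
    observations seen so far in training, then clip to [-5, 5]."""
    def __init__(self, size, eps=1e-2, clip=5.0):
        self.sum = np.zeros(size)
        self.sumsq = np.zeros(size)
        self.count = eps
        self.clip = clip

    def update(self, obs_batch):
        self.sum += obs_batch.sum(axis=0)
        self.sumsq += (obs_batch ** 2).sum(axis=0)
        self.count += obs_batch.shape[0]

    def normalize(self, obs):
        mean = self.sum / self.count
        std = np.sqrt(np.maximum(self.sumsq / self.count - mean ** 2, 1e-4))
        return np.clip((obs - mean) / std, -self.clip, self.clip)
```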
Exploration: The behavioral policy we use for exploration works as follows. With probability 20% we sample (uniformly) a random action from the hypercube of valid actions. Otherwise, we take the output of the policy network and add independently to every coordinate normal noise with standard deviation equal to 5% of the total range of allowed values on this coordinate.
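The behavioural policy could be sketched as follows (argument names are ours):

```python
import numpy as np

def behavioural_action(policy_action, action_low, action_high,
                       random_eps=0.2, noise_frac=0.05, rng=np.random):
    """With probability 20% take a uniformly random action; otherwise add
    independent Gaussian noise with std equal to 5% of the allowed range
    to every coordinate of the policy output."""
    if rng.uniform() < random_eps:
        return rng.uniform(action_low, action_high)
    noise = rng.normal(scale=noise_frac * (action_high - action_low))
    return np.clip(policy_action + noise, action_low, action_high)
```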
Simulation: Every episode consists of 50 environment timesteps, each of which consists of 10 MuJoCo steps with Δt = 0.002s. MuJoCo uses soft constraints for contacts and therefore object penetration is possible. It can be minimized by using a small timestep and more constraint solver epochs but it would slow down the simulation. We encountered some penetration in the pushing task (the agent learnt to push the box into the table in a way that it is pushed out by contact forces onto the target). In order to avoid this behaviour we added to the reward a term penalizing the squared depth of penetration for every contact pair.
14
Training time: Training for 200 epochs took us approximately 2.5h for the pushing and pick-and-place tasks and 6h for sliding (because physics simulation was slower for this task) using 8 CPU cores.
# B Deployment on the physical robot
We have trained a convolutional neural network (CNN) which predicts the box position given the raw image from the Fetch head camera. The CNN was trained using only images coming from the MuJoCo renderer. Despite the fact that training images were not photorealistic, the trained network performs well on real world data thanks to a high degree of randomization of textures, lighting and other visual parameters in training. This approach, called domain randomization, is described in more detail in Tobin et al. (2017).
At the beginning of each episode we initialize a simulated environment using the box position predicted by the CNN and robot state coming from the physical robot. From this point we run the policy in the simulator. After each timestep we send the simulated robot joint angles to the real one which is position-controlled and uses the simulated data as targets.
# ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices
# Xiangyu Zhang* Xinyu Zhou* Mengxiao Lin Jian Sun
# Megvii Inc (Face++) {zhangxiangyu,zxy,linmengxiao,sunjian}@megvii.com
# Abstract
We introduce an extremely computation-efficient CNN architecture named ShuffleNet, which is designed specially for mobile devices with very limited computing power (e.g., 10-150 MFLOPs). The new architecture utilizes two new operations, pointwise group convolution and channel shuffle, to greatly reduce computation cost while maintaining accuracy. Experiments on ImageNet classification and MS COCO object detection demonstrate the superior performance of ShuffleNet over other structures, e.g. lower top-1 error (absolute 7.8%) than recent MobileNet [12] on the ImageNet classification task, under the computation budget of 40 MFLOPs. On an ARM-based mobile device, ShuffleNet achieves ∼13× actual speedup over AlexNet while maintaining comparable accuracy.
# 1. Introduction

Building deeper and larger convolutional neural networks (CNNs) is a primary trend for solving major visual recognition tasks [21, 9, 33, 5, 28, 24]. The most accurate CNNs usually have hundreds of layers and thousands of channels [9, 34, 32, 40], thus requiring computation at billions of FLOPs. This report examines the opposite extreme: pursuing the best accuracy in very limited computational budgets at tens or hundreds of MFLOPs, focusing on common mobile platforms such as drones, robots, and smartphones. Note that many existing works [16, 22, 43, 42, 38, 27] focus on pruning, compressing, or low-bit representing a "basic" network architecture. Here we aim to explore a highly efficient basic architecture specially designed for our desired computing ranges.

We notice that state-of-the-art basic architectures such as Xception [3] and ResNeXt [40] become less efficient in extremely small networks because of the costly dense 1 × 1 convolutions. We propose using pointwise group convolutions to reduce computation complexity of 1 × 1 convolutions. To overcome the side effects brought by group convolutions, we come up with a novel channel shuffle operation to help the information flow across feature channels. Based on the two techniques, we build a highly efficient architecture called ShuffleNet. Compared with popular structures like [30, 9, 40], for a given computation complexity budget, our ShuffleNet allows more feature map channels, which helps to encode more information and is especially critical to the performance of very small networks.

We evaluate our models on the challenging ImageNet classification [4, 29] and MS COCO object detection [23] tasks. A series of controlled experiments shows the effectiveness of our design principles and the better performance over other structures. Compared with the state-of-the-art architecture MobileNet [12], ShuffleNet achieves superior performance by a significant margin, e.g. absolute 7.8% lower ImageNet top-1 error at the level of 40 MFLOPs.

We also examine the speedup on real hardware, i.e. an off-the-shelf ARM-based computing core. The ShuffleNet model achieves ∼13× actual speedup (theoretical speedup is 18×) over AlexNet [21] while maintaining comparable accuracy.
# 2. Related Work
Efï¬cient Model Designs The last few years have seen the success of deep neural networks in computer vision tasks [21, 36, 28], in which model designs play an im- portant role. The increasing needs of running high qual- ity deep neural networks on embedded devices encour- age the study on efï¬cient model designs [8]. For ex- ample, GoogLeNet [33] increases the depth of networks with much lower complexity compared to simply stack- ing convolution layers. SqueezeNet [14] reduces parame- ters and computation signiï¬cantly while maintaining accu- racy. ResNet [9, 10] utilizes the efï¬cient bottleneck struc- ture to achieve impressive performance. SENet [13] in- troduces an architectural unit that boosts performance at slight computation cost. Concurrent with us, a very re-
* Equal contribution.
[Figure 1: illustration of channel shuffle with two stacked group convolutions, panels (a)-(c); labels: Channels, Input, GConv1, Feature, Channel Shuffle, GConv2, Output]
Figure 1. Channel shuffle with two stacked group convolutions. GConv stands for group convolution. a) two stacked convolution layers with the same number of groups. Each output channel only relates to the input channels within the group. No cross talk; b) input and output channels are fully related when GConv2 takes data from different groups after GConv1; c) an equivalent implementation to b) using channel shuffle.
cent work [46] employs reinforcement learning and model search to explore efï¬cient model designs. The proposed mobile NASNet model achieves comparable performance with our counterpart Shufï¬eNet model (26.0% @ 564 MFLOPs vs. 26.3% @ 524 MFLOPs for ImageNet clas- siï¬cation error). But [46] do not report results on extremely tiny models (e.g. complexity less than 150 MFLOPs), nor evaluate the actual inference time on mobile devices.
Model Acceleration This direction aims to accelerate in- ference while preserving accuracy of a pre-trained model. Pruning network connections [6, 7] or channels [38] re- duces redundant connections in a pre-trained model while maintaining performance. Quantization [31, 27, 39, 45, 44] and factorization [22, 16, 18, 37] are proposed in litera- ture to reduce redundancy in calculations to speed up in- ference. Without modifying the parameters, optimized con- volution algorithms implemented by FFT [25, 35] and other methods [2] decrease time consumption in practice. Distill- ing [11] transfers knowledge from large models into small ones, which makes training small models easier.
Group Convolution The concept of group convolution, which was ï¬rst introduced in AlexNet [21] for distribut- ing the model over two GPUs, has been well demon- strated its effectiveness in ResNeXt [40]. Depthwise sep- arable convolution proposed in Xception [3] generalizes the ideas of separable convolutions in Inception series [34, 32]. Recently, MobileNet [12] utilizes the depthwise separa- ble convolutions and gains state-of-the-art results among lightweight models. Our work generalizes group convolu- tion and depthwise separable convolution in a novel form.
Channel Shufï¬e Operation To the best of our knowl- edge, the idea of channel shufï¬e operation is rarely men- tioned in previous work on efï¬cient model design, although CNN library cuda-convnet [20] supports ârandom sparse convolutionâ layer, which is equivalent to random channel shufï¬e followed by a group convolutional layer. Such âran- dom shufï¬eâ operation has different purpose and been sel- dom exploited later. Very recently, another concurrent work [41] also adopt this idea for a two-stage convolution. How- ever, [41] did not specially investigate the effectiveness of channel shufï¬e itself and its usage in tiny model design.
# 3. Approach
# 3.1. Channel Shuffle for Group Convolutions
Modern convolutional neural networks [30, 33, 34, 32, 9, 10] usually consist of repeated building blocks with the same structure. Among them, state-of-the-art networks such as Xception [3] and ResNeXt [40] introduce efï¬cient depthwise separable convolutions or group convolutions into the building blocks to strike an excellent trade-off between representation capability and computational cost. However, we notice that both designs do not fully take the 1 à 1 convolutions (also called pointwise convolutions in [12]) into account, which require considerable complex- ity. For example, in ResNeXt [40] only 3 à 3 layers are equipped with group convolutions. As a result, for each residual unit in ResNeXt the pointwise convolutions occupy 93.4% multiplication-adds (cardinality = 32 as suggested in [40]). In tiny networks, expensive pointwise convolutions result in limited number of channels to meet the complexity constraint, which might signiï¬cantly damage the accuracy. To address the issue, a straightforward solution is to ap-
[Figure 2: ShuffleNet unit diagrams, panels (a)-(c); blocks: 1x1 Conv / 1x1 GConv, BN ReLU, Channel Shuffle, 3x3 DWConv (stride = 2 in (c)), 3x3 AVG Pool, Add / Concat]
Figure 2. ShuffleNet Units. a) bottleneck unit [9] with depthwise convolution (DWConv) [3, 12]; b) ShuffleNet unit with pointwise group convolution (GConv) and channel shuffle; c) ShuffleNet unit with stride = 2.
ply channel sparse connections, for example group convo- lutions, also on 1 à 1 layers. By ensuring that each con- volution operates only on the corresponding input channel group, group convolution signiï¬cantly reduces computation cost. However, if multiple group convolutions stack to- gether, there is one side effect: outputs from a certain chan- nel are only derived from a small fraction of input channels. Fig 1 (a) illustrates a situation of two stacked group convo- lution layers. It is clear that outputs from a certain group only relate to the inputs within the group. This property blocks information ï¬ow between channel groups and weak- ens representation.
If we allow group convolution to obtain input data from different groups (as shown in Fig 1 (b)), the input and out- put channels will be fully related. Speciï¬cally, for the fea- ture map generated from the previous group layer, we can ï¬rst divide the channels in each group into several sub- groups, then feed each group in the next layer with differ- ent subgroups. This can be efï¬ciently and elegantly im- plemented by a channel shufï¬e operation (Fig 1 (c)): sup- pose a convolutional layer with g groups whose output has g à n channels; we ï¬rst reshape the output channel dimen- sion into (g, n), transposing and then ï¬attening it back as the input of next layer. Note that the operation still takes effect even if the two convolutions have different numbers of groups. Moreover, channel shufï¬e is also differentiable, which means it can be embedded into network structures for end-to-end training.
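The reshape-transpose-flatten procedure described above can be written in a few lines; the following is a NumPy sketch for a batch of feature maps (a real implementation would apply the same index manipulation to GPU tensors):

```python
import numpy as np

def channel_shuffle(x, groups):
    """Shuffle the channel dimension of an (N, C, H, W) array:
    reshape C into (groups, C // groups), transpose, and flatten back."""
    n, c, h, w = x.shape
    assert c % groups == 0
    x = x.reshape(n, groups, c // groups, h, w)
    x = x.transpose(0, 2, 1, 3, 4)   # swap the group axis and the per-group channel axis
    return x.reshape(n, c, h, w)
```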
Channel shufï¬e operation makes it possible to build more powerful structures with multiple group convolutional layers. In the next subsection we will introduce an efï¬cient network unit with channel shufï¬e and group convolution.
# 3.2. ShuffleNet Unit
Taking advantage of the channel shufï¬e operation, we propose a novel Shufï¬eNet unit specially designed for small networks. We start from the design principle of bottleneck unit [9] in Fig 2 (a). It is a residual block. In its residual branch, for the 3 à 3 layer, we apply a computational eco- nomical 3 à 3 depthwise convolution [3] on the bottleneck feature map. Then, we replace the ï¬rst 1 à 1 layer with pointwise group convolution followed by a channel shufï¬e operation, to form a Shufï¬eNet unit, as shown in Fig 2 (b). The purpose of the second pointwise group convolution is to recover the channel dimension to match the shortcut path. For simplicity, we do not apply an extra channel shufï¬e op- eration after the second pointwise layer as it results in com- parable scores. The usage of batch normalization (BN) [15] and nonlinearity is similar to [9, 40], except that we do not use ReLU after depthwise convolution as suggested by [3]. As for the case where Shufï¬eNet is applied with stride, we simply make two modiï¬cations (see Fig 2 (c)): (i) add a 3 à 3 average pooling on the shortcut path; (ii) replace the element-wise addition with channel concatenation, which makes it easy to enlarge channel dimension with little extra computation cost.
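A sketch of such a unit in PyTorch (our illustrative rendering of Fig. 2(b)/(c), not the authors' implementation; channel counts are assumed to be divisible by the group number, and the placement of the final ReLU follows common residual-block practice):

```python
import torch
import torch.nn as nn

class ShuffleNetUnit(nn.Module):
    """1x1 group conv -> channel shuffle -> 3x3 depthwise conv -> 1x1 group conv,
    with an identity shortcut (stride 1) or 3x3 average-pool + concat (stride 2)."""

    def __init__(self, in_ch, out_ch, groups=3, stride=1):
        super().__init__()
        self.stride, self.groups = stride, groups
        mid = out_ch // 4                                   # bottleneck = 1/4 of output channels
        branch_out = out_ch - in_ch if stride == 2 else out_ch
        self.gconv1 = nn.Sequential(
            nn.Conv2d(in_ch, mid, 1, groups=groups, bias=False),
            nn.BatchNorm2d(mid), nn.ReLU(inplace=True))
        self.dwconv = nn.Sequential(                        # depthwise 3x3, no ReLU after it
            nn.Conv2d(mid, mid, 3, stride=stride, padding=1, groups=mid, bias=False),
            nn.BatchNorm2d(mid))
        self.gconv2 = nn.Sequential(
            nn.Conv2d(mid, branch_out, 1, groups=groups, bias=False),
            nn.BatchNorm2d(branch_out))
        self.shortcut = nn.AvgPool2d(3, stride=2, padding=1) if stride == 2 else None

    def forward(self, x):
        out = self.gconv1(x)
        n, c, h, w = out.shape                              # channel shuffle
        out = out.view(n, self.groups, c // self.groups, h, w).transpose(1, 2).reshape(n, c, h, w)
        out = self.gconv2(self.dwconv(out))
        if self.stride == 2:                                # concat enlarges channels cheaply
            return torch.relu(torch.cat([self.shortcut(x), out], dim=1))
        return torch.relu(x + out)                          # residual add when stride = 1
```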
Thanks to pointwise group convolution with channel shuffle, all components in the ShuffleNet unit can be computed efficiently. Compared with ResNet [9] (bottleneck design) and ResNeXt [40], our structure has less complexity under the same settings. For example, given the input size c × h × w and the bottleneck channels m, a ResNet unit requires hw(2cm + 9m²) FLOPs and ResNeXt requires hw(2cm + 9m²/g) FLOPs, while our ShuffleNet unit requires only hw(2cm/g + 9m) FLOPs, where g means the number of groups for convolutions.
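To make the comparison concrete, the three expressions can be evaluated directly; the sample numbers in the comment below are illustrative, not taken from the paper:

```python
def unit_flops(c, h, w, m, g):
    """Multiply-add counts of one unit for input c x h x w and bottleneck width m,
    following the three expressions above."""
    resnet  = h * w * (2 * c * m + 9 * m * m)
    resnext = h * w * (2 * c * m + 9 * m * m / g)
    shuffle = h * w * (2 * c * m / g + 9 * m)
    return resnet, resnext, shuffle

# e.g. unit_flops(c=240, h=28, w=28, m=60, g=3) shows how much cheaper the
# ShuffleNet unit is, which is what allows wider feature maps at equal cost.
```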
| Layer | Output size | KSize | Stride | Repeat | g = 1 | g = 2 | g = 3 | g = 4 | g = 8 |
|---|---|---|---|---|---|---|---|---|---|
| Image | 224 × 224 | | | | 3 | 3 | 3 | 3 | 3 |
| Conv1 | 112 × 112 | 3 × 3 | 2 | 1 | 24 | 24 | 24 | 24 | 24 |
| MaxPool | 56 × 56 | 3 × 3 | 2 | | | | | | |
| Stage2 | 28 × 28 | | 2 | 1 | 144 | 200 | 240 | 272 | 384 |
| | 28 × 28 | | 1 | 3 | 144 | 200 | 240 | 272 | 384 |
| Stage3 | 14 × 14 | | 2 | 1 | 288 | 400 | 480 | 544 | 768 |
| | 14 × 14 | | 1 | 7 | 288 | 400 | 480 | 544 | 768 |
| Stage4 | 7 × 7 | | 2 | 1 | 576 | 800 | 960 | 1088 | 1536 |
| | 7 × 7 | | 1 | 3 | 576 | 800 | 960 | 1088 | 1536 |
| GlobalPool | 1 × 1 | 7 × 7 | | | | | | | |
| FC | | | | | 1000 | 1000 | 1000 | 1000 | 1000 |
| Complexity | | | | | 143M | 140M | 137M | 133M | 137M |

Table 1. ShuffleNet architecture. The last five columns give the output channels for g groups. The complexity is evaluated with FLOPs, i.e. the number of floating-point multiplication-adds. Note that for Stage 2, we do not apply group convolution on the first pointwise layer because the number of input channels is relatively small.
| Model | Complexity (MFLOPs) | g = 1 | g = 2 | g = 3 | g = 4 | g = 8 |
|---|---|---|---|---|---|---|
| ShuffleNet 1× | 140 | 33.6 | 32.7 | 32.6 | 32.8 | 32.4 |
| ShuffleNet 0.5× | 38 | 45.1 | 44.4 | 43.2 | 41.6 | 42.3 |
| ShuffleNet 0.25× | 13 | 57.1 | 56.8 | 55.0 | 54.2 | 52.7 |

Table 2. Classification error (%) vs. number of groups g (smaller numbers represent better performance)
In other words, given a computational budget, ShuffleNet can use wider feature maps. We find this is critical for small networks, as tiny networks usually have an insufficient number of channels to process the information.
In addition, in Shufï¬eNet depthwise convolution only performs on bottleneck feature maps. Even though depth- wise convolution usually has very low theoretical complex- ity, we ï¬nd it difï¬cult to efï¬ciently implement on low- power mobile devices, which may result from a worse com- putation/memory access ratio compared with other dense operations. Such drawback is also referred in [3], which has a runtime library based on TensorFlow [1]. In Shufï¬eNet units, we intentionally use depthwise convolution only on bottleneck in order to prevent overhead as much as possi- ble.
# 3.3. Network Architecture
Built on Shufï¬eNet units, we present the overall Shuf- ï¬eNet architecture in Table 1. The proposed network is mainly composed of a stack of Shufï¬eNet units grouped into three stages. The ï¬rst building block in each stage is ap- plied with stride = 2. Other hyper-parameters within a stage stay the same, and for the next stage the output channels are doubled. Similar to [9], we set the number of bottleneck channels to 1/4 of the output channels for each Shufï¬eNet
unit. Our intent is to provide a reference design as simple as possible, although we ï¬nd that further hyper-parameter tunning might generate better results.
In Shufï¬eNet units, group number g controls the connec- tion sparsity of pointwise convolutions. Table 1 explores different group numbers and we adapt the output chan- nels to ensure overall computation cost roughly unchanged (â¼140 MFLOPs). Obviously, larger group numbers result in more output channels (thus more convolutional ï¬lters) for a given complexity constraint, which helps to encode more information, though it might also lead to degradation for an individual convolutional ï¬lter due to limited corresponding input channels. In Sec 4.1.1 we will study the impact of this number subject to different computational constrains.
To customize the network to a desired complexity, we can simply apply a scale factor s on the number of chan- nels. For example, we denote the networks in Table 1 as âShufï¬eNet 1Ãâ, then âShufï¬eNet sÃâ means scaling the number of ï¬lters in Shufï¬eNet 1à by s times thus overall complexity will be roughly s2 times of Shufï¬eNet 1Ã.
# 4. Experiments
We mainly evaluate our models on the ImageNet 2012 classification dataset [29, 4]. We follow most of the training settings and hyper-parameters used in [40], with two exceptions: (i) we set the weight decay to 4e-5 instead of 1e-4 and use a linear-decay learning rate policy (decreased from 0.5 to 0); (ii) we use slightly less aggressive scale augmentation for data preprocessing.
| Model | Cls err. (%, no shuffle) | Cls err. (%, shuffle) | Δ err. (%) |
|---|---|---|---|
| ShuffleNet 1× (g = 3) | 34.5 | 32.6 | 1.9 |
| ShuffleNet 1× (g = 8) | 37.6 | 32.4 | 5.2 |
| ShuffleNet 0.5× (g = 3) | 45.7 | 43.2 | 2.5 |
| ShuffleNet 0.5× (g = 8) | 48.1 | 42.3 | 5.8 |
| ShuffleNet 0.25× (g = 3) | 56.3 | 55.0 | 1.3 |
| ShuffleNet 0.25× (g = 8) | 56.5 | 52.7 | 3.8 |

Table 3. ShuffleNet with/without channel shuffle (smaller numbers represent better performance)
Similar modifications are also referenced in [12] because such small networks usually suffer from underfitting rather than overfitting. It takes 1 or 2 days to train a model for 3×10⁵ iterations on 4 GPUs, whose batch size is set to 1024. To benchmark, we compare single crop top-1 performance on the ImageNet validation set, i.e. cropping a 224 × 224 center view from a 256× input image and evaluating classification accuracy. We use exactly the same settings for all models to ensure fair comparisons.
# 4.1. Ablation Study

The core idea of ShuffleNet lies in pointwise group convolution and the channel shuffle operation. In this subsection we evaluate them respectively.

# 4.1.1 Pointwise Group Convolutions

To evaluate the importance of pointwise group convolutions, we compare ShuffleNet models of the same complexity whose numbers of groups range from 1 to 8. If the group number equals 1, no pointwise group convolution is involved and then the ShuffleNet unit becomes an "Xception-like" [3] structure. For better understanding, we also scale the width of the networks to 3 different complexities and compare their classification performance respectively. Results are shown in Table 2.

From the results, we see that models with group convolutions (g > 1) consistently perform better than the counterparts without pointwise group convolutions (g = 1). Smaller models tend to benefit more from groups. For example, for ShuffleNet 1× the best entry (g = 8) is 1.2% better than the counterpart, while for ShuffleNet 0.5× and 0.25× the gaps become 3.5% and 4.4% respectively. Note that group convolution allows more feature map channels for a given complexity constraint, so we hypothesize that the performance gain comes from wider feature maps which help to encode more information. In addition, a smaller network involves thinner feature maps, meaning it benefits more from enlarged feature maps.

Table 2 also shows that for some models (e.g. ShuffleNet 0.5×), when group numbers become relatively large (e.g. g = 8), the classification score saturates or even drops. With an increase in group number (thus wider feature maps), input channels for each convolutional filter become fewer, which may harm representation capability. Interestingly, we also notice that for smaller models such as ShuffleNet 0.25×, larger group numbers tend to yield better results consistently, which suggests wider feature maps bring more benefits for smaller models.

# 4.1.2 Channel Shuffle vs. No Shuffle
The purpose of shufï¬e operation is to enable cross-group information ï¬ow for multiple group convolution layers. Ta- ble 3 compares the performance of Shufï¬eNet structures (group number is set to 3 or 8 for instance) with/without channel shufï¬e. The evaluations are performed under three different scales of complexity. It is clear that channel shuf- ï¬e consistently boosts classiï¬cation scores for different set- tings. Especially, when group number is relatively large (e.g. g = 8), models with channel shufï¬e outperform the counterparts by a signiï¬cant margin, which shows the im- portance of cross-group information interchange.
# 4.2. Comparison with Other Structure Units
Recent leading convolutional units in VGG [30], ResNet [9], GoogleNet [33], ResNeXt [40] and Xception [3] have pursued state-of-the-art results with large models (e.g. ≥ 1 GFLOPs), but do not fully explore low-complexity conditions. In this section we survey a variety of building blocks and make comparisons with ShuffleNet under the same complexity constraint.
For fair comparison, we use the overall network architec- ture as shown in Table 1. We replace the Shufï¬eNet units in Stage 2-4 with other structures, then adapt the number of channels to ensure the complexity remains unchanged. The structures we explored include:
• VGG-like. Following the design principle of VGG net [30], we use a two-layer 3×3 convolution as the basic building block. Different from [30], we add a Batch Normalization layer [15] after each of the convolutions to make end-to-end training easier.
| Complexity (MFLOPs) | VGG-like | ResNet | Xception-like | ResNeXt | ShuffleNet (ours) |
|---|---|---|---|---|---|
| 140 | 50.7 | 37.3 | 33.6 | 33.3 | 32.4 (1×, g = 8) |
| 38 | - | 48.8 | 45.1 | 46.0 | 41.6 (0.5×, g = 4) |
| 13 | - | 63.7 | 57.1 | 65.2 | 52.7 (0.25×, g = 8) |

Table 4. Classification error vs. various structures (%, smaller numbers represent better performance). We do not report the VGG-like structure on smaller networks because the accuracy is significantly worse.
| Model | Complexity (MFLOPs) | Cls err. (%) | Δ err. (%) |
|---|---|---|---|
| 1.0 MobileNet-224 | 569 | 29.4 | - |
| ShuffleNet 2× (g = 3) | 524 | 26.3 | 3.1 |
| ShuffleNet 2× (with SE [13], g = 3) | 527 | 24.7 | 4.7 |
| 0.75 MobileNet-224 | 325 | 31.6 | - |
| ShuffleNet 1.5× (g = 3) | 292 | 28.5 | 3.1 |
| 0.5 MobileNet-224 | 149 | 36.3 | - |
| ShuffleNet 1× (g = 8) | 140 | 32.4 | 3.9 |
| 0.25 MobileNet-224 | 41 | 49.4 | - |
| ShuffleNet 0.5× (g = 4) | 38 | 41.6 | 7.8 |
| ShuffleNet 0.5× (shallow, g = 3) | 40 | 42.8 | 6.6 |

Table 5. ShuffleNet vs. MobileNet [12] on ImageNet classification
• ResNet. We adopt the "bottleneck" design in our experiment, which has been demonstrated to be more efficient in [9]. Same as [9], the bottleneck ratio¹ is also 1 : 4.
the increase of accuracy. Since the efï¬cient design of Shuf- ï¬eNet, we can use more channels for a given computation budget, thus usually resulting in better performance.
• Xception-like. The original structure proposed in [3] involves fancy designs or hyper-parameters for different stages, which we find difficult for fair comparison on small models. Instead, we remove the pointwise group convolutions and channel shuffle operation from ShuffleNet (also equivalent to ShuffleNet with g = 1). The derived structure shares the same idea of "depthwise separable convolution" as in [3], which is called an Xception-like structure here.
• ResNeXt. We use the settings of cardinality = 16 and bottleneck ratio = 1 : 2 as suggested in [40]. We also explore other settings, e.g. bottleneck ratio = 1 : 4, and get similar results.
We do not include GoogleNet or Inception series [33, 34, 32]. We find it non-trivial to generalize such Inception structures to small networks because the original design of the Inception module involves too many hyper-parameters. As a reference, the first GoogleNet version [33] has 31.3% top-1 error at the cost of 1.5 GFLOPs (see Table 6). More sophisticated Inception versions [34, 32] are more accurate, however, they involve significantly increased complexity. Recently, Kim et al. proposed a lightweight network structure named PVANET [19] which adopts Inception units. Our reimplemented PVANET (with 224×224 input size) has 29.7% classification error with a computation complexity of 557 MFLOPs, while our ShuffleNet 2× model (g = 3) gets 26.3% with 524 MFLOPs (see Table 6).
We use exactly the same settings to train these models. Results are shown in Table 4. Our Shufï¬eNet models out- perform most others by a signiï¬cant margin under different complexities. Interestingly, we ï¬nd an empirical relation- ship between feature map channels and classiï¬cation accu- racy. For example, under the complexity of 38 MFLOPs, output channels of Stage 4 (see Table 1) for VGG-like, ResNet, ResNeXt, Xception-like, Shufï¬eNet models are 50, 192, 192, 288, 576 respectively, which is consistent with
1In the bottleneck-like units (like ResNet, ResNeXt or Shufï¬eNet) bot- tleneck ratio implies the ratio of bottleneck channels to output channels. For example, bottleneck ratio = 1 : 4 means the output feature map is 4 times the width of the bottleneck feature map.
# 4.3. Comparison with MobileNets and Other Frameworks
Recently Howard et al. have proposed MobileNets [12] which mainly focus on efï¬cient network architecture for mobile devices. MobileNet takes the idea of depthwise sep- arable convolution from [3] and achieves state-of-the-art results on small models.
Table 5 compares classiï¬cation scores under a variety of complexity levels. It is clear that our Shufï¬eNet models are superior to MobileNet for all the complexities. Though our Shufï¬eNet network is specially designed for small models (< 150 MFLOPs), we ï¬nd it is still better than MobileNet
| Model | Cls err. (%) | Complexity (MFLOPs) |
|---|---|---|
| VGG-16 [30] | 28.5 | 15300 |
| ShuffleNet 2× (g = 3) | 26.3 | 524 |
| GoogleNet [33]* | 31.3 | 1500 |
| ShuffleNet 1× (g = 8) | 32.4 | 140 |
| AlexNet [21] | 42.8 | 720 |
| SqueezeNet [14] | 42.5 | 833 |
| ShuffleNet 0.5× (g = 4) | 41.6 | 38 |

Table 6. Complexity comparison. *Implemented by BVLC (https://github.com/BVLC/caffe/tree/master/models/bvlc_googlenet)
| Model | mAP [.5, .95] (300× image) | mAP [.5, .95] (600× image) |
|---|---|---|
| ShuffleNet 2× (g = 3) | 18.7% | 25.0% |
| ShuffleNet 1× (g = 3) | 14.5% | 19.8% |
| 1.0 MobileNet-224 [12] | 16.4% | 19.8% |
| 1.0 MobileNet-224 (our impl.) | 14.9% | 19.3% |

Table 7. Object detection results on MS COCO (larger numbers represent better performance). For MobileNets we compare two results: 1) COCO detection scores reported by [12]; 2) finetuning from our reimplemented MobileNets, whose training and finetuning settings are exactly the same as those for ShuffleNets.
| Model | Cls err. (%) | FLOPs | 224 × 224 | 480 × 640 | 720 × 1280 |
|---|---|---|---|---|---|
| ShuffleNet 0.5× (g = 3) | 43.2 | 38M | 15.2ms | 87.4ms | 260.1ms |
| ShuffleNet 1× (g = 3) | 32.6 | 140M | 37.8ms | 222.2ms | 684.5ms |
| ShuffleNet 2× (g = 3) | 26.3 | 524M | 108.8ms | 617.0ms | 1857.6ms |
| AlexNet [21] | 42.8 | 720M | 184.0ms | 1156.7ms | 3633.9ms |
| 1.0 MobileNet-224 [12] | 29.4 | 569M | 110.0ms | 612.0ms | 1879.2ms |

Table 8. Actual inference time on a mobile device (smaller numbers represent better performance). The platform is based on a single Qualcomm Snapdragon 820 processor. All results are evaluated with a single thread.
for higher computation cost, e.g. 3.1% more accurate than MobileNet 1à at the cost of 500 MFLOPs. For smaller networks (â¼40 MFLOPs) Shufï¬eNet surpasses MobileNet by 7.8%. Note that our Shufï¬eNet architecture contains 50 layers while MobileNet only has 28 layers. For better un- derstanding, we also try Shufï¬eNet on a 26-layer architec- ture by removing half of the blocks in Stage 2-4 (see âShuf- ï¬eNet 0.5à shallow (g = 3)â in Table 5). Results show that the shallower model is still signiï¬cantly better than the cor- responding MobileNet, which implies that the effectiveness of Shufï¬eNet mainly results from its efï¬cient structure, not the depth.
Table 6 compares our ShuffleNet with a few popular models. Results show that with similar accuracy ShuffleNet is much more efficient than others. For example, ShuffleNet 0.5× is theoretically 18× faster than AlexNet [21] with comparable classification score. We will evaluate the actual running time in Sec 4.5.

It is also worth noting that the simple architecture design makes it easy to equip ShuffleNets with the latest advances such as [13, 26]. For example, in [13] the authors propose Squeeze-and-Excitation (SE) blocks which achieve state-of-the-art results on large ImageNet models. We find SE modules also take effect in combination with the backbone ShuffleNets, for instance, boosting the top-1 error of ShuffleNet 2× to 24.7% (shown in Table 5). Interestingly, despite a negligible increase of theoretical complexity, we find ShuffleNets with SE modules are usually 25 ∼ 40% slower than the "raw" ShuffleNets on mobile devices, which implies that actual speedup evaluation is critical in low-cost architecture design. In Sec 4.5 we will make further discussion.

# 4.4. Generalization Ability
To evaluate the generalization ability for transfer learn- ing, we test our Shufï¬eNet model on the task of MS COCO object detection [23]. We adopt Faster-RCNN [28] as the detection framework and use the publicly released Caffe code [28, 17] for training with default settings. Similar to [12], the models are trained on the COCO train+val dataset excluding 5000 minival images and we conduct testing on the minival set. Table 7 shows the comparison of results trained and evaluated on two input resolutions. Comparing Shufï¬eNet 2à with MobileNet whose complexity are com-
parable (524 vs. 569 MFLOPs), our Shufï¬eNet 2à sur- passes MobileNet by a signiï¬cant margin on both resolu- tions; our Shufï¬eNet 1à also achieves comparable results with MobileNet on 600à resolution, but has â¼4à com- plexity reduction. We conjecture that this signiï¬cant gain is partly due to Shufï¬eNetâs simple design of architecture without bells and whistles.
# 4.5. Actual Speedup Evaluation
Finally, we evaluate the actual inference speed of Shuf- ï¬eNet models on a mobile device with an ARM platform. Though Shufï¬eNets with larger group numbers (e.g. g = 4 or g = 8) usually have better performance, we ï¬nd it less efï¬cient in our current implementation. Empirically g = 3 usually has a proper trade-off between accuracy and actual inference time. As shown in Table 8, three input resolutions are exploited for the test. Due to memory access and other overheads, we ï¬nd every 4à theoretical complexity reduc- tion usually results in â¼2.6à actual speedup in our im- plementation. Nevertheless, compared with AlexNet [21] our Shufï¬eNet 0.5à model still achieves â¼13à actual speedup under comparable classiï¬cation accuracy (the the- oretical speedup is 18Ã), which is much faster than previ- ous AlexNet-level models or speedup approaches such as [14, 16, 22, 42, 43, 38].
# References
[1] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, et al. Tensorï¬ow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016. 4
[2] H. Bagherinezhad, M. Rastegari, and A. Farhadi. Lcnn: Lookup-based convolutional neural network. arXiv preprint arXiv:1611.06473, 2016. 2
[3] F. Chollet. Xception: Deep learning with depthwise separa- ble convolutions. arXiv preprint arXiv:1610.02357, 2016. 1, 2, 3, 4, 5, 6
[4] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei- Fei. Imagenet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pages 248â255. IEEE, 2009. 1, 4
[5] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich fea- ture hierarchies for accurate object detection and semantic In Proceedings of the IEEE conference on segmentation. computer vision and pattern recognition, pages 580â587, 2014. 1
Deep compres- sion: Compressing deep neural networks with pruning, trained quantization and huffman coding. arXiv preprint arXiv:1510.00149, 2015. 2
[7] S. Han, J. Pool, J. Tran, and W. Dally. Learning both weights and connections for efï¬cient neural network. In Advances in
Neural Information Processing Systems, pages 1135â1143, 2015. 2
[8] K. He and J. Sun. Convolutional neural networks at con- strained time cost. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5353â 5360, 2015. 1
[9] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learn- ing for image recognition. In Proceedings of the IEEE Con- ference on Computer Vision and Pattern Recognition, pages 770â778, 2016. 1, 2, 3, 4, 5, 6
[10] K. He, X. Zhang, S. Ren, and J. Sun. Identity mappings in deep residual networks. In European Conference on Com- puter Vision, pages 630â645. Springer, 2016. 1, 2
[11] G. Hinton, O. Vinyals, and J. Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015. 2
[12] A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam. Mobilenets: Efï¬- cient convolutional neural networks for mobile vision appli- cations. arXiv preprint arXiv:1704.04861, 2017. 1, 2, 3, 5, 6, 7
[13] J. Hu, L. Shen, and G. Sun. Squeeze-and-excitation networks. arXiv preprint arXiv:1709.01507, 2017. 1, 6, 7
[14] F. N. Iandola, S. Han, M. W. Moskewicz, K. Ashraf, W. J. Dally, and K. Keutzer. Squeezenet: Alexnet-level accuracy with 50x fewer parameters and <0.5 MB model size. arXiv preprint arXiv:1602.07360, 2016. 1, 7, 8
[15] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015. 3, 5
[16] M. Jaderberg, A. Vedaldi, and A. Zisserman. Speeding up convolutional neural networks with low rank expansions. arXiv preprint arXiv:1405.3866, 2014. 1, 2, 8
[17] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Gir- shick, S. Guadarrama, and T. Darrell. Caffe: Convolu- tional architecture for fast feature embedding. In Proceed- ings of the 22nd ACM international conference on Multime- dia, pages 675â678. ACM, 2014. 7
[18] J. Jin, A. Dundar, and E. Culurciello. Flattened convolutional neural networks for feedforward acceleration. arXiv preprint arXiv:1412.5474, 2014. 2
[19] K.-H. Kim, S. Hong, B. Roh, Y. Cheon, and M. Park. Pvanet: Deep but lightweight neural networks for real-time object de- tection. arXiv preprint arXiv:1608.08021, 2016. 6
[20] A. Krizhevsky. cuda-convnet: High-performance c++/cuda implementation of convolutional neural networks, 2012. 2
[21] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097-1105, 2012. 1, 2, 7, 8
[22] V. Lebedev, Y. Ganin, M. Rakhuba, I. Oseledets, and V. Lempitsky. Speeding-up convolutional neural networks using fine-tuned cp-decomposition. arXiv preprint arXiv:1412.6553, 2014. 1, 2, 8
[23] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ra- manan, P. Doll´ar, and C. L. Zitnick. Microsoft coco: Com- mon objects in context. In European Conference on Com- puter Vision, pages 740â755. Springer, 2014. 1, 7
[24] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recogni- tion, pages 3431â3440, 2015. 1
[25] M. Mathieu, M. Henaff, and Y. LeCun. of convolutional networks through ffts. arXiv:1312.5851, 2013. 2 Fast training arXiv preprint
[26] P. Ramachandran, B. Zoph, and Q. V. Le. Swish: a self-gated activation function. arXiv preprint arXiv:1710.05941, 2017. 7
[27] M. Rastegari, V. Ordonez, J. Redmon, and A. Farhadi. Xnor- net: Imagenet classiï¬cation using binary convolutional neu- ral networks. In European Conference on Computer Vision, pages 525â542. Springer, 2016. 1, 2
[28] S. Ren, K. He, R. Girshick, and J. Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. In Advances in neural information processing systems, pages 91â99, 2015. 1, 7
[29] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al. Imagenet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211â252, 2015. 1, 4
[30] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014. 1, 2, 5, 7
[31] D. Soudry, I. Hubara, and R. Meir. Expectation backpropa- gation: Parameter-free training of multilayer neural networks with continuous or discrete weights. In Advances in Neural Information Processing Systems, pages 963â971, 2014. 2
[32] C. Szegedy, S. Ioffe, V. Vanhoucke, and A. Alemi. Inception- v4, inception-resnet and the impact of residual connections on learning. arXiv preprint arXiv:1602.07261, 2016. 1, 2, 6 [33] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1â9, 2015. 1, 2, 5, 6, 7
[34] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2818-2826, 2016. 1, 2, 6
[35] N. Vasilache, J. Johnson, M. Mathieu, S. Chintala, S. Piantino, and Y. LeCun. Fast convolutional nets with fbfft: A GPU performance evaluation. arXiv preprint arXiv:1412.7580, 2014. 2
[36] O. Vinyals, A. Toshev, S. Bengio, and D. Erhan. Show and tell: A neural image caption generator. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recogni- tion, pages 3156â3164, 2015. 1
[37] M. Wang, B. Liu, and H. Foroosh. Design of efï¬cient convolutional layers using single intra-channel convolution, topological subdivisioning and spatial âbottleneckâ struc- ture. arXiv preprint arXiv:1608.04337, 2016. 2
[38] W. Wen, C. Wu, Y. Wang, Y. Chen, and H. Li. Learning structured sparsity in deep neural networks. In Advances in Neural Information Processing Systems, pages 2074â2082, 2016. 1, 2, 8
[39] J. Wu, C. Leng, Y. Wang, Q. Hu, and J. Cheng. Quantized In Pro- convolutional neural networks for mobile devices. ceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4820â4828, 2016. 2
[40] S. Xie, R. Girshick, P. Doll´ar, Z. Tu, and K. He. Aggregated residual transformations for deep neural networks. arXiv preprint arXiv:1611.05431, 2016. 1, 2, 3, 4, 5, 6
[41] T. Zhang, G.-J. Qi, B. Xiao, and J. Wang. Interleaved group convolutions for deep neural networks. In International Con- ference on Computer Vision, 2017. 2
[42] X. Zhang, J. Zou, K. He, and J. Sun. Accelerating very deep convolutional networks for classiï¬cation and detection. IEEE transactions on pattern analysis and machine intelli- gence, 38(10):1943â1955, 2016. 1, 8
[43] X. Zhang, J. Zou, X. Ming, K. He, and J. Sun. Efï¬cient and accurate approximations of nonlinear convolutional net- works. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1984â1992, 2015. 1, 8
[44] A. Zhou, A. Yao, Y. Guo, L. Xu, and Y. Chen. Incremen- tal network quantization: Towards lossless cnns with low- precision weights. arXiv preprint arXiv:1702.03044, 2017. 2
[45] S. Zhou, Y. Wu, Z. Ni, X. Zhou, H. Wen, and Y. Zou. Dorefa-net: Training low bitwidth convolutional neural arXiv preprint networks with low bitwidth gradients. arXiv:1606.06160, 2016. 2
[46] B. Zoph, V. Vasudevan, J. Shlens, and Q. V. Le. Learn- ing transferable architectures for scalable image recognition. arXiv preprint arXiv:1707.07012, 2017. 2 | {
"id": "1602.07360"
} |
1707.01067 | ELF: An Extensive, Lightweight and Flexible Research Platform for Real-time Strategy Games | In this paper, we propose ELF, an Extensive, Lightweight and Flexible
platform for fundamental reinforcement learning research. Using ELF, we
implement a highly customizable real-time strategy (RTS) engine with three game
environments (Mini-RTS, Capture the Flag and Tower Defense). Mini-RTS, as a
miniature version of StarCraft, captures key game dynamics and runs at 40K
frame-per-second (FPS) per core on a Macbook Pro notebook. When coupled with
modern reinforcement learning methods, the system can train a full-game bot
against built-in AIs end-to-end in one day with 6 CPUs and 1 GPU. In addition,
our platform is flexible in terms of environment-agent communication
topologies, choices of RL methods, changes in game parameters, and can host
existing C/C++-based game environments like Arcade Learning Environment. Using
ELF, we thoroughly explore training parameters and show that a network with
Leaky ReLU and Batch Normalization coupled with long-horizon training and
progressive curriculum beats the rule-based built-in AI more than $70\%$ of the
time in the full game of Mini-RTS. Strong performance is also achieved on the
other two games. In game replays, we show our agents learn interesting
strategies. ELF, along with its RL platform, is open-sourced at
https://github.com/facebookresearch/ELF. | http://arxiv.org/pdf/1707.01067 | Yuandong Tian, Qucheng Gong, Wenling Shang, Yuxin Wu, C. Lawrence Zitnick | cs.AI | NIPS 2017 oral | null | cs.AI | 20170704 | 20171110 |
# ELF: An Extensive, Lightweight and Flexible Research Platform for Real-time Strategy Games
# Yuandong Tian1 Qucheng Gong1 Wenling Shang2 Yuxin Wu1 C. Lawrence Zitnick1
# 1Facebook AI Research
# 2Oculus
1{yuandong, qucheng, yuxinwu, zitnick}@fb.com 2wendy.shang@oculus.com
# Abstract
In this paper, we propose ELF, an Extensive, Lightweight and Flexible platform for fundamental reinforcement learning research. Using ELF, we implement a highly customizable real-time strategy (RTS) engine with three game environments (Mini-RTS, Capture the Flag and Tower Defense). Mini-RTS, as a miniature version of StarCraft, captures key game dynamics and runs at 40K frame-per-second (FPS) per core on a laptop. When coupled with modern reinforcement learning methods, the system can train a full-game bot against built-in AIs end-to-end in one day with 6 CPUs and 1 GPU. In addition, our platform is flexible in terms of environment-agent communication topologies, choices of RL methods, changes in game parameters, and can host existing C/C++-based game environments like ALE [4]. Using ELF, we thoroughly explore training parameters and show that a network with Leaky ReLU [17] and Batch Normalization [11] coupled with long-horizon training and progressive curriculum beats the rule-based built-in AI more than 70% of the time in the full game of Mini-RTS. Strong performance is also achieved on the other two games. In game replays, we show our agents learn interesting strategies. ELF, along with its RL platform, is open-sourced at https://github.com/facebookresearch/ELF.
# Introduction
Game environments are commonly used for research in Reinforcement Learning (RL), i.e. how to train intelligent agents to behave properly from sparse rewards [4, 6, 5, 14, 29]. Compared to the real world, game environments offer an inï¬nite amount of highly controllable, fully reproducible, and automatically labeled data. Ideally, a game environment for fundamental RL research is:
⢠Extensive: The environment should capture many diverse aspects of the real world, such as rich dynamics, partial information, delayed/long-term rewards, concurrent actions with different granularity, etc. Having an extensive set of features and properties increases the potential for trained agents to generalize to diverse real-world scenarios.
⢠Lightweight: A platform should be fast and capable of generating samples hundreds or thousands of times faster than real-time with minimal computational resources (e.g., a sin- gle machine). Lightweight and efï¬cient platforms help accelerate academic research of RL algorithms, particularly for methods which are heavily data-dependent.
⢠Flexible: A platform that is easily customizable at different levels, including rich choices of environment content, easy manipulation of game parameters, accessibility of internal variables, and ï¬exibility of training architectures. All are important for fast exploration of different algorithms. For example, changing environment parameters [35], as well as using internal data [15, 19] have been shown to substantially accelerate training.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
To our knowledge, no current game platforms satisfy all criteria. Modern commercial games (e.g., StarCraft I/II, GTA V) are extremely realistic, but are not customizable and require signiï¬cant re- sources for complex visual effects and for computational costs related to platform-shifting (e.g., a virtual machine to host Windows-only SC I on Linux). Old games and their wrappers [4, 6, 5, 14]) are substantially faster, but are less realistic with limited customizability. On the other hand, games designed for research purpose (e.g., MazeBase [29], µRTS [23]) are efï¬cient and highly customiz- able, but are not very extensive in their capabilities. Furthermore, none of the environments consider simulation concurrency, and thus have limited ï¬exibility when different training architectures are applied. For instance, the interplay between RL methods and environments during training is often limited to providing simplistic interfaces (e.g., one interface for one game) in scripting languages like Python.
In this paper, we propose ELF, a research-oriented platform that offers games with diverse prop- erties, efï¬cient simulation, and highly customizable environment settings. The platform allows for both game parameter changes and new game additions. The training of RL methods is deeply and ï¬exibly integrated into the environment, with an emphasis on concurrent simulations. On ELF, we build a real-time strategy (RTS) game engine that includes three initial environments including Mini-RTS, Capture the Flag and Tower Defense. Mini-RTS is a miniature custom-made RTS game that captures all the basic dynamics of StarCraft (fog-of-war, resource gathering, troop building, defense/attack with troops, etc). Mini-RTS runs at 165K FPS on a 4 core laptop, which is faster than existing environments by an order of magnitude. This enables us for the ï¬rst time to train end-to- end a full-game bot against built-in AIs. Moreover, training is accomplished in only one day using 6 CPUs and 1 GPU. The other two games can be trained with similar (or higher) efï¬ciency.
Many real-world scenarios and complex games (e.g. StarCraft) are hierarchical in nature. Our RTS engine has full access to the game data and has a built-in hierarchical command system, which allows training at any level of the command hierarchy. As we demonstrate, this allows us to train a full-game bot that acts on the top-level strategy in the hierarchy while lower-level commands are handled using build-in tactics. Previously, most research on RTS games focused only on lower-level scenarios such as tactical battles [34, 25]. The full access to the game data also allows for supervised training with small-scale internal data.
ELF is resilient to changes in the topology of the environment-actor communication used for train- ing, thanks to its hybrid C++/Python framework. These include one-to-one, many-to-one and one- to-many mappings. In contrast, existing environments (e.g., OpenAI Gym [6] and Universe [33]) wrap one game in one Python interface, which makes it cumbersome to change topologies. Paral- lelism is implemented in C++, which is essential for simulation acceleration. Finally, ELF is capable of hosting any existing game written in C/C++, including Atari games (e.g., ALE [4]), board games (e.g. Chess and Go [32]), physics engines (e.g., Bullet [10]), etc, by writing a simple adaptor.
Equipped with a ï¬exible RL backend powered by PyTorch, we experiment with numerous baselines, and highlight effective techniques used in training. We show the ï¬rst demonstration of end-to- end trained AIs for real-time strategy games with partial information. We use the Asynchronous Advantagous Actor-Critic (A3C) model [21] and explore extensive design choices including frame- skip, temporal horizon, network structure, curriculum training, etc. We show that a network with Leaky ReLU [17] and Batch Normalization [11] coupled with long-horizon training and progressive curriculum beats the rule-based built-in AI more than 70% of the time in full-game Mini-RTS. We also show stronger performance in others games. ELF and its RL platform, is open-sourced at https://github.com/facebookresearch/ELF.
# 2 Architecture
ELF follows a canonical and simple producer-consumer paradigm (Fig. 1). The producer plays N games, each in a single C++ thread. When a batch of M current game states are ready (M < N ), the corresponding games are blocked and the batch are sent to the Python side via the daemon. The con- sumers (e.g., actor, optimizer, etc) get batched experience with history information via a Python/C++ interface and send back the replies to the blocked batch of the games, which are waiting for the next action and/or values, so that they can proceed. For simplicity, the producer and consumers are in the same process. However, they can also live in different processes, or even on different machines. Before the training (or evaluation) starts, different consumers register themselves for batches with
[Figure 1 schematic: N games, each with its own history buffer, run on the C++ producer side; batches with history information are sent to the Python-side consumers, which send replies back to the blocked games.]
Figure 1: Overview of ELF.
different history length. For example, an actor might need a batch with short history, while an op- timizer (e.g., T -step actor-critic) needs a batch with longer history. During training, the consumers use the batch in various ways. For example, the actor takes the batch and returns the probabilties of actions (and values), then the actions are sampled from the distribution and sent back. The batch received by the optimizer already contains the sampled actions from the previous steps, and can be used to drive reinforcement learning algorithms such as A3C. Here is a sample usage of ELF:
```python
# We run 1024 games concurrently.
num_games = 1024
# Wait for a batch of 256 games.
batchsize = 256
# The returned states contain keys 's', 'r' and 'terminal'.
# The reply contains key 'a' to be filled from the Python side.
# The definitions of the keys are in the wrapper of the game.
input_spec = dict(s='', r='', terminal='')
reply_spec = dict(a='')

context = Init(num_games, batchsize, input_spec, reply_spec)
```

Initialization of ELF

```python
# Start all game threads and enter main loop.
context.Start()
while True:
    # Wait for a batch of game states to be ready.
    # These games will be blocked, waiting for replies.
    batch = context.Wait()

    # Apply a model to the game state. The output has key 'pi'.
    output = model(batch)

    # Sample from the output to get the actions of this batch.
    reply['a'][:] = SampleFromDistribution(output)

    # Resume games.
    context.Steps()

# Stop all game threads.
context.Stop()
```

# Main loop of ELF
Parallelism using C++ threads. Modern reinforcement learning methods often require heavy par- allelism to obtain diverse experiences [21, 22]. Most existing RL environments (OpenAI Gym [6] and Universe [33], RLE [5], Atari [4], Doom [14]) provide Python interfaces which wrap only sin- gle game instances. As a result, parallelism needs to be built in Python when applying modern RL methods. However, thread-level parallelism in Python can only poorly utilize multi-core processors, due to the Global Interpreter Lock (GIL)1. Process-level parallelism will also introduce extra data exchange overhead between processes and increase complexity to framework design. In contrast, our parallelism is achieved with C++ threads for better scaling on multi-core CPUs.
Flexible Environment-Model Conï¬gurations. In ELF, one or multiple consumers can be used. Each consumer knows the game environment identities of samples from received batches, and typi- cally contains one neural network model. The models of different consumers may or may not share parameters, might update the weights, might reside in different processes or even on different ma- chines. This architecture offers ï¬exibility for switching topologies between game environments and models. We can assign one model to each game environment, or one-to-one (e.g, vanilla A3C [21]), in which each agent follows and updates its own copy of the model. Similarly, multiple environ- ments can be assigned to a single model, or many-to-one (e.g., BatchA3C [35] or GA3C [1]), where the model can perform batched forward prediction to better utilize GPUs. We have also incorporated forward-planning methods (e.g., Monte-Carlo Tree Search (MCTS) [7, 32, 27]) and Self-Play [27], in which a single environment might emit multiple states processed by multiple models, or one-to- many. Using ELF, these training conï¬gurations can be tested with minimal changes.
Highly customizable and uniï¬ed interface. Games implemented with our RTS engine can be trained using raw pixel data or lower-dimensional internal game data. Using internal game data is
1The GIL in Python forbids simultaneous interpretations of multiple statements even on multi-core CPUs.
[Figure 2 schematic: ELF as an extensive framework hosting specific game engines (e.g., Go/DarkForest) and the RTS engine, which provides the Mini-RTS, Capture the Flag and Tower Defense environments.]
Figure 2: Hierarchical layout of ELF. In the current repository (https://github.com/ facebookresearch/ELF, master branch), there are board games (e.g., Go [32]), Atari learn- ing environment [4], and a customized RTS engine that contains three simple games.
[Figure 3 panels: (a) the current game state with resources, workers, bases and a selected unit; (b) the three environments: Mini-RTS, gather resources and build troops to destroy the opponent's base (1000-6000 ticks); Capture the Flag, capture the flag and bring it to your own base (1000-4000 ticks); Tower Defense, build defensive towers to block enemy invasion (1000-2000 ticks).]
Figure 3: Overview of Real-time strategy engine. (a) Visualization of current game state. (b) The three different game environments and their descriptions.
typically more convenient for research focusing on reasoning tasks rather than perceptual ones. Note that web-based visual renderings is also supported (e.g., Fig. 3(a)) for case-by-case debugging.
ELF allows for a uniï¬ed interface capable of hosting any existing game written in C/C++, including Atari games (e.g., ALE [4]), board games (e.g. Go [32]), and a customized RTS engine, with a simple adaptor (Fig. 2). This enables easy multi-threaded training and evaluation using existing RL methods. Besides, we also provide three concrete simple games based on RTS engine (Sec. 3).
Reinforcement Learning backend. We propose a Python-based RL backend. It has a flexible design that decouples RL methods from models. Multiple baseline methods (e.g., A3C [21], Policy Gradient [30], Q-learning [20], Trust Region Policy Optimization [26], etc.) are implemented, mostly with only a few lines of Python code.
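As an illustration of the kind of update such a backend implements, the sketch below shows a generic PyTorch-style n-step actor-critic (A3C-style) loss. It is our own simplified example, not ELF's actual implementation, and all names are ours.

```python
import torch
import torch.nn.functional as F

def actor_critic_loss(logits, values, actions, returns,
                      value_coef=0.5, entropy_coef=0.01):
    """Generic n-step actor-critic (A3C-style) loss on a batch.

    logits:  (B, num_actions) unnormalized policy scores
    values:  (B,) state-value predictions V(s)
    actions: (B,) sampled action indices (long tensor)
    returns: (B,) bootstrapped n-step returns R
    """
    log_probs = F.log_softmax(logits, dim=-1)
    probs = log_probs.exp()
    advantage = returns - values                        # A = R - V(s)
    # Policy gradient term; the advantage is treated as a constant (detached).
    policy_loss = -(log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)
                    * advantage.detach()).mean()
    value_loss = advantage.pow(2).mean()                # value regression to R
    entropy = -(probs * log_probs).sum(dim=-1).mean()   # exploration bonus
    return policy_loss + value_coef * value_loss - entropy_coef * entropy
```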
# 3 Real-time strategy Games
Real-time strategy (RTS) games are considered to be one of the next grand AI challenges after Chess and Go [27]. In RTS games, players commonly gather resources, build units (facilities, troops, etc), and explore the environment in the fog-of-war (i.e., regions outside the sight of units are invisible) to invade/defend the enemy, until one player wins. RTS games are known for their exponential and changing action space (e.g., 510 possible actions for 10 units with 5 choices each, and units of each player can be built/destroyed when game advances), subtle game situations, incomplete information due to limited sight and long-delayed rewards. Typically professional players take 200-300 actions per minute, and the game lasts for 20-30 minutes.
Very few existing RTS engines can be used directly for research. Commercial RTS games (e.g., StarCraft I/II) have sophisticated dynamics, interactions and graphics. The game play strategies have been long proven to be complex. Moreover, they are close-source with unknown internal states, and cannot be easily utilized for research. Open-source RTS games like Spring [12], OpenRA [24] and Warzone 2100 [28] focus on complex graphics and effects, convenient user interface, stable network play, ï¬exible map editors and plug-and-play mods (i.e., game extensions). Most of them use rule-based AIs, do not intend to run faster than real-time, and offer no straightforward interface
| Platform | Realistic | Code | Resource | Rule AIs | Data AIs | RL backend |
|---|---|---|---|---|---|---|
| StarCraft I/II | High | No | High | Yes | No | No |
| TorchCraft | High | Yes | High | Yes | Yes | No |
| ORTS, BattleCode | Mid | Yes | Low | Yes | No | No |
| µRTS, MazeBase | Low | Yes | Low | Yes | Yes | No |
| Mini-RTS | Mid | Yes | Low | Yes | Yes | Yes |

Table 1: Comparison between different RTS engines.
| Platform | Frame per second | Platform | Frame per second |
|---|---|---|---|
| ALE [4] | 6,000 | DeepMind Lab [3] | 287 (C) / 866 (G) |
| RLE [5] | 530 | VizDoom [14] | ~7,000 |
| Universe [33] | 60 | TorchCraft [31] | 2,000 (frameskip=50) |
| Malmo [13] | 120 | Mini-RTS | 40,000 |

Table 2: Frame rate comparison. Note that Mini-RTS does not render frames, but saves game information into a C structure which is used in Python without copying. For DeepMind Lab, FPS is 287 (CPU) and 866 (GPU) on a single 6 CPU + 1 GPU machine. Other numbers are on 1 CPU core.
with modern machine learning architectures. ORTS [8], BattleCode [2] and RoboCup Simulation League [16] are designed for coding competitions and focused on rule-based AIs. Research-oriented platforms (e.g., µRTS [23], MazeBase [29]) are fast and simple, often coming with various baselines, but often with much simpler dynamics than RTS games. Recently, TorchCraft [31] provides APIs for StarCraft I to access its internal game states. However, due to platform incompatibility, one docker is used to host one StarCraft engine, and is resource-consuming. Tbl. 1 summarizes the difference.
# 3.1 Our approach
Many popular RTS games and its variants (e.g., StarCraft, DoTA, Leagues of Legends, Tower De- fense) share the same structure: a few units are controlled by a player, to move, attack, gather or cast special spells, to inï¬uence their own or an enemyâs army. With our command hierarchy, a new game can be created by changing (1) available commands (2) available units, and (3) how each unit emits commands triggered by certain scenarios. For this, we offer simple yet effective tools. Researchers can change these variables either by adding commands in C++, or by writing game scripts (e.g., Lua). All derived games share the mechanism of hierarchical commands, replay, etc. Rule-based AIs can also be extended similarly. We provide the following three games: Mini-RTS, Capture the Flag and Tower Defense (Fig. 3(b)). These games share the following properties:
Gameplay. Units in each game move with real coordinates, have dimensions and collision checks, and perform durative actions. The RTS engine is tick-driven. At each tick, AIs make decisions by sending commands to units based on observed information. Then commands are executed, the game's state changes, and the game continues. Despite a fairly complicated game mechanism, Mini-RTS is able to run 40K frames per second per core on a laptop, an order of magnitude faster than most existing environments. Therefore, bots can be trained in a day on a single machine.
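The tick-driven control flow described above can be sketched as follows; this is a simplified illustration, not ELF source code, and the game/bot method names are hypothetical.

```python
def run_game(bots, game_state, max_ticks=6000):
    """Simplified tick-driven loop: at every tick each bot observes the state
    (restricted by fog-of-war), emits commands, and the engine executes them."""
    for tick in range(max_ticks):
        for bot in bots:
            observation = game_state.visible_to(bot.player_id)  # fog-of-war view
            for command in bot.act(observation, tick):
                game_state.issue(command)
        game_state.execute_commands()   # durative commands advance by one tick
        if game_state.terminal():
            break
    return game_state.winner()
```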
Built-in hierarchical command levels. An agent could issue strategic commands (e.g., more ag- gressive expansion), tactical commands (e.g., hit and run), or micro-command (e.g., move a partic- ular unit backward to avoid damage). Ideally strong agents master all levels; in practice, they may focus on a certain level of command hierarchy, and leave others to be covered by hard-coded rules. For this, our RTS engine uses a hierarchical command system that offers different levels of controls over the game. A high-level command may affect all units, by issuing low-level commands. A low-level, unit-speciï¬c durative command lasts a few ticks until completion during which per-tick immediate commands are issued.
Built-in rule-based AIs. We have designed rule-based AIs along with the environment. These AIs have access to all the information of the map and follow ï¬xed strategies (e.g., build 5 tanks and attack the opponent base). These AIs act by sending high-level commands which are then translated to low-level ones and then executed.
With ELF, for the ï¬rst time, we are able to train full-game bots for real-time strategy games and achieve stronger performance than built-in rule-based AIs. In contrast, existing RTS AIs are either
[Figure 4 panels: KFPS per CPU core for Mini-RTS (left) and for Pong in Atari/OpenAI Gym (right), measured with 1, 2, 4, 8 and 16 cores and 64 to 1024 threads.]
Figure 4: Frame-per-second per CPU core (no hyper-threading) with respect to CPUs/threads. ELF (light-shaded) is 3x faster than OpenAI Gym [6] (dark-shaded) with 1024 threads. CPU involved in testing: Intel E5-2680@2.50GHz.
rule-based or focused on tactics (e.g., 5 units vs. 5 units). We run experiments on the three games to justify the usability of our platform.
# 4 Experiments
# 4.1 Benchmarking ELF
We run ELF on a single server with a different number of CPU cores to test the efï¬ciency of paral- lelism. Fig. 4(a) shows the results when running Mini-RTS. We can see that ELF scales well with the number of CPU cores used to run the environments. We also embed Atari emulator [4] into our platform and check the speed difference between a single-threaded ALE and paralleled ALE per core (Fig. 4(b)). While a single-threaded engine gives around 5.8K FPS on Pong, our paralleled ALE runs comparable speed (5.1K FPS per core) with up to 16 cores, while OpenAI Gym (with Python threads) runs 3x slower (1.7K FPS per core) with 16 cores 1024 threads, and degrades with more cores. Number of threads matters for training since they determine how diverse the experiences could be, with the same number of CPUs. Apart from this, we observed that Python multiprocessing with Gym is even slower, due to heavy communication of game frames among processes. Note that we used no hyperthreading for all experiments.
# 4.2 Baselines on Real-time Strategy Games
We focus on 1-vs-1 full games between trained AIs and built-in AIs. Built-in AIs have access to full information (e.g., number of opponentâs tanks), while trained AIs know partial information in the fog of war, i.e., game environment within the sight of its own units. There are exceptions: in Mini-RTS, the location of the opponentâs base is known so that the trained AI can attack; in Capture the Flag, the ï¬ag location is known to all; Tower Defense is a game of complete information.
Details of Built-in AI. For Mini-RTS there are two rule-based AIs: SIMPLE gathers, builds ï¬ve tanks and then attacks the opponent base. HIT N RUN often harasses, builds and attacks. For Capture the Flag, we have one built-in AI. For Tower Defense (TD), no AI is needed. We tested our built-in AIs against a human player and ï¬nd they are strong in combat but exploitable. For example, SIMPLE is vulnerable to hit-and-run style harass. As a result, a human player has a win rate of 90% and 50% against SIMPLE and HIT N RUN, respectively, in 20 games.
Action Space. For simplicity, we use 9 strategic (and thus global) actions with hard-coded execution details. For example, AI may issue BUILD BARRACKS, which automatically picks a worker to build barracks at an empty location, if the player can afford. Although this setting is simple, detailed commands (e.g., command per unit) can be easily set up, which bear more resemblance to StarCraft. Similar setting applies to Capture the Flag and Tower Defense. Please check Appendix for detailed descriptions.
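For concreteness, the 9 strategic actions (listed in Table 11 of the appendix) can be enumerated as below; the Python constant name is ours.

```python
# The 9 hard-coded strategic actions for Mini-RTS (see Table 11 in the appendix).
MINI_RTS_ACTIONS = [
    "IDLE",
    "BUILD_WORKER",
    "BUILD_BARRACK",
    "BUILD_MELEE_ATTACKER",
    "BUILD_RANGE_ATTACKER",
    "HIT_AND_RUN",
    "ATTACK",
    "ATTACK_IN_RANGE",
    "ALL_DEFEND",
]
assert len(MINI_RTS_ACTIONS) == 9
```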
Rewards. For Mini-RTS, the agent only receives a reward when the game ends (±1 for win/loss). An average game of Mini-RTS lasts for around 4000 ticks, which results in 80 decisions for a frame skip of 50, showing that the game is indeed delayed in reward. For Capturing the Flag, we give intermediate rewards when the ï¬ag moves towards playerâs own base (one score when the ï¬ag âtouches downâ). In Tower Defense, intermediate penalty is given if enemy units are leaked.
| Opponent frame skip | 50 | 20 | 10 |
|---|---|---|---|
| vs. SIMPLE | 68.4 (±4.3) | 61.4 (±5.8) | 52.8 (±2.4) |
| vs. HIT_N_RUN | 63.6 (±7.9) | 55.4 (±4.7) | 51.1 (±5.0) |

| | Capture the Flag | Tower Defense |
|---|---|---|
| Random | 0.7 (±0.9) | 36.3 (±0.3) |
| Trained AI | 59.9 (±7.4) | 91.0 (±7.6) |

Table 3: Win rate of A3C models competing with built-in AIs over 10k games. Left: Mini-RTS; frame skip of the trained AI is 50. Right: for Capture the Flag, frame skip of the trained AI is 10, while the opponent's is 50; for Tower Defense the frame skip of the trained AI is 50 and there is no opponent AI.
| | Mini-RTS SIMPLE (Median) | Mini-RTS SIMPLE (Mean ± std) | Mini-RTS HIT_N_RUN (Median) | Mini-RTS HIT_N_RUN (Mean ± std) |
|---|---|---|---|---|
| ReLU | 52.8 | 54.7 (±4.2) | 60.4 | 57.0 (±6.8) |
| Leaky ReLU | 59.8 | 61.0 (±2.6) | 60.2 | 60.3 (±3.3) |
| BN | 61.0 | 64.4 (±7.4) | 55.6 | 57.5 (±6.8) |
| Leaky ReLU + BN | 72.2 | 68.4 (±4.3) | 65.5 | 63.6 (±7.9) |

Table 4: Win rate in % of A3C models using different network architectures. Frame skip of both sides is 50 ticks. The fact that the medians are better than the means shows that different instances of A3C could converge to very different solutions.
# 4.2.1 A3C baseline
Next, we describe our baselines and their variants. Note that while we refer to these as baseline, we are the ï¬rst to demonstrate end-to-end trained AIs for real-time strategy (RTS) games with partial information. For all games, we randomize the initial game states for more diverse experience and use A3C [21] to train AIs to play the full game. We run all experiments 5 times and report mean and standard deviation. We use simple convolutional networks with two heads, one for actions and the other for values. The input features are composed of spatially structured (20-by-20) abstractions of the current game environment with multiple channels. At each (rounded) 2D location, the type and hit point of the unit at that location is quantized and written to their corresponding channels. For Mini-RTS, we also add an additional constant channel ï¬lled with current resource of the player. The input feature only contains the units within the sight of one player, respecting the properties of fog-of-war. For Capture the Flag, immediate action is required at speciï¬c situations (e.g., when the opponent just gets the ï¬ag) and A3C does not give good performance. Therefore we use frame skip 10 for trained AI and 50 for the opponent to give trained AI a bit advantage. All models are trained from scratch with curriculum training (Sec. 4.2.2).
Note that there are several factors affecting the AI performance.
Frame-skip. A frame skip of 50 means that the AI acts every 50 ticks, etc. Against an opponent with low frame skip (fast-acting), A3Câs performance is generally lower (Fig. 3). When the opponent has high frame skip (e.g., 50 ticks), the trained agent is able to ï¬nd a strategy that exploits the long- delayed nature of the opponent. For example, in Mini-RTS it will send two tanks to the opponentâs base. When one tank is destroyed, the opponent does not attack the other tank until the next 50- divisible tick comes. Interestingly, the trained model could be adaptive to different frame-rates and learn to develop different strategies for faster acting opponents. For Capture the Flag, the trained bot learns to win 60% over built-in AI, with an advantage in frame skip. For even frame skip, trained AI performance is low.
Network Architectures. Since the input is sparse and heterogeneous, we experiment on CNN architectures with Batch Normalization [11] and Leaky ReLU [18]. BatchNorm stabilizes the gradient flow by normalizing the outputs of each filter. Leaky ReLU preserves the signal of negative linear responses, which is important in scenarios where the input features are sparse. Tbl. 4 shows that these two modifications both improve and stabilize the performance. Furthermore, they are complementary to each other when combined.
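A minimal PyTorch sketch of the kind of two-headed convolutional network described here (20×20 multi-channel input, BatchNorm and Leaky ReLU) is given below; the number of input channels, filter sizes and hidden width are our illustrative choices, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class MiniRTSNet(nn.Module):
    """Sketch of a two-headed conv net over a 20x20 multi-channel game map."""
    def __init__(self, in_channels=22, num_actions=9, hidden=64):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(in_channels, hidden, kernel_size=3, padding=1),
            nn.BatchNorm2d(hidden),
            nn.LeakyReLU(0.1),
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=1),
            nn.BatchNorm2d(hidden),
            nn.LeakyReLU(0.1),
        )
        self.policy_head = nn.Linear(hidden * 20 * 20, num_actions)  # pi(a|s)
        self.value_head = nn.Linear(hidden * 20 * 20, 1)             # V(s)

    def forward(self, x):                        # x: (B, C, 20, 20)
        h = self.trunk(x).flatten(start_dim=1)
        return self.policy_head(h), self.value_head(h).squeeze(-1)
```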
History length. History length T affects the convergence speed, as well as the final performance of A3C (Fig. 5). While vanilla A3C [21] uses T = 5 for Atari games, the reward in Mini-RTS is more delayed (~80 actions before a reward).
[Figure 5 plots: win rate against AI_SIMPLE (left) and AI_HIT_AND_RUN (right) versus samples used (in thousands), for several history lengths T.]
Figure 5: Win rate in Mini-RTS with respect to the amount of experience at different steps T in A3C. Note that one sample (with history) in T = 2 is equivalent to two samples in T = 1. Longer T shows superior performance to small step counterparts, even if their samples are more expensive.
[Figure 6 screenshots: trained AI (blue) versus AI_SIMPLE (red), panels (a)-(e).]
Figure 6: Game screenshots between trained AI (blue) and built-in SIMPLE (red). Player colors are shown on the boundary of hit point gauges. (a) Trained AI rushes opponent using early advantage. (c) Trained AI defends enemy invasion by (b) Trained AI attacks one opponent unit at a time. blocking their ways. (d)-(e) Trained AI uses one long-range attacker (top) to distract enemy units and one melee attacker to attack enemyâs base.
In this case, the T-step estimation of the reward, $R = \sum_{t=1}^{T} \gamma^{t-1} r_t + \gamma^T V(s_T)$, used in A3C does not yield a good estimate of the true reward if $V(s_T)$ is inaccurate, in particular for small T. For other experiments we use T = 6.
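The T-step return above can be computed by folding backwards over a rollout, as in this short sketch (ours):

```python
def t_step_return(rewards, bootstrap_value, gamma=0.99):
    """Compute R = sum_{t=1..T} gamma^(t-1) r_t + gamma^T V(s_T) for one rollout.

    rewards:          list of T immediate rewards r_1 .. r_T
    bootstrap_value:  V(s_T), the value estimate at the rollout's last state
    """
    R = bootstrap_value
    for r in reversed(rewards):      # fold from the end: R <- r + gamma * R
        R = r + gamma * R
    return R
```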
Interesting behaviors The trained AI learns to act promptly and use sophisticated strategies (Fig. 6). Multiple videos are available in https://github.com/facebookresearch/ELF.
# 4.2.2 Curriculum Training
We find that curriculum training plays an important role in training AIs. All AIs shown in Tbl. 3 and Tbl. 4 are trained with curriculum training. For Mini-RTS, we let the built-in AI play the first k ticks, where k ~ Uniform(0, 1000), then switch to the AI to be trained. This (1) reduces the difficulty of the game initially and (2) gives diverse situations for training to avoid local minima. During training, the aid of the built-in AIs is gradually reduced until no aid is given. All reported win rates are obtained by running the trained agents alone with greedy policy.
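A minimal sketch of this curriculum scheme is shown below; the linear annealing of the aid is our illustrative assumption (the text only states that the aid is gradually reduced).

```python
import random

def curriculum_handoff_tick(progress, max_aid_ticks=1000):
    """Sample the tick at which control switches from the built-in AI to the
    learning agent. Early in training (progress=0) the built-in AI may play up
    to `max_aid_ticks` ticks; the aid is annealed away as progress -> 1."""
    remaining_aid = max_aid_ticks * (1.0 - progress)
    return random.uniform(0, remaining_aid)
```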
We list the comparison with and without curriculum training in Tbl. 6. It is clear that the performance improves with curriculum training. Similarly, when ï¬ne-tuning models pre-trained with one type of opponent towards a mixture of opponents (e.g., 50%SIMPLE + 50%HIT N RUN), curriculum training is critical for better performance (Tbl. 5). Tbl. 5 shows that AIs trained with one built-in AI cannot do very well against another built-in AI in the same game. This demonstrates that training with diverse agents is important for training AIs with low-exploitability.
| Trained against \ Tested against | SIMPLE | HIT_N_RUN | Combined |
|---|---|---|---|
| SIMPLE | 68.4 (±4.3) | 26.6 (±7.6) | 47.5 (±5.1) |
| HIT_N_RUN | 34.6 (±13.1) | 63.6 (±7.9) | 49.1 (±10.5) |
| Combined (no curriculum) | 49.4 (±10.0) | 46.0 (±15.3) | 47.7 (±11.0) |
| Combined | 51.8 (±10.6) | 54.7 (±11.2) | 53.2 (±8.5) |

Table 5: Training with a specific/combined AIs. Frame skip of both sides is 50. When against combined AIs (50% SIMPLE + 50% HIT_N_RUN), curriculum training is particularly important.
| | No curriculum training | With curriculum training |
|---|---|---|
| Mini-RTS SIMPLE | 66.0 (±2.4) | 68.4 (±4.3) |
| Mini-RTS HIT_N_RUN | 54.4 (±15.9) | 63.6 (±7.9) |
| Capture the Flag | 54.2 (±20.0) | 59.9 (±7.4) |

Table 6: Win rate of A3C models with and without curriculum training. Mini-RTS: frame skip of both sides is 50 ticks. Capture the Flag: frame skip of the trained AI is 10, while the opponent's is 50. The standard deviation of win rates is large due to the instability of A3C training; for example, in Capture the Flag the highest win rate reaches 70% while the lowest is only 27%.
| | Random | MCTS |
|---|---|---|
| Mini-RTS SIMPLE | 24.2 (±3.9) | 73.2 (±0.6) |
| Mini-RTS HIT_N_RUN | 25.9 (±0.6) | 62.7 (±2.0) |

Table 7: Win rate using MCTS over 1000 games. Both players use a frame skip of 50.
# 4.2.3 Monte-Carlo Tree Search
Monte-Carlo Tree Search (MCTS) can be used for planning when complete information about the game is known. This includes the complete state s without fog-of-war, and the precise forward model s' = s'(s, a). Rooted at the current game state, MCTS builds a game tree that is biased towards paths with high win rate. Leaves are expanded with all candidate moves, and the win rate estimation is computed by random self-play until the game ends. We use 8 threads, each with 100 rollouts. We use root parallelization [9], in which each thread independently expands a tree, and the trees are combined to get the most visited action. As shown in Tbl. 7, MCTS achieves a comparable win rate to models trained with RL. Note that the win rates of the two methods are not directly comparable, since RL methods have no knowledge of game dynamics, and their state knowledge is reduced by the limits introduced by the fog-of-war. Also, MCTS runs much slower (2-3 sec per move) than the trained RL AI (< 1 msec per move).
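The sketch below illustrates root parallelization in a simplified, flat Monte-Carlo form: each worker independently estimates per-action win rates by random self-play and the counts are summed across workers. It is our own simplification (full MCTS maintains a search tree), and the game interface methods (clone, step, legal_actions, current_player, terminal, winner) are hypothetical.

```python
import random
from concurrent.futures import ThreadPoolExecutor

def rollout(game, action):
    """Play `action`, then finish the game with uniformly random moves.
    Returns 1 if the player to move at the root wins, else 0."""
    g = game.clone()
    player = g.current_player()
    g.step(action)
    while not g.terminal():
        g.step(random.choice(g.legal_actions()))
    return 1 if g.winner() == player else 0

def root_parallel_search(game, num_workers=8, rollouts_per_worker=100):
    """Root parallelization (flat Monte-Carlo variant): each worker evaluates
    every root action by random self-play; per-action win counts are summed
    across workers and the best aggregate action is returned.
    Note: Python threads are used only for brevity; because of the GIL, real
    speedups require C++ threads or multiprocessing, as discussed in Sec. 2."""
    actions = game.legal_actions()

    def worker(_):
        wins = [0] * len(actions)
        for _ in range(rollouts_per_worker // len(actions) + 1):
            for i, a in enumerate(actions):
                wins[i] += rollout(game, a)
        return wins

    with ThreadPoolExecutor(max_workers=num_workers) as pool:
        results = list(pool.map(worker, range(num_workers)))
    totals = [sum(w[i] for w in results) for i in range(len(actions))]
    return actions[max(range(len(actions)), key=totals.__getitem__)]
```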
# 5 Conclusion and Future Work
In this paper, we propose ELF, a research-oriented platform for concurrent game simulation which offers an extensive set of game play options, a lightweight game simulator, and a ï¬exible envi- ronment. Based on ELF, we build a RTS game engine and three initial environments (Mini-RTS, Capture the Flag and Tower Defense) that run 40KFPS per core on a laptop. As a result, a full- game bot in these games can be trained end-to-end in one day using a single machine. In addition to the platform, we provide throughput benchmarks of ELF, and extensive baseline results using state-of-the-art RL methods (e.g, A3C [21]) on Mini-RTS and show interesting learnt behaviors.
ELF opens up many possibilities for future research. With this lightweight and ï¬exible platform, RL methods on RTS games can be explored in an efï¬cient way, including forward modeling, hierarchical RL, planning under uncertainty, RL with complicated action space, and so on. Furthermore, the exploration can be done with an affordable amount of resources. As future work, we will continue improving the platform and build a library of maps and bots to compete with.
# References
[1] Mohammad Babaeizadeh, Iuri Frosio, Stephen Tyree, Jason Clemons, and Jan Kautz. Re- International inforcement learning through asynchronous advantage actor-critic on a gpu. Conference on Learning Representations (ICLR), 2017.
[2] BattleCode. Battlecode, mitâs ai programming competition: https://www.battlecode.org/. 2000. URL https://www.battlecode.org/.
[3] Charles Beattie, Joel Z. Leibo, Denis Teplyashin, Tom Ward, Marcus Wainwright, Heinrich K¨uttler, Andrew Lefrancq, Simon Green, V´ıctor Vald´es, Amir Sadik, Julian Schrittwieser, Keith Anderson, Sarah York, Max Cant, Adam Cain, Adrian Bolton, Stephen Gaffney, Helen King, Demis Hassabis, Shane Legg, and Stig Petersen. Deepmind lab. CoRR, abs/1612.03801, 2016. URL http://arxiv.org/abs/1612.03801.
[4] Marc G. Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning environment: An evaluation platform for general agents. CoRR, abs/1207.4708, 2012. URL http://arxiv.org/abs/1207.4708.
[5] Nadav Bhonker, Shai Rozenberg, and Itay Hubara. Playing SNES in the retro learning envi- ronment. CoRR, abs/1611.02205, 2016. URL http://arxiv.org/abs/1611.02205.
[6] Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. Openai gym. CoRR, abs/1606.01540, 2016. URL http://arxiv. org/abs/1606.01540.
[7] Cameron B Browne, Edward Powley, Daniel Whitehouse, Simon M Lucas, Peter I Cowl- ing, Philipp Rohlfshagen, Stephen Tavener, Diego Perez, Spyridon Samothrakis, and Simon Colton. A survey of monte carlo tree search methods. IEEE Transactions on Computational Intelligence and AI in games, 4(1):1â43, 2012.
[8] Michael Buro and Timothy Furtak. On the development of a free rts game engine. In Game- OnNA Conference, pages 23â27, 2005.
[9] Guillaume MJ-B Chaslot, Mark HM Winands, and H Jaap van Den Herik. Parallel monte-carlo tree search. In International Conference on Computers and Games, pages 60â71. Springer, 2008.
[10] Erwin Coumans. Bullet physics engine. Open Source Software: http://bulletphysics.org, 2010.
[11] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. ICML, 2015.
[12] Stefan Johansson and Robin Westberg. Spring: https://springrts.com/. 2008. URL https: //springrts.com/.
[13] Matthew Johnson, Katja Hofmann, Tim Hutton, and David Bignell. The malmo platform for In International joint conference on artiï¬cial intelli- artiï¬cial intelligence experimentation. gence (IJCAI), page 4246, 2016.
[14] MichaŠKempka, Marek Wydmuch, Grzegorz Runc, Jakub Toczek, and Wojciech Ja´skowski. Vizdoom: A doom-based ai research platform for visual reinforcement learning. arXiv preprint arXiv:1605.02097, 2016.
[15] Guillaume Lample and Devendra Singh Chaplot. Playing fps games with deep reinforcement learning. arXiv preprint arXiv:1609.05521, 2016.
[16] RoboCup Simulation League. Robocup https://en.wikipedia.org/wiki/robocup simulation league. //en.wikipedia.org/wiki/RoboCup_Simulation_League. 1995. simulation league: URL https:
[17] Andrew L Maas, Awni Y Hannun, and Andrew Y Ng. Rectiï¬er nonlinearities improve neural network acoustic models. In Proc. ICML, volume 30, 2013.
[18] Andrew L Maas, Awni Y Hannun, and Andrew Y Ng. Rectiï¬er nonlinearities improve neural network acoustic models. 2013.
[19] Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andrew J. Ballard, Andrea Ban- ino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, and Raia Hadsell. Learning to navigate in complex environments. ICLR, 2017.
[20] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529â533, 2015.
[21] Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy P Lill- icrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. arXiv preprint arXiv:1602.01783, 2016.
[22] Arun Nair, Praveen Srinivasan, Sam Blackwell, Cagdas Alcicek, Rory Fearon, Alessandro De Maria, Vedavyas Panneershelvam, Mustafa Suleyman, Charles Beattie, Stig Petersen, Shane Legg, Volodymyr Mnih, Koray Kavukcuoglu, and David Silver. Massively parallel methods for deep reinforcement learning. CoRR, abs/1507.04296, 2015. URL http://arxiv.org/ abs/1507.04296.
[23] Santiago Ontan´on. The combinatorial multi-armed bandit problem and its application to real- time strategy games. In Proceedings of the Ninth AAAI Conference on Artiï¬cial Intelligence and Interactive Digital Entertainment, pages 58â64. AAAI Press, 2013.
# [24] OpenRA. Openra: http://www.openra.net/. 2007. URL http://www.openra.net/.
[25] Peng Peng, Quan Yuan, Ying Wen, Yaodong Yang, Zhenkun Tang, Haitao Long, and Jun Wang. Multiagent bidirectionally-coordinated nets for learning to play starcraft combat games. CoRR, abs/1703.10069, 2017. URL http://arxiv.org/abs/1703.10069.
[26] John Schulman, Sergey Levine, Pieter Abbeel, Michael I Jordan, and Philipp Moritz. Trust region policy optimization. In ICML, pages 1889â1897, 2015.
[27] David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanc- tot, et al. Mastering the game of go with deep neural networks and tree search. Nature, 529 (7587):484â489, 2016.
[28] Pumpkin Studios. Warzone 2100: https://wz2100.net/. 1999. URL https://wz2100. net/.
[29] Sainbayar Sukhbaatar, Arthur Szlam, Gabriel Synnaeve, Soumith Chintala, and Rob Fergus. Mazebase: A sandbox for learning from games. CoRR, abs/1511.07401, 2015. URL http: //arxiv.org/abs/1511.07401.
[30] Richard S Sutton, David A McAllester, Satinder P Singh, Yishay Mansour, et al. Policy gra- dient methods for reinforcement learning with function approximation. In NIPS, volume 99, pages 1057â1063, 1999.
[31] Gabriel Synnaeve, Nantas Nardelli, Alex Auvolat, Soumith Chintala, Timoth´ee Lacroix, Zem- ing Lin, Florian Richoux, and Nicolas Usunier. Torchcraft: a library for machine learn- ing research on real-time strategy games. CoRR, abs/1611.00625, 2016. URL http: //arxiv.org/abs/1611.00625.
[32] Yuandong Tian and Yan Zhu. Better computer go player with neural network and long-term prediction. arXiv preprint arXiv:1511.06410, 2015.
# [33] Universe. 2016. URL universe.openai.com.
[34] Nicolas Usunier, Gabriel Synnaeve, Zeming Lin, and Soumith Chintala. Episodic exploration ICLR, for deep deterministic policies: An application to starcraft micromanagement tasks. 2017.
[35] Yuxin Wu and Yuandong Tian. Training agent for ï¬rst-person shooter game with actor-critic curriculum learning. International Conference on Learning Representations (ICLR), 2017.
# 6 Appendix: Detailed descriptions of RTS engine and games
# 6.1 Overview
On ELF, we thus build three different environments, Mini-RTS, Capture the Flag and Tower De- fense. Tbl. 8 shows their characteristics.
[Figure 7 schematic: (a) the tick-driven loop (all bots Act(), commands are executed, check whether the game ends); (b) a Mini-RTS game state with resources, workers, barracks, bases and a selected unit, using floating-point coordinates; (c) the command system with durative commands (e.g., Gather) and per-unit state, hit points, and action/reply channels.]

Figure 7: Overview of Mini-RTS. (a) Tick-driven system. (b) Visualization of game play. (c) Command system.
| Game | Description |
|---|---|
| Mini-RTS | Gather resources and build troops to destroy the enemy's base. |
| Capture the Flag | Capture the flag and bring it to your own base. |
| Tower Defense | Build defensive towers to block enemy invasion. |

Table 8: Short descriptions of three different environments built from our RTS engine.
# 6.2 Hierarchical Commands
[Figure 8 schematic: top-level strategic commands issue durative commands, which in turn issue immediate commands that change the game state in the environment.]
Figure 8: Hierarchical command system in our RTS engine. Top-level commands can issue strategic level commands, which in terms can issue durative and immediate commands to each unit (e.g., ALL ATTACK can issue ATTACK command to all units of our side). For a unit, durative commands usually last for a few ticks until the goal is achieved (e.g., enemy down). At each tick, the durative command can issue other durative ones, or immediate commands which takes effects by changing the game situation at the current tick.
The command level in our RTS engine is hierarchical (Fig. 8). A high-level command can issue other commands at the same tick during execution, which are then executed and can potentially issue other commands as well. A command can also issue subsequent commands for future ticks. Two kinds of commands exist, durative and immediate. Durative commands (e.g., Move, Attack) last for many ticks until completion (e.g., enemy down), while immediate commands take effect at the current tick.
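A durative command of this kind can be sketched as a small object that emits immediate commands each tick until its goal is met; this is our simplified illustration with hypothetical unit/target attributes, not engine code.

```python
class DurativeAttack:
    """Sketch of a durative command: each tick it issues an immediate command
    (move toward or hit the target) and stays active until the target is down."""
    def __init__(self, unit, target):
        self.unit, self.target = unit, target

    def on_tick(self, game_state):
        if self.target.hit_point <= 0:
            return []                                           # goal reached: command completes
        if self.unit.in_attack_range(self.target):
            return [("HIT", self.unit.id, self.target.id)]      # immediate command
        return [("MOVE_TOWARD", self.unit.id, self.target.xy)]  # immediate command
```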
# 6.3 Units and Game Dynamics
Mini-RTS. Tbl. 9 shows available units for Mini-RTS, which captures all basic dynamics of RTS Games: Gathering, Building facilities, Building different kinds of troops, Defending opponentâs attacks and/or Invading opponentâs base. For troops, there are melee units with high hit point, high attack points but low moving speed, and agile units with low hit point, long attack range but fast moving speed. Tbl. 10 shows available units for Capture the Flag.
Note that our framework is extensive and adding more units is easy.
| Unit name | Description |
|---|---|
| BASE | Building that can build workers and collect resources. |
| RESOURCE | Resource unit that contains 1000 minerals. |
| WORKER | Worker who can build barracks and gather resources. Low movement speed and low attack damage. |
| BARRACKS | Building that can build melee attackers and range attackers. |
| MELEE_ATTACKER | Tank with high HP, medium movement speed, short attack range, high attack damage. |
| RANGE_ATTACKER | Tank with low HP, high movement speed, long attack range and medium attack damage. |
# Table 9: Available units in Mini-RTS.
Unit name Description BASE FLAG ATHLETE Unit with attack damage and can carry a ï¬ag. Moves slowly with a ï¬ag. Table 10: Available units in Capture the Flag.
Capture the Flag. During the game, the player will try to bring the ï¬ag back to his own base. The ï¬ag will appear in the middle of the map. The athlete can carry a ï¬ag or ï¬ght each other. When carrying a ï¬ag, an athlete has reduced movement speed. Upon death, it will drop the ï¬ag if it is carrying one, and will respawn automatically at base after a certain period of time. Once a ï¬ag is brought to a playerâs base, the player scores a point and the ï¬ag is returned to the middle of the map. The ï¬rst player to score 5 points wins.
Tower Defense. During the game, the player will defend his base at top-left corner. Every 200 ticks, increasing number of enemy attackers will spawn at lower-right corner of the map, and travel towards playerâs base through a maze. The player can build towers along the way to prevent enemy from reaching the target. For every 5 enemies killed, the player can build a new tower. The player will lose if 10 enemies reach his base, and will win if he can survive 10 waves of attacks.
# 6.4 Others
Game Balance. We test the game balance of Mini-RTS and Capture the Flag. We put the same AI to combat each other. In Mini-RTS the win rate for player 0 is 50.0(±3.0) and In Capture the Flag the win rate for player 0 is 49.9(±1.1).
Replay. We offer serialization of replay and state snapshot at arbitrary ticks, which is more ï¬exible than many commercial games.
# 7 Detailed explanation of the experiments
Tbl. 11 shows the discrete action space for Mini-RTS and Capture the Flag used in the experiments.
Randomness. All games based on RTS engine are deterministic. However, modern RL methods require the experience to be diverse to explore the game state space more efï¬ciently. When we train AIs for Mini-RTS, we add randomness by randomly placing resources and bases, and by randomly adding units and buildings when the game starts. For Capture the Flag, all athletes have random starting position, and the ï¬ag appears in a random place with equal distances to both playerâs bases.
# 7.1 Rule based AIs for Mini-RTS
Simple AI This AI builds 3 workers and ask them to gather resources, then builds a barrack if resource permits, and then starts to build melee attackers. Once he has 5 melee attackers, all 5 attackers will attack opponentâs base.
Hit & Run AI This AI builds 3 workers and ask them to gather resources, then builds a barrack if resource permits, and then starts to build range attackers. Once he has 2 range attackers, the range attackers will move towards opponentâs base and attack enemy troops in range. If enemy counterattacks, the range attackers will hit and run.
# 7.2 Rule based AIs for Capture the Flag
Simple AI This AI will try to get ï¬ag if ï¬ag is not occupied. If one of the athlete gets the ï¬ag, he will escort the ï¬ag back to base, while other athletes defend opponentâs attack. If an opponent athlete carries the ï¬ag, all athletes will attack the ï¬ag carrier.
| Command name | Description |
|---|---|
| IDLE | Do nothing. |
| BUILD_WORKER | If the base is idle, build a worker. |
| BUILD_BARRACK | Move a worker (gathering or idle) to an empty place and build a barrack. |
| BUILD_MELEE_ATTACKER | If we have an idle barrack, build a melee attacker. |
| BUILD_RANGE_ATTACKER | If we have an idle barrack, build a range attacker. |
| HIT_AND_RUN | If we have range attackers, move towards the opponent base and attack. Take advantage of their long attack range and high movement speed to hit and run if the enemy counter-attacks. |
| ATTACK | All melee and range attackers attack the opponent's base. |
| ATTACK_IN_RANGE | All melee and range attackers attack enemies in sight. |
| ALL_DEFEND | All troops attack enemy troops near the base and resource. |

Table 11: Action space used in our trained AI. There are 9 strategic hard-coded global commands. Note that all building commands are automatically cancelled when resources are insufficient.
Command name Description IDLE Do nothing. GET FLAG All athletes move towards the ï¬ag and capture the ï¬ag. ESCORT FLAG Move the athlete with the ï¬ag back to base. ATTACK DEFEND
Table 12: Action space used in Capture the Flag trained AI.
"id": "1605.02097"
} |
1707.00061 | Racial Disparity in Natural Language Processing: A Case Study of Social Media African-American English | We highlight an important frontier in algorithmic fairness: disparity in the
quality of natural language processing algorithms when applied to language from
authors of different social groups. For example, current systems sometimes
analyze the language of females and minorities more poorly than they do of
whites and males. We conduct an empirical analysis of racial disparity in
language identification for tweets written in African-American English, and
discuss implications of disparity in NLP. | http://arxiv.org/pdf/1707.00061 | Su Lin Blodgett, Brendan O'Connor | cs.CY, cs.CL | Presented as a talk at the 2017 Workshop on Fairness, Accountability,
and Transparency in Machine Learning (FAT/ML 2017) | null | cs.CY | 20170630 | 20170630 |
# Racial Disparity in Natural Language Processing: A Case Study of Social Media African-American English
Su Lin Blodgett University of Massachusetts Amherst Amherst, MA blodgett@cs.umass.edu
Brendan OâConnor University of Massachusetts Amherst Amherst, MA brenocon@cs.umass.edu
ABSTRACT We highlight an important frontier in algorithmic fairness: dispar- ity in the quality of natural language processing algorithms when applied to language from authors of different social groups. For example, current systems sometimes analyze the language of fe- males and minorities more poorly than they do of whites and males. We conduct an empirical analysis of racial disparity in language identification for tweets written in African-American English, and discuss implications of disparity in NLP.
1 INTRODUCTION: DISPARITY IN NLP As machine learned algorithms govern more and more real-world outcomes, how to make them fairâand what that should meanâis of increasing concern. One strand of research, heavily represented at the FAT-ML series of workshops,1 considers scenarios where a learning algorithm must make decisions about people, such as approving prospective applicants for employment, or deciding who should be the targets of police actions [5], and seeks to develop learners or algorithms whose decisions have only small differences in behavior between persons from different groups [4] or that satisfy other notions of fairness (e.g. [12, 13]).
Another recent strand of research has examined a complemen- tary aspect of bias and fairness: disparate accuracy in language anal- ysis. Linguistic production is a critically important form of human behavior, and a major class of artificial intelligence algorithmsâ natural language processing, or language technologiesâmay or may not fairly analyze language produced by different types of authors [7]. For example, Tatman [20] finds that YouTube autocap- tioning has a higher word error rate for female speakers than for male speakers in videos. This has implications for downstream uses of language technology:
Gender and dialect are well-known confounds in speech recogni- tion, since they can implicate pitch, timbre, and the pronunciation of words (the phonetic level of language); domain adaptation is always a challenge and research continues on how to apply do- main transfer to speech recognizers across dialects [15]. And more broadly, decades of research in the field of sociolinguistics has doc- umented an extensive array of both social factors that affect how people produce language (e.g. community, geography, ethnicity), and how specifically language is affected (e.g. the lexicon, syntax, semantics). We might expect a minority teenager in school as well as a white middle-aged software engineer to both speak English, but they may exhibit variation in their pronunciation, word choice, slang, or even syntactic structures. Dialect communities often align with geographic and sociological factors, as language variation emerges within distinct social networks, or is affirmed as a marker of social identity.
Dialects pose a challenge to fairness in NLP, because they en- tail language variation that is correlated to social factors, and we believe there needs to be greater awareness of dialects among tech- nologists using and building language technologies. In the rest of this paper, we focus on the dialect of African-American English as used on Twitter, which previous work [3, 9, 11] has established is very prevalent and sometimes quite different than mainstream American English. We analyze an African-American English Twit- ter corpus (from Blodgett et al. [3], described in §3), and analyze racial disparity in language identification, a crucial first step in any NLP application. Our previous work found that off-the-shelf tools display racial disparityâthey tend to erroneously classify messages from African-Americans as non-English more often than those from whites. We extend this analysis from 200 to 20,000 tweets, finding that the disparity persists when controlling for message length (§4), and evaluate the racial disparity for several black-box commercial services. We conclude with a brief discussion (§5).
⢠Viewing: users who rely on autocaptioning have a harder time understanding what women are saying in videos, rel- ative to what men are saying.
⢠Access: search systems are necessary for people to access information online, and for videos they may depend on indexing text recognized from the audio. Tatmanâs results [20] imply that such a search system will fail to find infor- mation produced by female speakers more often than for male speakers.
This bias affects interests of the speakersâit is more difficult for their voices to be communicated to the worldâas well as other users, who are deprived of information or opinions from females, or more generally, any social group whose language experiences lower accuracy of analysis by language technologies.
# 2 AFRICAN-AMERICAN ENGLISH AND SOCIAL MEDIA
We focus on language in social media, which is often informal and conversational. Social media NLP tools may be used for, say, senti- ment analysis applications, which seek to measure opinions from online communities. But current NLP tools are typically trained on traditional written sources, which are quite different from so- cial media language, and even more so from dialectal social media language. Not only does this imply social media NLP may be of lower accuracy, but since language can vary across social groups, any such measurements may be biasedâincorrectly representing ideas and opinions from people who use non-standard language.
# 1http://www.fatml.org/
Specifically, we investigate dialectal language in publicly avail- able Twitter data, focusing on African-American English (AAE), a dialect of American English spoken by millions of people across the United States [6, 14, 18]. AAE is a linguistic variety with defined syntactic-semantic, phonological, and lexical features, which have been the subject of a rich body of sociolinguistic literature. In addi- tion to the linguistic characterization, reference to its speakers and their geographical location or speech communities is important, especially in light of the historical development of the dialect. Not all African-Americans speak AAE, and not all speakers of AAE are African-American; nevertheless, speakers of this variety have close ties with specific communities of African-Americans [6].
The phenomenon of âBlackTwitterâ has been noted anecdotally; indeed, African-American and Hispanic minorities were markedly over-represented in the early years of the Twitter service (as well as younger people) relative to their representation in the American general population.2 It is easy to find examples of non-Standard American English (SAE) language use, such as:
(1) he woke af smart af educated af daddy af coconut oil af GOALS AF & shares food af
(2) Bored af den my phone finna die~!
The first example has low punctuation usage (there is an utterance boundary after every "af"), but more importantly, it displays a key syntactic feature of the AAE dialect, a null copula: "he woke" would be written, in Standard American English, as "he is woke" (meaning, politically aware). "af" is an online-specific term meaning "as f-." The second example displays two more traditional AAE features: "den" is a spelling of "then" which follows a common phonological transform in AAE (initial "th" changing to a "d" sound: "dat," "dis," etc. are also common), and the word "finna" is an auxiliary verb, short for "fixing to," which indicates an immediate future tense ("my phone is going to die very soon"); it is part of AAE's rich verbal auxiliary system capable of encoding different temporal semantics than mainstream English [6].
# 3 DEMOGRAPHIC MIXED MEMBERSHIP MODEL FOR SOCIAL MEDIA
In order to test racial disparity in social media NLP, [3] collects a large-scale AAE corpus from Twitter, inferring soft demographic labels with a mixed-membership probabilistic model; we use this same corpus and method, briefly repeating the earlier description of the method. This approach to identifying AAE-like text makes use of the connection between speakers of AAE and African-American neighborhoods; we harvest a set of messages from Twitter, cross- referenced against U.S. Census demographics, and then analyze words against demographics with a mixed-membership probabilis- tic model. The data is a sample of millions of publicly posted geo- located Twitter messages (from the Decahose/Gardenhose stream [17]), most of which are sent on mobile phones, by authors in the U.S. in 2013.
For each message, we look up the U.S. Census blockgroup geo- graphic area that the message was sent in, and use race and ethnicity information for each blockgroup from the Censusâ 2013 American Community Survey, defining four covariates: percentages of the
# 2http://www.pewinternet.org/fact-sheet/social-media/
population that are non-Hispanic whites, non-Hispanic blacks, Hispanics (of any race), and (non-Hispanic) Asians. Finally, for each user u, we average the demographic values of all their messages in our dataset into a length-four vector π_u^(census).
Given this set of messages and author-associated demographics, we infer statistical associations between language and demographics with a mixed membership probabilistic model. It directly associates each of the demographic variables with a topic, i.e. a unigram language model over the vocabulary. The model assumes an author's mixture over the topics tends to be similar to their Census-associated demographic weights, and that every message has its own topic distribution. This allows for a single author to use different types of language in different messages, accommodating multidialectal authors. The message-level topic probabilities θ_m are drawn from an asymmetric Dirichlet centered on π_u^(census), whose scalar concentration parameter α controls whether authors' language is very similar to the demographic prior, or can have some deviation. A token t's latent topic z_t is drawn from θ_m, and the word itself is drawn from φ_{z_t}, the language model for the topic. Thus, the model learns demographically-aligned language models for each demographic category. Our previous work [3] verifies that its African-American language model learns linguistic attributes known in the sociolinguistics literature to be characteristic of AAE, in line with other work that has also verified the correspondence of geographical AA prevalence to AAE linguistic features on Twitter [10, 19].
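As a rough illustration of this generative story, the NumPy sketch below draws a message-level topic vector centred on a hypothetical author vector and then samples token topics and words; the vocabulary size, concentration value, demographic weights and language models are toy placeholders, not the fitted model.

```python
import numpy as np

# Toy sketch of the generative process: message-level topic proportions are
# drawn from an asymmetric Dirichlet centred on the author's Census-derived
# vector pi_u, then each token picks a latent topic z_t and a word from that
# topic's unigram language model phi. All values here are illustrative.
rng = np.random.default_rng(0)

V = 6                                    # toy vocabulary size
pi_u = np.array([0.2, 0.6, 0.15, 0.05])  # author's (white, AA, Hispanic, Asian) weights
alpha = 10.0                             # concentration: how tightly theta follows pi_u
phi = rng.dirichlet(np.ones(V), size=4)  # one unigram language model per topic

theta_m = rng.dirichlet(alpha * pi_u)    # message-level topic proportions
tokens = []
for _ in range(8):                       # generate an 8-token message
    z_t = rng.choice(4, p=theta_m)       # latent topic for this token
    w_t = rng.choice(V, p=phi[z_t])      # word index from that topic's model
    tokens.append((z_t, w_t))
print(theta_m, tokens)
```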
This publicly available corpus contains 59.2 million tweets. We filter its messages to ones strongly associated with demographic groups; for example, for each message we infer the posterior pro- portion of its tokens that came from the African-American language model, which can be high either due to demographic prior, or from a message that uses many words exclusive to the AA language model (topic); these proportions are available in the released cor- pus. When we filter to messages with AA proportion greater than 0.8, this results in AAE-like text. We call these AA-aligned messages and we also select a set of white-aligned messages in the same way.3
4 BIAS IN NLP TOOLS 4.1 Language identification Language identification, the task of classifying the major world language in which a message is written, is a crucial first step in al- most any web or social media text processing pipeline. For example, in order to analyze the opinions of U.S. Twitter users, one might throw away all non-English messages before running an English sentiment analyzer. (Some of the coauthors of this paper have done this as a simple expedient step in the past.)
A variety of methods for language identification exist [8]; so- cial media language identification is particularly challenging since messages are short and also use non-standard language [1]. In
3While Blodgett et al. verified that the AA-aligned tweets contain well-known features of AAE, we hesitate to call these "AAE" and "SAE" corpora, since technically speaking they are simply demographically correlated language models. The Census refers to the categories as "Black or African-American" and "White" (codes B03002E4 and B03002E3 in ACS 2013). And, while Hispanic- and Asian-associated language models of Blodgett et al.'s model are also of interest, we focus our analysis here on the African-American and White language models.
fact, a popular language identification system, langid.py [16], classifies both example messages in §2 as Danish with more than 99.9% confidence.
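The check above can be reproduced in a few lines against the open-source langid.py package; this is only a sketch, and the exact language and confidence returned depend on the installed model version.

```python
# Sketch of querying langid.py on the two example messages from Section 2.
import langid  # pip install langid

examples = [
    "he woke af smart af educated af daddy af coconut oil af GOALS AF & shares food af",
    "Bored af den my phone finna die~!",
]
for text in examples:
    lang, score = langid.classify(text)  # (language code, confidence score)
    print(lang, score, text[:40])
```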
We take the perspective that since AAE is a dialect of American English, it ought to be classified as English for the task of major world language identification. We hypothesize that if a language identification tool is trained on standard English data, it may exhibit disparate performance on AA- versus white-aligned tweets. In particular, we wish to assess the racial disparity accuracy difference:
p(correct | Wh) − p(correct | AA)   (1)
From manual inspection of a sample of hundreds of messages, it appears that nearly all white-aligned and AA-aligned tweets are actually English, so accuracy is the same as proportion of English predictions by the classifier. A disparity of 0 indicates a language identifier that is fair across these classes. (An alternative measure is the ratio of accuracies, corresponding to Feldman et al.'s disparate impact measure [4].)
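A minimal sketch of how the disparity in Eq. (1), and the corresponding ratio-style measure, could be computed from per-group accuracies; the example numbers come from the t ≤ 5 Twitter row of Table 1 below, expressed as fractions.

```python
# Accuracy disparity (Eq. 1) and a ratio-style disparate impact measure,
# computed from per-group accuracy estimates.
def disparity(acc_wh, acc_aa):
    return acc_wh - acc_aa               # 0 means no disparity

def disparate_impact_ratio(acc_wh, acc_aa):
    return acc_aa / acc_wh               # ratio form, in the spirit of Feldman et al. [4]

acc_aa, acc_wh = 0.540, 0.737            # Twitter classifier, t <= 5 bin
print(disparity(acc_wh, acc_aa))         # ~0.197, i.e. 19.7 points
print(disparate_impact_ratio(acc_wh, acc_aa))  # ~0.73
```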
4.2 Experiments We conduct an evaluation of four different off-the-shelf language identifiers, which are popular and straightforward for engineers to use when building applications:
⢠langid.py (software): One of the most popular open source language identification tools, langid.py was originally trained on over 97 languages and evaluated on both traditional corpora and Twitter messages [16].
⢠IBM Watson (API): The Watson Developer Cloudâs Lan- guage Translator service supports language identification of 62 languages.4
• Microsoft Azure (API): Microsoft Azure's Cognitive Services supports language identification of 120 languages.5
• Twitter (metadata): The output of Twitter's in-house identifier, whose predictions are included in a tweet's metadata (from 2013, the time of data collection), which Twitter intends to "help developers more easily work with targeted subsets of Tweet collections."6
• Google (API, excluded): We attempted to test Google's language detection service,7 but it returned a server error for every message we gave it to classify.
We queried the remote API systems in May 2017.
From manual inspection, we observed that longer tweets are sig- nificantly more likely to be correctly classified, which is a potential confound for a race disparity analysis, since the length distribution is different for each demographic group. To minimize this effect in our comparisons, we group messages into four bins (shown in Table 1) according to the number of words in the message. For each bin, we sampled 2,500 AA-aligned tweets and 2,500 white-aligned tweets, yielding a total of 20,000 messages across the two categories
# 4https://www.ibm.com/watson/developercloud/doc/language-translator/index.html
# 5https://docs.microsoft.com/en-us/azure/cognitive-services/text-analytics/overview#language-detection
# 6https://blog.twitter.com/developer/en_us/a/2013/introducing-new-metadata-for-tweets.html
# 7https://cloud.google.com/translate/docs/detecting-language
and four bins.8 We limited pre-processing of the messages to fixing of HTML escape characters and removal of URLs, keeping "noisy" features of social media text such as @-mentions, emojis, and hashtags. We then calculated, for each bin in each category, the number of messages predicted to be in English by each classifier. Accuracy results are shown in Table 1.9
As predicted, classifier accuracy does increase as message lengths increase; classifier accuracy is generally excellent for all messages containing at least 10 tokens. This result agrees with previous work finding short texts to be challenging to classify (e.g. [2]), since there are fewer features (e.g. character n-grams) to give evidence for the language used.10
However, the classifier results display a disparity in performance among messages of similar length; for all but one length bin under one classifier, accuracy on the white-aligned sample is higher than on the AA-aligned sample. The disparity in performance between AA- and white-aligned messages is greatest when messages are short; the gaps in performance for extremely short messages ranges across classifiers from 6.6% to 19.7%. This gap in performance is particularly critical as 41.7% of all AA-aligned messages in the corpus as a whole have 5 or fewer tokens.11
5 DISCUSSION Are these disparities substantively significant? It is easy to see how statistical bias could arise in downstream applications. For example, consider an analyst trying to look at major opinions about a product or political figure, with a sentiment analysis system that only gathers opinions from messages classified as English by Twitter. For messages of length 5 or less, opinions from African-American speakers will be shown to be 1 − 54.0/73.7 = 27% less frequent than they really are, relative to white opinions. Fortunately, the accuracy disparities are often only a few percentage points; nevertheless, it is important for practitioners to keep potential biases like these in mind.
One way forward to create less disparate NLP systems will be to use domain adaptation and other methods to extend algorithms to work on different distributions of data; for example, our demo- graphic modelâs predictions can be used to improve a language identifier, since the demographic language modelâs posteriors accu- rately identify some cases of dialectal English [3]. In the context of speech recognition, Lehr et al. [15] pursue a joint modeling ap- proach, learning pronunciation model parameters for AAE and SAE simultaneously.
One important issue may be the limitation of perspective of technologists versus users. In striking contrast to Twitterâs (histor- ically) minority-heavy demographics, major U.S. tech companies are notorious for their low representation of African-Americans and Hispanics; for example, Facebook and Google report only 1%
8 Due to a data processing error, there are 5 duplicates (19,995 unique tweets); we report on all 20,000 messages for simplicity.
9 We have made the 20,000 messages publicly available at: http://slanglab.cs.umass.edu/TwitterAAE/
10 A reviewer asked if length is used as a feature; we know that the open-source langid.py system does not (explicitly) use it.
11 For most (system, length) combinations, the accuracy difference is significant under a two-sided t-test (p < .01) except for two rows (t ≤ 5, langid.py, p = .03) and (10 < t ≤ 15, Twitter, p = 0.5). Accuracy rate standard errors range from 0.04% to 0.9% (= sqrt(acc(1 − acc)/2500)).
System | Length bin | AA Acc. | WH Acc. | Diff.
langid.py | t ≤ 5 | 68.0 | 70.8 | 2.8
langid.py | 5 < t ≤ 10 | 84.6 | 91.6 | 7.0
langid.py | 10 < t ≤ 15 | 93.0 | 98.0 | 5.0
langid.py | t > 15 | 96.2 | 99.8 | 3.6
IBM Watson | t ≤ 5 | 62.8 | 77.9 | 15.1
IBM Watson | 5 < t ≤ 10 | 91.9 | 95.7 | 3.8
IBM Watson | 10 < t ≤ 15 | 96.4 | 99.0 | 2.6
IBM Watson | t > 15 | 98.0 | 99.6 | 1.6
Microsoft Azure | t ≤ 5 | 87.6 | 94.2 | 6.6
Microsoft Azure | 5 < t ≤ 10 | 98.5 | 99.6 | 1.1
Microsoft Azure | 10 < t ≤ 15 | 99.6 | 99.9 | 0.3
Microsoft Azure | t > 15 | 99.5 | 99.9 | 0.4
Twitter | t ≤ 5 | 54.0 | 73.7 | 19.7
Twitter | 5 < t ≤ 10 | 87.5 | 91.5 | 4.0
Twitter | 10 < t ≤ 15 | 95.7 | 96.0 | 0.3
Twitter | t > 15 | 98.5 | 95.1 | -3.0
Table 1: Percent of the 2,500 tweets in each bin classified as English by each classifier; Diff. is the difference (disparity on an absolute scale) between the classifier accuracy on the AA-aligned and white-aligned samples. t is the message length for the bin.
of their tech employees are African-American,12 as opposed to 13.3% in the overall U.S. population,13 and the population of com- puter science researchers in the U.S. has similarly low minority representation. It is of course one example of the ever-present challenge of software designers understanding how users use their software; in the context of language processing algorithms, such understanding must be grounded in an understanding of dialects and sociolinguistics.
REFERENCES [1] Timothy Baldwin, Paul Cook, Marco Lui, Andrew MacKinlay, and Li Wang. 2013. How Noisy Social Media Text, How Diffrnt Social Media Sources?. In International Joint Conference on Natural Language Processing. 356â364. [2] Timothy Baldwin and Marco Lui. 2010. Language identification: The long and the short of the matter. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics. Association for Computational Linguistics, 229â237.
[3] Su Lin Blodgett, Lisa Green, and Brendan OâConnor. 2016. Demographic Dialectal Variation in Social Media: A Case Study of African-American English. Proceedings of EMNLP (2016).
[4] Michael Feldman, Sorelle A Friedler, John Moeller, Carlos Scheidegger, and Suresh Venkatasubramanian. 2015. Certifying and removing disparate impact. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 259â268.
[5] Sharad Goel, Maya Perelman, Ravi Shroff, and David Alan Sklansky. 2017. Com- batting police discrimination in the age of big data. New Criminal Law Review: In International and Interdisciplinary Journal 20, 2 (2017), 181â232.
[6] Lisa J. Green. 2002. African American English: A Linguistic Introduction. Cam- bridge University Press.
[10] Anna Jørgensen, Dirk Hovy, and Anders Søgaard. 2016. Learning a POS tagger for AAVE-like language. In Proceedings of NAACL. Association for Computational Linguistics.
[11] Anna Katrine Jørgensen, Dirk Hovy, and Anders Søgaard. 2015. Challenges of studying and processing dialects in social media. In Proceedings of the Workshop on Noisy User-generated Text. 9â18.
[12] Matthew Joseph, Michael Kearns, Jamie Morgenstern, Seth Neel, and Aaron Roth. 2016. Rawlsian fairness for machine learning. arXiv preprint arXiv:1610.09559 (2016).
[13] Matthew Joseph, Michael Kearns, Jamie H Morgenstern, and Aaron Roth. 2016. Fairness in Learning: Classic and Contextual Bandits. In Advances in Neural In- formation Processing Systems 29, D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett (Eds.). Curran Associates, Inc., 325â333. http://papers.nips.cc/ paper/6355-fairness-in-learning-classic-and-contextual-bandits.pdf
[14] William Labov. 1972. Language in the inner city: Studies in the Black English vernacular. Vol. 3. University of Pennsylvania Press.
[15] Maider Lehr, Kyle Gorman, and Izhak Shafran. 2014. Discriminative pronuncia- tion modeling for dialectal speech recognition. In Proc. Interspeech.
[16] Marco Lui and Timothy Baldwin. 2012. langid.py: An Off-the-shelf Language Identification Tool. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (ACL 2012), Demo Session, Jeju, Republic of Korea. http://www.aclweb.org/anthology-new/P/P12/P12-3005.pdf
[17] Fred Morstatter, Jürgen Pfeffer, Huan Liu, and Kathleen M. Carley. 2013. Is the Sample Good Enough? Comparing Data from Twitter's Streaming API with Twitter's Firehose. In International AAAI Conference on Weblogs and Social Media. http://www.aaai.org/ocs/index.php/ICWSM/ICWSM13/paper/view/6071
[18] John Russell Rickford. 1999. African American vernacular English: Features, evolution, educational implications. Wiley-Blackwell.
[19] Ian Stewart. 2014. Now We Stronger than Ever: African-American English Syntax in Twitter. In Proceedings of the Student Research Workshop at the 14th Conference of the European Chapter of the Association for Computational Linguistics. Association for Computational Linguistics, Gothenburg, Sweden, 31-37. http://www.aclweb.org/anthology/E14-3004
[7] Dirk Hovy and L. Shannon Spruit. 2016. The Social Impact of Natural Language Processing. In Proceedings of ACL.
[8] Baden Hughes, Timothy Baldwin, Steven Bird, Jeremy Nicholson, and Andrew MacKinlay. 2006. Reconsidering Language Identification for Written Language Resources. In Proceedings of the Fifth International Conference on Language Re- sources and Evaluation (LRECâ06). European Language Resources Association (ELRA). http://aclweb.org/anthology/L06-1274
[20] Rachael Tatman. 2017. Gender and Dialect Bias in YouTubeâs Automatic Captions. In Proceedings of the First ACL Workshop on Ethics in Natural Language Processing. Association for Computational Linguistics, Valencia, Spain, 53â59. http://www. aclweb.org/anthology/W/W17/W17-1606
[9] Taylor Jones. 2015. Toward a Description of African American Vernacular English Dialect Regions Using âBlack Twitterâ. American Speech 90, 4 (2015), 403â440.
# 12https://newsroom.fb.com/news/2016/07/facebook-diversity-update-positive-hiring-trends-show-progress/ and https://www.google.com/diversity/
# 13https://www.census.gov/quickfacts/table/RHI225215/00
4 | {
"id": "1610.09559"
} |
1706.10295 | Noisy Networks for Exploration | We introduce NoisyNet, a deep reinforcement learning agent with parametric
noise added to its weights, and show that the induced stochasticity of the
agent's policy can be used to aid efficient exploration. The parameters of the
noise are learned with gradient descent along with the remaining network
weights. NoisyNet is straightforward to implement and adds little computational
overhead. We find that replacing the conventional exploration heuristics for
A3C, DQN and dueling agents (entropy reward and $\epsilon$-greedy respectively)
with NoisyNet yields substantially higher scores for a wide range of Atari
games, in some cases advancing the agent from sub to super-human performance. | http://arxiv.org/pdf/1706.10295 | Meire Fortunato, Mohammad Gheshlaghi Azar, Bilal Piot, Jacob Menick, Ian Osband, Alex Graves, Vlad Mnih, Remi Munos, Demis Hassabis, Olivier Pietquin, Charles Blundell, Shane Legg | cs.LG, stat.ML | ICLR 2018 | null | cs.LG | 20170630 | 20190709 |
Published as a conference paper at ICLR 2018
# NOISY NETWORKS FOR EXPLORATION
# Meire Fortunato∗ Mohammad Gheshlaghi Azar∗ Bilal Piot∗
# Jacob Menick Matteo Hessel Ian Osband Alex Graves Volodymyr Mnih
# Remi Munos Demis Hassabis Olivier Pietquin Charles Blundell Shane Legg
DeepMind {meirefortunato,mazar,piot, jmenick,mtthss,iosband,gravesa,vmnih, munos,dhcontact,pietquin,cblundell,legg}@google.com
# ABSTRACT
We introduce NoisyNet, a deep reinforcement learning agent with parametric noise added to its weights, and show that the induced stochasticity of the agent's policy can be used to aid efficient exploration. The parameters of the noise are learned with gradient descent along with the remaining network weights. NoisyNet is straightforward to implement and adds little computational overhead. We find that replacing the conventional exploration heuristics for A3C, DQN and Dueling agents (entropy reward and ε-greedy respectively) with NoisyNet yields substantially higher scores for a wide range of Atari games, in some cases advancing the agent from sub to super-human performance.
# INTRODUCTION
Despite the wealth of research into efficient methods for exploration in Reinforcement Learning (RL) (Kearns & Singh, 2002; Jaksch et al., 2010), most exploration heuristics rely on random perturbations of the agent's policy, such as ε-greedy (Sutton & Barto, 1998) or entropy regularisation, to induce novel behaviours. However such local "dithering" perturbations are unlikely to lead to the large-scale behavioural patterns needed for efficient exploration in many environments (Osband et al., 2017).
Optimism in the face of uncertainty is a common exploration heuristic in reinforcement learning. Various forms of this heuristic often come with theoretical guarantees on agent performance (Azar et al., 2017; Lattimore et al., 2013; Jaksch et al., 2010; Auer & Ortner, 2007; Kearns & Singh, 2002). However, these methods are often limited to small state-action spaces or to linear function approximations and are not easily applied with more complicated function approximators such as neural networks (except from work by (Geist & Pietquin, 2010a;b) but it doesnât come with convergence guarantees). A more structured approach to exploration is to augment the environmentâs reward signal with an additional intrinsic motivation term (Singh et al., 2004) that explicitly rewards novel discoveries. Many such terms have been proposed, including learning progress (Oudeyer & Kaplan, 2007), compression progress (Schmidhuber, 2010), variational information maximisation (Houthooft et al., 2016) and prediction gain (Bellemare et al., 2016). One problem is that these methods separate the mechanism of generalisation from that of exploration; the metric for intrinsic reward, andâimportantlyâits weighting relative to the environment reward, must be chosen by the experimenter, rather than learned from interaction with the environment. Without due care, the optimal policy can be altered or even completely obscured by the intrinsic rewards; furthermore, dithering perturbations are usually needed as well as intrinsic reward to ensure robust exploration (Ostrovski et al., 2017). Exploration in the policy space itself, for example, with evolutionary or black box algorithms (Moriarty et al., 1999; Fix & Geist, 2012; Salimans et al., 2017), usually requires many prolonged interactions with the environment. Although these algorithms are quite generic and
# ∗Equal contribution.
can apply to any type of parametric policies (including neural networks), they are usually not data efï¬cient and require a simulator to allow many policy evaluations.
We propose a simple alternative approach, called NoisyNet, where learned perturbations of the network weights are used to drive exploration. The key insight is that a single change to the weight vector can induce a consistent, and potentially very complex, state-dependent change in policy over multiple time steps, unlike dithering approaches where decorrelated (and, in the case of ε-greedy, state-independent) noise is added to the policy at every step. The perturbations are sampled from a noise distribution. The variance of the perturbation is a parameter that can be considered as the energy of the injected noise. These variance parameters are learned using gradients from the reinforcement learning loss function, alongside the other parameters of the agent. The approach differs from parameter compression schemes such as variational inference (Hinton & Van Camp, 1993; Bishop, 1995; Graves, 2011; Blundell et al., 2015; Gal & Ghahramani, 2016) and flat minima search (Hochreiter & Schmidhuber, 1997) since we do not maintain an explicit distribution over weights during training but simply inject noise in the parameters and tune its intensity automatically. Consequently, it also differs from Thompson sampling (Thompson, 1933; Lipton et al., 2016) as the distribution on the parameters of our agents does not necessarily converge to an approximation of a posterior distribution.
At a high level our algorithm is a randomised value function, where the functional form is a neural network. Randomised value functions provide a provably efï¬cient means of exploration (Osband et al., 2014). Previous attempts to extend this approach to deep neural networks required many duplicates of sections of the network (Osband et al., 2016). By contrast in our NoisyNet approach while the number of parameters in the linear layers of the network is doubled, as the weights are a simple afï¬ne transform of the noise, the computational complexity is typically still dominated by the weight by activation multiplications, rather than the cost of generating the weights. Additionally, it also applies to policy gradient methods such as A3C out of the box (Mnih et al., 2016). Most recently (and independently of our work) Plappert et al. (2017) presented a similar technique where constant Gaussian noise is added to the parameters of the network. Our method thus differs by the ability of the network to adapt the noise injection with time and it is not restricted to Gaussian noise distributions. We need to emphasise that the idea of injecting noise to improve the optimisation process has been thoroughly studied in the literature of supervised learning and optimisation under different names (e.g., Neural diffusion process (Mobahi, 2016) and graduated optimisation (Hazan et al., 2016)). These methods often rely on a noise of vanishing size that is non-trainable, as opposed to NoisyNet which tunes the amount of noise by gradient descent.
NoisyNet can also be adapted to any deep RL algorithm and we demonstrate this versatility by pro- viding NoisyNet versions of DQN (Mnih et al., 2015), Dueling (Wang et al., 2016) and A3C (Mnih et al., 2016) algorithms. Experiments on 57 Atari games show that NoisyNet-DQN and NoisyNet- Dueling achieve striking gains when compared to the baseline algorithms without signiï¬cant extra computational cost, and with less hyper parameters to tune. Also the noisy version of A3C provides some improvement over the baseline.
# 2 BACKGROUND
This section provides mathematical background for Markov Decision Processes (MDPs) and deep RL with Q-learning, dueling and actor-critic methods.
2.1 MARKOV DECISION PROCESSES AND REINFORCEMENT LEARNING
MDPs model stochastic, discrete-time and ï¬nite action space control problems (Bellman & Kalaba, 1965; Bertsekas, 1995; Puterman, 1994). An MDP is a tuple M = (X , A, R, P, γ) where X is the state space, A the action space, R the reward function, γ â]0, 1[ the discount factor and P a stochastic kernel modelling the one-step Markovian dynamics (P (y|x, a) is the probability of transitioning to state y by choosing action a in state x). A stochastic policy Ï maps each state to a distribution over actions Ï(·|x) and gives the probability Ï(a|x) of choosing action a in state x. The quality of a policy
π is assessed by the action-value function Q^π defined as:
Q^π(x, a) = E^π [ Σ_{t=0}^{+∞} γ^t r_{t+1} ],   (1)

where E^π is the expectation over the distribution of the admissible trajectories (x_0, a_0, r_1, a_1, ...) obtained by executing the policy π starting from x_0 = x and a_0 = a. Therefore, the quantity Q^π(x, a) represents the expected γ-discounted cumulative reward collected by executing the policy π starting from x and a. A policy is optimal if no other policy yields a higher return. The action-value function of the optimal policy is Q*(x, a) = arg max_π Q^π(x, a). The value function V^π for a policy is defined as V^π(x) = E_{a∼π(·|x)}[Q^π(x, a)], and represents the expected γ-discounted return collected by executing the policy π starting from state x.
2.2 DEEP REINFORCEMENT LEARNING
Deep Reinforcement Learning uses deep neural networks as function approximators for RL methods. Deep Q-Networks (DQN) (Mnih et al., 2015), Dueling architecture (Wang et al., 2016), Asynchronous Advantage Actor-Critic (A3C) (Mnih et al., 2016), Trust Region Policy Optimisation (Schulman et al., 2015), Deep Deterministic Policy Gradient (Lillicrap et al., 2015) and distributional RL (C51) (Bellemare et al., 2017) are examples of such algorithms. They frame the RL problem as the minimisation of a loss function L(θ), where θ represents the parameters of the network. In our experiments we shall consider the DQN, Dueling and A3C algorithms.
DQN (Mnih et al., 2015) uses a neural network as an approximator for the action-value function of the optimal policy Q*(x, a). DQN's estimate of the optimal action-value function, Q(x, a; θ), is found by minimising the following loss with respect to the neural network parameters θ:
L(θ) = E_{(x,a,r,y)∼D} [ ( r + γ max_{b∈A} Q(y, b; θ⁻) − Q(x, a; θ) )² ],   (2)
where D is a distribution over transitions e = (x, a, r = R(x, a), y ∼ P(·|x, a)) drawn from a replay buffer of previously observed transitions. Here θ⁻ represents the parameters of a fixed and separate target network which is updated (θ⁻ ← θ) regularly to stabilise the learning. An ε-greedy policy is used to pick actions greedily according to the action-value function Q or, with probability ε, a random action is taken.
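A schematic PyTorch sketch of the loss in Eq. (2); q_net and target_net are toy stand-ins for the DQN network and the batch is random rather than drawn from a replay buffer.

```python
import torch
import torch.nn as nn

# Schematic sketch of the DQN loss in Eq. (2). Any modules mapping a batch of
# states to per-action values can play the role of q_net and target_net.
def dqn_loss(q_net, target_net, x, a, r, y, gamma=0.99):
    q_sa = q_net(x).gather(1, a.unsqueeze(1)).squeeze(1)       # Q(x, a; theta)
    with torch.no_grad():
        target = r + gamma * target_net(y).max(dim=1).values   # r + gamma max_b Q(y, b; theta^-)
    return ((target - q_sa) ** 2).mean()

q_net, target_net = nn.Linear(4, 3), nn.Linear(4, 3)   # toy 4-dim states, 3 actions
x, y = torch.randn(8, 4), torch.randn(8, 4)
a, r = torch.randint(0, 3, (8,)), torch.randn(8)
print(dqn_loss(q_net, target_net, x, a, r, y))
```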
The Dueling DQN (Wang et al., 2016) is an extension of the DQN architecture. The main difference is in using Dueling network architecture as opposed to the Q network in DQN. Dueling network estimates the action-value function using two parallel sub-networks, the value and advantage sub- network, sharing a convolutional layer. Let θconv, θV , and θA be, respectively, the parameters of the convolutional encoder f , of the value network V , and of the advantage network A; and θ = {θconv, θV , θA} is their concatenation. The output of these two networks are combined as follows for every (x, a) â X à A:
Q(x, a; θ) = V(f(x; θ_conv); θ_V) + A(f(x; θ_conv), a; θ_A) − (1/N_actions) Σ_{b∈A} A(f(x; θ_conv), b; θ_A).   (3)
The Dueling algorithm then makes use of the double-DQN update rule (van Hasselt et al., 2016) to optimise θ:
L(θ) = E_{(x,a,r,y)∼D} [ ( r + γ Q(y, b*(y); θ⁻) − Q(x, a; θ) )² ],   (4)

s.t.   b*(y) = arg max_{b∈A} Q(y, b; θ),   (5)
where the definition of the distribution D and the target network parameter set θ⁻ are identical to DQN.
In contrast to DQN and Dueling, A3C (Mnih et al., 2016) is a policy gradient algorithm. A3C's network directly learns a policy π and a value function V of its policy. The gradient of the loss on the
A3C policy at step t for the roll-out (x_{t+i}, a_{t+i} ∼ π(·|x_{t+i}; θ), r_{t+i})_{i=0}^{k} is:

∇_θ L^π(θ) = −E^π [ Σ_{i=0}^{k} ∇_θ log π(a_{t+i}|x_{t+i}; θ) A(x_{t+i}, a_{t+i}; θ) + β Σ_{i=0}^{k} ∇_θ H(π(·|x_{t+i}; θ)) ],   (6)

where H[π(·|x_t; θ)] denotes the entropy of the policy π and β is a hyper parameter that trades off between optimising the advantage function and the entropy of the policy. The advantage function A(x_{t+i}, a_{t+i}; θ) is the difference between observed returns and estimates of the return produced by A3C's value network: A(x_{t+i}, a_{t+i}; θ) = Σ_{j=i}^{k−1} γ^{j−i} r_{t+j} + γ^{k−i} V(x_{t+k}; θ) − V(x_{t+i}; θ), with r_{t+j} being the reward at step t + j and V(x; θ) being the agent's estimate of the value function of state x.
The parameters of the value function are found to match on-policy returns; namely we have
L^V(θ) = E^π [ Σ_{i=0}^{k} ( Q_i − V(x_{t+i}; θ) )² | x_{t+i} ],   (7)

where Q_i is the return obtained by executing policy π starting in state x_{t+i}. In practice, and as in Mnih et al. (2016), we estimate Q_i as Q_i = Σ_{j=i}^{k−1} γ^{j−i} r_{t+j} + γ^{k−i} V(x_{t+k}; θ), where {r_{t+j}}_{j=i}^{k−1} are the rewards observed by the agent and x_{t+k} is the kth state observed when starting from observed state x_t. The overall A3C loss is then L(θ) = L^π(θ) + λ L^V(θ), where λ balances optimising the policy loss relative to the baseline value function loss.
# 3 NOISYNETS FOR REINFORCEMENT LEARNING
NoisyNets are neural networks whose weights and biases are perturbed by a parametric function of the noise. These parameters are adapted with gradient descent. More precisely, let y = f_θ(x) be a neural network parameterised by the vector of noisy parameters θ which takes the input x and outputs y. We represent the noisy parameters θ as θ := µ + Σ ⊙ ε, where ζ := (µ, Σ) is a set of vectors of learnable parameters, ε is a vector of zero-mean noise with fixed statistics and ⊙ represents element-wise multiplication. The usual loss of the neural network is wrapped by expectation over the noise ε: ¯L(ζ) := E[L(θ)]. Optimisation now occurs with respect to the set of parameters ζ.
Consider a linear layer of a neural network with p inputs and q outputs, represented by
y = wx + b,   (8)

where x ∈ R^p is the layer input, w ∈ R^{q×p} the weight matrix, and b ∈ R^q the bias. The corresponding noisy linear layer is defined as:

y := (µ^w + σ^w ⊙ ε^w) x + µ^b + σ^b ⊙ ε^b,   (9)

where µ^w + σ^w ⊙ ε^w and µ^b + σ^b ⊙ ε^b replace w and b in Eq. (8), respectively. The parameters µ^w ∈ R^{q×p}, µ^b ∈ R^q, σ^w ∈ R^{q×p} and σ^b ∈ R^q are learnable whereas ε^w ∈ R^{q×p} and ε^b ∈ R^q are noise random variables (the specific choices of this distribution are described below). We provide a graphical representation of a noisy linear layer in Fig. 4 (see Appendix B).
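A minimal PyTorch sketch of such a noisy linear layer with independent Gaussian noise; the resample_noise helper and the initialisation constants are conveniences chosen here, not prescribed by the equations above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Sketch of a noisy linear layer (Eq. 9) with independent Gaussian noise:
# each effective weight is mu + sigma * eps, with eps resampled on demand.
class NoisyLinear(nn.Module):
    def __init__(self, p, q, sigma_init=0.017):
        super().__init__()
        bound = (3.0 / p) ** 0.5
        self.mu_w = nn.Parameter(torch.empty(q, p).uniform_(-bound, bound))
        self.mu_b = nn.Parameter(torch.empty(q).uniform_(-bound, bound))
        self.sigma_w = nn.Parameter(torch.full((q, p), sigma_init))
        self.sigma_b = nn.Parameter(torch.full((q,), sigma_init))
        self.register_buffer("eps_w", torch.zeros(q, p))
        self.register_buffer("eps_b", torch.zeros(q))
        self.resample_noise()

    def resample_noise(self):
        self.eps_w.normal_()             # one unit-Gaussian sample per weight
        self.eps_b.normal_()             # and per bias

    def forward(self, x):
        w = self.mu_w + self.sigma_w * self.eps_w
        b = self.mu_b + self.sigma_b * self.eps_b
        return F.linear(x, w, b)

layer = NoisyLinear(4, 2)
print(layer(torch.randn(3, 4)).shape)    # torch.Size([3, 2])
```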
We now turn to explicit instances of the noise distributions for linear layers in a noisy network. We explore two options: Independent Gaussian noise, which uses an independent Gaussian noise entry per weight and Factorised Gaussian noise, which uses an independent noise per each output and another independent noise per each input. The main reason to use factorised Gaussian noise is to reduce the compute time of random number generation in our algorithms. This computational overhead is especially prohibitive in the case of single-thread agents such as DQN and Duelling. For this reason we use factorised noise for DQN and Duelling and independent noise for the distributed A3C, for which the compute time is not a major concern.
(a) Independent Gaussian noise: the noise applied to each weight and bias is independent, where each entry ε^w_{i,j} (respectively ε^b_j) of the random matrix ε^w (respectively of the random vector ε^b) is drawn from a unit Gaussian distribution. This means that for each noisy linear layer, there are pq + q noise variables (for p inputs to the layer and q outputs).
(b) Factorised Gaussian noise: by factorising ε^w_{i,j}, we can use p unit Gaussian variables ε_i for noise of the inputs and q unit Gaussian variables ε_j for noise of the outputs (thus p + q unit Gaussian variables in total). Each ε^w_{i,j} and ε^b_j can then be written as:

ε^w_{i,j} = f(ε_i) f(ε_j),   (10)
ε^b_j = f(ε_j),   (11)

where f is a real-valued function. In our experiments we used f(x) = sgn(x)√|x|. Note that for the bias in Eq. (11) we could have set f(x) = x, but we decided to keep the same output noise for weights and biases.
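A small sketch of generating factorised Gaussian noise as in Eqs. (10)-(11); shapes follow the q × p weight convention used above.

```python
import torch

# Factorised Gaussian noise: p + q unit Gaussians are combined into a full
# q x p weight-noise matrix and a length-q bias-noise vector.
def f(x):
    return torch.sign(x) * torch.sqrt(torch.abs(x))

def factorised_noise(p, q):
    eps_in = f(torch.randn(p))                   # one noise variable per input
    eps_out = f(torch.randn(q))                  # one noise variable per output
    eps_w = eps_out[:, None] * eps_in[None, :]   # each entry multiplies an input and an output factor
    eps_b = eps_out                              # bias reuses the output noise
    return eps_w, eps_b

eps_w, eps_b = factorised_noise(4, 2)
print(eps_w.shape, eps_b.shape)                  # torch.Size([2, 4]) torch.Size([2])
```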
Since the loss of a noisy network, ¯L(ζ) = E [L(θ)], is an expectation over the noise, the gradients are straightforward to obtain:
∇¯L(ζ) = ∇E[L(θ)] = E[∇_{µ,Σ} L(µ + Σ ⊙ ε)].   (12)

We use a Monte Carlo approximation to the above gradients, taking a single sample ξ at each step of optimisation:

∇¯L(ζ) ≈ ∇_{µ,Σ} L(µ + Σ ⊙ ξ).   (13)
3.1 DEEP REINFORCEMENT LEARNING WITH NOISYNETS
We now turn to our application of noisy networks to exploration in deep reinforcement learning. Noise drives exploration in many methods for reinforcement learning, providing a source of stochasticity external to the agent and the RL task at hand. Either the scale of this noise is manually tuned across a wide range of tasks (as is the practice in general purpose agents such as DQN or A3C) or it can be manually scaled per task. Here we propose automatically tuning the level of noise added to an agent for exploration, using the noisy networks training to drive down (or up) the level of noise injected into the parameters of a neural network, as needed.
A noisy network agent samples a new set of parameters after every step of optimisation. Between optimisation steps, the agent acts according to a ï¬xed set of parameters (weights and biases). This ensures that the agent always acts according to parameters that are drawn from the current noise distribution.
Deep Q-Networks (DQN) and Dueling. We apply the following modiï¬cations to both DQN and Dueling: ï¬rst, ε-greedy is no longer used, but instead the policy greedily optimises the (randomised) action-value function. Secondly, the fully connected layers of the value network are parameterised as a noisy network, where the parameters are drawn from the noisy network parameter distribution after every replay step. We used factorised Gaussian noise as explained in (b) from Sec. 3. For replay, the current noisy network parameter sample is held ï¬xed across the batch. Since DQN and Dueling take one step of optimisation for every action step, the noisy network parameters are re-sampled before every action. We call the new adaptations of DQN and Dueling, NoisyNet-DQN and NoisyNet-Dueling, respectively.
We now provide the details of the loss function that our variant of DQN is minimising. When replacing the linear layers by noisy layers in the network (respectively in the target network), the parameterised action-value function Q(x, a, ε; ζ) (respectively Q(y, b, ε'; ζ⁻)) can be seen as a random variable and the DQN loss becomes the NoisyNet-DQN loss:

¯L(ζ) = E [ E_{(x,a,r,y)∼D} [ ( r + γ max_{b∈A} Q(y, b, ε'; ζ⁻) − Q(x, a, ε; ζ) )² ] ],   (14)
where the outer expectation is with respect to the distribution of the noise variables ε for the noisy value function Q(x, a, ε; ζ) and the noise variable ε' for the noisy target value function Q(y, b, ε'; ζ⁻). Computing an unbiased estimate of the loss is straightforward as we only need to compute, for each transition in the replay buffer, one instance of the target network and one instance of the online network. We generate these independent noises to avoid bias due to the correlation between the noise in the target network and the online network. Concerning the action choice, we generate another independent sample ε'' for the online network and we act greedily with respect to the corresponding output action-value function.
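A sketch of how these independent noise draws could be organised in code, assuming the networks are built from noisy layers exposing a resample_noise() method as in the earlier NoisyLinear sketch; the helper names here are not from the paper.

```python
import torch

# Independent noise samples for the NoisyNet-DQN loss (Eq. 14): one draw for
# the online network, one for the target network, and a fresh one for acting.
def resample_all(net):
    for module in net.modules():
        if hasattr(module, "resample_noise"):
            module.resample_noise()

def noisy_dqn_loss(q_net, target_net, x, a, r, y, gamma=0.99):
    resample_all(q_net)                  # epsilon   for Q(x, a, eps; zeta)
    resample_all(target_net)             # epsilon'  for Q(y, b, eps'; zeta^-)
    q_sa = q_net(x).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = r + gamma * target_net(y).max(dim=1).values
    return ((target - q_sa) ** 2).mean()

def act_greedily(q_net, x):
    resample_all(q_net)                  # epsilon'' drawn independently for acting
    with torch.no_grad():
        return q_net(x).argmax(dim=1)
```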
Similarly the loss function for NoisyNet-Dueling is defined as:

¯L(ζ) = E [ E_{(x,a,r,y)∼D} [ ( r + γ Q(y, b*(y), ε'; ζ⁻) − Q(x, a, ε; ζ) )² ] ],   (15)

s.t.   b*(y) = arg max_{b∈A} Q(y, b, ε''; ζ).   (16)
Both algorithms are provided in Appendix C.1.
Asynchronous Advantage Actor Critic (A3C). A3C is modified in a similar fashion to DQN: firstly, the entropy bonus of the policy loss is removed. Secondly, the fully connected layers of the policy network are parameterised as a noisy network. We used independent Gaussian noise as explained in (a) from Sec. 3. In A3C, there is no explicit exploratory action selection scheme (such as ε-greedy); and the chosen action is always drawn from the current policy. For this reason, an entropy bonus of the policy loss is often added to discourage updates leading to deterministic policies. However, when adding noisy weights to the network, sampling these parameters corresponds to choosing a different current policy which naturally favours exploration. As a consequence of direct exploration in the policy space, the artificial entropy loss on the policy can thus be omitted. New parameters of the policy network are sampled after each step of optimisation, and since A3C uses n step returns, optimisation occurs every n steps. We call this modification of A3C, NoisyNet-A3C.
Indeed, when replacing the linear layers by noisy linear layers (the parameters of the noisy network are now noted ζ), we obtain the following estimation of the return via a roll-out of size k:
Q̂_i = Σ_{j=i}^{k−1} γ^{j−i} r_{t+j} + γ^{k−i} V(x_{t+k}; ζ, ε_i).   (17)

As A3C is an on-policy algorithm the gradients are unbiased when the noise of the network is consistent for the whole roll-out. Consistency among the action-value functions Q̂_i is ensured by letting the noise be the same throughout each rollout, i.e., ∀i, ε_i = ε. Additional details are provided in Appendix A and the algorithm is given in Appendix C.2.
INITIALISATION OF NOISY NETWORKS
In the case of an unfactorised noisy network, the parameters µ and σ are initialised as follows. Each element µ_{i,j} is sampled from independent uniform distributions U[−√(3/p), +√(3/p)], where p is the number of inputs to the corresponding linear layer, and each element σ_{i,j} is simply set to 0.017 for all parameters. This particular initialisation was chosen because similar values worked well for the supervised learning tasks described in Fortunato et al. (2017), where the initialisation of the variances of the posteriors and the variances of the prior are related. We have not tuned for this parameter, but we believe different values on the same scale should provide similar results.
For factorised noisy networks, each element µ_{i,j} was initialised by a sample from an independent uniform distribution U[−1/√p, +1/√p] and each element σ_{i,j} was initialised to a constant σ_0/√p. The hyperparameter σ_0 is set to 0.5.
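A short sketch of the factorised initialisation as described above; the shapes are illustrative and sigma_0 defaults to 0.5 as in the text.

```python
import torch

# Factorised-noise initialisation: mu entries uniform in [-1/sqrt(p), 1/sqrt(p)]
# and sigma entries set to the constant sigma_0 / sqrt(p).
def init_factorised(p, q, sigma_0=0.5):
    bound = 1.0 / p ** 0.5
    mu_w = torch.empty(q, p).uniform_(-bound, bound)
    mu_b = torch.empty(q).uniform_(-bound, bound)
    sigma_w = torch.full((q, p), sigma_0 / p ** 0.5)
    sigma_b = torch.full((q,), sigma_0 / p ** 0.5)
    return mu_w, mu_b, sigma_w, sigma_b

print([t.shape for t in init_factorised(4, 2)])
```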
# 4 RESULTS
We evaluated the performance of noisy network agents on 57 Atari games (Bellemare et al., 2015) and compared to baselines that, without noisy networks, rely upon the original exploration methods (ε-greedy and entropy bonus).
# 4.1 TRAINING DETAILS AND PERFORMANCE
We used the random start no-ops scheme for training and evaluation as described the original DQN paper (Mnih et al., 2015). The mode of evaluation is identical to those of Mnih et al. (2016) where randomised restarts of the games are used for evaluation after training has happened. The raw average scores of the agents are evaluated during training, every 1M frames in the environment, by suspending
(a) Improvement in percentage of NoisyNet-DQN over DQN (Mnih et al., 2015)
(b) Improvement in percentage of NoisyNet-Dueling over Dueling (Wang et al., 2016)
(c) Improvement in percentage of NoisyNet-A3C over A3C (Mnih et al., 2016)
Figure 1: Comparison of NoisyNet agent versus the baseline according to Eq. (19). The maximum score is truncated at 250%.
learning and evaluating the latest agent for 500K frames. Episodes are truncated at 108K frames (or 30 minutes of simulated play) (van Hasselt et al., 2016).
We consider three baseline agents: DQN (Mnih et al., 2015), duel clip variant of Dueling algo- rithm (Wang et al., 2016) and A3C (Mnih et al., 2016). The DQN and A3C agents were training for 200M and 320M frames, respectively. In each case, we used the neural network architecture from the corresponding original papers for both the baseline and NoisyNet variant. For the NoisyNet variants we used the same hyper parameters as in the respective original paper for the baseline.
We compared absolute performance of agents using the human normalised score:
100 × (Score_agent − Score_Random) / (Score_Human − Score_Random),   (18)
where human and random scores are the same as those in Wang et al. (2016). Note that the human normalised score is zero for a random agent and 100 for human level performance. Per-game maximum scores are computed by taking the maximum raw scores of the agent and then averaging over three seeds. However, for computing the human normalised scores in Figure 2, the raw scores are evaluated every 1M frames and averaged over three seeds. The overall agent performance is measured by both mean and median of the human normalised score across all 57 Atari games.
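A one-function sketch of the score in Eq. (18); the numbers passed in are placeholders, not values from the paper.

```python
# Human-normalised score (Eq. 18): 0 for a random agent, 100 for human level.
def human_normalised(score_agent, score_human, score_random):
    return 100.0 * (score_agent - score_random) / (score_human - score_random)

print(human_normalised(score_agent=800.0, score_human=1000.0, score_random=100.0))  # ~77.8
```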
The aggregated results across all 57 Atari games are reported in Table 1, while the individual scores for each game are in Table 3 from the Appendix E. The median human normalised score is improved
in all agents by using NoisyNet, adding at least 18 (in the case of A3C) and at most 48 (in the case of DQN) percentage points to the median human normalised score. The mean human normalised score is also signiï¬cantly improved for all agents. Interestingly the Dueling case, which relies on multiple modiï¬cations of DQN, demonstrates that NoisyNet is orthogonal to several other improvements made to DQN. We also compared relative performance of NoisyNet agents to the respective baseline agent
Agent | Baseline Mean | Baseline Median | NoisyNet Mean | NoisyNet Median | Improvement (on median)
DQN | 319 | 83 | 379 | 123 | 48%
Dueling | 524 | 132 | 633 | 172 | 30%
A3C | 293 | 80 | 347 | 94 | 18%
Table 1: Comparison between the baseline DQN, Dueling and A3C and their NoisyNet version in terms of median and mean human-normalised scores deï¬ned in Eq. (18). We report on the last column the percentage improvement on the baseline in terms of median human-normalised score.
without noisy networks:
100 × (Score_NoisyNet − Score_Baseline) / (max(Score_Human, Score_Baseline) − Score_Random).   (19)
As before, the per-game score is computed by taking the maximum performance for each game and then averaging over three seeds. The relative human normalised scores are shown in Figure 1. As can be seen, the performance of NoisyNet agents (DQN, Dueling and A3C) is better for the majority of games relative to the corresponding baseline, and in some cases by a considerable margin. Also as it is evident from the learning curves of Fig. 2 NoisyNet agents produce superior performance compared to their corresponding baselines throughout the learning process. This improvement is especially signiï¬cant in the case of NoisyNet-DQN and NoisyNet-Dueling. Also in some games, NoisyNet agents provide an order of magnitude improvement on the performance of the vanilla agent; as can be seen in Table 3 in the Appendix E with detailed breakdown of individual game scores and the learning curves plots from Figs 6, 7 and 8, for DQN, Dueling and A3C, respectively. We also ran some experiments evaluating the performance of NoisyNet-A3C with factorised noise. We report the corresponding learning curves and the scores in Fig. ?? and Table 2, respectively (see Appendix D). This result shows that using factorised noise does not lead to any signiï¬cant decrease in the performance of A3C. On the contrary it seems that it has positive effects in terms of improving the median score as well as speeding up the learning process.
Figure 2: Comparison of the learning curves of NoisyNet agent versus the baseline according to the median human normalised score.
4.2 ANALYSIS OF LEARNING IN NOISY LAYERS
In this subsection, we try to provide some insight on how noisy networks affect the learning process and the exploratory behaviour of the agent. In particular, we focus on analysing the evolution of the noise weights Ïw and Ïb throughout the learning process. We ï¬rst note that, as L(ζ) is a positive and continuous function of ζ, there always exists a deterministic optimiser for the loss L(ζ) (deï¬ned in
Eq. (14)). Therefore, one may expect that, to obtain the deterministic optimal solution, the neural network may learn to discard the noise entries by eventually pushing the σ^w's and σ^b's towards 0. To test this hypothesis we track the changes in the σ^w's throughout the learning process. Let σ^w_i be the ith weight of a noisy layer. We then define ¯Σ, the mean-absolute of the σ^w_i's, as:

¯Σ = (1 / N_weights) Σ_i |σ^w_i|.   (20)
Intuitively speaking ¯Σ provides some measure of the stochasticity of the Noisy layers. We report the learning curves of the average of ¯Σ across 3 seeds in Fig. 3 for a selection of Atari games in NoisyNet-DQN agent. We observe that ¯Σ of the last layer of the network decreases as the learning proceeds in all cases, whereas in the case of the penultimate layer this only happens for 2 games out of 5 (Pong and Beam rider) and in the remaining 3 games ¯Σ in fact increases. This shows that in the case of NoisyNet-DQN the agent does not necessarily evolve towards a deterministic solution as one might have expected. Another interesting observation is that the way ¯Σ evolves signiï¬cantly differs from one game to another and in some cases from one seed to another seed, as it is evident from the error bars. This suggests that NoisyNet produces a problem-speciï¬c exploration strategy as opposed to ï¬xed exploration strategy used in standard DQN.
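A small sketch of the statistic in Eq. (20) for one layer's sigma^w entries; a freshly initialised unfactorised layer would give roughly the 0.017 constant mentioned earlier.

```python
import torch

# Mean absolute value of a layer's sigma^w entries (Eq. 20), used to track how
# much noise the agent keeps injecting as training proceeds.
def mean_abs_sigma(sigma_w):
    return sigma_w.abs().mean().item()

print(mean_abs_sigma(torch.full((2, 4), 0.017)))   # 0.017 for a fresh layer
```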
Figure 3: Comparison of the learning curves of the average noise parameter ¯Σ across ï¬ve Atari games in NoisyNet-DQN. The results are averaged across 3 seeds and error bars (+/- standard deviation) are plotted.
# 5 CONCLUSION
We have presented a general method for exploration in deep reinforcement learning that shows signiï¬cant performance improvements across many Atari games in three different agent architec- tures. In particular, we observe that in games such as Beam rider, Asteroids and Freeway that the standard DQN, Dueling and A3C perform poorly compared with the human player, NoisyNet-DQN, NoisyNet-Dueling and NoisyNet-A3C achieve super human performance, respectively. Although the improvements in performance might also come from the optimisation aspect since the cost functions are modiï¬ed, the uncertainty in the parameters of the networks introduced by NoisyNet is the only exploration mechanism of the method. Having weights with greater uncertainty introduces more variability into the decisions made by the policy, which has potential for exploratory actions, but further analysis needs to be done in order to disentangle the exploration and optimisation effects.
Another advantage of NoisyNet is that the amount of noise injected in the network is tuned automatically by the RL algorithm. This alleviates the need for any hyper-parameter tuning (required with standard entropy bonus and ε-greedy types of exploration). This is also in contrast to many other methods that add intrinsic motivation signals that may destabilise learning or change the optimal policy. Another interesting feature of the NoisyNet approach is that the degree of exploration is contextual and varies from state to state based upon per-weight variances. While more gradients are needed, the gradients on the mean and variance parameters are related to one another by a computationally efficient affine function, thus the computational overhead is marginal. Automatic differentiation makes implementation of our method a straightforward adaptation of many existing methods. A similar randomisation technique can also be applied to LSTM units (Fortunato et al., 2017) and is easily extended to reinforcement learning; we leave this as future work.
Note that the NoisyNet exploration strategy is not restricted to the baselines considered in this paper. In fact, this idea can be applied to any deep RL algorithm that can be trained with gradient descent, including DDPG (Lillicrap et al., 2015), TRPO (Schulman et al., 2015) or distributional RL (C51) (Bellemare et al., 2017). As such we believe this work is a step towards the goal of developing a universal exploration strategy.
Acknowledgements We would like to thank Koray Kavukcuoglu, Oriol Vinyals, Daan Wierstra, Georg Ostrovski, Joseph Modayil, Simon Osindero, Chris Apps, Stephen Gaffney and many others at DeepMind for insightful discussions, comments and feedback on this work.
# REFERENCES
Peter Auer and Ronald Ortner. Logarithmic online regret bounds for undiscounted reinforcement learning. Advances in Neural Information Processing Systems, 19:49, 2007.
Mohammad Gheshlaghi Azar, Ian Osband, and Rémi Munos. Minimax regret bounds for reinforcement learning. arXiv preprint arXiv:1703.05449, 2017.

Marc Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning environment: An evaluation platform for general agents. In Twenty-Fourth International Joint Conference on Artificial Intelligence, 2015.

Marc Bellemare, Sriram Srinivasan, Georg Ostrovski, Tom Schaul, David Saxton, and Rémi Munos. Unifying count-based exploration and intrinsic motivation. In Advances in Neural Information Processing Systems, pp. 1471–1479, 2016.

Marc G Bellemare, Will Dabney, and Rémi Munos. A distributional perspective on reinforcement learning. In International Conference on Machine Learning, pp. 449–458, 2017.

Richard Bellman and Robert Kalaba. Dynamic programming and modern control theory. Academic Press New York, 1965.

Dimitri Bertsekas. Dynamic programming and optimal control, volume 1. Athena Scientific, Belmont, MA, 1995.

Chris M Bishop. Training with noise is equivalent to Tikhonov regularization. Neural Computation, 7(1):108–116, 1995.

Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, and Daan Wierstra. Weight uncertainty in neural networks. In Proceedings of The 32nd International Conference on Machine Learning, pp. 1613–1622, 2015.

Jeremy Fix and Matthieu Geist. Monte-Carlo swarm policy search. In Swarm and Evolutionary Computation, pp. 75–83. Springer, 2012.

Meire Fortunato, Charles Blundell, and Oriol Vinyals. Bayesian recurrent neural networks. arXiv preprint arXiv:1704.02798, 2017.

Yarin Gal and Zoubin Ghahramani. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In Maria Florina Balcan and Kilian Q. Weinberger (eds.), Proceedings of The 33rd International Conference on Machine Learning, volume 48 of Proceedings of Machine Learning Research, pp. 1050–1059, New York, New York, USA, 20–22 Jun 2016. PMLR. URL http://proceedings.mlr.press/v48/gal16.html.

Matthieu Geist and Olivier Pietquin. Kalman temporal differences. Journal of Artificial Intelligence Research, 39:483–532, 2010a.

Matthieu Geist and Olivier Pietquin. Managing uncertainty within value function approximation in reinforcement learning. In Active Learning and Experimental Design workshop (collocated with AISTATS 2010), Sardinia, Italy, volume 92, 2010b.

Alex Graves. Practical variational inference for neural networks. In Advances in Neural Information Processing Systems, pp. 2348–2356, 2011.
Elad Hazan, Kfir Yehuda Levy, and Shai Shalev-Shwartz. On graduated optimization for stochastic non-convex problems. In International Conference on Machine Learning, pp. 1833–1841, 2016.

Geoffrey E Hinton and Drew Van Camp. Keeping the neural networks simple by minimizing the description length of the weights. In Proceedings of the sixth annual conference on Computational learning theory, pp. 5–13. ACM, 1993.

Sepp Hochreiter and Jürgen Schmidhuber. Flat minima. Neural Computation, 9(1):1–42, 1997.

Rein Houthooft, Xi Chen, Yan Duan, John Schulman, Filip De Turck, and Pieter Abbeel. VIME: Variational information maximizing exploration. In Advances in Neural Information Processing Systems, pp. 1109–1117, 2016.

Thomas Jaksch, Ronald Ortner, and Peter Auer. Near-optimal regret bounds for reinforcement learning. Journal of Machine Learning Research, 11(Apr):1563–1600, 2010.

Michael Kearns and Satinder Singh. Near-optimal reinforcement learning in polynomial time. Machine Learning, 49(2-3):209–232, 2002.

Tor Lattimore, Marcus Hutter, and Peter Sunehag. The sample-complexity of general reinforcement learning. In Proceedings of The 30th International Conference on Machine Learning, pp. 28–36, 2013.
Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015.
Zachary C Lipton, Jianfeng Gao, Lihong Li, Xiujun Li, Faisal Ahmed, and Li Deng. Efficient exploration for dialogue policy learning with BBQ networks & replay buffer spiking. arXiv preprint arXiv:1608.05081, 2016.

Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.

Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In International Conference on Machine Learning, pp. 1928–1937, 2016.
Hossein Mobahi. Training recurrent neural networks by diffusion. arXiv preprint arXiv:1601.04114, 2016.
David E Moriarty, Alan C Schultz, and John J Grefenstette. Evolutionary algorithms for reinforcement learning. Journal of Artificial Intelligence Research, 11:241–276, 1999.
Ian Osband, Benjamin Van Roy, and Zheng Wen. Generalization and exploration via randomized value functions. arXiv preprint arXiv:1402.0635, 2014.
Ian Osband, Charles Blundell, Alexander Pritzel, and Benjamin Van Roy. Deep exploration via bootstrapped DQN. In Advances in Neural Information Processing Systems, pp. 4026–4034, 2016.
Ian Osband, Daniel Russo, Zheng Wen, and Benjamin Van Roy. Deep exploration via randomized value functions. arXiv preprint arXiv:1703.07608, 2017.
Georg Ostrovski, Marc G Bellemare, Aaron van den Oord, and Remi Munos. Count-based exploration with neural density models. arXiv preprint arXiv:1703.01310, 2017.
Pierre-Yves Oudeyer and Frederic Kaplan. What is intrinsic motivation? A typology of computational approaches. Frontiers in neurorobotics, 1, 2007.
Matthias Plappert, Rein Houthooft, Prafulla Dhariwal, Szymon Sidor, Richard Y Chen, Xi Chen, Tamim Asfour, Pieter Abbeel, and Marcin Andrychowicz. Parameter space noise for exploration. arXiv preprint arXiv:1706.01905, 2017.
Martin Puterman. Markov decision processes: discrete stochastic dynamic programming. John Wiley & Sons, 1994.
Tim Salimans, J. Ho, X. Chen, and I. Sutskever. Evolution strategies as a scalable alternative to reinforcement learning. ArXiv e-prints, 2017.

Jürgen Schmidhuber. Formal theory of creativity, fun, and intrinsic motivation (1990–2010). IEEE Transactions on Autonomous Mental Development, 2(3):230–247, 2010.

J. Schulman, S. Levine, P. Abbeel, M. Jordan, and P. Moritz. Trust region policy optimization. In Proc. of ICML, pp. 1889–1897, 2015.

Satinder P Singh, Andrew G Barto, and Nuttapong Chentanez. Intrinsically motivated reinforcement learning. In NIPS, volume 17, pp. 1281–1288, 2004.

Richard S Sutton and Andrew G Barto. Reinforcement learning: An introduction. Cambridge Univ Press, 1998.

Richard S. Sutton, David A. McAllester, Satinder P. Singh, and Yishay Mansour. Policy gradient methods for reinforcement learning with function approximation. In Proc. of NIPS, volume 99, pp. 1057–1063, 1999.

William R Thompson. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika, 25(3/4):285–294, 1933.

Hado van Hasselt, Arthur Guez, and David Silver. Deep reinforcement learning with double Q-learning. In Proc. of AAAI, pp. 2094–2100, 2016.

Ziyu Wang, Tom Schaul, Matteo Hessel, Hado van Hasselt, Marc Lanctot, and Nando de Freitas. Dueling network architectures for deep reinforcement learning. In Proceedings of The 33rd International Conference on Machine Learning, pp. 1995–2003, 2016.

Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229–256, 1992.
# A NOISYNET-A3C IMPLEMENTATION DETAILS
In contrast with value-based algorithms, policy-based methods such as A3C (Mnih et al., 2016) parameterise the policy π(a|x; θ_π) directly and update the parameters θ_π by performing a gradient ascent on the mean value-function E_{x∼D}[V^{π(·|·; θ_π)}(x)] (also called the expected return) (Sutton et al., 1999). A3C uses a deep neural network with weights θ = θ_π ∪ θ_V to parameterise the policy π and the value V. The network has one softmax output for the policy-head π(·|·; θ_π) and one linear output for the value-head V(·; θ_V), with all non-output layers shared. The parameters θ_π (resp. θ_V) are relative to the shared layers and the policy head (resp. the value head). A3C is an asynchronous and online algorithm that uses roll-outs of size k + 1 of the current policy to perform a policy improvement step.
For simplicity, here we present the A3C version with only one thread. For a multi-thread implementation, refer to the pseudo-code C.2 or to the original A3C paper (Mnih et al., 2016). In order to train the policy-head, an approximation of the policy-gradient is computed for each state of the roll-out (x_{t+i}, a_{t+i} ∼ π(·|x_{t+i}; θ_π), r_{t+i})_{i=0}^{k}:

∇_{θ_π} log(π(a_{t+i}|x_{t+i}; θ_π)) [Q̂_i − V(x_{t+i}; θ_V)],    (21)
where Q̂_i is an estimation of the return, Q̂_i = Σ_{j=i}^{k−1} γ^{j−i} r_{t+j} + γ^{k−i} V(x_{t+k}; θ_V). The gradients are then added to obtain the cumulative gradient of the roll-out:
Σ_{i=0}^{k} ∇_{θ_π} log(π(a_{t+i}|x_{t+i}; θ_π)) [Q̂_i − V(x_{t+i}; θ_V)].    (22)
A3C trains the value-head by minimising the error between the estimated return and the value, Σ_{i=0}^{k} (Q̂_i − V(x_{t+i}; θ_V))². Therefore, the network parameters (θ_π, θ_V) are updated after each roll-out as follows:
θ_π ← θ_π + α_π Σ_{i=0}^{k} ∇_{θ_π} log(π(a_{t+i}|x_{t+i}; θ_π)) [Q̂_i − V(x_{t+i}; θ_V)],    (23)

θ_V ← θ_V − α_V Σ_{i=0}^{k} ∇_{θ_V} [Q̂_i − V(x_{t+i}; θ_V)]²,    (24)
where (α_π, α_V) are hyper-parameters. As mentioned previously, in the original A3C algorithm it is recommended to add an entropy term β Σ_{i=0}^{k} ∇_{θ_π} H(π(·|x_{t+i}; θ_π)) to the policy update, where H(π(·|x_{t+i}; θ_π)) = −Σ_a π(a|x_{t+i}; θ_π) log(π(a|x_{t+i}; θ_π)). Indeed, this term encourages exploration as it favours policies which are uniform over actions. When replacing the linear layers in the value and policy heads by noisy layers (the parameters of the noisy network are now ζ_π and ζ_V), we obtain the following estimation of the return via a roll-out of size k:
Q̂_i = Σ_{j=i}^{k−1} γ^{j−i} r_{t+j} + γ^{k−i} V(x_{t+k}; ζ_V, ε_i).    (25)
We would like Q̂_i to be a consistent estimate of the return of the current policy. To do so, we should force ∀i, ε_i = ε. As A3C is an on-policy algorithm, this involves fixing the noise of the network for the whole roll-out so that the policy produced by the network is also fixed. Hence, each update of the parameters (ζ_π, ζ_V) is done after each roll-out with the noise of the whole network held fixed for the duration of the roll-out:
ζ_π ← ζ_π + α_π Σ_{i=0}^{k} ∇_{ζ_π} log(π(a_{t+i}|x_{t+i}; ζ_π, ε)) [Q̂_i − V(x_{t+i}; ζ_V, ε)],    (26)
ζ_V ← ζ_V − α_V Σ_{i=0}^{k} ∇_{ζ_V} [Q̂_i − V(x_{t+i}; ζ_V, ε)]².    (27)
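To make the role of the single fixed noise sample ε concrete, the sketch below computes the k-step returns of Eq. (25) for one roll-out with the noise held fixed. It is an illustrative sketch only: value_fn, zeta_v and eps are placeholder names for a value-head callable, its parameters and the sampled noise, and are not taken from the paper's implementation.

```python
import numpy as np

def kstep_returns(rewards, last_state, value_fn, zeta_v, eps, gamma=0.99, terminal=False):
    """Backward recursion for Eq. (25):
    Q_hat_i = sum_{j=i}^{k-1} gamma^{j-i} r_{t+j} + gamma^{k-i} V(x_{t+k}; zeta_V, eps)."""
    # Bootstrap with the value head, using the SAME noise sample eps as the roll-out.
    q = 0.0 if terminal else value_fn(last_state, zeta_v, eps)
    returns = np.zeros(len(rewards))
    for i in reversed(range(len(rewards))):
        q = rewards[i] + gamma * q
        returns[i] = q
    return returns

# Tiny usage example with a dummy value function.
dummy_value = lambda x, zeta, eps: 0.5
print(kstep_returns([1.0, 0.0, 1.0], last_state=None,
                    value_fn=dummy_value, zeta_v=None, eps=None))
```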
# B NOISY LINEAR LAYER
In this appendix we provide a graphical representation of a noisy linear layer.
[Figure 4: diagram of a noisy linear layer computing y = wx + b, with w = μ^w + σ^w ⊙ ε^w and b = μ^b + σ^b ⊙ ε^b.]
Figure 4: Graphical representation of a noisy linear layer. The parameters μ^w, μ^b, σ^w and σ^b are the learnables of the network, whereas ε^w and ε^b are noise variables which can be chosen in factorised or non-factorised fashion. The noisy layer functions similarly to the standard fully connected linear layer. The main difference is that in the noisy layer both the weights vector and the bias are perturbed by some parametric zero-mean noise, that is, the noisy weights and the noisy bias can be expressed as w = μ^w + σ^w ⊙ ε^w and b = μ^b + σ^b ⊙ ε^b, respectively. The output of the noisy layer is then simply obtained as y = wx + b.
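The forward pass of Figure 4 can be sketched in a few lines. The code below is a minimal NumPy illustration with independent (non-factorised) Gaussian noise; the class name, the initialisation scheme and the constant sigma0 are our illustrative choices, not the paper's exact hyper-parameters.

```python
import numpy as np

class NoisyLinear:
    """Minimal noisy linear layer: y = (mu_w + sigma_w * eps_w) x + (mu_b + sigma_b * eps_b)."""

    def __init__(self, in_dim, out_dim, sigma0=0.017, rng=None):
        self.rng = rng or np.random.default_rng(0)
        bound = 1.0 / np.sqrt(in_dim)
        self.mu_w = self.rng.uniform(-bound, bound, size=(out_dim, in_dim))
        self.mu_b = self.rng.uniform(-bound, bound, size=out_dim)
        self.sigma_w = np.full((out_dim, in_dim), sigma0)
        self.sigma_b = np.full(out_dim, sigma0)

    def sample_noise(self):
        # Independent (non-factorised) Gaussian noise for weights and bias.
        self.eps_w = self.rng.standard_normal(self.sigma_w.shape)
        self.eps_b = self.rng.standard_normal(self.sigma_b.shape)

    def forward(self, x):
        w = self.mu_w + self.sigma_w * self.eps_w
        b = self.mu_b + self.sigma_b * self.eps_b
        return w @ x + b

layer = NoisyLinear(4, 2)
layer.sample_noise()
print(layer.forward(np.ones(4)))
```

In a full agent the learnable parameters (mu, sigma) would be trained by gradient descent while the noise is resampled as prescribed by the algorithm; only the forward computation is shown here.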
# C ALGORITHMS
C.1 NOISYNET-DQN AND NOISYNET-DUELING
Algorithm 1: NoisyNet-DQN / NoisyNet-Dueling

Input: Env environment; ε set of random noise variables of the network
Input: DUELING Boolean; "true" for NoisyNet-Dueling and "false" for NoisyNet-DQN
Input: B empty replay buffer; ζ initial network parameters; ζ⁻ initial target network parameters
Input: N_B replay buffer size; N_T training batch size; N⁻ target network replacement frequency
Output: Q(·, ε; ζ) action-value function

1  for episode e ∈ {1, ..., M} do
2      Initialise state sequence x_0 ∼ Env
3      for t ∈ {1, ...} do
           /* B[-1] denotes the last element of the buffer */
4          Set x ← x_0
5          Sample a noisy network ε ∼ ε
6          Select an action a ← argmax_{b∈A} Q(x, b, ε; ζ)
7          Sample next state y ∼ P(·|x, a), receive reward r ∼ R(x, a) and set x_0 ← y
8          Add the transition to the replay buffer: B[-1] ← (x, a, r, y)
9          if |B| > N_B then
10             Delete the oldest transition from B
11         end
           /* D is a distribution over the replay; it can be uniform or implement prioritised replay */
12         Sample a minibatch of N_T transitions ((x_j, a_j, r_j, y_j) ∼ D)_{j=1..N_T}
           /* Construction of the target values */
13         Sample the noise variables for the online network ε ∼ ε
14         Sample the noise variables for the target network ε′ ∼ ε
15         if DUELING then
16             Sample the noise variables for the action-selection network ε″ ∼ ε
17         for j ∈ {1, ..., N_T} do
18             if y_j is a terminal state then
19                 Q̂ ← r_j
20             else if DUELING then
21                 b*(y_j) ← argmax_{b∈A} Q(y_j, b, ε″; ζ)
22                 Q̂ ← r_j + γ Q(y_j, b*(y_j), ε′; ζ⁻)
23             else
24                 Q̂ ← r_j + γ max_{b∈A} Q(y_j, b, ε′; ζ⁻)
25             Do a gradient step with loss (Q̂ − Q(x_j, a_j, ε; ζ))²
26         end
27         if t ≡ 0 (mod N⁻) then
28             Update the target network: ζ⁻ ← ζ
29         end
30     end
31 end
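The target-construction step of Algorithm 1 (roughly lines 13–25) hinges on drawing independent noise realisations for the online, target and, in the Dueling case, action-selection evaluations. The sketch below illustrates this; q_net, sample_noise, zeta and zeta_target are placeholder names for a per-action value callable, a noise sampler and the two parameter sets, and each transition is assumed to carry a terminal flag. It is a hedged illustration, not the original implementation.

```python
import numpy as np

def build_targets(batch, q_net, zeta, zeta_target, sample_noise, gamma=0.99, dueling=False):
    """Compute NoisyNet-DQN / NoisyNet-Dueling targets for one minibatch.
    q_net(state, eps, params) -> vector of per-action values;
    sample_noise() -> one noise realisation for the whole network."""
    eps = sample_noise()            # noise for the online network (reused in the gradient step)
    eps_target = sample_noise()     # independent noise for the target network
    eps_select = sample_noise() if dueling else None
    targets = []
    for (x, a, r, y, terminal) in batch:
        if terminal:
            targets.append(r)
        elif dueling:
            # Online network (with its own noise) picks the action, target network evaluates it.
            b_star = int(np.argmax(q_net(y, eps_select, zeta)))
            targets.append(r + gamma * q_net(y, eps_target, zeta_target)[b_star])
        else:
            targets.append(r + gamma * np.max(q_net(y, eps_target, zeta_target)))
    return np.asarray(targets), eps
```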
C.2 NOISYNET-A3C
Algorithm 2: NoisyNet-A3C for each actor-learner thread

Input: Environment Env; global shared parameters (ζ_π, ζ_V); global shared counter T and maximal time T_max
Input: Thread-specific parameters (ζ′_π, ζ′_V); set of random noise variables ε; thread-specific counter t and roll-out size t_max
Output: π(·; ζ_π, ε) the policy and V(·; ζ_V, ε) the value

1  Initialise thread counter t ← 1
2  repeat
3      Reset cumulative gradients: dζ_π ← 0 and dζ_V ← 0
4      Synchronise thread-specific parameters: ζ′_π ← ζ_π and ζ′_V ← ζ_V
5      counter ← 0
6      Get state x_t from Env
7      Choice of the noise: ε ∼ ε
       /* r is a list of rewards */
8      r ← []
       /* a is a list of actions */
9      a ← []
       /* x is a list of states */
10     x ← [] and x[0] ← x_t
11     repeat
12         Policy choice: a_t ∼ π(·|x_t; ζ′_π, ε)
13         a[-1] ← a_t
14         Receive reward r_t and new state x_{t+1}
15         r[-1] ← r_t and x[-1] ← x_{t+1}
16         t ← t + 1 and T ← T + 1
17         counter ← counter + 1
18     until x_t terminal or counter == t_max + 1
19     if x_t is a terminal state then
20         Q ← 0
21     else
22         Q ← V(x_t; ζ′_V, ε)
23     for i ∈ {counter − 1, ..., 0} do
24         Update Q: Q ← r[i] + γQ
25         Accumulate policy-gradient: dζ_π ← dζ_π + ∇_{ζ′_π} log(π(a[i]|x[i]; ζ′_π, ε))[Q − V(x[i]; ζ′_V, ε)]
26         Accumulate value-gradient: dζ_V ← dζ_V + ∇_{ζ′_V}[Q − V(x[i]; ζ′_V, ε)]²
27     end
28     Perform asynchronous update of ζ_π: ζ_π ← ζ_π + α_π dζ_π
29     Perform asynchronous update of ζ_V: ζ_V ← ζ_V − α_V dζ_V
30 until T ≥ T_max
# D COMPARISON BETWEEN NOISYNET-A3C (FACTORISED AND NON-FACTORISED NOISE) AND A3C
[Figure 5 plot: median human-normalised score over games against millions of frames (0–350) for A3C, NoisyNet-A3C (factorised) and NoisyNet-A3C.]
Figure 5: Comparison of the learning curves of factorised and non-factorised NoisyNet-A3C versus the baseline according to the median human normalised score.
|                  | Baseline Mean | Baseline Median | NoisyNet Mean | NoisyNet Median | Improvement (on median) |
| DQN              | 319 | 83  | 379 | 123 | 48% |
| Dueling          | 524 | 132 | 633 | 172 | 30% |
| A3C              | 293 | 80  | 347 | 94  | 18% |
| A3C (factorised) | 293 | 80  | 276 | 99  | 24% |
Table 2: Comparison between the baseline DQN, Dueling and A3C and their NoisyNet versions in terms of median and mean human-normalised scores defined in Eq. (18). In the case of A3C we include both the factorised and non-factorised variants of the algorithm. We report in the last column the percentage improvement over the baseline in terms of median human-normalised score.
# E LEARNING CURVES AND RAW SCORES
Here we directly compare the performance of DQN, Dueling DQN and A3C and their NoisyNet counterparts by presenting the maximal score in each of the 57 Atari games (Table 3), averaged over three seeds. In Figures 6–8 we show the respective learning curves.
Games alien amidar assault asterix asteroids atlantis bank heist battle zone beam rider berzerk bowling boxing breakout centipede chopper command crazy climber defender demon attack double dunk enduro ï¬shing derby freeway frostbite gopher gravitar hero ice hockey jamesbond kangaroo krull kung fu master montezuma revenge ms pacman name this game phoenix pitfall pong private eye qbert riverraid road runner robotank seaquest skiing solaris space invaders star gunner surround tennis time pilot tutankham up n down venture video pinball wizard of wor yars revenge zaxxon
Human 7128 1720 742 8503 47389 29028 753 37188 16926 2630 161 12 30 12017 7388 35829 18689 1971 -16 860 -39 30 4335 2412 3351 30826 1 303 3035 2666 22736 4753 6952 8049 7243 6464 15 69571 13455 17118 7845 12 42055 -4337 12327 1669 10250 6 -8 5229 168 11693 1188 17668 4756 54577 9173
Random 228 6 222 210 719 12580 14 2360 364 124 23 0 2 2091 811 10780 2874 152 -19 0 -92 0 65 258 173 1027 -11 29 52 1598 258 0 307 2292 761 -229 -21 25 164 1338 12 2 68 -17098 1263 148 664 -10 -24 3568 11 533 0 16257 564 3093 32
DQN 2404 ± 242 924 ± 159 3595 ± 169 6253 ± 154 1824 ± 83 876000 ± 15013 455 ± 25 28981 ± 1497 10564 ± 613 634 ± 16 62 ± 4 87 ± 1 396 ± 13 6440 ± 1194 7271 ± 473 116480 ± 896 18303 ± 2611 12696 ± 214 -6 ± 1 835 ± 56 4 ± 4 31 ± 0 1000 ± 258 11825 ± 1444 366 ± 26 15176 ± 3870 -2 ± 0 909 ± 223 8166 ± 1512 8343 ± 79 30444 ± 1673 2 ± 3 2674 ± 43 8179 ± 551 9704 ± 2907 0 ± 0 20 ± 0 2361 ± 781 11241 ± 1579 7241 ± 140 37910 ± 1778 55 ± 1 4163 ± 425 -12630 ± 202 4055 ± 842 1283 ± 39 40934 ± 3598 -6 ± 0 8 ± 7 6167 ± 73 218 ± 1 11652 ± 737 319 ± 158 429936 ± 71110 3601 ± 873 20648 ± 1543 4806 ± 285
NoisyNet-DQN 2403 ± 78 1610 ± 228 5510 ± 483 14328 ± 2859 3455 ± 1054 923733 ± 25798 1068 ± 277 36786 ± 2892 20793 ± 284 905 ± 21 71 ± 26 89 ± 4 516 ± 26 4269 ± 261 8893 ± 871 118305 ± 7796 20525 ± 3114 36150 ± 4646 1 ± 0 1240 ± 83 11 ± 2 32 ± 0 753 ± 101 14574 ± 1837 447 ± 94 6246 ± 2092 -3 ± 0 1235 ± 421 10944 ± 4149 8805 ± 313 36310 ± 5093 3 ± 4 2722 ± 148 8181 ± 742 16028 ± 3317 0 ± 0 21 ± 0 3712 ± 161 15545 ± 462 9425 ± 705 45993 ± 2709 51 ± 5 2282 ± 361 -14763 ± 706 6088 ± 1791 2186 ± 92 47133 ± 7016 -1 ± 2 0 ± 0 7035 ± 908 232 ± 34 14255 ± 1658 97 ± 76 322507 ± 135629 9198 ± 4364 23915 ± 13939 6920 ± 4567
A3C 2027 ± 92 904 ± 125 2879 ± 293 6822 ± 181 2544 ± 523 422700 ± 4759 1296 ± 20 16411 ± 1283 9214 ± 608 1022 ± 151 37 ± 2 91 ± 1 496 ± 56 5350 ± 432 5285 ± 159 134783 ± 5495 52917 ± 3355 37085 ± 803 3 ± 1 0 ± 0 -7 ± 30 0 ± 0 288 ± 20 7992 ± 672 379 ± 31 30791 ± 246 -2 ± 0 509 ± 34 1166 ± 76 9422 ± 980 37422 ± 2202 14 ± 12 2436 ± 249 7168 ± 224 9476 ± 569 0 ± 0 7 ± 19 3781 ± 2994 18586 ± 574 8135 ± 483 45315 ± 1837 6 ± 0 1744 ± 0 -12972 ± 2846 12380 ± 519 1034 ± 49 49156 ± 3882 -8 ± 1 -6 ± 9 10294 ± 1449 213 ± 14 89067 ± 12635 0 ± 0 229402 ± 153801 8953 ± 1377 21596 ± 1917 16544 ± 1513
NoisyNet-A3C 1899 ± 111 491 ± 485 3060 ± 101 32478 ± 2567 4541 ± 311 465700 ± 4224 1033 ± 463 17871 ± 5007 11237 ± 1582 1235 ± 259 42 ± 11 100 ± 0 374 ± 27 8282 ± 685 7561 ± 1190 139950 ± 18190 55492 ± 3844 37880 ± 2093 3 ± 1 300 ± 424 -38 ± 39 18 ± 13 261 ± 0 12439 ± 16229 314 ± 25 8471 ± 4332 -3 ± 1 188 ± 103 1604 ± 278 22849 ± 12175 55790 ± 23886 4 ± 3 3401 ± 761 8798 ± 1847 50338 ± 30396 0 ± 0 12 ± 11 100 ± 0 17896 ± 1522 7878 ± 162 30454 ± 13309 36 ± 3 943 ± 41 -15970 ± 9887 10427 ± 3878 1126 ± 154 45008 ± 11570 1 ± 1 0 ± 0 11124 ± 1753 164 ± 49 103557 ± 51492 0 ± 0 294724 ± 140514 12723 ± 3420 61755 ± 4798 1324 ± 1715
Dueling 6163 ± 1077 2296 ± 154 8010 ± 381 11170 ± 5355 2220 ± 91 902742 ± 17087 1428 ± 37 40481 ± 2161 16298 ± 1101 1122 ± 35 72 ± 6 99 ± 0 200 ± 21 4166 ± 23 7388 ± 1024 163335 ± 2460 37275 ± 1572 61033 ± 9707 17 ± 7 2064 ± 81 35 ± 5 34 ± 0 2807 ± 1457 27313 ± 2629 1682 ± 170 35895 ± 1035 -0 ± 0 1667 ± 134 14847 ± 29 10733 ± 65 30316 ± 2397 0 ± 0 3650 ± 445 9919 ± 38 8215 ± 403 0 ± 0 21 ± 0 227 ± 138 19819 ± 2640 18405 ± 93 64051 ± 1106 63 ± 1 19595 ± 1493 -7989 ± 1349 3423 ± 152 1158 ± 74 70264 ± 2147 1 ± 3 0 ± 0 14094 ± 652 280 ± 8 93931 ± 56045 1433 ± 10 876503 ± 61496 6534 ± 882 43120 ± 21466 13959 ± 613
Table 3: Raw scores across all games with random starts.
NoisyNet-Dueling 5778 ± 2189 3537 ± 521 11231 ± 503 28350 ± 607 86700 ± 80459 972175 ± 31961 1318 ± 37 52262 ± 1480 18501 ± 662 1896 ± 604 68 ± 6 100 ± 0 263 ± 20 7596 ± 1134 11477 ± 1299 171171 ± 2095 42253 ± 2142 69311 ± 26289 1 ± 0 2013 ± 219 57 ± 2 34 ± 0 2923 ± 1519 38909 ± 2229 2209 ± 99 31533 ± 4970 3 ± 1 4682 ± 2281 15227 ± 243 10754 ± 181 41672 ± 1668 57 ± 15 5546 ± 367 12211 ± 251 10379 ± 547 0 ± 0 21 ± 0 279 ± 109 27121 ± 422 23134 ± 1434 234352 ± 132671 64 ± 1 16754 ± 6619 -7550 ± 451 6522 ± 750 5909 ± 1318 75867 ± 8623 10 ± 0 0 ± 0 17301 ± 1200 269 ± 19 61326 ± 6052 815 ± 114 870954 ± 135363 9149 ± 641 86101 ± 4136 14874 ± 214
Figure 6: Training curves for all Atari games comparing DQN and NoisyNet-DQN.
Figure 7: Training curves for all Atari games comparing Duelling and NoisyNet-Dueling.
Figure 8: Training curves for all Atari games comparing A3C and NoisyNet-A3C.
# VoxCeleb: a large-scale speaker identification dataset

Arsha Nagrani†, Joon Son Chung†, Andrew Zisserman
# Visual Geometry Group, Department of Engineering Science, University of Oxford, UK {arsha,joon,az}@robots.ox.ac.uk
# Abstract
Most existing datasets for speaker identification contain samples obtained under quite constrained conditions, and are usually hand-annotated, hence limited in size. The goal of this paper is to generate a large scale text-independent speaker identification dataset collected 'in the wild'.

We make two contributions. First, we propose a fully automated pipeline based on computer vision techniques to create the dataset from open-source media. Our pipeline involves obtaining videos from YouTube; performing active speaker verification using a two-stream synchronization Convolutional Neural Network (CNN), and confirming the identity of the speaker using CNN based facial recognition. We use this pipeline to curate VoxCeleb, which contains hundreds of thousands of 'real world' utterances for over 1,000 celebrities.

Our second contribution is to apply and compare various state of the art speaker identification techniques on our dataset to establish baseline performance. We show that a CNN based architecture obtains the best performance for both identification and verification.

Index Terms: large-scale, dataset, convolutional neural network
# 1. Introduction
Speaker recognition under noisy and unconstrained conditions is an extremely challenging topic. Applications of speaker recognition are many and varied, ranging from authentication in high-security systems and forensic tests, to searching for persons in large corpora of speech data. All such tasks require high speaker recognition performance under 'real world' conditions. This is an extremely difficult task due to both extrinsic and intrinsic variations; extrinsic variations include background chatter and music, laughter, reverberation, channel and microphone effects; while intrinsic variations are factors inherent to the speaker themself such as age, accent, emotion, intonation and manner of speaking, amongst others [1].
Deep Convolutional Neural Networks (CNNs) have given rise to substantial improvements in speech recognition, computer vision and related fields due to their ability to deal with real world, noisy datasets without the need for handcrafted features [2, 3, 4]. One of the most important ingredients for the success of such methods, however, is the availability of large training datasets.
Unfortunately, large-scale public datasets in the field of speaker identification with unconstrained speech samples are lacking. While large-scale evaluations are held regularly by the National Institute of Standards and Technology (NIST), these datasets are not freely available to the research community. The only freely available dataset curated from multimedia is the
Speakers in the Wild (SITW) dataset [5], which contains speech samples of 299 speakers across unconstrained or 'wild' conditions. This is a valuable dataset, but to create it the speech samples have been hand-annotated. Scaling it further, for example to thousands of speakers across tens of thousands of utterances, would require the use of a service such as Amazon Mechanical Turk (AMT). In the computer vision community AMT-like services have been used to produce very large-scale datasets, such as ImageNet [6].
This paper has two goals. The first is to propose a fully automated and scalable pipeline for creating a large-scale 'real world' speaker identification dataset. By using visual active speaker identification and face verification, our method circumvents the need for human annotation completely. We use this method to curate VoxCeleb, a large-scale dataset with hundreds of utterances for over a thousand speakers. The second goal is to investigate different architectures and techniques for training deep CNNs on spectrograms extracted directly from the raw audio files with very little pre-processing, and compare our results on this new dataset with more traditional state-of-the-art methods.
VoxCeleb can be used for both speaker identification and verification. Speaker identification involves determining which speaker has produced a given utterance; if this is performed for a closed set of speakers then the task is similar to that of multi-class classification. Speaker verification, on the other hand, involves determining whether there is a match between a given utterance and a target model. We provide baselines for both tasks.
The dataset can be downloaded from http://www.robots.ox.ac.uk/~vgg/data/voxceleb.
# 2. Related Works
For a long time, speaker identification was the domain of Gaussian Mixture Models (GMMs) trained on low dimensional feature vectors [7, 8]. The state of the art in more recent times involves both the use of joint factor analysis (JFA) based methods which model speaker and channel subspaces separately [9], and i-vectors which attempt to model both subspaces into a single compact, low-dimensional space [10]. Although state of the art in speaker recognition tasks, these methods all have one thing in common – they rely on a low dimensional representation of the audio input, such as Mel Frequency Cepstrum Coefficients (MFCCs). However, not only does the performance of MFCCs degrade rapidly in real world noise [11, 12], but by focusing only on the overall spectral envelope of short frames, MFCCs may be lacking in speaker-discriminating features (such as pitch information). This has led to a very recent shift from hand-crafted features to the domain of deep CNNs which can be applied to higher dimensional inputs [13, 14] and for speaker identification [15]. Essential to this task however, is a large dataset obtained under real world conditions.

† These authors contributed equally to this work.
Many existing datasets are obtained under controlled conditions, for example: forensic data intercepted by police officials [16], data from telephone calls [17], speech recorded live in high quality environments such as acoustic laboratories [18, 19], or speech recorded from mobile devices [20, 21]. [22] consists of more natural speech but has been manually processed to remove extraneous noises and crosstalk. All the above datasets are also obtained from single-speaker environments, and are free from audience noise and overlapping speech.
Datasets obtained from multi-speaker environments include those from recorded meeting data [23, 24], or from audio broadcasts [25]. These datasets usually contain audio samples under less controlled conditions. Some datasets contain artificial degradation in an attempt to mimic real world noise, such as those developed using the TIMIT dataset [19]: NTIMIT (transmitting TIMIT recordings through a telephone handset) and CTIMIT (passing TIMIT files through cellular telephone circuits).
Table 1 summarises existing speaker identification datasets. Besides lacking real world conditions, to the best of our knowledge, most of these datasets have been collected with great manual effort, other than [25] which was obtained by mapping subtitles and transcripts to broadcast data.
| Name | Cond. | Free | # POI | # Utter. |
| ELSDSR | Clean speech | ✓ | 22 | 198 |
| MIT Mobile | Mobile devices | – | 88 | 7,884 |
| SWB [27] | Telephony | – | 3,114 | 33,039 |
| POLYCOST [17] | Telephony | – | 133 | 1,285‡ |
| ICSI Meeting Corpus | Meetings | – | 33 | 922 |
| Forensic Comparison [22] | Telephony | ✓ | 352 | 1,264 |
| ANDOSL [18] | Clean speech | – | 204 | 33,900 |
| TIMIT† | Clean speech | – | 630 | 6,300 |
| SITW | Multi-media | ✓ | 299 | 2,800 |
| NIST SRE [29] | Clean speech | – | 2,000+ | * |
| VoxCeleb | Multi-media | ✓ | 1,251 | 153,516 |
Table 1: Comparison of existing speaker identification datasets. Cond.: Acoustic conditions; POI: Person of Interest; Utter.: Approximate number of utterances. † And its derivatives. ‡ Number of telephone calls. * Varies by year.
# 3. Dataset Description

VoxCeleb contains over 100,000 utterances for 1,251 celebrities, extracted from videos uploaded to YouTube. The dataset is gender balanced, with 55% of the speakers male. The speakers span a wide range of different ethnicities, accents, professions and ages. The nationality and gender of each speaker (obtained from Wikipedia) is also provided.
Videos included in the dataset are shot in a large number of challenging multi-speaker acoustic environments. These include red carpet, outdoor stadium, quiet studio interviews, speeches given to large audiences, excerpts from professionally shot multimedia, and videos shot on hand-held devices. Crucially, all are degraded with real world noise, consisting of background chatter, laughter, overlapping speech, room acoustics, and there is a range in the quality of recording equipment and channel noise. Unlike the SITW dataset, both audio and video are released for each speaker. Table 2 gives the dataset statistics.
# 4. Dataset Collection Pipeline
This section describes our multi-stage approach for collecting a large speaker recognition dataset, starting from YouTube videos. Using this fully automated pipeline, we have obtained hundreds of utterances for over a thousand different Persons of Interest (POIs).
| # of POIs | 1,251 |
| # of male POIs | 690 |
| # of videos per POI | 36 / 18 / 8 |
| # of utterances per POI | 250 / 123 / 45 |
| Length of utterances (s) | 145.0 / 8.2 / 4.0 |
Table 2: VoxCeleb dataset statistics. Where there are three entries in a field, numbers refer to the maximum / average / minimum.
The pipeline is summarised in Figure 1 left, and key stages are discussed in the following paragraphs:

Stage 1. Candidate list of POIs. The first stage is to obtain a list of POIs. We start from the list of people that appear in the VGG Face dataset [30], which is based on an intersection of the most searched names in the Freebase knowledge graph, and the Internet Movie Data Base (IMDB). This list contains 2,622 identities, ranging from actors and sportspeople to entrepreneurs, of which approximately half are male and the other half female.

Stage 2. Downloading videos from YouTube. The top 50 videos for each of the 2,622 POIs are automatically downloaded using YouTube search. The word 'interview' is appended to the name of the POI in search queries to increase the likelihood that the videos contain an instance of the POI speaking, and to filter out sports or music videos. No other filtering is done at this stage.

Stage 3. Face tracking. The HOG-based face detector [32] is used to detect the faces in every frame of the video. Facial landmark positions are detected for each face detection using the regression tree based method of [33]. The shot boundaries are detected by comparing colour histograms across consecutive frames (a minimal sketch of this step is shown below). Within each detected shot, face detections are grouped together into face tracks using a position-based tracker. This stage is closely related to the tracking pipeline of [34, 35], but optimised to reduce run-time given the very large number of videos to process.

Stage 4. Active speaker verification. The goal of this stage is to determine the audio-video synchronisation between mouth motion and speech in a video in order to determine which (if any) visible face is the speaker. This is done by using 'SyncNet', a two-stream CNN described in [36] which estimates the correlation between the audio track and the mouth motion of the video. This method is able to reject the clips that contain dubbing or voice-over.

Stage 5. Face verification. Active speaker face tracks are then classified into whether they are of the POI or not using the VGG Face CNN. This classification network is based on the VGG-16 CNN [3] trained on the VGG Face dataset (which is a filtered collection of Google Image Search results for the POI name). Verification is done by directly using this classification score with a high threshold.

Discussion. In order to ensure that our system is extremely confident that a person is speaking (Stage 4), and that they have been correctly identified (Stage 5) without any manual interference, we set very conservative thresholds in order to minimise the number of false positives. Precision-recall curves for both tasks on their respective benchmark datasets [30, 31] are shown in Figure 1 right, and the values at the operating point are given in Table 3. Employing these thresholds ensures that although we discard a lot of the downloaded videos, we can be reasonably certain that the dataset has few labelling errors. This ensures a completely automatic pipeline that can be scaled up to any number of speakers and utterances (if available) as required.
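The colour-histogram comparison used for shot-boundary detection in Stage 3 can be sketched with OpenCV. This is an illustrative sketch only: the function name, histogram binning and correlation threshold are our choices and may differ from the settings used to build the dataset.

```python
import cv2

def shot_boundaries(video_path, threshold=0.5):
    """Flag frame indices whose colour histogram differs strongly from the previous frame."""
    cap = cv2.VideoCapture(video_path)
    boundaries, prev_hist, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hist = cv2.calcHist([frame], [0, 1, 2], None, [8, 8, 8], [0, 256] * 3)
        hist = cv2.normalize(hist, hist).flatten()
        if prev_hist is not None:
            # Correlation near 1 means similar frames; a large drop suggests a shot change.
            if cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL) < threshold:
                boundaries.append(idx)
        prev_hist, idx = hist, idx + 1
    cap.release()
    return boundaries
```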
Figure 1: Left: Data processing pipeline; Right: Precision-recall curves for the active speaker veriï¬cation (using a 25-frame window) and the face veriï¬cation steps, tested on standard benchmark datasets [30, 31]. Operating points are shown in circles.
required.
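To make the shot-boundary step of Stage 3 concrete, the sketch below compares colour histograms of consecutive frames and flags a boundary when similarity drops below a threshold. This is only a minimal illustration using OpenCV; the 8x8x8 histogram bins and the 0.7 threshold are assumptions, not values taken from the pipeline above.

```python
import cv2
import numpy as np

def detect_shot_boundaries(frames, threshold=0.7):
    """Return indices where a new shot starts, based on colour-histogram correlation."""
    boundaries = []
    prev_hist = None
    for i, frame in enumerate(frames):
        hist = cv2.calcHist([frame], [0, 1, 2], None, [8, 8, 8],
                            [0, 256, 0, 256, 0, 256])
        hist = cv2.normalize(hist, hist).flatten()
        if prev_hist is not None:
            # Low correlation between consecutive frames suggests a shot change.
            if cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL) < threshold:
                boundaries.append(i)
        prev_hist = hist
    return boundaries

# Toy usage with random "frames"; in practice these come from a decoded video.
dummy = [np.random.randint(0, 255, (120, 160, 3), dtype=np.uint8) for _ in range(10)]
print(detect_shot_boundaries(dummy))
```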
Task                          Dataset   Precision   Recall
Active speaker verification   [31]      1.000       0.613
Face verification             [30]      1.000       0.726
Table 3: Precision-recall values at the chosen operating points.
# 5. CNN Design and Architecture
Our aim is to move from techniques that require traditional hand-crafted features, to a CNN architecture that can choose the features required for the task of speaker recognition. This allows us to minimise the pre-processing of the audio data and hence avoid losing valuable information in the process.

Input features. All audio is first converted to single-channel, 16-bit streams at a 16kHz sampling rate for consistency. Spectrograms are then generated in a sliding window fashion using a hamming window of width 25ms and step 10ms. This gives spectrograms of size 512 x 300 for 3 seconds of speech. Mean and variance normalisation is performed on every frequency bin of the spectrum. This normalisation is crucial, leading to an almost 10% increase in classification accuracy, as shown in Table 7. No other speech-specific preprocessing (e.g. silence removal, voice activity detection, or removal of unvoiced speech) is used. These short time magnitude spectrograms are then used as input to the CNN.

Architecture. Since speaker identification under a closed set can be treated as a multiple-class classification problem, we base our architecture on the VGG-M [37] CNN, known for good classification performance on image data, with modifications to adapt to the spectrogram input. The fully connected fc6 layer of dimension 9 × 8 (support in both dimensions) is replaced by two layers – a fully connected layer of 9 × 1 (support in the frequency domain) and an average pool layer with support 1 × n, where n depends on the length of the input speech segment (for example for a 3 second segment, n = 8). This makes the network invariant to temporal position but not frequency, and at the same time keeps the output dimensions the same as those of the original fully connected layer. This also reduces the number of parameters from 319M in VGG-M to 67M in our network, which helps avoid overfitting. The complete CNN architecture is specified in Table 4.

Identification. Since identification is treated as a simple classification task, the output of the last layer is fed into a 1,251-way softmax in order to produce a distribution over the 1,251 different speakers.

Verification. For verification, feature vectors can be obtained from the classification network using the 1024 dimension fc7 vectors, and a cosine distance can be used to compare vectors. However, it is better to learn an embedding by training a Siamese network with a contrastive loss [38]. This is better suited to the verification task as the network learns to optimize similarity directly, rather than indirectly via a classification loss. For the embedding network, the last fully connected layer (fc8) is modified so that the output size is 1024 instead of the number of classes. We compare both methods in the experiments.

Testing. A traditional approach to handling variable length utterances at test time is to break them up into fixed length segments (e.g. 3 seconds) and average the results on each segment to give a final class prediction. Average pooling, however, allows the network to accommodate variable length inputs at test time, as the entire test utterance can be evaluated at once by changing the size of the apool6 layer. Not only is this more elegant, it also leads to an increase in classification accuracy, as shown in Table 7.
Layer    Support   Filt dim.   # filts.   Stride   Data size
conv1    7×7       1           96         2×2      254×148
mpool1   3×3       -           -          2×2      126×73
conv2    5×5       96          256        2×2      62×36
mpool2   3×3       -           -          2×2      30×17
conv3    3×3       256         384        1×1      30×17
conv4    3×3       384         256        1×1      30×17
conv5    3×3       256         256        1×1      30×17
mpool5   5×3       -           -          3×2      9×8
fc6      9×1       256         4096       1×1      1×8
apool6   1×n       -           -          1×1      1×1
fc7      1×1       4096        1024       1×1      1×1
fc8      1×1       1024        1251       1×1      1×1
Table 4: CNN architecture. The data size up to fc6 is for a 3- second input, but the network is able to accept inputs of variable lengths.
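The sketch below illustrates the input-feature computation described above: a sliding-window magnitude spectrogram (25 ms hamming window, 10 ms step, 16 kHz audio) with mean and variance normalisation on every frequency bin. The FFT size of 1024 (keeping 512 bins) is an assumption made here to roughly match the stated 512 x 300 input; the original pipeline may differ in that detail.

```python
import numpy as np
from scipy import signal

def normalised_spectrogram(wav, fs=16000, win_ms=25, step_ms=10, nfft=1024):
    """Sliding-window magnitude spectrogram with per-bin mean/variance normalisation."""
    nperseg = int(fs * win_ms / 1000)   # 400 samples for a 25 ms window
    step = int(fs * step_ms / 1000)     # 160 samples for a 10 ms step
    _, _, spec = signal.spectrogram(
        wav, fs=fs, window="hamming", nperseg=nperseg,
        noverlap=nperseg - step, nfft=nfft, mode="magnitude")
    spec = spec[:512, :]                # keep 512 frequency bins (assumption)
    mu = spec.mean(axis=1, keepdims=True)
    sigma = spec.std(axis=1, keepdims=True) + 1e-8
    return (spec - mu) / sigma          # normalise every frequency bin

# Example: 3 seconds of dummy audio gives roughly a 512 x 300 input.
spec = normalised_spectrogram(np.random.randn(3 * 16000))
print(spec.shape)
```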
Implementation details and training. Our implementation is based on the deep learning toolbox MatConvNet [39] and trained on a NVIDIA TITAN X GPU. The network is trained using batch normalisation [40] and all hyper-parameters (e.g. weight decay, learning rates) use the default values provided with the toolbox. To reduce overï¬tting, we augment the data by taking random 3-second crops in the time domain during train- ing. Using a ï¬xed input length is also more efï¬cient. For veri- ï¬cation, the network is ï¬rst trained for classiï¬cation (excluding the test POIs for the veriï¬cation task, see Section 6), and then
all ï¬lter weights are frozen except for the modiï¬ed last layer and the Siamese network trained with contrastive loss. Choos- ing good pairs for training is very important in metric learning. We randomly select half of the negative examples, and the other half using Hard Negative Mining, where we only sample from the hardest 10% of all negatives.
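As a rough illustration of the pair-based embedding training described above, the sketch below computes a standard contrastive loss [38] over pairs of 1024-dimensional embeddings produced by the shared (Siamese) network. The margin value and the use of Euclidean distance are assumptions, not details taken from this paper.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(emb_a, emb_b, same_speaker, margin=1.0):
    """Contrastive loss over a batch of embedding pairs.

    emb_a, emb_b:  (batch, 1024) embeddings from the shared network.
    same_speaker:  (batch,) float tensor, 1.0 for positive pairs, 0.0 for negatives.
    """
    d = F.pairwise_distance(emb_a, emb_b)                   # Euclidean distance
    pos = same_speaker * d.pow(2)                           # pull positive pairs together
    neg = (1.0 - same_speaker) * F.relu(margin - d).pow(2)  # push negatives past the margin
    return (pos + neg).mean()

# Toy usage with random embeddings.
a, b = torch.randn(8, 1024), torch.randn(8, 1024)
y = torch.randint(0, 2, (8,)).float()
print(contrastive_loss(a, b, y))
```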
# 6. Experiments
This section describes the experimental setup for both speaker identiï¬cation and veriï¬cation, and compares the performance of our devised CNN baseline to a number of traditional state of the art methods on VoxCeleb.
# 6.1. Experimental setup
Speaker identiï¬cation. For identiï¬cation, the training and the testing are performed on the same POIs. From each POI, we reserve the speech segments from one video for test. The test video contains at least 5 non-overlapping segments of speech. For identiï¬cation, we report top-1 and top-5 accuracies. The statistics are given in Table 5. Speaker veriï¬cation. For veriï¬cation, all POIs whose name starts with an âEâ are reserved for testing, since this gives a good balance of male and female speakers. These POIs are not used for training the network, and are only used at test time. The statistics are given in Table 6.
Two key performance metrics are used to evaluate system performance for the veriï¬cation task. The metrics are similar to those used by existing datasets and challenges, such as NIST SRE12 [29] and SITW [5]. The primary metric is based on the cost function Cdet
Cdet = Cmiss × Pmiss × Ptar + Cfa × Pfa × (1 − Ptar)    (1)
where we assume a prior target probability Ptar of 0.01 and equal weights of 1.0 between misses Cmiss and false alarms Cfa. The primary metric, Cdet^min, is the minimum value of Cdet over the range of thresholds. The alternative performance measure used here is the Equal Error Rate (EER), which is the rate at which both acceptance and rejection errors are equal. This measure is commonly used for identity verification systems.
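A minimal sketch of how these two metrics can be computed from raw trial scores is given below; the simple threshold sweep over the observed scores is an implementation choice, not something specified in the paper.

```python
import numpy as np

def eer_and_min_cdet(scores, labels, p_tar=0.01, c_miss=1.0, c_fa=1.0):
    """scores: similarity scores; labels: 1 for target trials, 0 for non-target trials."""
    scores, labels = np.asarray(scores, float), np.asarray(labels, int)
    thresholds = np.sort(np.unique(scores))
    p_miss = np.array([(scores[labels == 1] < t).mean() for t in thresholds])
    p_fa = np.array([(scores[labels == 0] >= t).mean() for t in thresholds])
    # Equation (1): detection cost at each threshold.
    c_det = c_miss * p_miss * p_tar + c_fa * p_fa * (1.0 - p_tar)
    eer_idx = np.argmin(np.abs(p_miss - p_fa))   # point where both error rates meet
    return p_miss[eer_idx], c_det.min()

scores = np.concatenate([np.random.normal(1, 1, 500), np.random.normal(-1, 1, 500)])
labels = np.concatenate([np.ones(500), np.zeros(500)])
eer, min_cdet = eer_and_min_cdet(scores, labels)
print(f"EER = {eer:.3f}, Cdet_min = {min_cdet:.3f}")
```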
Set     # POIs   # Vid. / POI   # Utterances
Dev     1,251    17.0           145,265
Test    1,251    1.0            8,251
Total   1,251    18.0           153,516
Table 5: Development and test set statistics for identiï¬cation.
Set     # POIs   # Vid. / POI   # Utterances
Dev     1,211    18.0           148,642
Test    40       17.4           4,874
Total   1,251    18.0           153,516
Table 6: Development and test set statistics for veriï¬cation.
# 6.2. Baselines
GMM-UBM. The GMM-UBM system uses MFCCs of dimen- sion 13 as input. Cepstral mean and variance normalisation (CMVN) is applied on the features. Using the conventional GMM-UBM framework, a single speaker-independent univer- sal background model (UBM) of 1024 mixture components is trained for 10 iterations from the training data.
I-vectors/PLDA. Gender independent i-vector extractors [10] are trained on the VoxCeleb dataset to produce 400- dimensional i-vectors. Probabilistic LDA (PLDA) [41] is then used to reduce the dimension of the i-vectors to 200. Inference. For identiï¬cation, a one-vs-rest binary SVM clas- siï¬er is trained for each speaker m (m â 1...K). All feature inputs to the SVM are L2 normalised and a held out validation set is used to determine the C parameter (determines trade off between maximising the margin and penalising training errors). Classiï¬cation during test time is done by choosing the speaker corresponding to the highest SVM score. The PLDA scoring function [41] is used for veriï¬cation.
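For concreteness, a minimal sketch of the one-vs-rest SVM identification step described above is shown here using scikit-learn; the toolkit choice and the regularisation value C are placeholders (the paper only states that C is tuned on a held-out validation set).

```python
import numpy as np
from sklearn.preprocessing import normalize
from sklearn.svm import LinearSVC
from sklearn.multiclass import OneVsRestClassifier

# i-vectors (or other fixed-length features) for training and test utterances.
X_train, y_train = np.random.randn(200, 400), np.random.randint(0, 10, 200)
X_test = np.random.randn(20, 400)

# L2-normalise all feature inputs, then fit one binary SVM per speaker.
clf = OneVsRestClassifier(LinearSVC(C=1.0))
clf.fit(normalize(X_train), y_train)

# Classification at test time: choose the speaker with the highest SVM score.
scores = clf.decision_function(normalize(X_test))
predicted_speaker = scores.argmax(axis=1)
```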
# 6.3. Results
Results are given in Tables 7 and 8. For both speaker recogni- tion tasks, the CNN provides superior performance to the tradi- tional state-of-the-art baselines.
For identiï¬cation we achieve an 80.5% top-1 classiï¬cation accuracy over 1,251 different classes, almost 20% higher than traditional state of the art baselines. The CNN architecture uses the average pooling layer for variable length test data. We also compare to two variants: âCNN-fc-3sâ, this architecture has a fully connected fc6 layer, and divides the test data into 3s seg- ments and averages the scores. As is evident there is a con- siderable drop in performance compared to the average pooling original â partly due to the increased number of parameters that must be learnt; âCNN-fc-3s no var. norm.â, this is the CNN-fc-3s architecture without the variance normalization pre-processing of the input (the input is still mean normalized). The differ- ence in performance between the two shows the importance of variance normalization for this data.
For veriï¬cation, the margin over the baselines is narrower, but still a signiï¬cant improvement, with the embedding being the crucial step.
Accuracy                   Top-1 (%)   Top-5 (%)
I-vectors + SVM            49.0        56.6
I-vectors + PLDA + SVM     60.8        75.6
CNN-fc-3s no var. norm.    63.5        80.3
CNN-fc-3s                  72.4        87.4
CNN                        80.5        92.1
Table 7: Results for identification on VoxCeleb (higher is better). The different CNN architectures are described in Section 5.
Metrics            Cdet^min   EER (%)
GMM-UBM            0.80       15.0
I-vectors + PLDA   0.73       8.8
CNN-1024D          0.75       10.2
CNN + Embedding    0.71       7.8
Table 8: Results for verification on VoxCeleb (lower is better).
# 7. Conclusions
We provide a fully automated and scalable pipeline for audio data collection and use it to create a large-scale speaker identiï¬cation dataset called VoxCeleb, with 1,251 speakers and over 100,000 utterances. In order to establish benchmark performance, we develop a novel CNN architecture with the ability to deal with variable length audio inputs, which out- performs traditional state-of-the-art methods for both speaker identiï¬cation and veriï¬cation on this dataset.
Acknowledgements. Funding for this research is provided by the EPSRC Programme Grant Seebibyte EP/M013774/1 and IARPA grant JANUS. We would like to thank Andrew Senior for helpful comments.
8. References [1] L. L. Stoll, âFinding difï¬cult speakers in automatic speaker recog- nition,â Technical Report No. UCB/EECS-2011-152, 2011.
[2] A. Krizhevsky, I. Sutskever, and G. E. Hinton, âImageNet classi- ï¬cation with deep convolutional neural networks,â in Advances in Neural Information Processing Systems, pp. 1106â1114, 2012.
[3] K. Simonyan and A. Zisserman, âVery deep convolutional net- works for large-scale image recognition,â in Proceedings of the International Conference on Learning Representations, 2015.
[4] K. He, X. Zhang, S. Ren, and J. Sun, âDeep residual learning for image recognition,â arXiv preprint arXiv:1512.03385, 2015.
[5] M. McLaren, L. Ferrer, D. Castan, and A. Lawson, âThe speak- ers in the wild (SITW) speaker recognition database,â INTER- SPEECH, vol. 2016, 2016.
[6] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, S. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. Berg, and F. Li, âImagenet large scale visual recognition challenge,â Inter- national Journal of Computer Vision, 2015.
[7] D. A. Reynolds, T. F. Quatieri, and R. B. Dunn, âSpeaker veri- ï¬cation using adapted gaussian mixture models,â Digital signal processing, vol. 10, no. 1-3, pp. 19â41, 2000.
[8] D. A. Reynolds and R. C. Rose, âRobust text-independent speaker identiï¬cation using gaussian mixture speaker models,â IEEE transactions on speech and audio processing, vol. 3, no. 1, pp. 72â 83, 1995.
[9] P. Kenny, âJoint factor analysis of speaker and session variability: Theory and algorithms,â CRIM, Montreal, CRIM-06/08-13, 2005.
[10] N. Dehak, P. J. Kenny, R. Dehak, P. Dumouchel, and P. Ouellet, âFront-end factor analysis for speaker veriï¬cation,â IEEE Trans- actions on Audio, Speech, and Language Processing, vol. 19, no. 4, pp. 788â798, 2011.
[11] U. H. Yapanel, X. Zhang, and J. H. Hansen, âHigh performance digit recognition in real car environments.,â in INTERSPEECH, 2002.
[12] J. H. Hansen, R. Sarikaya, U. H. Yapanel, and B. L. Pellom, âRo- bust speech recognition in noise: an evaluation using the spine corpus.,â in INTERSPEECH, pp. 905â908, 2001.
[13] T. N. Sainath, R. J. Weiss, A. W. Senior, K. W. Wilson, and O. Vinyals, âLearning the speech front-end with raw waveform CLDNNs,â in INTERSPEECH, pp. 1â5, 2015.
[14] S. Hershey, S. Chaudhuri, D. P. Ellis, J. F. Gemmeke, A. Jansen, R. C. Moore, M. Plakal, D. Platt, R. A. Saurous, B. Seybold, et al., âCNN architectures for large-scale audio classiï¬cation,â arXiv preprint arXiv:1609.09430, 2016.
[15] Y. Lukic, C. Vogt, O. D¨urr, and T. Stadelmann, âSpeaker iden- tiï¬cation and clustering using convolutional neural networks,â in IEEE 26th International Workshop on Machine Learning for Sig- nal Processing (MLSP), pp. 1â6, IEEE, 2016.
[16] D. van der Vloed, J. Bouten, and D. A. van Leeuwen, âNFI- FRITS: a forensic speaker recognition database and some ï¬rst ex- periments,â in The Speaker and Language Recognition Workshop, 2014.
[17] J. Hennebert, H. Melin, D. Petrovska, and D. Genoud, âPOLY- COST: a telephone-speech database for speaker recognition,â Speech communication, vol. 31, no. 2, pp. 265â270, 2000.
[18] J. B. Millar, J. P. Vonwiller, J. M. Harrington, and P. J. Der- mody, âThe Australian national database of spoken language,â in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, vol. 1, pp. Iâ97, IEEE, 1994.
[19] J. S. Garofolo, L. F. Lamel, W. M. Fisher, J. G. Fiscus, and D. S. Pallett, âDARPA TIMIT acoustic-phonetic continous speech cor- pus CD-ROM. NIST speech disc 1-1.1,â NASA STI/Recon techni- cal report, vol. 93, 1993.
[20] C. McCool and S. Marcel, âMobio database for the ICPR 2010 face and speech competition,â tech. rep., IDIAP, 2009.
[21] R. Woo, A. Park, and T. J. Hazen, âThe MIT Mobile Device Speaker Veriï¬cation Corpus: Data collection and preliminary ex- periments,â The Speaker and Language Recognition Workshop, 2006.
[22] G. Morrison, C. Zhang, E. Enzinger, F. Ochoa, D. Bleach, M. Johnson, B. Folkes, S. De Souza, N. Cummins, and D. Chow, âForensic database of voice recordings of 500+ Australian English speakers,â URL: http://databases.forensic-voice-comparison.net, 2015.
[23] A. Janin, D. Baron, J. Edwards, D. Ellis, D. Gelbart, N. Morgan, B. Peskin, T. Pfau, E. Shriberg, A. Stolcke, et al., âThe ICSI meet- ing corpus,â in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, vol. 1, IEEE, 2003.
[24] I. McCowan, J. Carletta, W. Kraaij, S. Ashby, S. Bourban, M. Flynn, M. Guillemot, T. Hain, J. Kadlec, V. Karaiskos, et al., âThe AMI meeting corpus,â in International Conference on Meth- ods and Techniques in Behavioral Research, vol. 88, 2005.
[25] P. Bell, M. J. Gales, T. Hain, J. Kilgour, P. Lanchantin, X. Liu, A. McParland, S. Renals, O. Saz, M. Wester, et al., âThe MGB challenge: Evaluating multi-genre broadcast media recognition,â in IEEE Workshop on Automatic Speech Recognition and Under- standing, pp. 687â693, IEEE, 2015.
[26] L. Feng and L. K. Hansen, âA new database for speaker recogni- tion,â tech. rep., 2005.
[27] J. J. Godfrey, E. C. Holliman, and J. McDaniel, âSwitchboard: Telephone speech corpus for research and development,â in Pro- ceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, vol. 1, pp. 517â520, IEEE, 1992.
[28] W. M. Fisher, G. R. Doddington, and K. M. Goudie-Marshall, âThe DARPA speech recognition research database: speciï¬ca- tions and status,â in Proc. DARPA Workshop on speech recogni- tion, pp. 93â99, 1986.
[29] C. S. Greenberg, âThe NIST year 2012 speaker recognition eval- uation plan,â NIST, Technical Report, 2012.
[30] O. M. Parkhi, A. Vedaldi, and A. Zisserman, âDeep face recog- nition,â in Proceedings of the British Machine Vision Conference, 2015.
[31] P. Chakravarty and T. Tuytelaars, âCross-modal supervision for learning active speaker detection in video,â arXiv preprint arXiv:1603.08907, 2016.
[32] D. E. King, âDlib-ml: A machine learning toolkit,â The Journal of Machine Learning Research, vol. 10, pp. 1755â1758, 2009.
[33] V. Kazemi and J. Sullivan, âOne millisecond face alignment with an ensemble of regression trees,â in Proceedings of the IEEE Con- ference on Computer Vision and Pattern Recognition, pp. 1867â 1874, 2014.
[34] J. S. Chung and A. Zisserman, âLip reading in the wild,â in Pro- ceedings of the Asian Conference on Computer Vision, 2016.
[35] M. Everingham, J. Sivic, and A. Zisserman, âTaking the bite out of automatic naming of characters in TV video,â Image and Vision Computing, vol. 27, no. 5, 2009.
[36] J. S. Chung and A. Zisserman, âOut of time: automated lip sync in the wild,â in Workshop on Multi-view Lip-reading, ACCV, 2016.
[37] K. Chatï¬eld, K. Simonyan, A. Vedaldi, and A. Zisserman, âRe- turn of the devil in the details: Delving deep into convolutional nets,â in Proceedings of the British Machine Vision Conference, 2014.
[38] S. Chopra, R. Hadsell, and Y. LeCun, âLearning a similarity met- ric discriminatively, with application to face veriï¬cation,â in Pro- ceedings of the IEEE Conference on Computer Vision and Pattern Recognition, vol. 1, pp. 539â546, IEEE, 2005.
[39] A. Vedaldi and K. Lenc, âMatconvnet â convolutional neural net- works for MATLAB,â CoRR, vol. abs/1412.4564, 2014.
[40] S. Ioffe and C. Szegedy, âBatch normalization: Accelerating deep network training by reducing internal covariate shift,â arXiv preprint arXiv:1502.03167, 2015.
[41] S. Ioffe, âProbabilistic linear discriminant analysis,â in Proceed- ings of the European Conference on Computer Vision, pp. 531â 542, Springer, 2006. | {
"id": "1502.03167"
} |
1706.08098 | FReLU: Flexible Rectified Linear Units for Improving Convolutional Neural Networks | Rectified linear unit (ReLU) is a widely used activation function for deep
convolutional neural networks. However, because of the zero-hard rectification,
ReLU networks miss the benefits from negative values. In this paper, we propose
a novel activation function called \emph{flexible rectified linear unit
(FReLU)} to further explore the effects of negative values. By redesigning the
rectified point of ReLU as a learnable parameter, FReLU expands the states of
the activation output. When the network is successfully trained, FReLU tends to
converge to a negative value, which improves the expressiveness and thus the
performance. Furthermore, FReLU is designed to be simple and effective without
exponential functions to maintain low cost computation. For being able to
easily used in various network architectures, FReLU does not rely on strict
assumptions by self-adaption. We evaluate FReLU on three standard image
classification datasets, including CIFAR-10, CIFAR-100, and ImageNet.
Experimental results show that the proposed method achieves fast convergence
and higher performances on both plain and residual networks. | http://arxiv.org/pdf/1706.08098 | Suo Qiu, Xiangmin Xu, Bolun Cai | cs.CV | null | null | cs.CV | 20170625 | 20180129 | # FReLU: Flexible Rectiï¬ed Linear Units for Improving Convolutional Neural Networks
Suo Qiu, Xiangmin Xu and Bolun Cai School of Electronic and Information Engineering, South China University of Technology Wushan RD., Tianhe District, Guangzhou, P.R.China Email: q.suo@foxmail.com, xmxu@scut.edu.cn, caibolun@gmail.com
AbstractâRectiï¬ed linear unit (ReLU) is a widely used activa- tion function for deep convolutional neural networks. However, because of the zero-hard rectiï¬cation, ReLU networks miss the beneï¬ts from negative values. In this paper, we propose a novel activation function called ï¬exible rectiï¬ed linear unit (FReLU) to further explore the effects of negative values. By redesigning the rectiï¬ed point of ReLU as a learnable parameter, FReLU expands the states of the activation output. When the network is successfully trained, FReLU tends to converge to a negative value, which improves the expressiveness and thus the performance. Furthermore, FReLU is designed to be simple and effective without exponential functions to maintain low cost computation. For being able to easily used in various network architectures, FReLU does not rely on strict assumptions by self-adaption. We evaluate FReLU on three standard image classiï¬cation datasets, including CIFAR-10, CIFAR-100, and ImageNet. Experimental results show that the proposed method achieves fast convergence and higher performances on both plain and residual networks.
# I. INTRODUCTION
Activation function is an important component in neural networks. It provides the non-linear properties for deep neural networks and controls the information propagation through adjacent layers. Therefore, the design of an activation function matters for the learning behaviors and performances of neural networks. And different activation functions have different characteristics and are used for different tasks. For example, long short-term memory (LSTM) models [1] use sigmoid or hyperbolic tangent functions, while rectiï¬ed linear unit (ReLU) [2], [3], [4] is more popular in convolutional neural networks (CNNs). In this paper, we mainly focus on extending ReLU function to improve convolutional neural networks.
ReLU [5] is a classical activation function, which effec- tiveness has been veriï¬ed in previous works [6], [2], [3], [4]. The success of ReLU owes to identically propagating all the positive inputs, which alleviates gradient vanishing and allows the supervised training of much deeper neural networks. In addition, ReLU is computational efï¬cient by just outputing zero for negative inputs, and thus widely used in neural networks. Although ReLU is fantastic, researchers found that it is not the end of story about the activation function â the challenges of activation function arise from two main aspects: negative missing and zero-center property.
Fig. 1. Illustration of (a) ReLU and (b) FReLU function.
Negative missing. ReLU simply restrains the negative value to hard-zero, which provides sparsity but results in negative missing. The variants of ReLU, including leaky ReLU (LReLU) [7], parametric ReLU (PReLU) [8], and randomized ReLU (RReLU) [9], enable non-zero slope to the negative part. It is proven that the negative parts are helpful for network learning. However, non-hard rectification of these activation functions will destroy sparsity.

Zero-center property. In [10], the authors explained that pushing the activation means closer to zero (zero-like) can speed up learning. ReLU is apparently non zero-like. LReLU, PReLU, and RReLU cannot ensure a noise-robust negative deactivation state. To this end, exponential linear unit (ELU) [10] was proposed to keep negative values and saturate the negative part to push the activation means closer to zero. Recent variants [11], [12], [13], [14], [15] of ELU and the penalized tanh function [16] also demonstrate similar performance improvements. However, the incompatibility between ELU and batch normalization (BN) [17] has not been well treated.

In this paper, we propose a novel activation function called flexible rectified linear unit (FReLU), which can adaptively adjust the ReLU output by a rectified point to capture negative information and provide zero-like property. We evaluate FReLU on image classification tasks and find that the flexible rectification can improve the capacity of neural networks. In addition, the proposed activation function FReLU brings the following benefits:

• fast convergence and higher performance;
• low computation cost without exponential operation;
• compatibility with batch normalization;
• weak assumptions and self-adaptation.
II. THE PROPOSED METHOD
A. Flexible Rectiï¬ed Linear Unit
As illustrated in Fig. 1(a), let variable x represent the input, and rectiï¬ed linear unit (ReLU) [2] is deï¬ned as:
relu(x) = x if x > 0;  0 if x ≤ 0.    (1)
By redesigning the rectiï¬ed point of ReLU as a learnable parameter, we propose ï¬exible rectiï¬ed linear unit (FReLU) to improve ï¬exibility on the horizontal and vertical axis, which is expressed as:
f relu(x) = relu(x + a) + b, (2)
where a and b are two learnable variables. By further consid- eration, activation function follows convolutional/linear layer generally, the variable a can be learned together with the bias of the preceding convolutional/linear layer. Therefore, the Equ. (2) equals to
f relu(x) = relu(x) + b, (3)
which is illustrated in Fig. 1(b).
Therefore, the forward pass function of FReLU is rewritten as:
frelu(x) = x + b_l if x > 0;  b_l if x ≤ 0,    (4)

where b_l is the l-th layer-wise learnable parameter, which controls the output range of FReLU. Note that FReLU naturally generates ReLU when b_l = 0.
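As an illustration of Equ. (4), a minimal PyTorch implementation of FReLU with a single layer-wise bias is sketched below. The shape of the bias and the default initialization of −1 (which follows the experimental setting mentioned later) are assumptions; the authors' original implementation may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FReLU(nn.Module):
    """Flexible ReLU: frelu(x) = max(x, 0) + b_l, with a learnable layer-wise bias."""

    def __init__(self, init_b=-1.0):
        super().__init__()
        self.b = nn.Parameter(torch.tensor(float(init_b)))

    def forward(self, x):
        # Equ. (4): x + b_l for positive inputs, b_l otherwise.
        # Autograd reproduces the backward pass of Equ. (5) automatically.
        return F.relu(x) + self.b

# Toy usage: the bias is trained jointly with the rest of the network.
act = FReLU()
y = act(torch.randn(4, 8))
```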
The backward pass function of FReLU is given by:

∂frelu(x)/∂x = 1 if x > 0;  0 if x ≤ 0,   and   ∂frelu(x)/∂b_l = 1.    (5)

B. Parameter Initialization with FReLU
As mentioned in [8], it is necessary to adopt appropriate initialization method for a novel activation function to prevent the vanishing problem of gradients. In this subsection, we provide a brief analysis on the initialization for FReLU. More discussions about the initialization of neural networks can refer to [18], [8].
1) Back propagation: For the back propagation case, the gradient of a convolution layer is computed by ∂Cost/∂x_l = Ŵ_l ∂Cost/∂z_l, where Ŵ_l is a c-by-n̂ matrix reshaped from W_l. Here, c is the number of channels for the input and n̂ = k²d (k is the kernel size, and d is the number of channels for the output). We assume that w_l and ∂Cost/∂z_l are independent of each other. When w_l is initialized by a symmetric distribution around zero, Var[∂Cost/∂x_l] = n̂_l Var[w_l] E[(∂Cost/∂z_l)²]. For FReLU, ∂Cost/∂z_l = frelu'(z_l) ∂Cost/∂x_{l+1}, and according to Equ. (5) we know that E[(∂Cost/∂z_l)²] = (1/2) Var[∂Cost/∂x_{l+1}]. Therefore, Var[∂Cost/∂x_l] = (1/2) n̂_l Var[w_l] Var[∂Cost/∂x_{l+1}]. Then, for a network with L layers, Var[∂Cost/∂x_2] = Var[∂Cost/∂x_{L+1}] ∏_{l=2}^{L} (1/2) n̂_l Var[w_l]. Therefore, we have the initialization condition:

(1/2) n̂_l Var[w_l] = 1,  ∀l,    (6)

which is the same as the msra method [8] for ReLU.
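A rough sketch of applying this condition in PyTorch is shown below: the built-in Kaiming (msra) normal initializer with fan_out mode gives exactly Var[w_l] = 2/(k²d), i.e. condition (6). Treating the convolution biases as zero is an assumption for the sketch.

```python
import torch.nn as nn

def init_msra(model):
    """Apply the initialization condition of Equ. (6) to every convolution layer:
    Var[w_l] = 2 / (k*k*d), i.e. the msra / Kaiming-normal rule with fan_out mode."""
    for m in model.modules():
        if isinstance(m, nn.Conv2d):
            nn.init.kaiming_normal_(m.weight, mode="fan_out", nonlinearity="relu")
            if m.bias is not None:
                nn.init.zeros_(m.bias)
    return model

# Example: a small conv stack; the FReLU biases themselves are left at their
# chosen init value (e.g. -1) and learned during training.
net = init_msra(nn.Sequential(nn.Conv2d(3, 32, 3), nn.ReLU(), nn.Conv2d(32, 64, 3)))
```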
2) Forward propagation: For the forward propagation case, z_l = W_l x_l, where W_l is a d-by-n matrix and n = k²c. As above, we have Var[z_l] = n_l Var[w_l] E[x_l²] with the independence assumption. For FReLU, x_l² = max(0, z_{l-1})² + 2b max(0, z_{l-1}) + b². In general, when z_{l-1} is finite or has a Gaussian shape around zero, E[x_l²] ≈ (1/2) Var[z_{l-1}] + b². Thus, we have Var[z_l] ≈ ((1/2) n_l Var[z_{l-1}] + n_l b²) Var[w_l]. And for a network with L layers, Var[z_L] ≈ Var[z_1] ∏_{l=2}^{L} (1/2) n_l Var[w_l] + ε, where ε collects the accumulated contributions of the bias terms b_k. We found that the term ε makes forward propagation more complex. Fortunately, when using Equ. (6) for initialization, Var[z_L] ≈ (1/2) Var[z_1] + Σ_{k=2}^{L} (c_k / d_L) b_k².

In conclusion, when using the initialization condition (Equ. (6)) for FReLU, the variance of back propagation is stable and the variance of forward propagation is only scaled by some scalars. FReLU therefore has a relatively stable learning characteristic except in complex applications. Thus, for stable learning, the absolute value of b_l should be a small number, especially for very deep models. In practice, by using batch normalization [17], networks will be less sensitive to the initialization method. The data-driven initialization method LSUV [19] is also a good choice. For convenience, in this paper, we use the MSRA method [8] (Equ. (6)) for all our experiments.
C. Analysis and Discussion for FReLU
In this section, we analyze the improvement of FReLU for neural networks and discuss tips for FReLU.
1) State Extension by FReLU: By adding a learnable bias term, the output range of FReLU [b, +â) is helpful to ensure efï¬cient learning. When b < 0, FReLU satisï¬es the principle that activation functions with negative values can be used to reduce bias effect [10]. Besides, negative values can improve the expressiveness of the activation function. There are three output states represented by FReLU with b < 0:
frelu(x) is:  positive      if x > 0 and x + b > 0
              negative      if x > 0 and x + b < 0
              inactivation  if x ≤ 0.    (7)
Considering a layer with n units, FReLU with b = 0 (equal to ReLU) or b > 0 can only generate 2n output states, while FReLU with b < 0 can generate 3n output states. Shown in Table III, the learnable biases tend to negative b < 0 and bring the improvement in the network by training success. Another factor is that FReLU retains the same non-linear and sparse characteristics as ReLU. In addition, the self-adaptation of FReLU is also helpful to ï¬nd a specialized activation function.
2) Batch Normalization with FReLU: According to the conclusion in [10] and the experiments in Table II, PReLU, SReLU, and ELU are not compatible with batch normalization (BN) [17]. This is because of the training conflict between the representation restoration (scale γ and bias β) in BN and the negative parameter in the activation function. In FReLU, max(x, 0) isolates the two pairs of learnable terms between BN and FReLU. In this paper, we introduce batch normalization (BN) [17] to stabilize the learning when using a large learning rate for achieving better performance. With BN, backward propagation through a layer is unaffected by the scale of its parameters. Specifically, for a scalar c, BN(Wu) = BN((cW)u), and thus ∂BN((cW)u)/∂u = ∂BN(Wu)/∂u. Batch normalization is also a data-driven method and does not rely on strict distribution assumptions. We show the compatibility between BN and FReLU in our experiments (Table II).
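The scale-invariance claim is easy to verify numerically. The small NumPy check below normalizes the pre-activations produced by W and by 3W and confirms the outputs match; it omits BN's learnable scale and bias, which do not affect the argument.

```python
import numpy as np

def batch_norm(z, eps=1e-5):
    """Per-feature batch normalization without the learnable scale/bias."""
    return (z - z.mean(axis=0)) / np.sqrt(z.var(axis=0) + eps)

rng = np.random.default_rng(0)
U = rng.normal(size=(128, 16))      # a mini-batch of inputs
W = rng.normal(size=(16, 8))        # layer weights

out1 = batch_norm(U @ W)            # BN(Wu)
out2 = batch_norm(U @ (3.0 * W))    # BN((cW)u) with c = 3
print(np.allclose(out1, out2, atol=1e-4))   # True: BN removes the scale c
```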
D. Comparisons
We compare the proposed FReLU function with a few cor- relative activation functions, including ReLU, PReLU, ELU, and SReLU.
Fig. 2. Illustration of the correlative activation functions.
1) ReLU: The activation function ReLU [2] is deï¬ned as relu(x) = max(x, 0). The proposed FReLU function is an extension of ReLU by adding a learnable bias term b. There- fore, FReLU retains the same non-linear and sparse properties as ReLU, and extends the output range from [0, +â) to [b, +â). Here, b is learnable parameter for adaptive selection by training. When b = 0, FReLU generates ReLU. When b > 0, FReLU tends to move the output distribution of ReLU to larger positive areas, which is unnecessary for state extension proven in the experiments. When b < 0, FReLU expands the states of the output to increase the expressiveness of the activation function.
2) PReLU/LReLU: The activation function PReLU [8] is defined as prelu(x) = max(x, 0) + k · min(x, 0), where k is the learnable parameter. When k is a small fixed number, PReLU becomes LReLU [7]. To avoid zero gradients, PReLU and LReLU propagate the negative input with penalization, and thus avoid negative missing. However, PReLU and LReLU probably lose sparsity, which is an important factor for achieving good performance in neural networks. Note that FReLU also can generate negative outputs, but in a different way. FReLU obstructs the negative input the same as ReLU does; the backward
gradient of FReLU for the negative part is zero and retains sparsity.
3) ELU: The activation function ELU [10] is defined as elu(x) = max(x, 0) + min(exp(x) − 1, 0). FReLU and ELU have similar shapes and properties to some extent. Different from ELU, FReLU uses the bias term instead of the exponential operation, and reduces the computation complexity. Although FReLU is non-differentiable at x = 0, the experiments show that FReLU can achieve good performance. In addition, FReLU has a better compatibility with batch normalization than ELU.
4) SReLU: In this paper, shifted ReLU (SReLU) is defined as srelu(x) = max(x, ∆), where ∆ is the learnable parameter. Both SReLU and FReLU have the flexibility of choosing horizontal shifts from learned biases, and both SReLU and FReLU can choose vertical shifts. Specifically, SReLU can be reformed as srelu(x) = max(x, ∆) = max(x − ∆, 0) + ∆ = max(x − (α − ∆) − ∆, 0) + ∆, where (α − ∆) is the learned bias for SReLU. To some extent, SReLU is equivalent to FReLU. In the experiments, we find that SReLU is less compatible with batch normalization and gives lower performance than FReLU.
# III. EXPERIMENTS
In this section, we evaluate FReLU on three standard image classification datasets, including CIFAR-10, CIFAR-100 [20] and ImageNet [21]. We conduct all experiments based on fb.resnet.torch1 [22] using the default data augmentation and training settings. The default learning rate is initially set to 0.1. The weight decay is set to 0.0001, and the momentum is set to 0.9. For CIFAR-10 and CIFAR-100, the models are trained by stochastic gradient descent (SGD) with batch size of 128 for 200 epochs (no warming up). The learning rate is decreased by a factor of 10 at 81 and 122 epochs. For ImageNet, the models are trained by SGD with batch size of 256 for 90 epochs. The learning rate is decreased by a factor of 10 every 30 epochs. In addition, the parameter b for FReLU is set to −1 as the initialization by default in this paper. For fair comparison and reducing the random influences, all experimental results on CIFAR-10 and CIFAR-100 are reported with the mean and standard deviation of five runs with different random seeds.
A. The Analyses for FReLU
1) Convergence Rate and Performance: We first evaluate the proposed FReLU on a small convolutional neural network (referred to as SmallNet). It contains 3 convolutional layers followed by two fully connected layers, detailed in Table I. The ACT module is either ReLU, ELU or FReLU. We used SmallNet to perform object classification on the CIFAR-100 dataset [20]. Both training and test error rates are shown in Table II and we also draw learning curves in Fig. 3. We find that FReLU achieves faster convergence and higher generalization performance than ReLU, PReLU, ELU, and SReLU. Note that it is a normal phenomenon for a small network on CIFAR-100 that the error rate on the test set is lower than on the training set.
1https://github.com/facebook/fb.resnet.torch
(a) Training error (b) Test error (c) Training error with BN (d) Test error with BN
Fig. 3. Error curves on the CIFAR-100 dataset for SmallNet. The base learning rate is 0.01. Best viewed in color.
TABLE I SMALLNET ARCHITECTURE ON THE CIFAR-100 DATASET. (BN: BATCH NORMALIZATION; ACT: ACTIVATION FUNCTION.)
Type            Patch Size/Stride   #Kernels
Convolution     3x3/1               32
(BN +) ACT      -                   -
MAX Pool        2x2/2               -
Dropout (20%)   -                   -
Convolution     3x3/1               64
(BN +) ACT      -                   -
MAX Pool        2x2/2               -
Dropout (20%)   -                   -
Convolution     3x3/1               -
(BN +) ACT      -                   -
MAX Pool        2x2/2               -
Dropout (20%)   -                   -
Linear          -                   -
(BN +) ACT      -                   -
Dropout (50%)   -                   -
Linear          -                   -
Softmax         -                   -
(a) ReLU (b) FReLU
Fig. 4. The distribution of deeply learned features for (a) ReLU and (b) FReLU on the test set of MNIST dataset. The points with different colors denote features from different classes. Best viewed in color.
2) Compatibility with Batch Normalization: We investigate the compatibilities with batch normalization (BN) on Small- Net. As same in [10], BN improves ReLU networks but damages ELU networks. We also empirically ï¬nd that BN does not improve PReLU, SReLU and FReLU when the base learning rate equals to 0.01. No matter with or without BN, FReLU all achieves the lowest testing error rates. Moreover, when using large base learning rate 0.1, ReLU, PReLU, ELU, SReLU, and FReLU networks all cannot converge without BN. With higher learning rates, ReLU, PReLU, and FReLU enjoy the beneï¬ts of BN, but ELU and SReLU does not. These phenomenons reï¬ect that FReLU is compatible with BN, which avoids exploding and achieves better performances with large learning rate.
In order to explore the advantage of FReLU, we further visualize the deep feature embeddings for ReLU and FReLU layers. We conduct this experiment on MNIST [23] dataset with LeNets++2. As the output number of the last hidden layer in LeNets++ is 2, we can directly plot the features on 2-D surface for visualization. In LeNets++, we use ReLU as the activation function. To visualize the effect of FReLU for feature learning, we only replace the activation function of the last hidden layer as FReLU. We draw the embeddings on the test dataset after training, which are shown in Fig. 4 and ten classes are shown in different colors. We observe that embeddings of the FReLU network are more discriminative than ReLUâs.The accuracy of the FReLU network is 97.8%, while the ReLU network is 97.05%. With negative bias, FReLU provides larger space for feature representation than ReLU.
3) Different Initialization Values for FReLU: In this subsection, we further explore the effects of different initialization values for FReLU. We report the results on the CIFAR-100 dataset with the SmallNet. By using a small network, the parameter of FReLU can be fully learned. The test error rates and the convergence values b are shown in Table III. Interestingly, networks with different initialization values (including positive and negative values) for FReLU finally converge to close negative values. Assuming the input x ∼ N(0, 1), the output expectation of the activation can be expressed as E[z] = ∫ (1/√(2π)) exp(−0.5x²) frelu(x) dx. When the parameter of FReLU is b = −0.398, as observed in Table III, E[z] is approximately equal to zero. Therefore, FReLU acts as a normalizing activation function that keeps the activations of the entire network close to zero.
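This zero-mean property can be checked directly: for x ∼ N(0, 1), E[max(x, 0)] = 1/√(2π) ≈ 0.399, so b ≈ −0.399 drives E[frelu(x)] to zero. The short Monte Carlo check below is only a sanity illustration of that arithmetic.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)

b = -1.0 / np.sqrt(2.0 * np.pi)     # ~ -0.399, close to the learned values in Table III
z = np.maximum(x, 0.0) + b          # FReLU output with bias b

print(np.mean(np.maximum(x, 0.0)))  # ~ 0.399 = 1/sqrt(2*pi)
print(np.mean(z))                   # ~ 0.0: zero-centered activations
```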
B. Results on CIFAR-10 and CIFAR-100
In this subsection, we compare ReLU, PReLU, ELU, SReLU and FReLU on the Network in Network (referred to as NIN) [24] model. We evaluate this model on both CIFAR-10 and CIFAR-100 datasets. We use the default base learning rate 0.1 and test with
2https://github.com/ydwen/caffe-face/tree/caffe-face/mnist example
TABLE II COMPARING RELU [5], PRELU [8], ELU [10], SRELU, AND FRELU WITH SMALLNET ON THE CIFAR-100 DATASET. WE REPORT THE MEAN (STD) ERROR RESULTS OVER FIVE RUNS.
Method       | LR 0.01 Training | LR 0.01 Test  | LR 0.1 Training | LR 0.1 Test
ReLU         | 44.20 (0.31)     | 40.55 (0.25)  | not converge    | not converge
PReLU        | 42.49 (0.12)     | 38.48 (0.33)  | exploding       | exploding
ELU          | 40.79 (0.14)     | 37.55 (0.47)  | exploding       | exploding
SReLU        | 39.85 (0.15)     | 36.91 (0.17)  | exploding       | exploding
FReLU        | 38.69 (0.17)     | 36.87 (0.35)  | exploding       | exploding
BN+ReLU      | 44.07 (0.18)     | 39.20 (0.32)  | 42.60 (0.16)    | 38.50 (0.43)
BN+PReLU     | 42.46 (0.27)     | 39.42 (0.54)  | 40.85 (0.17)    | 37.14 (0.42)
BN+ELU       | 45.10 (0.18)     | 38.77 (0.18)  | 43.27 (0.11)    | 37.80 (0.16)
BN+SReLU     | 43.47 (0.09)     | 38.22 (0.28)  | 40.15 (0.07)    | 37.20 (0.26)
BN+FReLU     | 40.38 (0.26)     | 37.13 (0.30)  | 38.83 (0.18)    | 35.82 (0.12)
TABLE III MEAN (STD) ERROR RESULTS ON THE CIFAR-100 DATASET AND CONVERGENCE VALUES (LAYER 1 TO 4) FOR FRELU WITH SMALLNET.
Init. Value   Error Rate     Layer1    Layer2    Layer3    Layer4
0.5           37.05 (0.07)   -0.3175   -0.4570   -0.2824   -0.3284
0.2           36.71 (0.32)   -0.3112   -0.4574   -0.2749   -0.3314
0             36.91 (0.34)   -0.3144   -0.4367   -0.2891   -0.3313
-0.4          37.10 (0.33)   -0.3235   -0.4480   -0.2917   -0.3315
-1            36.87 (0.35)   -0.3272   -0.4757   -0.2849   -0.3282
TABLE IV COMPARING RELU [5], PRELU [8], ELU [10], SRELU AND FRELU WITH NIN [24] MODEL ON THE CIFAR-10 AND CIFAR-100 DATASETS. THE BASE LEARNING RATE IS 0.1. WE REPORT THE MEAN (STD) RESULTS OVER FIVE RUNS.
Method      | CIFAR-10 Training | CIFAR-10 Test | CIFAR-100 Training | CIFAR-100 Test
BN+ReLU     | 2.89 (0.11)       | 8.05 (0.15)   | 14.11 (0.06)       | 29.46 (0.29)
BN+PReLU    | 1.36 (0.03)       | 8.86 (0.18)   | 8.96 (0.12)        | 33.73 (0.29)
BN+ELU      | 4.15 (0.07)       | 8.08 (0.26)   | 13.36 (0.10)       | 28.33 (0.32)
BN+SReLU    | 2.68 (0.06)       | 7.93 (0.24)   | 13.48 (0.12)       | 29.50 (0.34)
BN+FReLU    | 2.02 (0.06)       | 7.30 (0.20)   | 11.40 (0.11)       | 28.47 (0.21)
BN. Results are shown in Table IV. PReLU seems overï¬tting and does not obtain good performance. The proposed method FReLU achieves the lowest error rates on the test datasets.
(a) Ori. bottleneck [4] (b) w/o ACT after addition (c) w/o BN after first Conv [25]
Fig. 5. Various residual blocks.
C. Results on ImageNet
We also evaluate FReLU on the ImageNet dataset. Table VI shows the results with NIN model and a modiï¬ed CaffeNet, where the result of CaffeNet comes from a benchmark testing [26] and the detailed settings can refer to their project web- site3. FReLU performs well, outperforming other activation functions.
2) Evaluation on Residual Networks: We also investigate the effectiveness of FReLU with residual networks on the CIFAR-10 and CIFAR-100 datasets. Results are shown in Table V. In order to compare the compatibility of FReLU and ELU with BN, we first investigate the performances of residual networks with simply replacing the ReLU with FReLU and ELU, that is, using the architecture in Fig. 5(a). We observe that ELU damages the performances but FReLU improves them, which demonstrates that FReLU has higher compatibility with BN than ELU. Inspired by [25], we further compare the performances with the modified networks, where ELU uses the architecture in Fig. 5(c) and FReLU uses the architecture in Fig. 5(b). We also observe that FReLU achieves better performances.

# IV. CONCLUSION AND FUTURE WORK
In this paper, a novel activation function called FReLU is proposed to improve convolutional neural networks. As a variant of ReLU, FReLU retains non-linearity and sparsity as ReLU and extends the expressiveness. FReLU is a general concept and does not depend on any specific assumption. We show that FReLU achieves higher performances and empirically find that FReLU is more compatible with batch normalization than ELU. Our results suggest that negative values are useful for neural networks. There are still many questions requiring further investigation: (1) How to solve the dead neuron problem well? (2) How to design an efficient activation that can use negative values better and also has better learning properties?
3 https://github.com/ducha-aiki/caffenet-benchmark/blob/master/Activations.md
TABLE V COMPARING RELU, ELU ((A) [10] (C) [25]) AND FRELU WITH RESNET-20/32/44/56/110 [4] ON THE CIFAR-10 AND CIFAR-100 DATASETS. WE REPORT THE MEAN (STD) ERROR RATES OVER FIVE RUNS.
CIFAR-10
#Depths     20           32           44           56           110
Original    8.12(0.18)   7.28(0.19)   6.97(0.24)   6.87(0.54)   6.82(0.63)
ELU (a)     8.04(0.08)   7.62(0.21)   7.51(0.22)   7.71(0.26)   8.21(0.21)
FReLU (a)   8.10(0.18)   7.30(0.17)   6.91(0.25)   6.54(0.22)   6.20(0.23)
ELU (c)     8.28(0.09)   7.07(0.17)   6.78(0.10)   6.54(0.20)   5.86(0.14)
FReLU (b)   8.00(0.14)   6.99(0.11)   6.58(0.19)   6.31(0.20)   5.71(0.19)

CIFAR-100
#Depths     20            32            44            56            110
Original    31.93(0.13)   30.16(0.32)   29.30(0.45)   29.19(0.61)   28.48(0.85)
ELU (c)     31.90(0.36)   30.39(0.37)   29.34(0.39)   28.81(0.42)   27.02(0.32)
FReLU (b)   31.84(0.30)   29.95(0.27)   29.02(0.25)   28.07(0.47)   26.70(0.38)
TABLE VI COMPARING RELU, ELU AND FRELU WITH NIN MODEL ON THE IMAGENET DATASET.

Network     Method      Top-1 error   Top-5 error
NIN         BN+ReLU     35.65         14.53
NIN         BN+ELU      38.55         16.62
NIN         BN+FReLU    34.82         14.00
CaffeNet3   ReLU        53.00         -
CaffeNet3   PReLU       52.20         -
CaffeNet3   ELU         51.20         -
CaffeNet3   FReLU       51.20         -

# REFERENCES

[12] Y. Li, C. Fan, Y. Li, and Q. Wu, "Improving deep neural network with multiple parametric exponential linear units," arXiv preprint arXiv:1606.00305, 2016.

[13] B. Carlile, G. Delamarter, P. Kinney, A. Marti, and B. Whitney, "Improving deep learning by inverse square root linear units (isrlus)," arXiv preprint arXiv:1710.09967, 2017.

[14] G. Klambauer, T. Unterthiner, A. Mayr, and S. Hochreiter, "Self-normalizing neural networks," arXiv preprint arXiv:1706.02515, 2017.

[15] R. Duggal and A. Gupta, "P-telu: Parametric tan hyperbolic linear unit activation for deep neural networks," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 974-978.

[16] B. Xu, R. Huang, and M. Li, "Revise saturated activation functions," arXiv preprint arXiv:1602.05980, 2016.
[17] S. Ioffe and C. Szegedy, âBatch normalization: Accelerating deep network training by reducing internal covariate shift,â arXiv preprint arXiv:1502.03167, 2015.
[18] X. Glorot and Y. Bengio, âUnderstanding the difï¬culty of training deep feedforward neural networks,â in Proceedings of the Thirteenth International Conference on Artiï¬cial Intelligence and Statistics, 2010, pp. 249â256.
[1] S. Hochreiter and J. Schmidhuber, âLong short-term memory,â Neural computation, vol. 9, no. 8, pp. 1735â1780, 1997.
[2] A. Krizhevsky, I. Sutskever, and G. E. Hinton, âImagenet classiï¬cation with deep convolutional neural networks,â in Advances in neural infor- mation processing systems, 2012, pp. 1097â1105.
[3] C. Szegedy, S. Ioffe, V. Vanhoucke, and A. Alemi, âInception-v4, inception-resnet and the impact of residual connections on learning,â arXiv preprint arXiv:1602.07261, 2016.
[4] K. He, X. Zhang, S. Ren, and J. Sun, âDeep residual learning for image recognition,â in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 770â778.
[5] V. Nair and G. E. Hinton, âRectiï¬ed linear units improve restricted boltz- mann machines,â in Proceedings of the 27th international conference on machine learning (ICML-10), 2010, pp. 807â814.
[19] D. Mishkin and J. Matas, âAll you need is a good init,â arXiv preprint arXiv:1511.06422, Nov. 2015.
[20] A. Krizhevsky and G. Hinton, âLearning multiple layers of features from tiny images,â 2009.
[21] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein et al., âImagenet large scale visual recognition challenge,â International Journal of Computer Vision, vol. 115, no. 3, pp. 211â252, 2015.
[22] S. Gross and M. Wilber, âTraining and investigating residual nets,â Facebook AI Research, CA.[Online]. Avilable: http://torch. ch/blog/2016/02/04/resnets. html, 2016.
[23] C. J. B. Yann LeCun, Corinna Cortes, âThe mnist database of handwrit- ten digits,â http://yann.lecun.com/exdb/mnist/, 1998.
[24] M. Lin, Q. Chen, and S. Yan, âNetwork in network,â arXiv preprint arXiv:1312.4400, 2013.
[6] X. Glorot, A. Bordes, and Y. Bengio, âDeep sparse rectiï¬er neural networks.â in Aistats, vol. 15, no. 106, 2011, p. 275.
[25] A. Shah, E. Kadam, H. Shah, and S. Shinde, âDeep residual networks with exponential linear unit,â arXiv preprint arXiv:1604.04112, 2016.
[7] A. L. Maas, A. Y. Hannun, and A. Y. Ng, âRectiï¬er nonlinearities improve neural network acoustic models,â in Proc. ICML, vol. 30, no. 1, 2013.
[8] K. He, X. Zhang, S. Ren, and J. Sun, âDelving deep into rectiï¬ers: Surpassing human-level performance on imagenet classiï¬cation,â in Proceedings of the IEEE international conference on computer vision, 2015, pp. 1026â1034.
[26] D. Mishkin, N. Sergievskiy, and J. Matas, âSystematic evaluation of convolution neural network advances on the imagenet,â Computer Vision and Image Understanding, 2017. [Online]. Available: http: //www.sciencedirect.com/science/article/pii/S1077314217300814
[9] B. Xu, N. Wang, T. Chen, and M. Li, âEmpirical evaluation of rectiï¬ed activations in convolutional network,â arXiv preprint arXiv:1505.00853, 2015.
[10] D.-A. Clevert, T. Unterthiner, and S. Hochreiter, âFast and accurate deep network learning by exponential linear units (elus),â arXiv preprint arXiv:1511.07289, 2015.
[11] L. Trottier, P. Gigu`ere, and B. Chaib-draa, âParametric exponential linear unit for deep convolutional neural networks,â arXiv preprint arXiv:1605.09332, 2016. | {
"id": "1505.00853"
} |
1706.07881 | On Sampling Strategies for Neural Network-based Collaborative Filtering | Recent advances in neural networks have inspired people to design hybrid
recommendation algorithms that can incorporate both (1) user-item interaction
information and (2) content information including image, audio, and text.
Despite their promising results, neural network-based recommendation algorithms
pose extensive computational costs, making it challenging to scale and improve
upon. In this paper, we propose a general neural network-based recommendation
framework, which subsumes several existing state-of-the-art recommendation
algorithms, and address the efficiency issue by investigating sampling
strategies in the stochastic gradient descent training for the framework. We
tackle this issue by first establishing a connection between the loss functions
and the user-item interaction bipartite graph, where the loss function terms
are defined on links while major computation burdens are located at nodes. We
call this type of loss functions "graph-based" loss functions, for which varied
mini-batch sampling strategies can have different computational costs. Based on
the insight, three novel sampling strategies are proposed, which can
significantly improve the training efficiency of the proposed framework (up to
$\times 30$ times speedup in our experiments), as well as improving the
recommendation performance. Theoretical analysis is also provided for both the
computational cost and the convergence. We believe the study of sampling
strategies have further implications on general graph-based loss functions, and
would also enable more research under the neural network-based recommendation
framework. | http://arxiv.org/pdf/1706.07881 | Ting Chen, Yizhou Sun, Yue Shi, Liangjie Hong | cs.LG, cs.IR, cs.SI, stat.ML | This is a longer version (with supplementary attached) of the KDD'17
paper | null | cs.LG | 20170623 | 20170623 |
# On Sampling Strategies for Neural Network-based Collaborative Filtering
# Ting Chen University of California, Los Angeles Los Angeles, CA 90095 tingchen@cs.ucla.edu
Yizhou Sun University of California, Los Angeles Los Angeles, CA 90095 yzsun@cs.ucla.edu
Yue Shiâ Yahoo! Research Sunnyvale, CA 94089 yueshi@acm.org
Liangjie Hong Etsy Inc. Brooklyn, NY 11201 lhong@etsy.com
ABSTRACT Recent advances in neural networks have inspired people to de- sign hybrid recommendation algorithms that can incorporate both (1) user-item interaction information and (2) content information including image, audio, and text. Despite their promising results, neural network-based recommendation algorithms pose extensive computational costs, making it challenging to scale and improve upon. In this paper, we propose a general neural network-based recommendation framework, which subsumes several existing state- of-the-art recommendation algorithms, and address the efficiency issue by investigating sampling strategies in the stochastic gradient descent training for the framework. We tackle this issue by first establishing a connection between the loss functions and the user- item interaction bipartite graph, where the loss function terms are defined on links while major computation burdens are located at nodes. We call this type of loss functions âgraph-basedâ loss func- tions, for which varied mini-batch sampling strategies can have different computational costs. Based on the insight, three novel sampling strategies are proposed, which can significantly improve the training efficiency of the proposed framework (up to Ã30 times speedup in our experiments), as well as improving the recommen- dation performance. Theoretical analysis is also provided for both the computational cost and the convergence. We believe the study of sampling strategies have further implications on general graph- based loss functions, and would also enable more research under the neural network-based recommendation framework.
ACM Reference format: Ting Chen, Yizhou Sun, Yue Shi, and Liangjie Hong. 2017. On Sampling Strategies for Neural Network-based Collaborative Filtering. In Proceedings of KDD â17, Halifax, NS, Canada, August 13-17, 2017, 14 pages. https://doi.org/10.1145/3097983.3098202
# *Now at Facebook.
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org. KDD â17, August 13-17, 2017, Halifax, NS, Canada © 2017 Copyright held by the owner/author(s). Publication rights licensed to Associa- tion for Computing Machinery. ACM ISBN 978-1-4503-4887-4/17/08. . . $15.00 https://doi.org/10.1145/3097983.3098202
1 INTRODUCTION Collaborative Filtering (CF) has been one of the most effective meth- ods in recommender systems, and methods like matrix factorization [17, 18, 27] are widely adopted. However, one of its limitation is the dealing of âcold-startâ problem, where there are few or no observed interactions for new users or items, such as in news recommenda- tion. To overcome this problem, hybrid methods are proposed to incorporate side information [7, 25, 28], or item content informa- tion [11, 31] into the recommendation algorithm. Although these methods can deal with side information to some extent, they are not effective for extracting features in complicated data, such as image, audio and text. On the contrary, deep neural networks have been shown very powerful at extracting complicated features from those data automatically [15, 19]. Hence, it is natural to combine deep learning with traditional collaborative filtering for recommendation tasks, as seen in recent studies [1, 4, 32, 37].
In this work, we generalize several state-of-the-art neural network-
based recommendation algorithms [1, 4, 30], and propose a more general framework that combines both collaborative filtering and deep neural networks in a unified fashion. The framework inherits the best of two worlds: (1) the power of collaborative filtering at capturing user preference via their interaction with items, and (2) that of deep neural networks at automatically extracting high-level features from content data. However, it also comes with a price. Traditional CF methods, such as sparse matrix factorization [17, 27], are usually fast to train, while the deep neural networks in gen- eral are much more computationally expensive [19]. Combining these two models in a new recommendation framework can easily increase computational cost by hundreds of times, thus require a new design of the training algorithm to make it more efficient.
We tackle the computational challenges by first establishing a connection between the loss functions and the user-item interaction bipartite graph. We realize the key issue when combining the CF and deep neural networks are in: the loss function terms are defined over the links, and thus sampling is on links for the stochastic gradient training, while the main computational burdens are located at nodes (e.g., Convolutional Neural Network computation for image of an item). For this type of loss functions, varied mini-batch sampling strategies can lead to different computational costs, depending on how many node computations are required in a mini-batch. The existing stochastic sampling techniques, such as IID sampling, are
inefficient, as they do not take into account the node computations that can be potentially shared across links/data points.
Inspired by the connection established, we propose three novel sampling strategies for the general framework that can take coupled computation costs across user-item interactions into consideration. The first strategy is Stratified Sampling, which try to amortize costly node computation by partitioning the links into different groups based on nodes (called stratum), and sample links based on these groups. The second strategy is Negative Sharing, which is based on the observation that interaction/link computation is fast, so once a mini-batch of user-item tuples are sampled, we share the nodes for more links by creating additional negative links between nodes in the same batch. Both strategies have their pros and cons, and to keep their advantages while avoid their weakness, we form the third strategy by combining the above two strategies. Theoretical analysis of computational cost and convergence is also provided.
• We propose a general hybrid recommendation framework (Neural Network-based Collaborative Filtering) combining CF and content-based methods with deep neural networks, which generalizes several state-of-the-art approaches.
• We establish a connection between the loss functions and the user-item interaction graph, based on which we propose sampling strategies that can significantly improve training efficiency (up to 30× faster in our experiments) as well as the recommendation performance of the proposed framework.
• We provide both theoretical analysis and empirical experiments to demonstrate the superiority of the proposed methods.
# 2 A GENERAL FRAMEWORK FOR NEURAL NETWORK-BASED COLLABORATIVE FILTERING
In this section, we propose a general framework for neural network-based Collaborative Filtering that incorporates both interaction and content information.
2.1 Text Recommendation Problem
In this work, we use the text recommendation task [1, 4, 31, 32] as an illustrative application for the proposed framework. However, the proposed framework can be applied to more scenarios such as music and video recommendation.
We use xu and xv to denote features of user u and item v, respectively. In the text recommendation setting, we set xu to a one-hot vector indicating u's user id (i.e. a binary vector with only a one at the u-th position)1, and xv to the text sequence, i.e. xv = (w1, w2, · · · , wt). A response matrix R̂ is used to denote the historical interactions between users and articles, where r̂uv indicates the interaction between a user u and an article v, such as "click-or-not" and "like-or-not". Furthermore, we consider R̂ as implicit feedback in this work, which means only positive interactions are provided, and non-interactions are treated as negative feedback implicitly.
1Other user profile features can be included, if available.
Figure 1: The functional embedding framework.
Given user/item features {xu}, {xv} and their historical interaction R̂, the goal is to learn a model which can rank new articles for an existing user u based on this user's interests and an article's text content.
2.2 Functional Embedding
In most existing matrix factorization techniques [17, 18, 27], each user/item ID is associated with a latent vector u or v (i.e., embedding), which can be considered as a simple linear transformation of the one-hot vector represented by their IDs, i.e. u_u = f(xu) = W^T xu (W is the embedding/weight matrix). Although simple, this direct association of user/item ID with representation makes it less flexible and unable to incorporate features such as text and image.
In order to effectively incorporate user and item features such as content information, it has been proposed to replace the embedding vectors u or v with functions such as decision trees [38] or specific neural networks [1, 4]. Generalizing the existing work, we propose to replace the original embedding vectors u and v with general differentiable functions f(·) ∈ R^d and g(·) ∈ R^d that take the user/item features xu, xv as their inputs. Since the user/item embeddings are the output vectors of functions, we call this approach Functional Embedding. After embeddings are computed, a score function r(u, v) can be defined based on these embeddings for a user/item pair (u, v), such as the vector dot product r(u, v) = f(xu)^T g(xv) (used in this work), or a general neural network. The model framework is shown in Figure 1. It is easy to see that our framework is very general, as it does not explicitly specify the feature extraction functions, as long as the functions are differentiable. In practice, these functions can be specified with neural networks such as a CNN or RNN for extracting high-level information from image, audio, or text sequences. When there are no features associated, it degenerates to conventional matrix factorization where user/item IDs are used as their features.
For simplicity, we will denote the output of f(xu ) and g(xv ) by fu and gv , which are the embedding vectors for user u and item v.
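To make the Functional Embedding idea concrete, the sketch below (our own illustrative code, not the authors' implementation; all variable and function names are hypothetical) shows matrix factorization as the special case where f is a one-hot lookup, and a stand-in content-based item function g that could be replaced by a CNN or RNN over the item text.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, vocab_size, d = 1000, 20000, 50

# MF-style user function: applying W^T to a one-hot x_u is just a row lookup.
W_user = rng.normal(scale=0.1, size=(n_users, d))
def f_user(user_id):
    return W_user[user_id]

# Stand-in item function over content features (mean of word embeddings);
# in the paper this slot is filled by a CNN or RNN over the item's text.
W_word = rng.normal(scale=0.1, size=(vocab_size, d))
def g_item(word_ids):
    return W_word[word_ids].mean(axis=0)

def score(user_id, word_ids):
    # r(u, v) = f(x_u)^T g(x_v), the dot-product interaction used in this work
    return float(f_user(user_id) @ g_item(word_ids))

print(score(3, [11, 42, 7, 980]))
```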
2.3 Loss Functions for Implicit Feedback
In many real-world applications, users only provide positive signals according to their preferences, while negative signals are usually implicit. This is usually referred to as "implicit feedback" [13, 23, 26].
# Table 1: Examples of loss functions for recommendation.
Pointwise loss:
  SG-loss [22]: $-\sum_{(u,v)\in\mathcal{D}} \big( \log\sigma(\mathbf{f}_u^\top \mathbf{g}_v) + \lambda\,\mathbb{E}_{v'\sim P_n}\log\sigma(-\mathbf{f}_u^\top \mathbf{g}_{v'}) \big)$
  MSE-loss [30]: $\sum_{(u,v)\in\mathcal{D}} \big( (\hat{r}_{uv} - \mathbf{f}_u^\top \mathbf{g}_v)^2 + \lambda\,\mathbb{E}_{v'\sim P_n}(\hat{r}_{uv'} - \mathbf{f}_u^\top \mathbf{g}_{v'})^2 \big)$
Pairwise loss:
  Log-loss [26]: $-\sum_{(u,v)\in\mathcal{D}} \mathbb{E}_{v'\sim P_n}\log\sigma(\mathbf{f}_u^\top \mathbf{g}_v - \mathbf{f}_u^\top \mathbf{g}_{v'})$
  Hinge-loss [33]: $\sum_{(u,v)\in\mathcal{D}} \mathbb{E}_{v'\sim P_n}\max(\mathbf{f}_u^\top \mathbf{g}_{v'} - \mathbf{f}_u^\top \mathbf{g}_v + \gamma,\, 0)$
In this work, we consider two types of loss functions that can handle recommendation tasks with implicit feedback, namely, pointwise loss functions and pairwise loss functions. Pointwise loss functions have been applied to such problems in much existing work. In [1, 30, 32], the mean square error loss (MSE) has been applied, where "negative terms" are weighted less. And the skip-gram (SG) loss has been successfully utilized to learn robust word embeddings [22].
These loss functions are summarized in Table 1. Note that we use a weighted expectation term over all negative samples, which can be approximated with a small number of samples. We can also abstract the pointwise loss functions into the following form:
$$\mathcal{L}_{\text{pointwise}} = \mathbb{E}_{u\sim P_d(u)}\Big[\,\mathbb{E}_{v\sim P_d(v|u)}\,c^{+}_{uv}\,\mathcal{L}^{+}(u,v|\theta) + \mathbb{E}_{v'\sim P_n(v')}\,c^{-}_{uv'}\,\mathcal{L}^{-}(u,v'|\theta)\,\Big] \qquad (1)$$

where Pd is the (empirical) data distribution, Pn is a user-defined negative data distribution, c denotes user-defined weights for the different user-item pairs, θ denotes the set of all parameters, L+(u, v|θ) denotes the loss function on a single positive pair (u, v), and L−(u, v'|θ) denotes the loss on a single negative pair. Generally speaking, given a user u, a pointwise loss function encourages high scores with positive items {v}, and discourages high scores with negative items {v'}.
When it comes to ranking problems, as commonly seen in the implicit feedback setting, some have argued that pairwise loss is advantageous [26, 33], as pairwise loss encourages ranking positive items above negative items for the given user. Different from their pointwise counterparts, pairwise loss functions are defined on a triplet (u, v, v'), where v is a positive item and v' is a negative item for the user u. Table 1 also gives two instances of such loss functions used in existing papers [26, 33] (with γ being the pre-defined "margin" parameter). We can also abstract pairwise loss functions by the following form:
$$\mathcal{L}_{\text{pairwise}} = \mathbb{E}_{u\sim P_d(u)}\Big[\,\mathbb{E}_{v\sim P_d(v|u)}\,\mathbb{E}_{v'\sim P_n(v')}\,c_{uvv'}\,\mathcal{L}(u,v,v'|\theta)\,\Big] \qquad (2)$$

where the notations are similarly defined as in Eq. 1 and L(u, v, v'|θ) denotes the loss function on the triplet (u, v, v').
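As a hedged illustration of how these abstract losses can be evaluated on sampled pairs and triplets, the snippet below implements the SG-loss (pointwise) and Log-loss (pairwise) from Table 1 for a single positive pair, approximating the expectation over negatives by an average over a few sampled negative embeddings (our own simplified code; λ and the Monte-Carlo averaging are illustrative choices, not the paper's settings).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sg_pointwise_loss(f_u, g_v, g_negs, lam=1.0):
    # L+ on the observed pair plus a weighted L- averaged over sampled negatives.
    pos = -np.log(sigmoid(f_u @ g_v))
    neg = -np.mean([np.log(sigmoid(-(f_u @ g_n))) for g_n in g_negs])
    return pos + lam * neg

def log_pairwise_loss(f_u, g_v, g_negs):
    # Encourage the positive item to rank above each sampled negative item.
    return -np.mean([np.log(sigmoid(f_u @ g_v - f_u @ g_n)) for g_n in g_negs])

rng = np.random.default_rng(1)
f_u, g_v = rng.normal(size=50), rng.normal(size=50)
g_negs = rng.normal(size=(5, 50))
print(sg_pointwise_loss(f_u, g_v, g_negs), log_pairwise_loss(f_u, g_v, g_negs))
```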
# 2.4 Stochastic Gradient Descent Training and Computational Challenges
To train the model, we use stochastic gradient descent based algorithms [3, 16], which are widely used for training matrix factorization and neural networks. The main flow of the training algorithm is summarized in Algorithm 1.
# Algorithm 1 Standard model training procedure
while not converged do
    // mini-batch sampling
    draw a mini-batch of user-item tuples (u, v)2
    // forward pass
    compute f(xu), g(xv) and their interaction f(xu)^T g(xv)
    compute the loss function L
    // backward pass
    compute gradients and apply SGD updates
end while
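A minimal runnable rendering of Algorithm 1 is sketched below for the matrix-factorization special case with an SG-style loss and one negative item per positive link (our own illustrative code; the batch size, learning rate and loss are arbitrary choices, not the paper's settings).

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, d, lr = 100, 200, 16, 0.05
U = rng.normal(scale=0.1, size=(n_users, d))   # parameters of f (plain user embeddings)
V = rng.normal(scale=0.1, size=(n_items, d))   # parameters of g (plain item embeddings)
positives = [(int(rng.integers(n_users)), int(rng.integers(n_items))) for _ in range(1000)]

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for step in range(1000):                             # "while not converged" in Algorithm 1
    u, v = positives[rng.integers(len(positives))]   # mini-batch sampling (batch of one link)
    v_neg = int(rng.integers(n_items))               # one sampled negative item
    fu, gv, gn = U[u].copy(), V[v].copy(), V[v_neg].copy()   # forward pass: embeddings
    # backward pass for -log sigmoid(fu.gv) - log sigmoid(-fu.gn), then SGD updates
    grad_pos = -(1.0 - sigmoid(fu @ gv))
    grad_neg = sigmoid(fu @ gn)
    U[u] -= lr * (grad_pos * gv + grad_neg * gn)
    V[v] -= lr * grad_pos * fu
    V[v_neg] -= lr * grad_neg * fu
```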
By adopting functional embedding with (deep) neural networks, we can increase the power of the model, but this also comes with a cost. Figure 2 shows the training time (for the CiteULike data) with different item functions g(·), namely linear embedding taking the item id as feature (equivalent to conventional MF), CNN-based content embedding, and RNN/LSTM-based content embedding. We see orders-of-magnitude increases in training time for the latter two embedding functions, which may create barriers to adopting models under this framework.
Breaking down the computation of the framework, there are three major parts of computational cost. The first part is the user-based computation (denoted by t_f time units per user), which includes the forward computation of the user function f(xu) and the backward computation of the function output w.r.t. its parameters. The second part is the item-based computation (denoted by t_g time units per item), which similarly includes the forward computation of the item function g(xv) as well as the backward computation. The third part is the computation of the interaction function (denoted by t_i time units per interaction). The total computational cost for a mini-batch is then t_f × # of users + t_g × # of items + t_i × # of interactions, plus some other minor operations which we assume negligible. In the text recommendation application, user IDs are used as user features (which can be seen as a linear layer on top of the one-hot inputs), (deep) neural networks are used for text sequences, and the vector dot product is used as the interaction function; thus the dominant computational cost is t_g (orders of magnitude larger than t_f and t_i). In other words, we assume t_g ≫ t_f, t_i in this work.
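The following back-of-the-envelope calculation illustrates this cost decomposition; the unit costs are made-up numbers chosen only to reflect the assumption t_g ≫ t_f, t_i.

```python
# Illustrative unit costs (arbitrary units), with the item network dominating.
t_f, t_g, t_i = 1.0, 100.0, 0.01

def batch_cost(n_users, n_items, n_interactions):
    return t_f * n_users + t_g * n_items + t_i * n_interactions

b, k = 512, 10
# b positives plus b*k negatives with (almost) no repeated items:
print(batch_cost(b * (1 + k), b * (1 + k), b * (1 + k)))   # dominated by the t_g term
# The same number of links, but only b distinct items whose g(.) outputs are shared:
print(batch_cost(b * (1 + k), b, b * (1 + k)))             # roughly an order of magnitude cheaper
```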
Figure 2: Model training time per epoch with different types of item functions (in log-scale).
2Draw a mini-batch of user-item triplets (u, v, v â²) if a pairwise loss function is adopted.
Figure 3: The bipartite interaction graph for pointwise loss functions, where loss functions are defined over links. The pairwise loss functions are defined over pairs of links.
# 3 MINI-BATCH SAMPLING STRATEGIES FOR EFFICIENT MODEL TRAINING
In this section, we propose and discuss different sampling strategies that can improve the efficiency of the model training.
3.1 Computational Cost in a Graph View
Before discussing different sampling strategies, we motivate our readers by first making a connection between the loss functions and the bipartite graph of user-item interactions. In the loss functions laid out before, we observe that each loss function term in Eq. 1, namely L(u, v), involves a pair of a user and an item, which corresponds to a link in their interaction graph. There are two types of links corresponding to the two types of loss terms in the loss functions, i.e., positive links/terms and negative links/terms. A similar analysis holds for the pairwise loss in Eq. 2, though there are slight differences as each single loss function corresponds to a pair of links with opposite signs on the graph. We can also establish a correspondence between user/item functions and nodes in the graph, i.e., f(u) to user node u and g(v) to item node v. The connection is illustrated in Figure 3. Since the loss functions are defined over the links, we name them "graph-based" loss functions to emphasize the connection.
The key observation for graph-based loss functions is that the loss functions are defined over links, but the major computational burden is located at nodes (due to the use of the costly g(·) function). Since each node is associated with multiple links, which correspond to multiple loss function terms, the computational costs of loss functions over links are coupled (as they may share the same nodes) when using mini-batch based SGD. Hence, varied sampling strategies yield different computational costs. For example, when we put links connected to the same node together in a mini-batch, the computational cost can be lowered as there are fewer g(·) to compute3. This is in great contrast to conventional optimization problems, where each loss function term does not couple with others in terms of computation cost.
3This holds for both forward and backward computation. For the latter, the gradient from different links can be aggregated before back-propagating to g(·).
3.2 Existing Mini-Batch Sampling Strategies
In the standard SGD sampler, (positive) data samples are drawn uniformly at random for gradient computation. Due to the presence of negative samples, we draw negative samples from some predefined probability distribution, i.e. (u', v') ∼ Pn(u', v'). We call this approach "IID Sampling", since each positive link is independently and identically distributed, and the same holds for negative links (with a different distribution).
Many existing algorithms with graph-based loss functions [1, 22, 29] adopt the "Negative Sampling" strategy, in which k negative samples are drawn whenever a positive example is drawn. The negative samples are sampled based on the positive ones by replacing the items in the positive samples. This is illustrated in Algorithm 2 and Figure 4(a).
Algorithm 2 Negative Sampling [1, 21, 29]
Require: number of positive links in a mini-batch: b, number of negative links per positive one: k
draw b positive links uniformly at random
for each of the b positive links do
    draw k negative links by replacing the true item v with v' ∼ Pn(v')
end for
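A small illustrative implementation of this batch construction is given below (our own sketch; the uniform P_n here is just a stand-in for the unigram distribution used in the paper, and all names are hypothetical).

```python
import numpy as np

def negative_sampling_batch(positive_links, item_probs, b, k, rng):
    """Algorithm-2-style mini-batch: b positive links, k sampled negatives per positive."""
    idx = rng.choice(len(positive_links), size=b, replace=False)
    batch = []
    for u, v in (positive_links[i] for i in idx):
        batch.append((u, v, +1))
        neg_items = rng.choice(len(item_probs), size=k, p=item_probs)   # v' ~ P_n(v')
        batch.extend((u, int(v_neg), -1) for v_neg in neg_items)
    return batch

rng = np.random.default_rng(0)
links = [(u, int(rng.integers(1000))) for u in range(5000)]
item_probs = np.full(1000, 1.0 / 1000)      # uniform stand-in for the unigram P_n
print(len(negative_sampling_batch(links, item_probs, b=4, k=3, rng=rng)))   # 4 * (1 + 3) = 16
```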
The IID Sampling strategy does not take into account the property of graph-based loss functions, since samples are completely independent of each other. Hence, the computational cost in a single mini-batch cannot be amortized across different samples, leading to very extensive computation with (deep) neural networks. Negative Sampling does not really help, since the item function computation cost t_g is the dominant one. To be more specific, consider a mini-batch with b(1 + k) links sampled by IID Sampling or Negative Sampling: we have to conduct the item-based g(·) computation b(1 + k) times, since items in a mini-batch are unlikely to overlap when the item set is sufficiently large.
# 3.3 The Proposed Sampling Strategies
3.3.1 Stratified Sampling (by Items). Motivated by the connection between the loss functions and the bipartite interaction graph as shown in Figure 3, we propose to sample links that share nodes, in particular those with high computational cost (i.e. t_g for the item function g(·) in our case). By doing so, the computational cost within a mini-batch can be amortized, since fewer costly functions are computed (in both forward and backward propagations).
In order to achieve this, we (conceptually) partition the links, which correspond to loss function terms, into strata. A stratum is a set of links on the bipartite graph sharing the same source or destination node. Instead of drawing links directly for training, we first draw a stratum and then draw both positive and negative links. Since we want each stratum to share the same item, we can directly draw an item and then sample its links. The details are given in Algorithm 3 and illustrated in Figure 4(b).
Compared to Negative Sampling in Algorithm 2, there are several differences: (1) Stratified Sampling can be based on either items or users, while in Negative Sampling only negative items are drawn; and (2) each node in Stratified Sampling can be associated with more than one positive link (i.e., s > 1, which can help improve the
(a) Negative (b) Stratified (by Items) (c) Negative Sharing (d) Stratified with N.S.
Figure 4: Illustration of four different sampling strategies. 4(b)-4(d) are the proposed sampling strategies. Red lines denote positive links/interactions, and black lines denote negative links/interactions.
speedup as shown below), while in Negative Sampling each node is only associated with one positive link.

Algorithm 3 Stratified Sampling (by Items)
Require: number of positive links in a mini-batch: b, number of positive links per stratum: s, number of negative links per positive one: k
repeat
    draw an item v ∼ Pd(v)
    draw s positive users {u} of v uniformly at random
    draw k × s negative users {u'} ∼ Pd(u')
until a mini-batch of b positive links is sampled

Now we consider its speedup for a mini-batch including b positive links/interactions and bk negative ones, which contains b(1 + k) users and b/s items. Stratified Sampling (by Items) only requires b/s computations of the g(·) function, while Negative Sampling requires b(1 + k) computations. Assuming t_g ≫ t_f, t_i, i.e. the computation cost is dominated by the item function g(·), Stratified Sampling (by Items) can provide an s(1 + k) times speedup in a mini-batch. With s = 4, k = 10 as used in some of our experiments, it yields up to a 40× speedup. However, it is worth pointing out that item-based Stratified Sampling cannot be applied to pairwise loss functions, which compare preferences over items for a given user.

3.3.2 Negative Sharing. The idea of Negative Sharing is inspired by a different aspect of the connection between the loss functions and the bipartite interaction graph. Since t_i ≪ t_g, i.e. the computational cost of the interaction function (dot product) is negligible compared to that of the item function, once a mini-batch of users and items is sampled, increasing the number of interactions among them does not result in a significant increase of computational cost. This can be achieved by creating a complete bipartite graph for a mini-batch, adding negative links between all non-interacting pairs of users and items. Using this strategy, we do not need to draw any negative links at all!

More specifically, consider IID Sampling: when b positive links are sampled, b users and b items are involved (assuming the sizes of the user set and item set are much larger than b). Note that there are b(b − 1) non-interactions in the mini-batch, which are not considered in IID Sampling or Negative Sampling; instead, they draw additional negative samples. Since the main computational cost of training is the node computation, and the node set is fixed given the batch of b positive links, we can share the nodes for negative links without increasing the computational burden much. Based on this idea, Algorithm 4 summarizes an extremely simple sampling procedure, illustrated in Figure 4(c).

Algorithm 4 Negative Sharing
Require: number of positive links in a mini-batch: b
draw b positive user-item pairs {(u, v)} uniformly at random
construct negative pairs by connecting non-linked users and items in the batch
Since Negative Sharing avoids sampling k negative links, it only involves b items, while Negative Sampling involves b(1 + k) items. So it can provide a (1 + k) times speedup compared to Negative Sampling (assuming t_g ≫ t_f, t_i, and the total interaction cost is still insignificant). Given that the batch size b is usually larger than k (e.g., b = 512, k = 20 in our experiments), many more negative links (e.g. 512 × 511) will also be considered, which is helpful for both faster convergence and better performance, as shown in our experiments. However, as the number of negative samples increases, the performance and the convergence will not improve linearly; diminishing returns are expected.
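The sketch below contrasts how mini-batches could be built under Algorithm 3 (Stratified Sampling by Items) and Algorithm 4 (Negative Sharing); it is our own illustrative code with hypothetical names, using uniform stand-ins for Pd and Pn.

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)
links = list(zip(rng.integers(0, 5000, 20000).tolist(), rng.integers(0, 1000, 20000).tolist()))
users_of_item = defaultdict(list)
for u, v in links:
    users_of_item[v].append(u)
items = list(users_of_item)

def stratified_by_items(b, s, k):
    """Algorithm-3-style batch: each stratum shares one item, so only ~b/s items need g(.)."""
    batch, n_pos = [], 0
    while n_pos < b:
        v = items[rng.integers(len(items))]                 # item v (uniform stand-in for P_d(v))
        pos_users = rng.choice(users_of_item[v], size=s)    # s positive users of v
        neg_users = rng.integers(0, 5000, size=k * s)       # k*s negative users (stand-in for P_d(u'))
        batch += [(int(u), v, +1) for u in pos_users]
        batch += [(int(u), v, -1) for u in neg_users]
        n_pos += s
    return batch

def negative_sharing(b):
    """Algorithm-4-style batch: sample positives only; every non-linked in-batch
    (user, item) pair is treated as a shared negative, so no extra g(.) calls."""
    idx = rng.choice(len(links), size=b, replace=False)
    batch_links = [links[i] for i in idx]
    linked = set(batch_links)
    users = [u for u, _ in batch_links]
    batch_items = [v for _, v in batch_links]
    negatives = [(u, v) for u in users for v in batch_items if (u, v) not in linked]
    return batch_links, negatives

print(len(stratified_by_items(b=8, s=4, k=10)))    # 8 positives + 80 negatives = 88 links
pos, neg = negative_sharing(b=8)
print(len(pos), len(neg))                           # 8 positives, ~8*7 shared negatives
```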
3.3.3 Stratified Sampling with Negative Sharing. The two strategies above can both reduce the computational cost by smarter sampling of the mini-batch. However, they both have weaknesses: Stratified Sampling cannot deal with pairwise loss and is still dependent on the number of negative examples k, and Negative Sharing introduces a lot of negative samples which may be unnecessary due to diminishing returns.
The good news is that the two sampling strategies are proposed from different perspectives, and combining them can preserve their advantages while avoiding their weaknesses. This leads to Stratified Sampling with Negative Sharing, which can be applied to both pointwise and pairwise loss functions, and which allows a flexible ratio between positive and negative samples (i.e. more positive links given the same negative links compared to Negative Sharing). To do so, we sample positive links according to Stratified Sampling, and then sample/create negative links by treating non-interactions as negative links. The details are given in Algorithm 5 and illustrated in Figure 4(d).
Computationally, Stratified Sampling with Negative Sharing only involves b/s item nodes in a mini-batch, so it can provide the same
Algorithm 5 Stratified Sampling with Negative Sharing
Require: number of positive links in a mini-batch: b, number of positive links per stratum: s
repeat
    draw an item v ∼ Pd(v)
    draw s positive users of item v uniformly at random
until a mini-batch of b/s items is sampled
construct negative pairs by connecting non-linked users and items in the batch
s(1 + k) times speedup over Negative Sampling as Stratified Sampling (by Items) does, but it utilizes many more negative links than Negative Sampling. For example, in our experiments with b = 512, s = 4, we have 127 negative links per positive one, much larger than k = 10 in Negative Sampling, and it requires only 1/4 of the g(·) computations compared to Negative Sharing.
3.3.4 Implementation Details. When the negative/noise distribution Pn is not unigram4, we need to adjust the loss function in order to make sure the stochastic gradient is unbiased. For pointwise loss, each negative term is adjusted by multiplying a weight of Pn(v')/Pd(v'); for pairwise loss, each term based on a triplet (u, v, v') is adjusted by multiplying a weight of Pn(v')/Pd(v') for the sampled negative item.
Instead of sampling, we prefer to use shuffling as much as we can, which produces unbiased samples while yielding zero variance. This can be a useful trick for achieving better performance when the number of drawn samples is not large enough for each loss term. For IID and Negative Sampling, this can be easily done for positive links by simply shuffling them. As for Stratified Sampling (w./wo. Negative Sharing), instead of shuffling the positive links directly, we shuffle the randomly formed strata (where each stratum contains roughly a single item)5. All other necessary sampling operations required are sampling from discrete distributions, which can be done in O(1) with the Alias method.
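For completeness, a compact version of the alias method (Vose's variant) is sketched below; this is our own illustrative implementation, not code from the paper.

```python
import numpy as np

def build_alias_table(probs):
    """Preprocess a discrete distribution so that each draw costs O(1)."""
    n = len(probs)
    prob, alias = np.zeros(n), np.zeros(n, dtype=int)
    scaled = np.asarray(probs, dtype=float) * n
    small = [i for i, p in enumerate(scaled) if p < 1.0]
    large = [i for i, p in enumerate(scaled) if p >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        prob[s], alias[s] = scaled[s], l
        scaled[l] -= 1.0 - scaled[s]
        (small if scaled[l] < 1.0 else large).append(l)
    for i in small + large:
        prob[i] = 1.0
    return prob, alias

def alias_draw(prob, alias, rng):
    i = int(rng.integers(len(prob)))
    return i if rng.random() < prob[i] else int(alias[i])

rng = np.random.default_rng(0)
prob, alias = build_alias_table([0.5, 0.3, 0.1, 0.1])
print([alias_draw(prob, alias, rng) for _ in range(10)])
```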
In Negative Sharing (w./wo. Stratified Sampling), we can compute the user-item interactions with a more efficient operator, i.e. replacing the vector dot product between each pair of (f, g) with a matrix multiplication between (F, G), where F = [f_{u1}, · · · , f_{un}], G = [g_{v1}, · · · , g_{vm}]. Since matrix multiplication is at a higher BLAS level than vector multiplication [14], even though we increase the number of interactions, with medium matrix sizes (e.g. 1000 × 1000) it does not affect the computational cost much in practice.
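In code, this amounts to replacing a loop of per-pair dot products with a single matrix product over the stacked embeddings; a minimal sketch (our own, with rows rather than columns holding the embeddings) is shown below.

```python
import numpy as np

rng = np.random.default_rng(0)
b, d = 512, 50
F = rng.normal(size=(b, d))      # row i holds f_{u_i} for the i-th sampled user
G = rng.normal(size=(b, d))      # row j holds g_{v_j} for the j-th sampled item

# One BLAS-3 call computes all b*b in-batch interactions; the diagonal contains the
# positive-pair scores and the off-diagonal entries are the shared negative scores.
S = F @ G.T
positive_scores = np.diag(S)
print(S.shape, positive_scores.shape)
```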
# 3.4 Computational Cost and Convergence Analysis
Here we provide a summary of the computational cost of the different sampling strategies discussed above, and also analyze their convergence. Two aspects that can lead to speedup are analyzed: (1) the computational cost for a mini-batch, i.e. per iteration, and (2) the number of iterations required to reach some referenced loss.
4Unigram means proportional to item frequency, such as node degree in the user-item interaction graph. 5This can be done by first shuffling the users associated with each item, and then concatenating all links according to items in random order; random strata are then formed by segmenting the list.
3.4.1 Computational Cost. To fairly compare different sampling strategies, we fix the same number of positive links in each mini-batch, which corresponds to the positive terms in the loss function. Table 2 shows the computational cost of different sampling strategies for a given mini-batch. Since t_g ≫ t_f, t_i in practice, we approximate the theoretical speedup per iteration by comparing the number of t_g computations. We can see that the proposed sampling strategies can provide (1 + k) times (Negative Sharing) or s(1 + k) times (Stratified Sampling w./wo. Negative Sharing) speedup for each iteration compared to IID Sampling or Negative Sampling. As for the number of iterations to reach a reference loss, it is related to the number of negative samples utilized, which is analyzed below.
3.4.2 Convergence Analysis. We want to make sure that SGD training under the proposed sampling strategies converges correctly. The necessary condition for this to hold is that the stochastic gradient estimator be unbiased, which leads us to the following lemma.
Lemma 1. (unbiased stochastic gradient) Under sampling Algorithms 2, 3, 4, and 5, we have E_B[∇L_B(θ^t)] = ∇L(θ^t). In other words, the stochastic mini-batch gradient equals the true gradient in expectation.
This holds for both pointwise loss and pairwise loss. It is guaranteed since we draw samples stochastically and re-weight certain samples accordingly. The detailed proof can be found in the supplementary material.
Given this lemma, we can further analyze the convergence behavior of the proposed sampling strategies. Due to the highly non-linear and non-convex functions composed by (deep) neural networks, the convergence rate is usually difficult to analyze. So we show that SGD with the proposed sampling strategies follows a local convergence bound (similar to [10, 24]).
Proposition 1. (local convergence) Suppose L has σ-bounded gradient; let η_t = η = c/√T where c = √(2(L(θ^0) − L(θ^*))/(Lσ^2)), and θ^* is the minimizer of L. Then, the following holds for the proposed sampling strategies given in Algorithms 2, 3, 4, 5:

$$\min_{0\le t\le T-1}\ \mathbb{E}\big[\|\nabla\mathcal{L}(\theta^t)\|^2\big] \;\le\; \sqrt{\frac{2\big(\mathcal{L}(\theta^0)-\mathcal{L}(\theta^*)\big)L}{T}}\;\sigma$$
The detailed proof is also given in the supplementary material. Furthermore, utilizing more negative links in each mini-batch can lower the expected stochastic gradient variance. As shown in [35, 36], the reduction of variance can lead to faster convergence. This suggests that Negative Sharing (w./wo. Stratified Sampling) has better convergence than the Stratified Sampling (by Items).
4 EXPERIMENTS
4.1 Data Sets
Two real-world text recommendation data sets are used for the experiments. The first data set, CiteULike, collected from CiteULike.org, is provided in [31]. The CiteULike data set contains users bookmarking papers, where each paper is associated with a title and an abstract. The second data set is a random subset of Yahoo!
Table 2: Computational cost analysis for a batch of b positive links. We use vec to denote vector multiplication, and mat to denote matrix multiplication. Since t_g ≫ t_f, t_i in practice, the theoretical speedup per iteration can be approximated by comparing the number of t_g computations. The number of iterations to reach a referenced loss is related to the number of negative links in each mini-batch.

Sampling              | # pos. links | # neg. links | # t_f  | # t_g  | # t_i         | pointwise | pairwise
IID [3]               | b            | bk           | b(1+k) | b(1+k) | b(1+k) vec    | ✓         | ×
Negative [1, 21, 29]  | b            | bk           | b      | b(1+k) | b(1+k) vec    | ✓         | ✓
Stratified (by Items) | b            | bk           | b(1+k) | b/s    | b(1+k) vec    | ✓         | ×
Negative Sharing      | b            | b(b−1)       | b      | b      | b × b mat     | ✓         | ✓
Stratified with N.S.  | b            | b(b−1)/s     | b      | b/s    | b × (b/s) mat | ✓         | ✓
News data set6, which contains users clicking on news presented at Yahoo!. There are 5,551 users, 16,980 items, and a total of 204,986 positive interactions in the CiteULike data. As for the Yahoo! News data, there are 10,000 users, 58,579 items and 515,503 interactions.
Table 3: Comparison of speedup for different sampling strategies against IID Sampling: per iteration, # of iterations, and total speedup.

Model | Sampling       | CiteULike per it. | CiteULike # of it. | CiteULike total | News per it. | News # of it. | News total
CNN   | Negative       | 1.02  | 1.00 | 1.02  | 1.03  | 1.03 | 1.06
CNN   | Stratified     | 8.83  | 0.97 | 8.56  | 6.40  | 0.97 | 6.20
CNN   | N.S.           | 8.42  | 2.31 | 19.50 | 6.54  | 2.21 | 14.45
CNN   | Strat. w. N.S. | 15.53 | 1.87 | 29.12 | 11.49 | 2.17 | 24.98
LSTM  | Negative       | 0.99  | 0.96 | 0.95  | 1.0   | 1.25 | 1.25
LSTM  | Stratified     | 3.1   | 0.77 | 2.38  | 3.12  | 1.03 | 3.22
LSTM  | N.S.           | 2.87  | 2.45 | 7.03  | 2.78  | 4.14 | 11.5
LSTM  | Strat. w. N.S. | 3.4   | 2.22 | 7.57  | 3.13  | 3.32 | 10.41

Following [4], we select a portion (20%) of items to form the pool of test items. All user interactions with those test items are held out during training; only the remaining user-item interactions are used as training data, which simulates the scenario of recommending newly-emerged text articles.

4.2 Experimental Settings
The main purpose of the experiments is to compare the efficiency and effectiveness of our proposed sampling strategies against existing ones. So we mainly compare Stratified Sampling, Negative Sharing, and Stratified Sampling with Negative Sharing, against IID Sampling and Negative Sampling. It is worth noting that several existing state-of-the-art models [1, 4, 30] are special cases of our framework (e.g. using MSE-loss/Log-loss with CNN or RNN), so they are compared against other loss functions under our framework.
Evaluation Metrics. For recommendation performance, we follow [1, 32] and use recall@M. As pointed out in [32], precision is not a suitable performance measure, since a non-interaction may be due to (1) the user not being interested in the item, or (2) the user not paying attention to its existence. More specifically, for each user, we rank candidate test items based on the predicted scores, and then compute recall@M based on the list. Finally, recall@M is averaged over all users.
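A straightforward way to compute this metric is sketched below (our own illustrative code; the random inputs only demonstrate the expected shapes).

```python
import numpy as np

def recall_at_m(scores, held_out, m=50):
    """scores: (n_users, n_candidate_items) predicted scores over the test item pool;
    held_out: per-user sets of test items the user actually interacted with."""
    recalls = []
    for u, positives in enumerate(held_out):
        if not positives:
            continue
        top = set(np.argsort(-scores[u])[:m].tolist())
        recalls.append(len(positives & top) / len(positives))
    return float(np.mean(recalls))

rng = np.random.default_rng(0)
scores = rng.normal(size=(100, 500))
held_out = [set(rng.integers(0, 500, size=3).tolist()) for _ in range(100)]
print(recall_at_m(scores, held_out, m=50))
```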
As for the computational cost, we measure it mainly in three dimensions: the training time for each iteration (or, equivalently, epoch, since the batch size is fixed for all methods), the number of iterations needed to reach a referenced loss, and the total amount of computation time needed to reach the same loss. In our experiments, we use the smallest loss obtained by IID Sampling within a maximum of 30 epochs as the referenced loss. Note that all time measures mentioned here are wall time.
Parameter Settings. The key parameters are tuned on a validation set, while others are simply set to reasonable values. We adopt Adam [16] as the stochastic optimizer. We use the same batch size b = 512 for all sampling strategies, and the number of positive links per sampled stratum is s = 4; the learning rate is set to 0.001 for MSE-loss, and 0.01 for the others. γ is set to 0.1 for Hinge-loss, and 10 for the others. λ is
set to 8 for MSE-loss, and 128 for the others. We set the number of negative examples to k = 10 for convolutional neural networks, and k = 5 for RNN/LSTM due to the GPU memory limit. All experiments are run with Titan X GPUs. We use a unigram noise/negative distribution. For the CNN, we adopt a structure similar to [15] and use 50 filters with a filter size of 3. Regularization is added using both weight decay on user embeddings and dropout on item embeddings. For the RNN, we use an LSTM [12] with 50 hidden units. For both models, the dimensions of user and word embeddings are set to 50. Early stopping is utilized, and the experiments are run for a maximum of 30 epochs.
# 4.3 Speedup Under Different Sampling Strategies
Table 3 breaks down the speedup into (1) the speedup for training on a given mini-batch, (2) the speedup in the number of iterations (to reach the referenced cost), and (3) the total speedup, which is the product of the first two. Different strategies are compared against IID Sampling. It is shown that Negative Sampling has a similar computational cost to IID Sampling, which fits our projection. All three proposed sampling strategies can significantly reduce the computation cost within a mini-batch. Moreover, Negative Sharing and Stratified Sampling with Negative Sharing further improve convergence w.r.t. the number of iterations, which demonstrates the benefit of using a larger number of negative examples.
# 6https://webscope.sandbox.yahoo.com/catalog.php?datatype=r&did=75
(a) Citeulike (epoch) (b) Citeulike (wall time) (c) News (epoch) (d) News (wall time)
Figure 5: Training loss curves (all methods have the same number of b positive samples in a mini-batch)
(a) Citeulike (epoch) (b) Citeulike (wall time) (c) News (epoch) (d) News (wall time)
Figure 6: Test performance/recall curves (all methods have the same number of b positive samples in a mini-batch).
Figures 5 and 6 show the convergence curves of both loss and test performance for different sampling strategies (with CNN + SG-loss). In both figures, we measure progress every epoch, which is equivalent to a fixed number of iterations since all methods use the same batch size b. We can observe mainly two types of convergence behavior. Firstly, in terms of the number of iterations, Negative Sharing (w./wo. Stratified Sampling) converges fastest, which is attributable to the number of negative samples used. Secondly, in terms of wall time, Negative Sharing (w./wo. Stratified Sampling) and Stratified Sampling (by Items) are all significantly faster than the baseline sampling strategies, i.e. IID Sampling and Negative Sampling. It is also interesting to see that overfitting occurs earlier as convergence speeds up, which does no harm as early stopping can be used.
For Stratified Sampling (w./wo. negative sharing), the number of positive links per stratum s can also play a role to improve speedup as we analyzed before. As shown in Figure 7, the convergence time as well as recommendation performance can both be improved with a reasonable s, such as 4 or 8 in our case.
(a) Loss (Stratified) (b) Loss (Stratified with N.S.) (c) Recall (Stratified) (d) Recall (Stratified with N.S.)

Figure 7: The number of positive links per stratum s vs. loss and performance.
# 4.4 Recommendation Performance Under Different Sampling Strategies
It is shown in the above experiments that the proposed sampling strategies are significantly faster than the baselines. But we would also like to further assess the recommendation performance obtained by adopting the proposed strategies.
Table 4 compares the proposed sampling strategies with CNN/RNN models and four loss functions (both pointwise and pairwise). We can see that IID Sampling, Negative Sampling and Stratified Sampling (by Items) have similar recommendation performance, which is expected since they all utilize the same amount of negative links. For Negative Sharing and Stratified Sampling with Negative Sharing, since many more negative samples are utilized, their performance is significantly better. We also observe that the current recommendation models based on MSE-loss [1, 30] can be improved by other losses such as SG-loss and pairwise loss functions [4].
To further investigate the superior performance brought by Negative Sharing, we study the number of negative examples k and the convergence performance. Figure 8 shows the test performance against various k. As shown in the figure, we observe a clear diminishing return in the improvement of performance. However,
Table 4: Recall@50 for different sampling strategies under different models and losses.
Model | Sampling             | CiteULike SG-loss | CiteULike MSE-loss | CiteULike Hinge-loss | CiteULike Log-loss | News SG-loss | News MSE-loss | News Hinge-loss | News Log-loss
CNN   | IID                  | 0.4746 | 0.4437 | -      | -      | 0.1091 | 0.0929 | -      | -
CNN   | Negative             | 0.4725 | 0.4408 | 0.4729 | 0.4796 | 0.1083 | 0.0956 | 0.1013 | 0.1009
CNN   | Stratified           | 0.4761 | 0.4394 | -      | -      | 0.1090 | 0.0913 | -      | -
CNN   | Negative Sharing     | 0.4866 | 0.4423 | 0.4794 | 0.4769 | 0.1131 | 0.0968 | 0.0909 | 0.0932
CNN   | Stratified with N.S. | 0.4890 | 0.4535 | 0.4790 | 0.4884 | 0.1196 | 0.1043 | 0.1059 | 0.1100
LSTM  | IID                  | 0.4479 | 0.4718 | -      | -      | 0.0971 | 0.0998 | -      | -
LSTM  | Negative             | 0.4371 | 0.4668 | 0.4321 | 0.4540 | 0.0977 | 0.0977 | 0.0718 | 0.0711
LSTM  | Stratified           | 0.4344 | 0.4685 | -      | -      | 0.0966 | 0.0996 | -      | -
LSTM  | Negative Sharing     | 0.4629 | 0.4839 | 0.4605 | 0.4674 | 0.1121 | 0.0982 | 0.0806 | 0.0862
LSTM  | Stratified with N.S. | 0.4742 | 0.4877 | 0.4703 | 0.4730 | 0.1051 | 0.1098 | 0.1017 | 0.1002
(a) CiteULike (b) News

Figure 8: The number of negatives vs. performance.

the performance still seems to be increasing even when we use 20 negative examples, which explains why our proposed method with Negative Sharing can result in better performance.

5 RELATED WORK
Collaborative filtering [18] has been one of the most effective methods in recommender systems, and methods like matrix factorization [17, 27] are widely adopted. While many papers focus on the explicit feedback setting, such as rating prediction, implicit feedback is found in many real-world scenarios and has been studied by many papers as well [13, 23, 26]. Although collaborative filtering techniques are powerful, they suffer from the so-called "cold-start" problem since side/content information is not well leveraged. To address the issue and improve performance, hybrid methods have been proposed to incorporate side information [5, 7, 25, 28, 38], as well as content information [4, 11, 31, 32].

Deep Neural Networks (DNNs) have been showing extraordinary abilities to extract high-level features from raw data, such as video, audio, and text [8, 15, 34]. Compared to traditional feature detectors, such as SIFT and n-grams, DNNs and other embedding methods [5, 6, 29] can automatically extract better features that produce higher performance in various tasks. To leverage the extraordinary feature extraction and content understanding abilities of DNNs for recommender systems, recent efforts have been made to combine collaborative filtering and neural networks [1, 4, 30, 32]. [32] adopts an autoencoder for extracting item-side text information for article recommendation, [1] adopts an RNN/GRU to better understand the text content, and [4] proposes to use a CNN and pairwise loss functions, and also incorporates unsupervised text embedding. The general functional embedding framework in this work subsumes the existing models [1, 4, 30].

Stochastic Gradient Descent [3] and its variants [16] have been widely adopted in training machine learning models, including neural networks. Samples are drawn uniformly at random (IID) so that the stochastic gradient vector equals the true gradient in expectation. In settings where negative examples are overwhelming, such as in word embedding (e.g., Word2Vec [22]) and network embedding (e.g., LINE [29]) tasks, negative sampling is utilized. Recent efforts have been made to improve SGD convergence by (1) reducing the variance of the stochastic gradient estimator, or (2) distributing the training over multiple workers. Several sampling techniques, such as stratified sampling [35] and importance sampling [36], have been proposed to achieve variance reduction. Different from their work, we improve sampling strategies in SGD by reducing the computational cost of a mini-batch while preserving, or even increasing, the number of data points in the mini-batch. Sampling techniques are also studied in [9, 39] to distribute the computation of matrix factorization; their objectives in sampling strategy design are reducing parameter overlap and cache misses. We also find that the idea of sharing negative examples has been exploited to speed up word embedding training in [14].
6 DISCUSSIONS
While this work studies sampling under the content-based collaborative filtering problem, the study of sampling strategies for "graph-based" loss functions has further implications. The IID sampling strategy is simple and popular for SGD-based training, since the loss function terms usually do not share common computations. So no matter how a mini-batch is formed, it bears almost the same
amount of computation. This assumption is shattered by models that are defined under graph structure, with applications in social and knowledge graph mining [2], image caption ranking [20], and so on. For those scenarios, we believe better sampling strategies can result in much faster training than that with IID sampling.
We would also like to point out limitations of our work. The first one is the setting of implicit feedback. When the problem is posed under explicit feedback, Negative Sharing can be less effective since the constructed negative samples may not overlap with the explicit negative ones. The second one is the assumption of efficient computation for interaction functions. When we use neural networks as interaction functions, we may need to consider constructing negative samples more wisely for Negative Sharing as it will also come with a noticeable cost.
7 CONCLUSIONS AND FUTURE WORK
In this work, we propose a hybrid recommendation framework, combining conventional collaborative filtering with (deep) neural networks. The framework generalizes several existing state-of-the-art recommendation models, and embodies potentially more powerful ones. To overcome the high computational cost brought by combining "cheap" CF with "expensive" NN, we first establish the connection between the loss functions and the user-item interaction bipartite graph, and then point out that the computational costs can vary with different sampling strategies. Based on this insight, we propose three novel sampling strategies that can significantly improve the training efficiency of the proposed framework, as well as the recommendation performance.
In the future, there are some promising directions. Firstly, based on the efficient sampling techniques of this paper, we can more efficiently study different neural networks and auxiliary information for building hybrid recommendation models. Secondly, we can also study the effects of negative sampling distributions and their effect on the design of more efficient sampling strategies. Last but not least, it would also be interesting to apply our sampling strategies in distributed training environments where multiple GPUs and multiple machines are considered.
ACKNOWLEDGEMENTS The authors would like to thank anonymous reviewers for helpful suggestions. The authors would also like to thank NVIDIA for the donation of one Titan X GPU. This work is partially supported by NSF CAREER #1741634.
REFERENCES [1] Trapit Bansal, David Belanger, and Andrew McCallum. 2016. Ask the GRU: Multi-task Learning for Deep Text Recommendations. In RecSysâ16. 107â114. [2] Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Ok- sana Yakhnenko. 2013. Translating embeddings for modeling multi-relational data. In NIPSâ13. 2787â2795.
[3] Léon Bottou. 2010. Large-scale machine learning with stochastic gradient descent. In COMPSTATâ2010. Springer, 177â186.
[4] Joint Text Embedding for Personalized Content-based Recommendation. arXiv preprint arXiv:1706.01084 (2017).
[5] Ting Chen and Yizhou Sun. 2017. Task-Guided and Path-Augmented Heteroge- neous Network Embedding for Author Identification. In WSDMâ17. 295â304. [6] Ting Chen, Lu-An Tang, Yizhou Sun, Zhengzhang Chen, and Kai Zhang. 2016. Entity Embedding-based Anomaly Detection for Heterogeneous Categorical Events. In IJCAIâ16. Miami.
[7] Tianqi Chen, Weinan Zhang, Qiuxia Lu, Kailong Chen, Zhao Zheng, and Yong Yu. 2012. SVDFeature: a toolkit for feature-based collaborative filtering. Journal of Machine Learning Research 13, Dec (2012), 3619â3622.
[8] Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research 12, Aug (2011), 2493â2537.
[9] Rainer Gemulla, Erik Nijkamp, Peter J Haas, and Yannis Sismanis. 2011. Large- scale matrix factorization with distributed stochastic gradient descent. In KDDâ11. 69â77.
[10] Saeed Ghadimi and Guanghui Lan. 2013. Stochastic first-and zeroth-order meth- ods for nonconvex stochastic programming. SIAM Journal on Optimization 23, 4 (2013), 2341â2368.
[11] Prem K Gopalan, Laurent Charlin, and David Blei. 2014. Content-based recom- mendations with poisson factorization. In NIPSâ14. 3176â3184.
[12] Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation 9, 8 (1997), 1735â1780.
[13] Yifan Hu, Yehuda Koren, and Chris Volinsky. 2008. Collaborative filtering for implicit feedback datasets. In ICDMâ08. 263â272.
[14] Shihao Ji, Nadathur Satish, Sheng Li, and Pradeep Dubey. 2016. Parallelizing word2vec in shared and distributed memory. arXiv preprint arXiv:1604.04661 (2016).
[15] Yoon Kim. 2014. Convolutional neural networks for sentence classification. arXiv preprint arXiv:1408.5882 (2014).
[16] Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimiza- tion. arXiv preprint arXiv:1412.6980 (2014).
[17] Yehuda Koren. 2008. Factorization meets the neighborhood: a multifaceted collaborative filtering model. In KDDâ08. 426â434.
[18] Yehuda Koren, Robert Bell, Chris Volinsky, et al. 2009. Matrix factorization techniques for recommender systems. Computer 42, 8 (2009), 30â37.
[19] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. 2012. Imagenet classifi- cation with deep convolutional neural networks. In NIPSâ12. 1097â1105. [20] Xiao Lin and Devi Parikh. 2016. Leveraging visual question answering for image-
caption ranking. In ECCVâ16. Springer, 261â277.
[21] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013).
[22] T Mikolov and J Dean. 2013. Distributed representations of words and phrases and their compositionality. NIPSâ13 (2013).
[23] Rong Pan, Yunhong Zhou, Bin Cao, Nathan N Liu, Rajan Lukose, Martin Scholz, and Qiang Yang. 2008. One-class collaborative filtering. In ICDMâ08. 502â511.
[24] Sashank J Reddi, Ahmed Hefny, Suvrit Sra, Barnabas Poczos, and Alex Smola. 2016. Stochastic Variance Reduction for Nonconvex Optimization. In ICMLâ16. 314â323.
[25] Steffen Rendle. 2010. Factorization machines. In ICDMâ10. 995â1000. [26] Steffen Rendle, Christoph Freudenthaler, Zeno Gantner, and Lars Schmidt-Thieme. 2009. BPR: Bayesian personalized ranking from implicit feedback. In UAIâ09. AUAI Press, 452â461.
[27] Ruslan Salakhutdinov and Andriy Mnih. 2011. Probabilistic matrix factorization. In NIPSâ11, Vol. 20. 1â8.
[28] Ajit P Singh and Geoffrey J Gordon. 2008. Relational learning via collective matrix factorization. In KDDâ08. 650â658.
[29] Jian Tang, Meng Qu, Mingzhe Wang, Ming Zhang, Jun Yan, and Qiaozhu Mei. 2015. Line: Large-scale information network embedding. In WWWâ15. 1067â1077. [30] Aaron Van den Oord, Sander Dieleman, and Benjamin Schrauwen. 2013. Deep
content-based music recommendation. In NIPSâ13. 2643â2651.
[31] Chong Wang and David M Blei. 2011. Collaborative topic modeling for recom- mending scientific articles. In KDDâ11. 448â456.
[32] Hao Wang, Naiyan Wang, and Dit-Yan Yeung. 2015. Collaborative deep learning for recommender systems. In KDDâ15. 1235â1244.
[33] Improving maximum margin matrix factorization. Machine Learning 72, 3 (2008), 263–276. [34] Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In NIPS'15. 649–657.
[35] Peilin Zhao and Tong Zhang. 2014. Accelerating minibatch stochastic gradient descent using stratified sampling. arXiv preprint arXiv:1405.3080 (2014). [36] Peilin Zhao and Tong Zhang. 2015. Stochastic Optimization with Importance
Sampling for Regularized Loss Minimization. In ICMLâ15. 1â9.
[37] Yin Zheng, Bangsheng Tang, Wenkui Ding, and Hanning Zhou. 2016. A Neural Autoregressive Approach to Collaborative Filtering. In ICMLâ16. 764â773. [38] Ke Zhou, Shuang-Hong Yang, and Hongyuan Zha. 2011. Functional matrix
factorizations for cold-start recommendation. In SIGIRâ11. 315â324.
[39] Yong Zhuang, Wei-Sheng Chin, Yu-Chin Juan, and Chih-Jen Lin. 2013. A fast parallel SGD for matrix factorization in shared memory systems. In Recsys. 249â 256.
SUPPLEMENTARY MATERIAL
A PROOFS
Here we give the proofs for both the lemma and the proposition introduced in the main paper. For brevity, throughout we assume by default that the loss function L is the pointwise loss of Eq. (1) in the main paper. Proofs are only given for the pointwise loss, but they can be similarly derived for the pairwise loss. We start by first introducing some definitions.
Definition 1. A function f is L-smooth if there is a constant L such that
â¥âf (x) â âf (y)⥠⤠Lâ¥x â y â¥
Such an assumption is very common in the analysis of first-order methods. In the following proofs, we assume the loss function L is L-smooth.
Property 1. (Quadratic Upper Bound) An L-smooth function f has the following property:

$$f(y) \le f(x) + \nabla f(x)^\top (y - x) + \frac{L}{2}\|y - x\|^2$$
Definition 2. We say a function f has σ-bounded gradient if ∥∇f_i(θ)∥ ≤ σ for all i ∈ [n] and any θ ∈ R^d.

For each training iteration, we first sample a mini-batch of links (denoted by B) of both positive links (B+) and negative links (B−), according to the sampling algorithm (one of Algorithms 2, 3, 4, 5), and then the stochastic gradient is computed and applied to the parameters as follows:
$$\theta^{t+1} = \theta^{t} - \frac{\eta_t}{m}\sum_{(u,v)\in B^{+}} c_{uv}\,\nabla\mathcal{L}^{+}(\theta\,|\,u,v) \;-\; \frac{\eta_t}{n}\sum_{(u,v')\in B^{-}} c_{uv'}\,\nabla\mathcal{L}^{-}(\theta\,|\,u,v') \qquad (3)$$
Here ∇L+(θ|u, v) denotes the gradient of the loss function L+(θ) on a single pair (u, v), and m, n are the numbers of positive and negative links in the batch B, respectively.
Lemma 1. (unbiased stochastic gradient) Under sampling Algorithms 2, 3, 4, and 5, we have E_B[∇L_B(θ^t)] = ∇L(θ^t). In other words, the stochastic mini-batch gradient equals the true gradient in expectation.
Proof. Below we prove this lemma for each of the sampling algorithms. For completeness, we also show the proof for IID Sampling. The main idea is to show that the expectation of the stochastic gradient computed on a randomly formed mini-batch equals the true gradient of the objective in Eq. 1.
IID Sampling. The positive links in the batch B are i.i.d. samples from Pd (u, v) (i.e. drawn uniformly at random from all positive links), and the negative links in B are i.i.d. samples from Pd (u)Pn (v), thus we have
$$\begin{aligned}
\mathbb{E}_B[\nabla\mathcal{L}_B(\theta^t)] &= \frac{1}{m}\sum_{i=1}^{m}\mathbb{E}_{(u,v)\sim P_d(u,v)}\big[c_{uv}\,\nabla\mathcal{L}^{+}(\theta|u,v)\big] + \frac{1}{n}\sum_{i=1}^{n}\mathbb{E}_{(u,v')\sim P_d(u)P_n(v')}\big[c_{uv'}\,\nabla\mathcal{L}^{-}(\theta|u,v')\big] \\
&= \mathbb{E}_{u\sim P_d(u)}\Big[\mathbb{E}_{v\sim P_d(v|u)}\big[c_{uv}\,\nabla\mathcal{L}^{+}(\theta|u,v)\big] + \mathbb{E}_{v'\sim P_n(v')}\big[c_{uv'}\,\nabla\mathcal{L}^{-}(\theta|u,v')\big]\Big] \\
&= \nabla\mathcal{L}(\theta^t)
\end{aligned} \qquad (4)$$
The first equality is due to the definition of sampling procedure, the second equality is due to the definition of expectation, and the final equality is due to the definition of pointwise loss function in Eq. 1.
Negative Sampling. In Negative Sampling, the batch B consists of i.i.d. samples of m positive links, and conditioning on each positive link, k negative links are sampled by replacing items in the same i.i.d. manner. Positive links are sampled from Pd(u, v), and negative items are sampled from Pn(v'), thus we have
$$\begin{aligned}
\mathbb{E}_B[\nabla\mathcal{L}_B(\theta^t)] &= \frac{1}{m}\sum_{i=1}^{m}\mathbb{E}_{(u,v)\sim P_d(u,v)}\,\frac{1}{k}\sum_{j=1}^{k}\mathbb{E}_{v'\sim P_n(v')}\big[c_{uv}\,\nabla\mathcal{L}^{+}(\theta|u,v) + c_{uv'}\,\nabla\mathcal{L}^{-}(\theta|u,v')\big] \\
&= \mathbb{E}_{u\sim P_d(u)}\Big[\mathbb{E}_{v\sim P_d(v|u)}\big[c_{uv}\,\nabla\mathcal{L}^{+}(\theta|u,v)\big] + \mathbb{E}_{v'\sim P_n(v')}\big[c_{uv'}\,\nabla\mathcal{L}^{-}(\theta|u,v')\big]\Big] \\
&= \nabla\mathcal{L}(\theta^t)
\end{aligned} \qquad (5)$$
The first equality is due to the definition of sampling procedure, and the second equality is due to the properties of joint probability distribution and expectation.
Stratified Sampling (by Items). In Stratified Sampling (by Items), a batch B consists of link samples drawn in two steps: (1) draw an item v ∼ Pd(v), and (2) draw positive users u ∼ Pd(u|v) and negative users u' ∼ Pd(u'), respectively. Additionally, negative terms are also re-weighted, thus we have
$$\begin{aligned}
\mathbb{E}_B[\nabla\mathcal{L}_B(\theta^t)] &= \frac{1}{m}\sum_{i=1}^{m}\mathbb{E}_{v\sim P_d(v)}\Big[\mathbb{E}_{u\sim P_d(u|v)}\big[c_{uv}\,\nabla\mathcal{L}^{+}(\theta|u,v)\big] + \mathbb{E}_{u'\sim P_d(u')}\Big[\tfrac{P_n(v)}{P_d(v)}\,c_{u'v}\,\nabla\mathcal{L}^{-}(\theta|u',v)\Big]\Big] \\
&= \mathbb{E}_{(u,v)\sim P_d(u,v)}\big[c_{uv}\,\nabla\mathcal{L}^{+}(\theta|u,v)\big] + \mathbb{E}_{(u,v)\sim P_d(u)P_n(v)}\big[c_{uv}\,\nabla\mathcal{L}^{-}(\theta|u,v)\big] \\
&= \mathbb{E}_{u\sim P_d(u)}\Big[\mathbb{E}_{v\sim P_d(v|u)}\big[c_{uv}\,\nabla\mathcal{L}^{+}(\theta|u,v)\big] + \mathbb{E}_{v'\sim P_n(v')}\big[c_{uv'}\,\nabla\mathcal{L}^{-}(\theta|u,v')\big]\Big] \\
&= \nabla\mathcal{L}(\theta^t)
\end{aligned} \qquad (6)$$
The first equality is due to the definition of sampling procedure, and the second, the third and the forth equality is due to the properties of joint probability distribution and expectation.
Negative Sharing. In Negative Sharing, we only draw positive links uniformly at random (i.e. (u, v) â¼ Pd (u, v)), while constructing negative links from sharing the items in the batch. So the batch B we use for computing gradient consists of both m positive links and m(m â 1) negative links.
Although we do not draw negative links directly, we can still calculate their probability according to the probability distribution from which we draw the positive links: a constructed negative link in the batch has its user drawn from Pd(u) and its item drawn from Pd(v), the marginals of Pd(u, v). Additionally, negative terms are also re-weighted, so we have

$$\begin{aligned}
\mathbb{E}_B[\nabla\mathcal{L}_B(\theta^t)] &= \frac{1}{m}\sum_{(u,v)}\mathbb{E}_{(u,v)\sim P_d(u,v)}\big[c_{uv}\,\nabla\mathcal{L}^{+}(\theta|u,v)\big] + \frac{1}{m(m-1)}\sum_{(u,v')}\mathbb{E}_{(u,v')\sim P_d(u)P_d(v')}\Big[\tfrac{P_n(v')}{P_d(v')}\,c_{uv'}\,\nabla\mathcal{L}^{-}(\theta|u,v')\Big] \\
&= \mathbb{E}_{u\sim P_d(u)}\Big[\mathbb{E}_{v\sim P_d(v|u)}\big[c_{uv}\,\nabla\mathcal{L}^{+}(\theta|u,v)\big] + \mathbb{E}_{v'\sim P_n(v')}\big[c_{uv'}\,\nabla\mathcal{L}^{-}(\theta|u,v')\big]\Big] \\
&= \nabla\mathcal{L}(\theta^t)
\end{aligned} \qquad (7)$$
The first equality is due to the definition of sampling procedure, and the second equality is due to the properties of joint probability distribution and expectation.
Stratified Sampling with Negative Sharing. Under this setting, we follow a two-step sampling procedure: (1) draw an item v â¼ Pd (v), and (2) draw positive users u â¼ Pd (u|v). Negative links are constructed from independently drawn items in the same batch. So the batch B consists of m positive links and n negative links.
We can use the same method as in Negative Sharing to calculate the probability of sampled negative links, which is also (u, v) â¼ Pd (u, v). Again, negative terms are re-weighted, thus we have
$$\begin{aligned}
\mathbb{E}_B[\nabla\mathcal{L}_B(\theta^t)] &= \frac{1}{m}\sum_{i=1}^{m}\mathbb{E}_{v\sim P_d(v),\,u\sim P_d(u|v)}\big[c_{uv}\,\nabla\mathcal{L}^{+}(\theta|u,v)\big] + \frac{1}{n}\sum_{i=1}^{n}\mathbb{E}_{(u,v')\sim P_d(u)P_d(v')}\Big[\tfrac{P_n(v')}{P_d(v')}\,c_{uv'}\,\nabla\mathcal{L}^{-}(\theta|u,v')\Big] \\
&= \mathbb{E}_{(u,v)\sim P_d(u,v)}\big[c_{uv}\,\nabla\mathcal{L}^{+}(\theta|u,v)\big] + \mathbb{E}_{(u,v')\sim P_d(u)P_n(v')}\big[c_{uv'}\,\nabla\mathcal{L}^{-}(\theta|u,v')\big] \\
&= \mathbb{E}_{u\sim P_d(u)}\Big[\mathbb{E}_{v\sim P_d(v|u)}\big[c_{uv}\,\nabla\mathcal{L}^{+}(\theta|u,v)\big] + \mathbb{E}_{v'\sim P_n(v')}\big[c_{uv'}\,\nabla\mathcal{L}^{-}(\theta|u,v')\big]\Big] \\
&= \nabla\mathcal{L}(\theta^t)
\end{aligned} \qquad (8)$$
The first equality is due to the definition of the sampling procedure, and the second, third and fourth equalities are due to the properties of joint probability distributions and expectations. □
Proposition 1. Suppose L has σ-bounded gradient; let η_t = η = c/√T where c = √(2(L(θ^0) − L(θ^*))/(Lσ^2)), and θ^* is the minimizer of L. Then, the following holds for the sampling strategies given in Algorithms 2, 3, 4, 5:

$$\min_{0\le t\le T-1}\ \mathbb{E}\big[\|\nabla\mathcal{L}(\theta^t)\|^2\big] \;\le\; \sqrt{\frac{2\big(\mathcal{L}(\theta^0)-\mathcal{L}(\theta^*)\big)L}{T}}\;\sigma$$
Proof. With the property of L-smooth function L, we have
$$\mathbb{E}[\mathcal{L}(\theta^{t+1})] \le \mathbb{E}\Big[\mathcal{L}(\theta^{t}) + \langle\nabla\mathcal{L}(\theta^{t}),\,\theta^{t+1}-\theta^{t}\rangle + \frac{L}{2}\|\theta^{t+1}-\theta^{t}\|^2\Big] \qquad (9)$$
By applying the stochastic update equation, lemma 1, i.e. EB [âLB (θ t )] = âL(θ t ), we have
$$\mathbb{E}\Big[\langle\nabla\mathcal{L}(\theta^{t}),\,\theta^{t+1}-\theta^{t}\rangle + \frac{L}{2}\|\theta^{t+1}-\theta^{t}\|^2\Big] \le -\eta_t\,\mathbb{E}\big[\|\nabla\mathcal{L}(\theta^{t})\|^2\big] + \frac{L\eta_t^2}{2}\,\mathbb{E}\big[\|\nabla\mathcal{L}_B(\theta^{t})\|^2\big] \qquad (10)$$
Combining the results in Eq. (9) and (10), with the assumption that L has σ-bounded gradient, we have

$$\mathbb{E}[\mathcal{L}(\theta^{t+1})] \le \mathbb{E}[\mathcal{L}(\theta^{t})] - \eta_t\,\mathbb{E}\big[\|\nabla\mathcal{L}(\theta^{t})\|^2\big] + \frac{L\eta_t^2}{2}\,\sigma^2$$
Rearranging the above equation we obtain
$$\mathbb{E}\big[\|\nabla\mathcal{L}(\theta^{t})\|^2\big] \le \frac{1}{\eta_t}\,\mathbb{E}\big[\mathcal{L}(\theta^{t}) - \mathcal{L}(\theta^{t+1})\big] + \frac{L\eta_t}{2}\,\sigma^2 \qquad (11)$$
By summing Eq. (11) from t = 0 to T − 1 and setting η = c/√T, we have

$$\min_{0\le t\le T-1}\mathbb{E}\big[\|\nabla\mathcal{L}(\theta^{t})\|^2\big] \le \frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}\big[\|\nabla\mathcal{L}(\theta^{t})\|^2\big] \le \frac{1}{c\sqrt{T}}\big(\mathcal{L}(\theta^{0}) - \mathcal{L}(\theta^{*})\big) + \frac{Lc}{2\sqrt{T}}\,\sigma^2 \qquad (12)$$
By setting

$$c = \sqrt{\frac{2\big(\mathcal{L}(\theta^{0}) - \mathcal{L}(\theta^{*})\big)}{L\sigma^2}}$$

we obtain the desired result. □
B VECTOR DOT PRODUCT VERSUS MATRIX MULTIPLICATION
Here we provide some empirical evidence for the computation time difference of replacing vector dot products with matrix multiplication. Vector dot products can be batched as an element-wise matrix multiplication followed by summing over each row. We therefore compare two operations between two square matrices of size n: (1) element-wise matrix multiplication, and (2) matrix multiplication. A straightforward implementation of the latter has algorithmic complexity of O(n^3). However, modern computation devices such as GPUs are better optimized for the latter, so when the matrix size is relatively small, their computation times can be quite similar. This is demonstrated in Figure 9. In our choice of batch size and embedding dimension, n ≪ 1000, so the computation time is comparable. Furthermore, t_i ≪ t_g, so even a several-times increase would be negligible.
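A simple CPU-side sketch of such a comparison is given below (our own code; absolute numbers and ratios will differ from the GPU measurements in Figure 9, since BLAS optimizations differ by device).

```python
import time
import numpy as np

rng = np.random.default_rng(0)
for n in (512, 1024, 2048, 4096):
    A, B = rng.normal(size=(n, n)), rng.normal(size=(n, n))
    t0 = time.perf_counter()
    _ = (A * B).sum(axis=1)          # batched dot products via element-wise multiply + row sum
    t1 = time.perf_counter()
    _ = A @ B                        # full matrix multiplication
    t2 = time.perf_counter()
    print(f"n={n}: elementwise+rowsum {t1 - t0:.4f}s, matmul {t2 - t1:.4f}s, "
          f"ratio {(t2 - t1) / max(t1 - t0, 1e-9):.1f}")
```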
Figure 9: The computation time ratio between matrix multiplication and element-wise matrix multiplication for different square matrix sizes.
C FUNCTIONAL EMBEDDING VERSUS FUNCTIONAL REGULARIZATION
In this work we propose a functional embedding framework, in which the embedding of a user/item is obtained by some function such as a neural network. We note that another approach is to penalize the distance between the user/item embedding and the function output (instead of equating them directly as in functional embedding), which we refer to as functional regularization; it is used in [32]. More specifically, functional regularization uses the following form of loss function:
  L(h_u, h_v) + λ‖h_u − f(x_u)‖²
Here we point out its main issue, which does not appear in Functional Embedding. In order to equate the two embedding vectors, we need to increase λ. However, setting a large λ will slow down the training progress under coordinate descent. The gradient w.r.t. h_u includes the penalty term 2λ(h_u − f(x_u)), which dominates for large λ, so h_u cannot be effectively updated by the interaction information.
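As a small illustration of the issue described above, the sketch below is our own (not code from this work; f(x_u), the dimensions, and the random values are stand-ins). It compares the interaction gradient with the penalty gradient 2λ(h_u − f(x_u)) as λ grows.

```python
# Illustrative sketch (not from the paper): with functional regularization,
# the gradient of  L(h_u, h_v) + lam * ||h_u - f(x_u)||^2  w.r.t. h_u is
# dL/dh_u + 2 * lam * (h_u - f(x_u)); for large lam the second term dominates,
# so interaction information barely moves h_u.
import numpy as np

def grad_wrt_h_u(h_u, f_x_u, interaction_grad, lam):
    # interaction_grad stands in for dL/dh_u from the interaction loss.
    return interaction_grad + 2.0 * lam * (h_u - f_x_u)

rng = np.random.default_rng(0)
d = 8
h_u = rng.normal(size=d)
f_x_u = rng.normal(size=d)             # hypothetical output of f(x_u)
interaction_grad = rng.normal(size=d)  # hypothetical dL/dh_u

for lam in [0.01, 1.0, 100.0]:
    g = grad_wrt_h_u(h_u, f_x_u, interaction_grad, lam)
    penalty_norm = np.linalg.norm(2.0 * lam * (h_u - f_x_u))
    print(f"lambda={lam}: |dL/dh_u|={np.linalg.norm(interaction_grad):.2f}, "
          f"|penalty gradient|={penalty_norm:.2f}, |total|={np.linalg.norm(g):.2f}")
```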
1706.07269 | Explanation in Artificial Intelligence: Insights from the Social Sciences | There has been a recent resurgence in the area of explainable artificial
intelligence as researchers and practitioners seek to make their algorithms
more understandable. Much of this research is focused on explicitly explaining
decisions or actions to a human observer, and it should not be controversial to
say that looking at how humans explain to each other can serve as a useful
starting point for explanation in artificial intelligence. However, it is fair
to say that most work in explainable artificial intelligence uses only the
researchers' intuition of what constitutes a `good' explanation. There exists
vast and valuable bodies of research in philosophy, psychology, and cognitive
science of how people define, generate, select, evaluate, and present
explanations, which argues that people employ certain cognitive biases and
social expectations towards the explanation process. This paper argues that the
field of explainable artificial intelligence should build on this existing
research, and reviews relevant papers from philosophy, cognitive
psychology/science, and social psychology, which study these topics. It draws
out some important findings, and discusses ways that these can be infused with
work on explainable artificial intelligence. | http://arxiv.org/pdf/1706.07269 | Tim Miller | cs.AI | null | null | cs.AI | 20170622 | 20180815 |
arXiv:1706.07269v3 [cs.AI] 15 Aug 2018
Explanation in Artificial Intelligence: Insights from the Social Sciences
# Tim Miller
School of Computing and Information Systems, University of Melbourne, Melbourne, Australia. tmiller@unimelb.edu.au
# Abstract
There has been a recent resurgence in the area of explainable artificial intelligence as researchers and practitioners seek to make their algorithms more understandable. Much of this research is focused on explicitly explaining decisions or actions to a human observer, and it should not be controversial to say that looking at how humans explain to each other can serve as a useful starting point for explanation in artificial intelligence. However, it is fair to say that most work in explainable artificial intelligence uses only the researchers' intuition of what constitutes a 'good' explanation. There exists vast and valuable bodies of research in philosophy, psychology, and cognitive science of how people define, generate, select, evaluate, and present explanations, which argues that people employ certain cognitive biases and social expectations towards the explanation process. This paper argues that the field of explainable artificial intelligence should build on this existing research, and reviews relevant papers from philosophy, cognitive psychology/science, and social psychology, which study these topics. It draws out some important findings, and discusses ways that these can be infused with work on explainable artificial intelligence.
Keywords: Explanation, Explainability, Interpretability, Explainable AI, Transparency
# Contents
1 Introduction
   1.1 Scope
   1.2 Major Findings
   1.3 Outline
   1.4 Example
2 Philosophical Foundations – What Is Explanation?
   2.1 Definitions
      2.1.1 Causality
      2.1.2 Explanation
      2.1.3 Explanation as a Product
      2.1.4 Explanation as Abductive Reasoning
      2.1.5 Interpretability and Justification
   2.2 Why People Ask for Explanations
   2.3 Contrastive Explanation
   2.4 Types and Levels of Explanation
   2.5 Structure of Explanation
   2.6 Explanation and XAI
      2.6.1 Causal Attribution is Not Causal Explanation
      2.6.2 Contrastive Explanation
      2.6.3 Explanatory Tasks and Levels of Explanation
      2.6.4 Explanatory Model of Self
      2.6.5 Structure of Explanation
3 Social Attribution – How Do People Explain Behaviour?
   3.1 Definitions
   3.2 Intentionality and Explanation
   3.3 Beliefs, Desires, Intentions, and Traits
      3.3.1 Malle's Conceptual Model for Social Attribution
   3.4 Individual vs. Group Behaviour
   3.5 Norms and Morals
   3.6 Social Attribution and XAI
      3.6.1 Folk Psychology
      3.6.2 Malle's Models
      3.6.3 Collective Intelligence
      3.6.4 Norms and Morals
4 Cognitive Processes – How Do People Select and Evaluate Explanations?
   4.1 Causal Connection, Explanation Selection, and Evaluation
   4.2 Causal Connection: Abductive Reasoning
      4.2.1 Abductive Reasoning and Causal Types
      4.2.2 Background and Discounting
      4.2.3 Explanatory Modes
      4.2.4 Inherent and Extrinsic Features
   4.3 Causal Connection: Counterfactuals and Mutability
      4.3.1 Abnormality
      4.3.2 Temporality
      4.3.3 Controllability and Intent
      4.3.4 Social Norms
   4.4 Explanation Selection
      4.4.1 Facts and Foils
      4.4.2 Abnormality
      4.4.3 Intentionality and Functionality
      4.4.4 Necessity, Sufficiency and Robustness
      4.4.5 Responsibility
      4.4.6 Preconditions, Failure, and Intentions
   4.5 Explanation Evaluation
      4.5.1 Coherence, Simplicity, and Generality
      4.5.2 Truth and Probability
      4.5.3 Goals and Explanatory Mode
   4.6 Cognitive Processes and XAI
      4.6.1 Abductive Reasoning
      4.6.2 Mutability and Computation
      4.6.3 Abnormality
      4.6.4 Intentionality and Functionality
      4.6.5 Perspectives and Controllability
      4.6.6 Evaluation of Explanations
5 Social Explanation – How Do People Communicate Explanations?
   5.1 Explanation as Conversation
      5.1.1 Logic and Conversation
      5.1.2 Relation & Relevance in Explanation Selection
      5.1.3 Argumentation and Explanation
      5.1.4 Linguistic structure
   5.2 Explanatory Dialogue
   5.3 Social Explanation and XAI
      5.3.1 Conversational Model
      5.3.2 Dialogue
      5.3.3 Theory of Mind
      5.3.4 Implicature
      5.3.5 Dilution
      5.3.6 Social and Interactive Explanation
6 Conclusions
# 1. Introduction
Recently, the notion of explainable artiï¬cial intelligence has seen a resurgence, after having slowed since the burst of work on explanation in expert systems over three decades ago; for example, see Chandrasekaran et al. [23], [168], and Buchanan and Shortliï¬e [14]. Sometimes abbreviated XAI (eXplainable artiï¬cial intelligence), the idea can be found in grant solicitations [32] and in the popular press [136]. This resurgence is driven by evidence that many AI applications have limited take up, or are not appropriated at all, due to ethical concerns [2] and a lack of trust on behalf of their users [166, 101]. The running hypothesis is that by building more transparent, interpretable, or explainable systems, users will be better equipped to understand and therefore trust the intelligent agents [129, 25, 65].
While there are many ways to increase trust and transparency of intelligent agents, two complementary approaches will form part of many trusted autonomous systems: (1) generating decisions1 in which one of the criteria taken into account during the compu- tation is how well a human could understand the decisions in the given context, which is often called interpretability or explainability; and (2) explicitly explaining decisions
1 We will use decision as the general term to encompass outputs from AI systems, such as categorisations, action selection, etc.
to people, which we will call explanation. Applications of explanation are considered in many sub-ï¬elds of artiï¬cial intelligence, such as justifying autonomous agent behaviour [129, 65], debugging of machine learning models [89], explaining medical decision-making [45], and explaining predictions of classiï¬ers [157].
If we want to design, and implement intelligent agents that are truly capable of providing explanations to people, then it is fair to say that models of how humans explain decisions and behaviour to each other are a good way to start analysing the problem. Researchers argue that people employ certain biases [82] and social expectations [72] when they generate and evaluate explanation, and I argue that such biases and expectations can improve human interactions with explanatory AI. For example, de Graaf and Malle [34] argues that because people assign human-like traits to artiï¬cial agents, people will expect explanations using the same conceptual framework used to explain human behaviours.
Despite the recent resurgence of explainable AI, most of the research and practice in this area seems to use the researchersâ intuitions of what constitutes a âgoodâ explanation. Miller et al. [132] shows in a small sample that research in explainable AI typically does not cite or build on frameworks of explanation from social science. They argue that this could lead to failure. The very experts who understand decision-making models the best are not in the right position to judge the usefulness of explanations to lay users â a phenomenon that Miller et al. refer to (paraphrasing Cooper [31]) as âthe inmates running the asylumâ. Therefore, a strong understanding of how people deï¬ne, generate, select, evaluate, and present explanations seems almost essential.
In the ï¬elds of philosophy, psychology, and cognitive science, there is a vast and ma- ture body of work that studies these exact topics. For millennia, philosophers have asked the questions about what constitutes an explanation, what is the function of explana- tions, and what are their structure. For over 50 years, cognitive and social psychologists have analysed how people attribute and evaluate the social behaviour of others. For over two decades, cognitive psychologists and scientists have investigated how people generate explanations and how they evaluate their quality.
I argue here that there is considerable scope to infuse this valuable body of research into explainable AI. Building intelligent agents capable of explanation is a challenging task, and approaching this challenge in a vacuum considering only the computational problems will not solve the greater problems of trust in AI. Further, while some recent work builds on the early ï¬ndings on explanation in expert systems, that early research was undertaken prior to much of the work on explanation in social science. I contend that newer theories can form the basis of explainable AI â although there is still a lot to learn from early work in explainable AI around design and implementation.
This paper aims to promote the inclusion of this existing research into the ï¬eld of ex- planation in AI. As part of this work, over 250 publications on explanation were surveyed from social science venues. A smaller subset of these were chosen to be presented in this paper, based on their currency and relevance to the topic. The paper presents relevant theories on explanation, describes, in many cases, the experimental evidence supporting these theories, and presents ideas on how this work can be infused into explainable AI.
# 1.1. Scope
In this article, the term âExplainable AI â loosely refers to an explanatory agent reveal- ing underlying causes to its or another agentâs decision making. However, it is important
Figure 1: Scope of Explainable Artificial Intelligence
to note that the solution to explainable AI is not just âmore AIâ. Ultimately, it is a human-agent interaction problem. Human-agent interaction can be deï¬ned as the inter- section of artiï¬cial intelligence, social science, and human-computer interaction (HCI); see Figure 1. Explainable AI is just one problem inside human-agent interaction.
This article highlights the top circle in Figure 1: the philosophy, social and cognitive psychology, and cognitive science views of explanation, and their relation to the other two circles: their impact on the design of both artiï¬cial intelligence and our interactions with them. With this scope of explainable AI in mind, the scope of this article is threefold:
⢠Survey: To survey and review relevant articles on the philosophical, cognitive, and social foundations of explanation, with an emphasis on âeverydayâ explanation.
⢠Everyday explanation: To focus on âeverydayâ (or local) explanations as a tool and process for an agent, who we call the explainer, to explain decisions made by itself or another agent to a person, who we call the explainee. âEverydayâ explanations are the explanations of why particular facts (events, properties, decisions, etc.) occurred, rather than explanations of more general relationships, such as those seen in scientiï¬c explanation. We justify this focus based on the observation from AI literature that trust is lost when users cannot understand traces of observed behaviour or decisions [166, 129], rather than trying to understand and construct generalised theories. Despite this, everyday explanations also sometimes refer to generalised theories, as we will see later in Section 2, so scientiï¬c explanation is relevant, and some work from this area is surveyed in the paper.
⢠Relationship to Explainable AI : To draw important points from relevant articles to some of the diï¬erent sub-ï¬elds of explainable AI.
The following topics are considered out of scope of this article:
⢠Causality: While causality is important in explanation, this paper is not a survey on the vast work on causality. I review the major positions in this ï¬eld insofar as they relate to the relationship with models of explanation.
⢠Explainable AI : This paper is not a survey on existing approaches to explanation or interpretability in AI, except those that directly contribute to the topics in scope or build on social science. For an excellent short survey on explanation in machine learning, see Biran and Cotton [9].
1.2. Major Findings
As part of this review, I highlight four major ï¬ndings from the surveyed literature that I believe are important for explainable AI, but which I believe most research and practitioners in artiï¬cial intelligence are currently unaware:
1. Explanations are contrastive â they are sought in response to particular counter- factual cases, which are termed foils in this paper. That is, people do not ask why event P happened, but rather why event P happened instead of some event Q. This has important social and computational consequences for explainable AI. In Sections 2â4, models of how people provide contrastive explanations are reviewed.
2. Explanation are selected (in a biased manner) â people rarely, if ever, expect an explanation that consists of an actual and complete cause of an event. Humans are adept at selecting one or two causes from a sometimes inï¬nite number of causes to be the explanation. However, this selection is inï¬uenced by certain cognitive biases. In Section 4, models of how people select explanations, including how this relates to contrast cases, are reviewed.
3. Probabilities probably donât matter â while truth and likelihood are important in explanation and probabilities really do matter, referring to probabilities or statis- tical relationships in explanation is not as eï¬ective as referring to causes. The most likely explanation is not always the best explanation for a person, and importantly, using statistical generalisations to explain why events occur is unsatisfying, unless accompanied by an underlying causal explanation for the generalisation itself.
4. Explanations are social â they are a transfer of knowledge, presented as part of a conversation2 or interaction, and are thus presented relative to the explainerâs beliefs about the explaineeâs beliefs. In Section 5, models of how people interact regarding explanations are reviewed.
2 Note that this does not imply that explanations must be given in natural language, but implies that explanation is a social interaction between the explainer and the explainee.
These four points all converge around a single point: explanations are not just the presentation of associations and causes (causal attribution), they are contextual. While an event may have many causes, often the explainee cares only about a small subset (relevant to the context), the explainer selects a subset of this subset (based on several diï¬erent criteria), and explainer and explainee may interact and argue about this explanation.
I assert that, if we are to build truly explainable AI, especially intelligent systems that are able to oï¬er explanations, then these three points are imperative in many applications.
# 1.3. Outline
The outline of this paper is as follows. Section 1.4 presents a motivating example of an explanatory agent that is used throughout the paper. Section 2 presents the philosophical foundations of explanation, deï¬ning what explanations are, what they are not, how to relate to causes, their meaning and their structure. Section 3 focuses on one speciï¬c type of explanation â those relating to human or social behaviour, while Section 4 surveys work on how people generate and evaluate explanations more generally; that is, not just social behaviour. Section 5 describes research on the dynamics of interaction in explanation between explainer and explainee. Section 6 concludes and highlights several major challenges to explanation in AI.
1.4. Example
This section presents a simple example, which is used to illustrate many important concepts through this paper. It is of a hypothetical system that categorises images of arthropods into several diï¬erent types, based on certain physical features of the arthro- pods, such as number of legs, number of eyes, number of wings, etc. The algorithm is assumed to have been trained on a large set of valid data and is highly accurate. It is used by entomologists to do automatic classiï¬cation of their research data. Table 1 outlines a simple model of the features of arthropods for illustrative purposes. An explanation function is available for the arthropod system.
Type | No. Legs | Stinger | No. Eyes | Compound Eyes | Wings
Spider | 8 | ✗ | 8 | ✗ | 0
Beetle | 6 | ✗ | 2 | ✓ | 2
Bee | 6 | ✓ | 5 | ✓ | 4
Fly | 6 | ✗ | 5 | ✓ | 2
Table 1: A simple lay model for distinguishing common arthropods.
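For readers who want to experiment with the running example, the following is a small sketch of our own (the feature names are invented, and the paper's hypothetical system classifies images rather than feature vectors) that classifies an arthropod by matching observed features against Table 1.

```python
# Illustrative sketch: classify an arthropod from the lay features in Table 1.
# The hypothetical feature names are our own; the paper's system works on images.
ARTHROPODS = {
    "Spider": {"legs": 8, "stinger": False, "eyes": 8, "compound_eyes": False, "wings": 0},
    "Beetle": {"legs": 6, "stinger": False, "eyes": 2, "compound_eyes": True, "wings": 2},
    "Bee":    {"legs": 6, "stinger": True,  "eyes": 5, "compound_eyes": True, "wings": 4},
    "Fly":    {"legs": 6, "stinger": False, "eyes": 5, "compound_eyes": True, "wings": 2},
}

def classify(observed: dict) -> str:
    # Return the category whose features match the most observed features.
    def score(features: dict) -> int:
        return sum(observed.get(k) == v for k, v in features.items())
    return max(ARTHROPODS, key=lambda name: score(ARTHROPODS[name]))

image_j = {"legs": 8, "stinger": False, "eyes": 8, "compound_eyes": False, "wings": 0}
print(classify(image_j))  # -> Spider
```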
Now, consider the idealised and simple dialogue between a human user and 'ExplAgent', who is the interactive explanation agent, outlined in Figure 2. This dialogue is not intended to be realistic, but is merely illustrative of how a particular explanatory agent may interact: responding to posed questions, using mixed modalities – in this case, language and visual images – and being able to answer a range of questions about its decision making. This example shows different types of questions being posed, and demonstrates that the explanatory agent will need to keep track of the state of the explanation; for example, by noting what it has already told the explainee, and may have to infer what the explainee has inferred themselves.
Person: "Why is image J labelled as a Spider instead of a Beetle?"
ExplAgent: "Because the arthropod in image J has eight legs, consistent with those in the category Spider, while those in Beetle have six legs."
Person: "Why did you infer that the arthropod in image J had eight legs instead of six?"
ExplAgent: "I counted the eight legs that I found, as I have just highlighted on the image now." (ExplAgent shows the image with the eight legs counted.)
Person: "How do you know that spiders have eight legs?"
ExplAgent: "Because in the training set I was trained on, almost all animals with eight legs were labelled as Spider."
Person: "But an octopus can have eight legs too. Why did you not classify image J as an octopus?"
ExplAgent: "Because my function is only to classify arthropods."

Figure 2: Example Explanation Dialogue between a Person and an Explanation Agent
We will refer back to this example throughout the paper, linking different parts of the work to the different parts of the dialogue above.
# 2. Philosophical Foundations – What Is Explanation?
To explain an event is to provide some information about its causal history. In an act of explaining, someone who is in possession of some information about the causal history of some event – explanatory information, I shall call it – tries to convey it to someone else. – Lewis [99, p. 217]
In this section, we outline foundational work in explanation, which helps to deï¬ne causal explanation and how it diï¬ers from other concepts such as causal attribution and interpretability.
2.1. Definitions
There are several related concepts in explanation, which seem to be used interchange- ably between authors and also within articles, often demonstrating some conï¬ation of the terms. In particular, this section describes the diï¬erence between causal attribution and causal explanation. We will also brieï¬y touch on the diï¬erence between explanation and interpretability.
# 2.1.1. Causality
The idea of causality has attracted much work, and there are several diï¬erent accounts of what constitutes a cause of an event or property. The various deï¬nitions of causation can be broken into two major categories: dependence theories and transference theories.
Causality and Counterfactuals. Hume [79, Section VII] is credited with deriving what is known as the regularity theory of causation. This theory states that there is a cause between two types of events if events of the first type are always followed by events of the second. However, as argued by Lewis [98], the definition due to Hume is in fact about
counterfactuals, rather than dependence alone. Hume argues that the co-occurrence of events C and E, observed from experience, do not give causal information that is useful. Instead, the cause should be understood relative to an imagined, counterfactual case: event C is said to have caused event E if, under some hypothetical counterfactual case the event C did not occur, E would not have occurred. This deï¬nition has been argued and reï¬ned, and many deï¬nitions of causality are based around this idea in one way or another; c.f. Lewis [98], Hilton [71].
This classical counterfactual model of causality is well understood but competing deï¬nitions exist. Interventionist theories of causality [191, 58] state that event C can be deemed a cause of event E if and only if any change to event E can be brought about solely by intervening on event C. Probabilistic theories, which are extensions of interventionist theories, state that event C is a cause of event E if and only if the occurrence of C increases the probability of E occurring [128].
Transference theories [5, 43, 39], on the other hand, are not deï¬ned on dependence, but instead describe physical causation as the transference of energy between objects. In short, if E is an event representing the change of energy of an object O, then C causes E if object O is in contact with the object that causes C, and there is some quantity of energy transferred.
While the aim here is not a detailed survey of causality, however, it is pertinent to note that the dependence theories all focus around the concept of counterfactuals: the state of aï¬airs that would have resulted from some event that did not occur. Even transference theories, which are not explicitly deï¬ned as counterfactual, consider that causation is an unnatural transference of energy to the receiving object, implying what would have been otherwise. As such, the notion of âcounterfactualâ is important in causality.
Gerstenberg et al. [49] tested whether people consider counterfactuals when making causal judgements in an experiment involving colliding balls. They presented experiment participants with diï¬erent scenarios involving two balls colliding, with each scenario having diï¬erent outcomes, such as one ball going through a gate, just missing the gate, or missing the gate by a long distance. While wearing eye-tracking equipment, participants were asked to determine what the outcome would have been (a counterfactual) had the candidate cause not occurred (the balls had not collided). Using the eye-gaze data from the tracking, they showed that their participants, even in these physical environments, would trace where the ball would have gone had the balls not collided, thus demonstrating that they used counterfactual simulation to make causal judgements.
Necessary and Suï¬cient Causes. Kelley [87] proposes a taxonomy of causality in social attribution, but which has more general applicability, and noted that there are two main types of causal schemata for causing events: multiple necessary causes and multiple suï¬cient causes. The former deï¬nes a schema in which a set of events are all necessary to cause the event in question, while the latter deï¬nes a schema in which there are multiple possible ways to cause the event, and only one of these is required. Clearly, these can be interleaved; e.g. causes C1, C2, and C3 for event E, in which C1 is necessary and either of C2 or C3 are necessary, while both C2 and C3 are suï¬cient to cause the compound event (C2 or C3).
Internal and External Causes. Heider [66], the grandfather of causal attribution in social psychology, argues that causes fall into two camps: internal and external. Internal causes
of events are those due to the characteristics of an actor, while external causes are those due to the speciï¬c situation or the environment. Clearly, events can have causes that mix both. However, the focus of work from Heider was not on causality in general, but on social attribution, or the perceived causes of behaviour. That is, how people attribute the behaviour of others. Nonetheless, work in this ï¬eld, as we will see in Section 3, builds heavily on counterfactual causality.
Causal Chains. In causality and explanation, the concept of causal chains is important. A causal chain is a path of causes between a set of events, in which a cause from event C to event E indicates that C must occur before E. Any events without a cause are root causes.
Hilton et al. [76] deï¬ne ï¬ve diï¬erent types of causal chain, outlined in Table 2, and note that diï¬erent causal chains are associated with diï¬erent types of explanations.
Temporal: Distal events do not constrain proximal events; events can be switched in time without changing the outcome. Example: A and B together cause C, and the order of A and B is irrelevant; e.g. two people each flipping a coin win if both coins are heads; it is irrelevant who flips first.
Coincidental: Distal events do not constrain proximal events; the causal relationship holds in a particular case, but not in general. Example: A causes B this time, but the general relationship does not hold; e.g. a person smoking a cigarette causes a house fire, but this does not generally happen.
Unfolding: Distal events strongly constrain proximal events; the causal relationships hold in general and in this particular case, and cannot be switched. Example: A causes B and B causes C; e.g. switching a light switch causes an electric current to run to the light, which causes the light to turn on.
Opportunity chains: The distal event enables the proximal event. Example: A enables B, and B causes C; e.g. installing a light switch enables it to be switched, which causes the light to turn on.
Pre-emptive: The distal event precedes the proximal event and prevents the proximal from causing an event. Example: B causes C, and A would have caused C if B had not occurred; e.g. my action of unlocking the car with my remote lock would have unlocked the door if my wife had not already unlocked it with the key.
Table 2: Types of Causal Chains according to Hilton et al. [76].
People do not need to understand a complete causal chain to provide a sound expla- nation. This is evidently true: causes of physical events can refer back to events that occurred during the Big Bang, but nonetheless, most adults can explain to a child why a bouncing ball eventually stops.
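To make the notion of causal chains and root causes concrete, here is a small sketch of our own (the event names are invented for illustration) that represents a chain as a map from each event to its direct causes and walks back to the root cause.

```python
# Illustrative sketch: a causal chain as a directed structure from cause to effect.
# Root causes are events with no incoming cause.
causes = {
    # effect: list of direct causes (an "opportunity chain": install -> flip -> light on)
    "light_on": ["switch_flipped"],
    "switch_flipped": ["switch_installed"],
    "switch_installed": [],
}

def root_causes(causes: dict) -> set:
    return {event for event, parents in causes.items() if not parents}

def causal_chain(event: str, causes: dict) -> list:
    """Walk back from an event to its root cause (assumes a single-parent chain)."""
    chain = [event]
    while causes.get(chain[-1]):
        chain.append(causes[chain[-1]][0])
    return list(reversed(chain))

print(root_causes(causes))               # {'switch_installed'}
print(causal_chain("light_on", causes))  # ['switch_installed', 'switch_flipped', 'light_on']
```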
Formal Models of Causation. While several formal models of causation have been proposed, such as those based on conditional logic [53, 98], the model of causation that
I believe would be of interest to many in artiï¬cial intelligence is the formalisation of causality by Halpern and Pearl [58]. This is a general model that should be accessible to anyone with a computer science background, has been adopted by philosophers and psychologists, and is accompanied by many additional results, such as an axiomatisation [57] and a series articles on complexity analysis [40, 41].
Halpern and Pearl [58] define a model-based approach using structural causal models over two sets of variables: exogenous variables, whose values are determined by factors external to the model, and endogenous variables, whose values are determined by relationships with other (exogenous or endogenous) variables. Each endogenous variable has a function that defines its value from other variables. A context is an assignment of values to variables. Intuitively, a context represents a 'possible world' of the model. A model/context pair is called a situation. Given this structure, Halpern and Pearl define an actual cause of an event X = x (that is, endogenous variable X receiving the value x) as a set of events E (each of the form Y = y) such that (informally) the following three criteria hold:
AC1 Both the event X = x and the cause E are true in the actual situation.
AC2 If there were some counterfactual values for the variables of the events in E, then the event X = x would not have occurred.
AC3 E is minimal â that is, there are no irrelevant events in the case.
A sufficient cause is simply a non-minimal actual cause; that is, it satisfies the first two items above.
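One way to read criteria AC1–AC3 computationally is as a counterfactual test over a structural model. The sketch below is our own simplification (it checks only a single-variable intervention and omits the contingency clause of the full Halpern–Pearl definition); the example model and variable names are invented.

```python
# Simplified sketch of the counterfactual test behind AC2 (not the full
# Halpern-Pearl definition, which also quantifies over contingencies).

def evaluate(structural_eqs, context):
    """Compute endogenous values from exogenous values (the context)."""
    values = dict(context)
    for var, fn in structural_eqs.items():
        values[var] = fn(values)
    return values

# Example model: the light turns on iff the switch is flipped and power is available.
structural_eqs = {
    "light": lambda v: v["switch"] and v["power"],
}
context = {"switch": True, "power": True}

def is_counterfactual_cause(var, value, outcome, structural_eqs, context):
    actual = evaluate(structural_eqs, context)
    if actual.get(var, context.get(var)) != value or not actual[outcome]:
        return False  # AC1 fails: the cause or the effect is not actual
    # AC2 (simplified): intervene with a different value and see if the outcome flips.
    intervened = dict(context)
    intervened[var] = not value  # binary variables, for simplicity
    return not evaluate(structural_eqs, intervened)[outcome]

print(is_counterfactual_cause("switch", True, "light", structural_eqs, context))  # True
print(is_counterfactual_cause("power", True, "light", structural_eqs, context))   # True
```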
We will return later to this model in Section 5.1.2 to discuss Halpern and Pearl's model of explanation.
2.1.2. Explanation
An explanation is an assignment of causal responsibility â Josephson and Josephson [81]
Explanation is both a process and a product, as noted by Lombrozo [104]. However, I argue that there are actually two processes in explanation, as well as the product:
1. Cognitive process â The process of abductive inference for âï¬lling the gapsâ [27] to determine an explanation for a given event, called the explanandum, in which the causes for the event are identiï¬ed, perhaps in relation to a particular counterfactual cases, and a subset of these causes is selected as the explanation (or explanans).
In social science, the process of identifying the causes of a particular phenomenon is known as attribution, and is seen as just part of the entire process of explanation.
2. Product â The explanation that results from the cognitive process is the product of the cognitive explanation process.
3. Social process â The process of transferring knowledge between explainer and explainee, generally an interaction between a group of people, in which the goal is that the explainee has enough information to understand the causes of the event; although other types of goal exists, as we discuss later.
Question | Reasoning | Description
What? | Associative | Reason about which unobserved events could have occurred given the observed events
How? | Interventionist | Simulate a change in the situation to see if the event still happens
Why? | Counterfactual | Simulate alternative causes to see whether the event still happens
Table 3: Classes of Explanatory Question and the Reasoning Required to Answer
But what constitutes an explanation? This question has created a lot of debate in philosophy, but accounts of explanation both philosophical and psychology stress the importance of causality in explanation â that is, an explanation refers to causes [159, 191, 107, 59]. There are, however, deï¬nitions of non-causal explanation [52], such as explaining âwhat happenedâ or explaining what was meant by a particular remark [187]. These deï¬nitions out of scope in this paper, and they present a diï¬erent set of challenges to explainable AI.
2.1.3. Explanation as a Product
We take the deï¬nition that an explanation is an answer to a whyâquestion [35, 138, 99, 102].
According to Bromberger [13], a why-question is a combination of a whetherâquestion, preceded by the word âwhyâ. A whether-question is an interrogative question whose correct answer is either âyesâ or ânoâ. The presupposition within a whyâquestion is the fact referred to in the question that is under explanation, expressed as if it were true (or false if the question is a negative sentence). For example, the question âwhy did they do that? â is a why-question, with the inner whether-question being âdid they do that? â, and the presupposition being âthey did thatâ. However, as we will see in Section 2.3, whyâquestions are structurally more complicated than this: they are contrastive.
However, other types of questions can be answered by explanations. In Table 3, I propose a simple model for explanatory questions based on Pearl and Mackenzieâs Ladder of Causation [141]. This model places explanatory questions into three classes: (1) whatâ questions, such as âWhat event happened? â; (2) how -questions, such as âHow did that event happen? â; and (3) whyâquestions, such as âWhy did that event happen? â. From the perspective of reasoning, whyâquestions are the most challenging, because they use the most sophisticated reasoning. What-questions ask for factual accounts, possibly using associative reasoning to determine, from the observed events, which unobserved events also happened. How questions are also factual, but require interventionist reasoning to determine the set of causes that, if removed, would prevent the event from happening. This may also require associative reasoning. We categorise what if âquestions has how â questions, as they are just a contrast case analysing what would happen under a diï¬erent situation. Whyâquestions are the most challenging, as they require counterfactual rea- soning to undo events and simulate other events that are not factual. This also requires associative and interventionist reasoning.
Dennett [36] argues that âwhyâ is ambiguous and that there are two diï¬erent senses of whyâquestion: how come? and what for?. The former asks for a process narrative, without an explanation of what it is for, while the latter asks for a reason, which implies some intentional thought behind the cause. Dennett gives the examples of âwhy are planets spherical?â and âwhy are ball bearings spherical?â. The former asks for an explanation based on physics and chemistry, and is thus a how-comeâquestion, because planets are not round for any reason. The latter asks for an explanation that gives the reason what the designer made ball bearings spherical for : a reason because people design them that way.
Given a whyâquestion, Overton [138] deï¬nes an explanation as a pair consisting of: (1) the explanans: which is the answer to the question; and (2) and the explanandum; which is the presupposition.
2.1.4. Explanation as Abductive Reasoning
As a cognitive process, explanation is closely related to abductive reasoning. Peirce [142] was the ï¬rst author to consider abduction as a distinct form of reasoning, separate from induction and deduction, but which, like induction, went from eï¬ect to cause. His work focused on the diï¬erence between accepting a hypothesis via scientiï¬c experiments (induction), and deriving a hypothesis to explain observed phenomenon (abduction). He deï¬nes the form of inference used in abduction as follows:
The surprising fact, C, is observed; But if A were true, C would be a matter of course, Hence, there is reason to suspect that A is true.
Clearly, this is an inference to explain the fact C from the hypothesis A, which is diï¬erent from deduction and induction. However, this does not account for compet- ing hypotheses. Josephson and Josephson [81] describe this more competitive-form of abduction as:
D is a collection of data (facts, observations, givens). H explains D (would, if true, explain D). No other hypothesis can explain D as well as H does. Therefore, H is probably true.
Harman [62] labels this process âinference to the best explanationâ. Thus, one can think of abductive reasoning as the following process: (1) observe some (presumably unexpected or surprising) events; (2) generate one or more hypothesis about these events; (3) judge the plausibility of the hypotheses; and (4) select the âbestâ hypothesis as the explanation [78].
Research in philosophy and cognitive science has argued that abductive reasoning is closely related to explanation. In particular, in trying to understand causes of events, people use abductive inference to determine what they consider to be the âbestâ expla- nation. Harman [62] is perhaps the ï¬rst to acknowledge this link, and more recently, experimental evaluations have demonstrated it [108, 188, 109, 154]. Popper [146] is perhaps the most inï¬uential proponent of abductive reasoning in the scientiï¬c process. He argued strongly for the scientiï¬c method to be based on empirical falsiï¬ability of hypotheses, rather than the classic inductivist view at the time.
Early philosophical work considered abduction as some magical process of intuition â something that could not be captured by formalised rules because it did not ï¬t the standard deductive model. However, this changed when artiï¬cial intelligence researchers began investigating abductive reasoning to explain observations, such as in diagnosis (e.g. medical diagnosis, fault diagnosis) [145, 156], intention/plan recognition [24], etc. The necessity to encode the process in a suitable computational form led to axiomatisations, with Pople [145] seeming to be the ï¬rst to do this, and characterisations of how to implement such axiomatisations; e.g. Levesque [97]. From here, the process of abduction as a principled process gained traction, and it is now widely accepted that abduction, induction, and deduction are diï¬erent modes of logical reasoning.
In this paper, abductive inference is not equated directly to explanation, because explanation also refers to the product and the social process; but abductive reasoning does fall into the category of cognitive process of explanation. In Section 4, we survey the cognitive science view of abductive reasoning, in particular, cognitive biases in hypothesis formation and evaluation.
2.1.5. Interpretability and Justification
Here, we brieï¬y address the distinction between interpretability, explainability, justi- ï¬cation, and explanation, as used in this article; and as they seem to be used in artiï¬cial intelligence.
Lipton [103] provides a taxonomy of the desiderata and methods for interpretable AI. This paper adopts Liptonâs assertion that explanation is post-hoc interpretability. I use Biran and Cotton [9]âs deï¬nition of interpretability of a model as: the degree to which an observer can understand the cause of a decision. Explanation is thus one mode in which an observer may obtain understanding, but clearly, there are additional modes that one can adopt, such as making decisions that are inherently easier to understand or via introspection. I equate âinterpretabilityâ with âexplainabilityâ.
A justiï¬cation explains why a decision is good, but does not necessarily aim to give an explanation of the actual decision-making process [9].
It is important to understand the similarities and diï¬erences between these terms as one reads this article, because some related research discussed is relevant to explanation only, in particular, Section 5, which discusses how people present explanations to one another; while other sections, in particular Sections 3 and 4 discuss how people generate and evaluate explanations, and explain behaviour of others, so are broader and can be used to create more explainable agents.
2.2. Why People Ask for Explanations
There are many reasons that people may ask for explanations. Curiosity is one primary criterion that humans use, but other pragmatic reasons include examination â for example, a teacher asking her students for an explanation on an exam for the purposes of testing the studentsâ knowledge on a particular topic; and scientiï¬c explanation â asking why we observe a particular environmental phenomenon.
In this paper, we are interested in explanation in AI, and thus our focus is on how intelligent agents can explain their decisions. As such, this section is primarily concerned with why people ask for âeverydayâ explanations of why speciï¬c events occur, rather than explanations for general scientiï¬c phenomena, although this work is still relevant in many cases.
It is clear that the primary function of explanation is to facilitate learning [104, 189]. Via learning, we obtain better models of how particular events or properties come about, and we are able to use these models to our advantage. Heider [66] states that people look for explanations to improve their understanding of someone or something so that they can derive stable model that can be used for prediction and control. This hypothesis is backed up by research suggesting that people tend to ask questions about events or observations that they consider abnormal or unexpected from their own point of view [77, 73, 69].
Lombrozo [104] argues that explanations have a role in inference learning precisely because they are explanations, not necessarily just due to the causal information they reveal. First, explanations provide somewhat of a âï¬lterâ on the causal beliefs of an event. Second, prior knowledge is changed by giving explanations; that is, by asking someone to provide an explanation as to whether a particular property is true or false, the explainer changes their perceived likelihood of the claim. Third, explanations that oï¬er fewer causes and explanations that explain multiple observations are considered more believable and more valuable; but this does not hold for causal statements. Wilkenfeld and Lombrozo [188] go further and show that engaging in explanation but failing to arrive at a correct explanation can improve ones understanding. They describe this as âexplaining for the best inferenceâ, as opposed to the typical model of explanation as âinference to the best explanationâ.
Malle [112, Chapter 3], who gives perhaps the most complete discussion of everyday explanations in the context of explaining social action/interaction, argues that people ask for explanations for two reasons:
1. To ï¬nd meaning: to reconcile the contradictions or inconsistencies between ele- ments of our knowledge structures.
2. To manage social interaction: to create a shared meaning of something, and to change othersâ beliefs & impressions, their emotions, or to inï¬uence their actions.
Creating a shared meaning is important for explanation in AI. In many cases, an explanation provided by an intelligent agent will be precisely to do this â to create a shared understanding of the decision that was made between itself and a human observer, at least to some partial level.
Lombrozo [104] and Wilkenfeld and Lombrozo [188] note that explanations have sev- eral functions other than the transfer of knowledge, such as persuasion, learning, or assignment of blame; and that in some cases of social explanation, the goals of the ex- plainer and explainee may be diï¬erent. With respect to explanation in AI, persuasion is surely of interest: if the goal of an explanation from an intelligent agent is to generate trust from a human observer, then persuasion that a decision is the correct one could in some case be considered more important than actually transferring the true cause. For example, it may be better to give a less likely explanation that is more convincing to the explainee if we want them to act in some positive way. In this case, the goals of the explainer (to generate trust) is diï¬erent to that of the explainee (to understand a decision).
2.3. Contrastive Explanation
The key insight is to recognise that one does not explain events per se, but that one explains why the puzzling event occurred in the target cases but not in some counterfactual contrast case. â Hilton [72, p. 67]
I will dedicate a subsection to discuss one of the most important ï¬ndings in the philosophical and cognitive science literature from the perspective of explainable AI: contrastive explanation. Research shows that people do not explain the causes for an event per se, but explain the cause of an event relative to some other event that did not occur; that is, an explanation is always of the form âWhy P rather than Q? â, in which P is the target event and Q is a counterfactual contrast case that did not occur, even if the Q is implicit in the question. This is called contrastive explanation.
Some authors refer to Q as the counterfactual case [108, 69, 77]. However, it is impor- tant to note that this is not the same counterfactual that one refers to when determining causality (see Section 2.1.1). For causality, the counterfactuals are hypothetical ânon- causesâ in which the event-to-be-explained does not occur â that is a counterfactual to cause C â, whereas in contrastive explanation, the counterfactuals are hypothetical outcomes â that is, a counterfactual to event E [127].
Lipton [102] refers to the two cases, P and Q, as the fact and the foil respectively; the fact being the event that did occur, and the foil being the event that did not. To avoid confusion, throughout the remainder of this paper, we will adopt this terminology and use counterfactual to refer to the hypothetical case in which the cause C did not occur, and foil to refer to the hypothesised case Q that was expected rather than P .
Most authors in this area argue that all whyâquestions ask for contrastive explana- tions, even if the foils are not made explicit [102, 77, 69, 72, 110, 108], and that people are good at inferring the foil; e.g. from language and tone. For example, given the ques- tion, âWhy did Elizabeth open the door? â, there are many, possibly an inï¬nite number, of foils; e.g. âWhy did Elizabeth open the door, rather than leave it closed? â, âWhy did Elizabeth open the door rather than the window?â, or âWhy did Elizabeth open the door rather than Michael opening it? â. These diï¬erent contrasts have diï¬erent explanations, and there is no inherent one that is certain to be the foil for this question. The negated presupposition not(Elizabeth opens the door) refers to an entire class of foils, including all those listed already. Lipton [102] notes that âcentral requirement for a sensible con- trastive question is that the fact and the foil have a largely similar history, against which the diï¬erences stand out. When the histories are disparate, we do not know where to begin to answer the question.â This implies that people could use the similarity of the history of facts and possible foils to determine what the explaineeâs foil truly is.
It is important that the explainee understands the counterfactual case [69]. For example, given the question âWhy did Elizabeth open the door? â, the answer âBecause she was hotâ is a good answer if the foil is Elizabeth leaving the door closed, but not a good answer if the foil is ârather than turning on the air conditioningâ, because the fact that Elizabeth is hot explains both the fact and the foil.
The idea of contrastive explanation should not be controversial if we accept the argu- ment outlined in Section 2.2 that people ask for explanations about events or observations that they consider abnormal or unexpected from their own point of view [77, 73, 69]. In such cases, people expect to observe a particular event, but then observe another, with the observed event being the fact and the expected event being the foil.
Van Bouwel and Weber [175] deï¬ne four types of explanatory question, three of which are contrastive:
Plain fact: Why does object a have property P?
P-contrast: Why does object a have property P, rather than property Q?
O-contrast: Why does object a have property P, while object b has property Q?
T-contrast: Why does object a have property P at time t, but property Q at time t′?
Van Bouwel and Weber note that diï¬erences occur on properties within an object (P-contrast), between objects themselves (O-contrast), and within an object over time (T-contrast). They reject the idea that all âplain factâ questions have an implicit foil, proposing that plain-fact questions require showing details across a ânon-interruptedâ causal chain across time. They argue that plain-fact questions are typically asked due to curiosity, such as desiring to know how certain facts ï¬t into the world, while contrastive questions are typically asked when unexpected events are observed.
Lipton [102] argues that contrastive explanations between a fact P and a foil Q are, in general, easier to derive than âcompleteâ explanations for plain-fact questions about P . For example, consider the arthropod classiï¬cation algorithm in Section 1.4. To be a beetle, an arthropod must have six legs, but this does not cause an arthropod to be a beetle â other causes are necessary. Lipton contends that we could answer the P-contrast question such as âWhy is image J labelled as a Beetle instead of a Spider?â by citing the fact that the arthropod in the image has six legs. We do not need information about eyes, wings, or stingers to answer this, whereas to explain why image J is a spider in a non-contrastive way, we must cite all causes.
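Computationally, Lipton's observation can be read as follows: answering the P-contrast question only requires features that discriminate the fact from the foil. The sketch below is our own illustration over the hypothetical feature table from Section 1.4 (it is not part of any system described in the paper).

```python
# Illustrative sketch: a contrastive explanation as the set of features that
# differ between the fact class and the foil class (hypothetical feature table).
CLASS_FEATURES = {
    "Spider": {"legs": 8, "stinger": False, "eyes": 8, "compound_eyes": False, "wings": 0},
    "Beetle": {"legs": 6, "stinger": False, "eyes": 2, "compound_eyes": True, "wings": 2},
}

def contrastive_explanation(fact: str, foil: str) -> dict:
    """Return the features that distinguish the fact class from the foil class."""
    fact_f, foil_f = CLASS_FEATURES[fact], CLASS_FEATURES[foil]
    return {k: (fact_f[k], foil_f[k]) for k in fact_f if fact_f[k] != foil_f[k]}

# "Why is image J labelled as a Beetle instead of a Spider?"
print(contrastive_explanation("Beetle", "Spider"))
# -> {'legs': (6, 8), 'eyes': (2, 8), 'compound_eyes': (True, False), 'wings': (2, 0)}
```

A fuller treatment would then select just one or two of these differences to present; Section 4 reviews how people make that selection.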
The hypothesis that all causal explanations are contrastive is not merely philosophical. In Section 4, we see several bodies of work supporting this, and these provide more detail as to how people select and evaluate explanations based on the contrast between fact and foil.
2.4. Types and Levels of Explanation
The type of explanation provided to a question is dependent on the particular ques- tion asked; for example, asking why some event occurred is diï¬erent to asking under what circumstances it could have occurred; that is, the actual vs. the hypothetical [159]. However, for the purposes of answering whyâquestions, we will focus on a particular subset of philosophical work in this area.
Aristotleâs Four Causes model, also known as the Modes of Explanation model, con- tinues to be foundational for cause and explanation. Aristotle proposed an analytic scheme, classed into four diï¬erent elements, that can be used to provide answers to whyâquestions [60]:
1. Material : The substance or material of which something is made. For example, rubber is a material cause for a car tyre.
2. Formal : The form or properties of something that make it what it is. For example, being round is a formal cause of a car tyre. These are sometimes referred to as categorical explanations.
3. Efficient: The proximal mechanisms that cause something to change. For example, a tyre manufacturer is an efficient cause for a car tyre. These are sometimes referred to as mechanistic explanations.
4. Final: The end or goal of something. Moving a vehicle is a final cause of a car tyre. These are sometimes referred to as functional or teleological explanations.
A single whyâquestion can have explanations from any of these categories. For ex- ample, consider the question: âWhy does this pen contain ink? â. A material explanation is based on the idea that the pen is made of a substance that prevents the ink from leaking out. A formal explanation is that it is a pen and pens contain ink. An eï¬cient explanation is that there was a person who ï¬lled it with ink. A ï¬nal explanation is that pens are for writing, and so require ink.
Several other authors have proposed models similar to Aristotleâs, such as Dennett [35], who proposed that people take three stances towards objects: physical, design, and intention; and Marr [119], building on earlier work with Poggio [120], who deï¬ne the computational, representational, and hardware levels of understanding for computational problems.
Kass and Leake [85] deï¬ne a categorisation of explanations of anomalies into three types: (1) intentional ; (2) material ; and (3) social. The intentional and material cate- gories correspond roughly to Aristotleâs ï¬nal and material categories, however, the social category does not correspond to any particular category in the models of Aristotle, Marr [119], or Dennett [35]. The social category refers to explanations about human behaviour that is not intentionally driven. Kass and Leake give the example of an increase in crime rate in a city, which, while due to intentional behaviour of individuals in that city, is not a phenomenon that can be said to be intentional. While individual crimes are committed with intent, it cannot be said that the individuals had the intent of increasing the crime rate â that is merely an eï¬ect of the behaviour of a group of individuals.
# 2.5. Structure of Explanation
As we saw in Section 2.1.2, causation is a major part of explanation. Earlier accounts of explanation from Hempel and Oppenheim [68] argued for logically deductive models of explanation. Kelley [86] subsequently argued instead that people consider co-variation in constructing explanations, and proposed a statistical model of explanation. However, while inï¬uential, subsequent experimental research uncovered many problems with these models, and currently, both the deductive and statistical models of explanation are no longer considered valid theories of everyday explanation in most camps [114].
Overton [140, 139] defines a model of scientific explanation. In particular, Overton [139] defines the structure of explanations. He defines five categories of properties or objects that are explained in science: (1) theories: sets of principles that form building blocks for models; (2) models: an abstraction of a theory that represents the relationships between kinds and their attributes; (3) kinds: an abstract universal class that supports counterfactual reasoning; (4) entities: an instantiation of a kind; and (5) data: statements about activities (e.g. measurements, observations). The relationships between these are shown in Figure 3.
From these categories, Overton [139] provides a crisp definition of the structure of scientific explanations. He argues that explanations of phenomena at one level must be relative to and refer to at least one other level, and that explanations between two such levels must refer to all intermediate levels. For example, an arthropod (Entity) has eight legs (Data). Entities of this Kind are spiders, according to the Model of our Theory of arthropods. In this example, the explanation is constructed by appealing to the Model of arthropods, which, in turn, appeals to a particular Theory that underlies that Model.

Figure 3: Overton's five categories (Theories, Models, Kinds, Entities, Data) and four relations (justifies, models, instantiated by, measured by) in scientific explanation, reproduced from Overton [139, p. 54, Figure 3.1].

Figure 4 shows the structure of a theory-data explanation, which is the most complex because it has the longest chain of relationships between any two levels.
Figure 4: Overton's general structure of a theory-data explanation, linking a quality of the theory to a quality of the data through the intermediate model, kind, and entity levels; reproduced from Overton [139, p. 54, Figure 3.2].
With respect to social explanation, Malle [112] argues that social explanation is best understood as consisting of three layers:
1. Layer 1: A conceptual framework that outlines the assumptions people make about human behaviour and explanation.
2. Layer 2: The psychological processes that are used to construct explanations.
3. Layer 3: Language layer that speciï¬es the type of linguistic structures people use in giving explanations.
I will present Malle's views of these three layers in more detail in the sections on social attribution (Section 3), cognitive processes (Section 4), and social explanation (Section 5). This work is collated into Malle's 2004 book [112].
2.6. Explanation and XAI
This section presents some ideas on how the philosophical work outlined above affects researchers and practitioners in XAI.
2.6.1. Causal Attribution is Not Causal Explanation
An important concept is the relationship between cause attribution and explanation. Extracting a causal chain and displaying it to a person is causal attribution, not (necessarily) an explanation. While a person could use such a causal chain to obtain their own explanation, I argue that this does not constitute giving an explanation. In particular, for most AI models, it is not reasonable to expect a lay-user to be able to interpret a causal chain, no matter how it is presented. Much of the existing work in explainable AI literature is on the causal attribution part of explanation, something that, in many cases, is the easiest part of the problem because the causes are well understood, formalised, and accessible by the underlying models. In later sections, we will see more on the difference between attribution and explanation, why existing work in causal attribution is only part of the problem of explanation, and insights of how this work can be extended to produce more intuitive explanations.
# 2.6.2. Contrastive Explanation
Perhaps the most important point in this entire section is that explanation is contrastive (Section 2.3). Research indicates that people request only contrastive explanations, and that the cognitive burden of complete explanations is too great.
It could be argued that because models in AI operate at a level of abstraction that is considerably higher than real-world events, the causal chains are often smaller and less cognitively demanding, especially if they can be visualised. Even if one agrees with this, this argument misses a key point: it is not only the size of the causal chain that is important; people seem to be cognitively wired to process contrastive explanations, so one can argue that a layperson will find contrastive explanations more intuitive and more valuable.
This is both a challenge and an opportunity in AI. It is a challenge because often a person may just ask "Why X?", leaving their foil implicit. Eliciting a contrast case from a human observer may be difficult or even infeasible. Lipton [102] states that the obvious solution is that a non-contrastive question "Why P?" can be interpreted by default to "Why P rather than not-P?". However, he then goes on to show that to answer "Why P rather than not-P?" is equivalent to providing all causes for P, something that is not so useful. As such, the challenge is that the foil needs to be determined. In some
applications, the foil could be elicited from the human observer, however, in others, this may not be possible, and therefore, foils may have to be inferred. As noted later in Section 4.6.3, concepts such as abnormality could be used to infer likely foils, but techniques for HCI, such as eye gaze [164] and gestures could be used to infer foils in some applications.
It is an opportunity because, as Lipton [102] argues, explaining a contrastive question is often easier than giving a full causal attribution, because one only needs to understand what is different between the two cases, so one can provide a complete explanation without determining or even knowing all of the causes of the fact in question. This holds for computational explanation as well as human explanation. Further, it can be beneficial in a more pragmatic way: if a person provides a foil, they are implicitly pointing towards the part of the model they do not understand. In Section 4.4, we will see research that outlines how people use contrasts to select explanations that are much simpler than their full counterparts.
Several authors within artificial intelligence flag the importance of contrastive questions. Lim and Dey [100] found via a series of user studies on context-aware applications that "Why not ...?" questions were common questions that people asked. Further, several authors have looked to answer contrastive questions. For example, Winikoff [190] considers the questions of "Why don't you believe ...?" and "Why didn't you do ...?" for BDI programs, and Fox et al. [46] pose similar questions in planning, such as "Why didn't you do something else (that I would have done)?". However, most existing work considers contrastive questions, but not contrastive explanations; that is, finding the differences between the two cases. Providing two complete explanations does not take advantage of contrastive questions. Section 4.4.1 shows that people use the difference between the fact and foil to focus explanations on the causes relevant to the question, which makes the explanations more relevant to the explainee.
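To make this concrete, the sketch below shows one way an explanatory agent could answer a contrastive question by reporting only the conditions that separate the fact class from the foil class. It is a minimal illustration, not a method from the surveyed literature; the rule-based class definitions and feature names are invented for the example.

```python
# A minimal sketch of turning the contrastive question
# "Why class FACT rather than class FOIL?" into a contrastive explanation
# by reporting only the conditions that differ between the two classes.
# The class definitions and feature names below are purely illustrative.

FACT_CONDITIONS = {"legs": 6, "wings": 2, "stinger": False}   # e.g. "beetle"
FOIL_CONDITIONS = {"legs": 8, "wings": 0, "stinger": False}   # e.g. "spider"

def contrastive_explanation(instance, fact_conditions, foil_conditions):
    """Return the features of `instance` that satisfy the fact class but
    rule out the foil class; all other causes are left unmentioned."""
    differences = []
    for feature, fact_value in fact_conditions.items():
        foil_value = foil_conditions.get(feature)
        if fact_value != foil_value and instance.get(feature) == fact_value:
            differences.append(
                f"{feature} = {instance[feature]} (the foil requires {foil_value})"
            )
    return differences

image_j = {"legs": 6, "wings": 2, "stinger": False}
print(contrastive_explanation(image_j, FACT_CONDITIONS, FOIL_CONDITIONS))
# -> ['legs = 6 (the foil requires 8)', 'wings = 2 (the foil requires 0)']
```

Note how the shared condition (no stinger) is never mentioned: only the discriminating features form the explanation, mirroring the argument above that contrastive explanations are smaller than complete ones.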
2.6.3. Explanatory Tasks and Levels of Explanation
Researchers and practitioners in explainable AI should understand and adopt a model of "levels of explanation", either one of those outlined above, or some other sensible model. The reason is clear: the answer that is provided to the why-question is strongly linked to the level at which the question is posed.
To illustrate, let's take a couple of examples and apply them to Aristotle's modes of explanation model outlined in Section 2.4. Consider our earlier arthropod classification algorithm from Section 1.4. At first glance, it may seem that such an algorithm resides at the formal level, so should offer explanations based on form. However, this would be erroneous, because the given categorisation algorithm has efficient/mechanistic components, a reason for being implemented/executed (the final mode), and is implemented on hardware (the material mode). As such, there are explanations for its behaviour at all levels. Perhaps most why-questions proposed by human observers about such an algorithm would indeed be at the formal level, such as "Why is image J in group A instead of group B?", for which an answer could refer to the particular form of the image and the groups A and B. However, in our idealised dialogue, the question "Why did you infer that the insect in image J had eight legs instead of six?" asks a question about the underlying algorithm for counting legs, so the cause is at the efficient level; that is, it does not ask for what constitutes a spider in our model, but from where the inputs for that model came. Further, the final question about classifying the spider as an octopus
refers to the final level, referring to the algorithm's function or goal. Thus, causes in this algorithm occur at all four layers: (1) the material causes are at the hardware level to derive certain calculations; (2) the formal causes determine the classification itself; (3) the efficient causes determine such concepts as how features are detected; and (4) final causes determine why the algorithm was executed, or perhaps implemented at all.
As a second example, consider an algorithm for planning a robotic search and rescue mission after a disaster. In planning, programs are dynamically constructed, so different modes of cause/explanation are of interest compared to a classification algorithm. Causes still occur at the four levels: (1) the material level as before describes the hardware computation; (2) the formal level describes the underlying model passed to the planning tool; (3) the mechanistic level describes the particular planning algorithm employed; and (4) the final level describes the particular goal or intention of a plan. In such a system, the robot would likely have several goals to achieve; e.g. searching, taking pictures, supplying first-aid packages, returning to re-fuel, etc. As such, why-questions described at the final level (e.g. its goals) may be more common than in the classification algorithm example. However, questions related to the model are relevant, or why particular actions were taken rather than others, which may depend on the particular optimisation criteria used (e.g. cost vs. time), and these require efficient/mechanistic explanations.
However, I am not arguing that we, as practitioners, must have explanatory agents capable of giving explanations at all of these levels. I argue that these frameworks are useful for analysing the types of questions explanatory agents may receive. In Sections 3 and 4, we will see work that demonstrates that for explanations at these different levels, people expect different types of explanation. Thus, it is important to understand which types of questions refer to which levels in particular instances of technology, that different levels will be more useful/likely than others, and that, in research articles on interpretability, it is clear at which level we are aiming to provide explanations.
2.6.4. Explanatory Model of Self
The work outlined in this section demonstrates that an intelligent agent must be able to reason about its own causal model. Consider our image classification example. When posed with the question "Why is image J in group A instead of group B?", it is non-trivial, in my view, to attribute the cause by using the algorithm that generated the answer. A cleaner solution would be to have a more abstract symbolic model alongside this that records information such as when certain properties are detected and when certain categorisations are made, which can be reasoned over. In other words, the agent requires a model of its own decision making, a model of self, that exists merely for the purpose of explanation. This model may be only an approximation of the original model, but more suitable for explanation.
This idea is not new in XAI. In particular, researchers have investigated machine learning models that are uninterpretable, such as neural nets, and have attempted to extract model approximations using more interpretable model types, such as Bayesian networks [63], decision trees [47], or local approximations [157]. However, my argument here is not only for the purpose of interpretability. Even models considered interpretable, such as decision trees, could be accompanied by another model that is specifically used for explanation. For example, to explain control policies, Hayes and Shah [65] select and annotate particular important state variables and actions that are relevant for explanation only. Langley et al. note that "An agent must represent content in a way that supports the explanations" [93, p. 2].
Thus, to generate meaningful and useful explanations of behaviour, models based on our understanding of explanation must sit alongside and work with the decision-making mechanisms.
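As a simple illustration of such an explanatory model of self, the following sketch fits a shallow decision-tree surrogate to the predictions of a black-box classifier and keeps it alongside the original model purely for answering why-questions. It assumes scikit-learn and a standard dataset, and it is a generic surrogate-model sketch rather than the specific approach of any work cited above.

```python
# A minimal sketch of a separate "explanatory model of self": a black-box
# classifier is approximated by an interpretable surrogate (a shallow decision
# tree fitted to the black box's own predictions), which is kept alongside the
# original model purely for answering why-questions.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
X, feature_names = data.data, list(data.feature_names)

# The decision-making model (treated here as a black box).
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, data.target)

# The explanatory model: trained on the black box's outputs, not the true labels,
# so that it approximates how the deployed model actually behaves.
surrogate = DecisionTreeClassifier(max_depth=2, random_state=0)
surrogate.fit(X, black_box.predict(X))

print(export_text(surrogate, feature_names=feature_names))
```

The point of the sketch is the separation of concerns: the surrogate need not be faithful enough to make decisions, only faithful enough to support the explanations the agent gives about them.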
2.6.5. Structure of Explanation
Related to the "model of self" is the structure of explanation. Overton's model of scientific explanation [139] defines what I believe to be a solid foundation for the structure of explanation in AI. To provide an explanation along the chain outlined in Figure 4, one would need an explicit explanatory model (Section 2.6.4) of each of these different categories for the given system.
For example, the question from our dialogue in Section 1.4, "How do you know that spiders have eight legs?", is a question referring not to the causal attribution in the classification algorithm itself, but is asking: "How do you know this?", and thus is referring to how this was learnt, which, in this example, was learnt via another algorithm. Such an approach requires an additional part of the "model of self" that refers specifically to the learning, not the classification.
Overton's model [139] or one similar to it seems necessary for researchers and practitioners in explainable AI to frame their thoughts and communicate their ideas.
# 3. Social Attribution - How Do People Explain Behaviour?
Just as the contents of the nonsocial environment are interrelated by certain lawful connections, causal or otherwise, which define what can or will happen, we assume that there are connections of similar character between the contents of the social environment. -- Heider [66, Chapter 2, pg. 21]
In this section, we outline work on social attribution, which defines how people attribute and (partly) explain the behaviour of others. Such work is clearly relevant in many areas of artificial intelligence. However, research on social attribution laid the groundwork for much of the work outlined in Section 4, which looks at how people generate and evaluate events more generally. For a more detailed survey on this, see McClure [122] and Hilton [70].
3.1. Definitions
Social attribution is about perception. While the causes of behaviour can be described at a neurophysical level, and perhaps even lower levels, social attribution is concerned not with the real causes of human behaviour, but with how people attribute or explain the behaviour of others. Heider [66] defines social attribution as person perception.
Intentions and intentionality are key to the work of Heider [66], and much of the recent work that has followed his; for example, Dennett [35], Malle [112], McClure [122], Boonzaier et al. [10], Kashima et al. [84]. An intention is a mental state of a person in which they form a commitment to carrying out some particular action or achieving some particular aim. Malle and Knobe [115] note that intentional behaviour therefore is always contrasted with unintentional behaviour, citing that laws of state, rules in sport, etc. all treat intentional actions differently from unintentional actions because intentional
rule breaking is punished more harshly than unintentional rule breaking. They note that, while intentionality can be considered an objective fact, it is also a social construct, in that people ascribe intentions to each other whether that intention is objective or not, and use these to socially interact.
Folk psychology, or commonsense psychology, is the attribution of human behaviour using "everyday" terms such as beliefs, desires, intentions, emotions, and personality traits. This field of cognitive and social psychology recognises that, while such concepts may not truly cause human behaviour, these are the concepts that humans use to model and predict each others' behaviours [112]. In other words, folk psychology does not describe how we think; it describes how we think we think.
In the folk psychological model, actions consist of three parts: (1) the precondition of the action, that is, the circumstances under which it can be successfully executed, such as the capabilities of the actor or the constraints in the environment; (2) the action itself that can be undertaken; and (3) the effects of the action, that is, the changes that they bring about, either environmentally or socially.
Actions that are undertaken are typically explained by goals or intentions. In much of the work in social science, goals are equated with intentions. For our discussions, we define goals as being the end to which a means contributes, while we define intentions as short-term goals that are adopted to achieve the end goals. The intentions have no utility themselves except to achieve positive utility goals. A proximal intention is a near-term intention that helps to achieve some further distal intention or goal. In the survey of existing literature, we will use the term used by the original authors, to ensure that they are interpreted as the authors expected.
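The folk-psychological structure described above (preconditions, actions, effects, and proximal intentions serving distal goals) maps naturally onto simple data structures. The sketch below is one possible, illustrative encoding; all names and the example action are invented.

```python
# A minimal sketch of the folk-psychological action structure described above:
# an action with preconditions and effects, explained by a proximal intention
# that is adopted in service of a distal goal. All names are illustrative.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    preconditions: list          # circumstances required for successful execution
    effects: list                # changes the action brings about

@dataclass
class Intention:
    description: str             # a proximal, short-term aim
    serves_goal: str             # the distal goal (the end) it contributes to

@dataclass
class ExplainedAction:
    action: Action
    intention: Intention

open_door = Action(
    name="open the door",
    preconditions=["agent is next to the door", "door is unlocked"],
    effects=["door is open"],
)
explained = ExplainedAction(
    action=open_door,
    intention=Intention(description="enter the room", serves_goal="retrieve the package"),
)
print(f"{explained.action.name}: in order to {explained.intention.description}, "
      f"so as to {explained.intention.serves_goal}")
```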
# 3.2. Intentionality and Explanation
Heider [66] was the first person to experimentally try to identify how people attribute behaviour to others. In their now famous experiment from 1944, Heider and Simmel [67] showed a video containing animated shapes (a small triangle, a large triangle, and a small circle) moving around a screen3, and asked experiment participants to observe the video and then describe the behaviour of the shapes. Figure 5 shows a captured screenshot from this video in which the circle is opening a door to enter into a room. The participants' responses described the behaviour anthropomorphically, assigning actions, intentions, emotions, and personality traits to the shapes. However, this experiment was not one on animation, but in social psychology. The aim of the experiment was to demonstrate that people characterise deliberative behaviour using folk psychology.
Heider [66] argued then that the difference between object perception (describing causal behaviour of objects) and person perception was the intentions, or motives, of the person. He noted that behaviour in a social situation can have two types of causes: (1) personal (or dispositional) causality; and (2) impersonal causality, which can subsequently be influenced by situational factors, such as the environment. This interpretation led to many researchers reflecting on the person-situation distinction and, in Malle's view [114], incorrectly interpreting Heider's work for decades.
Heider [66] contends that the key distinction between intentional action and non-intentional events is that intentional action demonstrates equifinality, which states that while the means to realise an intention may vary, the intention itself remains equifinal. Thus, if an actor should fail to achieve their intention, they will try other ways to achieve this intention, which differs from physical causality. Lombrozo [107] provides the example of Romeo and Juliet, noting that had a wall been placed between them, Romeo would have scaled the wall or knocked it down to reach his goal of seeing Juliet. However, iron filings trying to get to a magnet would not display such equifinality; they would instead be simply blocked by the wall. Subsequent research confirms this distinction [35, 112, 122, 10, 84, 108].

3 See the video here: https://www.youtube.com/watch?v=VTNmLt7QX8E.

Figure 5: A screenshot of the video used in Heider and Simmel's seminal study [67].
Malle and Pearce [118] break the actions that people will explain into two dimensions: (1) intentional vs. unintentional; and (2) observable vs. unobservable; thus creating four different classifications (see Figure 6).
Figure 6: Malle's classification of types of events, based on the dimensions of intentionality and observability [112, Chapter 3]: intentional and observable events are actions; intentional and unobservable events are intentional thoughts; unintentional and observable events are mere behaviours; and unintentional and unobservable events are experiences.
Malle and Pearce [118] performed experiments to confirm this model. As part of these experiments, participants were placed into a room with another participant, and were left for 10 minutes to converse with each other to "get to know one another", while their conversation was recorded. Malle and Pearce coded participants' responses to questions with regards to observability and intentionality. Their results show that actors tend to explain unobservable events more than observable events, which Malle and Pearce argue is because the actors are more aware of their own beliefs, desires, feelings, etc., than of their observable behaviours, such as facial expressions, gestures, postures, etc. On the other hand, observers do the opposite for the inverse reason. Further, they showed that actors tend to explain unintentional behaviour more than intentional behaviour, again because (they believe) they are aware of their intentions, but not their "unplanned" unintentional behaviour. Observers tend to find both intentional and unintentional behaviour difficult to explain, but will tend to find intentional behaviour more relevant. Such a model accounts for the correspondence bias noted by Gilbert and Malone [51], which is the tendency for people to explain others' behaviours based on traits rather than situational factors, because the situational factors (beliefs, desires) are invisible.
3.3. Beliefs, Desires, Intentions, and Traits
Further to intentions, research suggests that other factors are important in the attribution of social behaviour; in particular, beliefs, desires, and traits.
Kashima et al. [84] demonstrated that people use the folk psychological notions of belief, desire, and intention to understand, predict, and explain human action. In particular, they demonstrated that desires hold preference over beliefs, with beliefs being not explained if they are clear from the viewpoint of the explainee. They showed that people judge that explanations and behaviour "do not make sense" when beliefs, desires, and intentions were inconsistent with each other. This early piece of work is one of the first to re-establish Heider's theory of intentional behaviour in attribution [66].
However, it is the extensive body of work from Malle [111, 112, 113] that is the most seminal in this space.
3.3.1. Malle's Conceptual Model for Social Attribution
Malle [112] proposes a model based on Theory of Mind, arguing that people attribute behaviour of others and themselves by assigning particular mental states that explain the behaviour. He offers six postulates (and sub-postulates) for the foundation of people's folk explanation of behaviour, modelled in the scheme in Figure 7. He argues that these six postulates represent the assumptions and distinctions that people make when attributing behaviour to themselves and others:
Figure 7: Malle's conceptual framework for behaviour explanation; reproduced from Malle [113, p. 87, Figure 3.3], adapted from Malle [112, p. 119, Figure 5.1]. The explainer first determines the intentionality of the behaviour: if unintentional, a cause is offered; if intentional, the explainer offers a reason (a marked or unmarked belief or desire), a causal history of reason (CHR), or an enabling factor (EF).
1. People distinguish between intentional and unintentional behaviour.
2. For intentional behaviour, people use three modes of explanation based on the speciï¬c circumstances of the action:
(a) Reason explanations are those explanations that link to the mental states (typically desires and beliefs, but also values) for the act, and the grounds on which they formed an intention.
(b) Causal History of Reason (CHR) explanations are those explanations that use factors that "lay in the background" of an agent's reasons (note, not the background of the action), but are not themselves reasons. Such factors can include unconscious motives, emotions, culture, personality, and the context. CHR explanations refer to causal factors that lead to reasons. CHR explanations do not presuppose either subjectivity or rationality. This has three implications. First, they do not require the explainer to take the perspective of the explainee. Second, they can portray the actor as less rational, by not offering a rational and intentional reason for the behaviour. Third, they allow the use of unconscious motives that the actor themselves would typically not use. Thus, CHR explanations can make the agent look less rational and in control than reason explanations.
(c) Enabling factor (EF) explanations are those explanations that explain not the intention of the actor, but instead explain how the intentional action achieved the outcome that it did. Thus, it assumes that the agent had an intention, and then refers to the factors that enabled the agent to successfully carry out the action, such as personal abilities or environmental properties. In essence, it relates to why preconditions of actions were enabled.
3. For unintentional behaviour, people offer just causes, such as physical, mechanistic, or habitual cases.
At the core of Malleâs framework is the intentionality of an act. For a behaviour to be considered intentional, the behaviour must be based on some desire, and a belief that the behaviour can be undertaken (both from a personal and situational perspective) and can achieve the desire. This forms the intention. If the agent has the ability and the awareness that they are performing the action, then the action is intentional.
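For an explanatory agent, this decision structure can be read as a simple selection procedure. The following sketch is my own schematic rendering of that structure, not an implementation of Malle's model; the conditions used to choose between reason, CHR, and EF explanations are illustrative placeholders.

```python
# A schematic rendering (for illustration only) of the decision structure in
# Malle's framework: unintentional behaviour receives a cause explanation;
# intentional behaviour receives a reason, causal-history-of-reason (CHR), or
# enabling-factor (EF) explanation, depending on what is being asked and what
# information the explainer has access to.

def choose_explanation_mode(intentional, explaining_outcome_success=False,
                            reasons_known=True):
    if not intentional:
        return "cause"                     # physical, mechanistic, or habitual cause
    if explaining_outcome_success:
        return "enabling factor"           # why the intentional action succeeded
    if reasons_known:
        return "reason"                    # the beliefs/desires/values behind the intention
    return "causal history of reason"      # background factors behind those reasons

print(choose_explanation_mode(intentional=False))                                  # cause
print(choose_explanation_mode(intentional=True))                                   # reason
print(choose_explanation_mode(intentional=True, reasons_known=False))              # CHR
print(choose_explanation_mode(intentional=True, explaining_outcome_success=True))  # EF
```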
Linguistically, people make a distinction between causes and reasons; for example, consider "What were her reasons for choosing that book?", vs. "What were his causes for falling over?". The use of "his causes" implies that the cause does not belong to the actor, but the reason does.
To give a reason explanation is to attribute intentionality to the action, and to identify the desires, beliefs, and valuings in light of which (subjectivity assumption) and on the grounds of which (rationality assumption) the agent acted. Thus, reasons imply intentionality, subjectivity, and rationality.
3.4. Individual vs. Group Behaviour
Susskind et al. [167] investigated how people ascribe causes to groups rather than individuals, focusing on traits. They provided experimental participants with a set of
statements describing behaviours performed by individuals or groups, and were then asked to provide ratings of different descriptions of these individuals/groups, such as their intelligence (a trait, or CHR in Malle's framework), and were asked to judge the confidence of their judgements. Their results showed that as with individuals, participants freely assigned traits to groups, showing that groups are seen as agents themselves. However, they showed that when explaining an individual's behaviour, the participants were able to produce explanations faster and more confidently than for groups, and that the traits that they assigned to individuals were judged to be less "extreme" than those assigned to groups. In a second set of experiments, Susskind et al. showed that people expect more consistency in an individual's behaviour compared to that of a group. When presented with a behaviour that violated the impression that participants had formed of individuals or groups, the participants were more likely to attribute the individual's behaviour to causal mechanisms than the groups' behaviour.
O'Laughlin and Malle [137] further investigated people's perception of group vs. individual behaviour, focusing on intentionality of explanation. They investigated the relative agency of groups that consist of "unrelated" individuals acting independently (aggregate groups) compared to groups acting together (jointly acting groups). In their study, participants were more likely to offer CHR explanations than intention explanations for aggregate groups, and more likely to offer intention explanations than CHR explanations for jointly acting groups. For instance, to explain why all people in a department store came to that particular store, participants were more likely to offer a CHR explanation, such as that there was a sale on at the store that day. However, to answer the same question for why a group of friends came to the same store, participants were more likely to offer an explanation that the group wanted to spend the day together shopping; a desire. This may demonstrate that people cannot attribute intentional behaviour to the individuals in an aggregate group, so resort to more causal history explanations.
O'Laughlin and Malle's [137] finding about using CHRs to explain aggregate group behaviour is consistent with the earlier work from Kass and Leake [85], whose model of explanation explicitly divided intentional explanations from social explanations, which are explanations about human behaviour that is not intentionally driven (discussed in more detail in Section 2.4). These social explanations account for how people attribute deliberative behaviour to groups without referring to any form of intention.
An intriguing result from O'Laughlin and Malle [137] is that while people attribute less intentionality to aggregate groups than to individuals, they attribute more intentionality to jointly acting groups than to individuals. O'Laughlin and Malle reason that joint action is highly deliberative, so the group intention is more likely to have been explicitly agreed upon prior to acting, and the individuals within the group would be explicitly aware of this intention compared to their own individual intentions.
# 3.5. Norms and Morals
Norms have been shown to hold a particular place in social attribution. Burguet and Hilton [15] (via Hilton [70]) showed that norms and abnormal behaviour are important in how people ascribe mental states to one another. For example, Hilton [70] notes that upon hearing the statement "Ted admires Paul", people tend to attribute some trait to Paul as the object of the sentence, such as that Paul is charming and many people would admire him; and even that Ted does not admire many people. However, a counter-normative statement such as "Ted admires the rapist" triggers attributions instead to
Ted, explained by the fact that it is non-normative to admire rapists, so Ted's behaviour is distinctive to others, and is more likely to require an explanation. In Section 4, we will see more on the relationship between norms, abnormal behaviour, and attribution. Uttich and Lombrozo [174] investigate the relationship of norms and the effect it has on attributing particular mental states, especially with regard to morals. They offer an interesting explanation of the side-effect effect, or the Knobe effect [88], which is the tendency for people to attribute particular mental states (Theory of Mind) based on moral judgement. Knobe's vignette from his seminal [88] paper is:
The vice-president of a company went to the chairman of the board and said, "We are thinking of starting a new program. It will help us increase profits, but it will also harm the environment". The chairman of the board answered, "I don't care at all about harming the environment. I just want to make as much profit as I can. Let's start the new program." They started the new program. Sure enough, the environment was harmed.
Knobe then produced a second vignette, which is exactly the same, but the side-effect of the program was in fact that the environment was helped. When participants were asked if the chairman had intentionally harmed the environment (first vignette), 82% of respondents replied yes. However, in the second vignette, only 23% thought that the chairman intentionally helped the environment.
Uttich and Lombrozo [174] hypothesise that the two existing camps aiming to explain this effect, the Intuitive Moralist and the Biased Scientist, do not account for this. Uttich and Lombrozo hypothesise that it is the fact that norms are violated that accounts for this; that is, rather than moralist judgements influencing intentionality attribution, it is the more general relationship of conforming (or not) to norms (moral or not). In particular, behaviour that conforms to norms is less likely to change a person's Theory of Mind (intention) of another person compared to behaviour that violates norms.
Samland and Waldmann [161] further investigate social attribution in the context of norms, looking at permissibility rather than obligation. They gave participants scenarios in which two actors combined to cause an outcome. For example, a department in which only administrative assistants are permitted to take pens from the stationery cupboard. One morning, Professor Smith (not permitted) and an assistant (permitted) each take a pen, and there are no pens remaining. Participants were tasked with rating how strongly each agent caused the outcome. Their results showed that participants rated the action of the non-permitted actor (e.g. Professor Smith) more than three times stronger than the other actor. However, if the outcome was positive instead of negative, such as an intern (not permitted) and a doctor (permitted) both signing off on a request for a drug for a patient, who subsequently recovers due to the double dose, participants rate the non-permitted behaviour only slightly stronger. As noted by Hilton [70, p. 54], these results indicate that in such settings, people seem to interpret the term cause as meaning "morally or institutionally responsible".
In a follow-up study, Samland et al. [160] showed that children are not sensitive to norm violating behaviour in the same way that adults are. In particular, while both adults and children correlate cause and blame, children do not distinguish between cases in which the person was aware of the norm, while adults do.
3.6. Social Attribution and XAI
This section presents some ideas on how the work on social attribution outlined above affects researchers and practitioners in XAI.
# 3.6.1. Folk Psychology
While the models and research results presented in this section pertain to the behaviour of humans, it is reasonably clear that these models have a place in explainable AI. Heider and Simmel's seminal experiments from 1944 with moving shapes [67] (Section 3.2) demonstrate unequivocally that people attribute folk psychological concepts such as belief, desire, and intention, to artificial objects. Thus, as argued by de Graaf and Malle [34], it is not a stretch to assert that people will expect explanations using the same conceptual framework used to explain human behaviours.
This model is particularly promising because many knowledge-based models in deliberative AI either explicitly build on such folk psychological concepts, such as belief-desire-intention (BDI) models [152], or can be mapped quite easily to them; e.g. in classical-like AI planning, goals represent desires, intermediate/landmark states represent intentions, and the environment model represents beliefs [50].
In addition, the concepts of and relationships between actions, preconditions, and proximal and distal intentions are similar to those in models such as BDI and planning, and as such, the work on the relationships between preconditions, outcomes, and competing goals is useful in this area.
# 3.6.2. Malle's Models
Of all of the work outlined in this section, it is clear that Malle's model, culminating in his 2004 textbook [112], is the most mature and complete model of social attribution to date. His three-layer model provides a solid foundation on which to build explanations of many deliberative systems, in particular, goal-based deliberation systems.
Malle's conceptual framework provides a suitable framework for characterising different aspects of causes for behaviour. It is clear that reason explanations will be useful for goal-based reasoners, as discussed in the case of BDI models and goal-directed AI planning, and enabling factor explanations can play a role in how questions and in counterfactual explanations. In Section 4, we will see further work on how to select explanations based on these concepts.
However, the causal history of reasons (CHR) explanations also have a part to play for deliberative agents. In human behaviour, they refer to personality traits and other unconscious motives. While anthropomorphic agents could clearly use CHRs to explain behaviour, such as emotion or personality, they are also valid explanations for non-anthropomorphic agents. For example, for AI planning agents that optimise some metric, such as cost, the explanation that action a was chosen over action b because it had lower cost is a CHR explanation. The fact that the agent is optimising cost is a "personality trait" of the agent that is invariant given the particular plan or goal. Other types of planning systems may instead be risk averse, optimising to minimise risk or regret, or may be "flexible" and try to help out their human collaborators as much as possible. These types of explanations are CHRs, even if they are not described as personality traits to the explainee. However, one must be careful to ensure these CHRs do not make their agent appear irrational; unless, of course, that is the goal one is trying to achieve with the explanation process.
Broekens et al. [12] describe algorithms for the automatic generation of explanations for BDI agents. Although their work does not build on Malle's model directly, it shares a similar structure, as noted by the authors, in that their model uses intentions and enabling conditions as explanations. They present three algorithms for explaining behaviour: (a) offering the goal towards which the action contributes; (b) offering the enabling condition of an action; and (c) offering the next action that is to be performed; thus, the explanandum is explained by offering a proximal intention. A set of human behavioural experiments showed that the different explanations are considered better in different circumstances; for example, if only one action is required to achieve the goal, then offering the goal as the explanation is more suitable than offering the other two types of explanation, while if it is part of a longer sequence, also offering a proximal intention is evaluated as being a more valuable explanation. These results reflect those by Malle, but also other results from social and cognitive psychology on the link between goals, proximal intentions, and actions, which are surveyed in Section 4.4.3.
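A rough sketch of how such explanation rules might be realised over a simple plan structure is given below. The plan representation and the search-and-rescue example are invented for illustration and only loosely follow the three rules described by Broekens et al. [12].

```python
# A minimal sketch, loosely following the three explanation rules described by
# Broekens et al. [12] for BDI agents: explain an action by (a) the goal it
# contributes to, (b) its enabling condition, or (c) the next action in the
# plan (a proximal intention). The plan representation here is deliberately
# simplified and is not their algorithm.

plan = {
    "goal": "deliver first-aid package",
    "steps": [
        {"action": "fly to sector 7",  "enabling_condition": "battery charged"},
        {"action": "locate survivor",  "enabling_condition": "camera operational"},
        {"action": "drop package",     "enabling_condition": "survivor located"},
    ],
}

def explain_action(plan, action, style):
    steps = plan["steps"]
    idx = next(i for i, s in enumerate(steps) if s["action"] == action)
    if style == "goal":
        return f"I did '{action}' to achieve the goal '{plan['goal']}'."
    if style == "enabling_condition":
        return f"I did '{action}' because '{steps[idx]['enabling_condition']}' held."
    if style == "next_action":
        if idx + 1 < len(steps):
            return f"I did '{action}' so that I could then '{steps[idx + 1]['action']}'."
        return f"'{action}' was the final step towards '{plan['goal']}'."

print(explain_action(plan, "locate survivor", "goal"))
print(explain_action(plan, "locate survivor", "enabling_condition"))
print(explain_action(plan, "locate survivor", "next_action"))
```

Consistent with the behavioural results summarised above, which style is most satisfying would be expected to depend on where the action sits in the plan.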
# 3.6.3. Collective Intelligence
The research into behaviour attribution of groups (Section 3.4) is important for those working in collective intelligence; areas such as multi-agent planning [11], computational social choice [26], or argumentation [8]. Although this line of work appears to be much less explored than attributions of individuals' behaviour, the findings from Kass and Leake [85], Susskind et al. [167], and in particular O'Laughlin and Malle [137] that people assign intentions and beliefs to jointly-acting groups, and reasons to aggregate groups, indicate that the large body of work on attribution of individual behaviour could serve as a solid foundation for explanation of collective behaviour.
# 3.6.4. Norms and Morals
The work on norms and morals discussed in Section 3.5 demonstrates that normative behaviour, in particular, violation of such behaviour, has a large impact on the ascription of a Theory of Mind to actors. Clearly, for anthropomorphic agents, this work is important, but as with CHRs, I argue here that it is important for more "traditional" AI as well.
First, the link with morals is important for applications that elicit ethical or social concerns, such as defence, safety-critical applications, or judgements about people. Explanations or behaviour in general that violate norms may give the impression of "immoral machines" (whatever that can mean) and thus, such norms need to be explicitly considered as part of explanation and interpretability.
Second, as discussed in Section 2.2, people mostly ask for explanations of events that they find unusual or abnormal [77, 73, 69], and violation of normative behaviour is one such abnormality [73]. Thus, normative behaviour is important in interpretability; a statement that would not surprise those researchers and practitioners of normative artificial intelligence.
In Section 4, we will see that norms and violation of normal/normative behaviour are also important in the cognitive processes of people asking for, constructing, and evaluating explanations, and their impact on interpretability.
# 4. Cognitive Processes - How Do People Select and Evaluate Explanations?
There are as many causes of x as there are explanations of x. Consider how the cause of death might have been set out by the physician as "multiple haemorrhage", by the barrister as "negligence on the part of the driver", by the carriage-builder as "a defect in the brakelock construction", by a civic planner as "the presence of tall shrubbery at that turning". None is more true than any of the others, but the particular context of the question makes some explanations more relevant than others. -- Hanson [61, p. 54]
Mill [130] provides one of the earliest investigations of cause and explanation, and he argued that we make use of "statistical" correlations to identify causes, which he called the Method of Difference. He argued that causal connection and explanation selection are essentially arbitrary and that scientifically/philosophically it is "wrong" to select one explanation over another, but offered several cognitive biases that people seem to use, including things like unexpected conditions, precipitating causes, and variability. Such covariation ideas were dominant in causal attribution, in particular, the work of Kelley [86]. However, many researchers noted that the covariation models failed to explain many observations; for example, people can identify causes between events from a single data point [127, 75]; and therefore, more recently, new theories have displaced them, while still acknowledging that the general idea that people use co-variations is valid.
In this section, we look at these theories. In particular, we survey three types of cognitive processes used in explanation: (1) causal connection, which is the process people use to identify the causes of events; (2) explanation selection, which is the process people use to select a small subset of the identified causes as the explanation; and (3) explanation evaluation, which is the process that an explainee uses to evaluate the quality of an explanation. Most of this research shows that people have certain cognitive biases that they apply to explanation generation, selection, and evaluation.
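As an organising device for the remainder of this section, the sketch below arranges these three processes as a simple pipeline that an explanatory agent might follow. The concrete criteria in each step (a toy causal model, a contrastive selection rule, and a length-based evaluation) are placeholders, not claims about how people actually perform these processes.

```python
# An organising sketch (not a model from the literature) of the three processes
# surveyed in this section: causal connection identifies candidate causes,
# explanation selection picks a relevant subset, and explanation evaluation
# judges the result. All criteria below are illustrative placeholders.

def causal_connection(event, causal_model):
    """Identify all causes of `event` according to the agent's causal model."""
    return [c for c, effects in causal_model.items() if event in effects]

def explanation_selection(causes, foil_causes):
    """Prefer causes that distinguish the event from the foil (a contrastive bias)."""
    return [c for c in causes if c not in foil_causes] or causes

def explanation_evaluation(selected, max_length=2):
    """A crude proxy for the explainee's preference for simple explanations."""
    return len(selected) <= max_length

causal_model = {
    "six legs": ["beetle"],
    "hard wing cases": ["beetle"],
    "eight legs": ["spider"],
}
causes = causal_connection("beetle", causal_model)
selected = explanation_selection(causes, foil_causes=causal_connection("spider", causal_model))
print(selected, explanation_evaluation(selected))
```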
4.1. Causal Connection, Explanation Selection, and Evaluation
Malle [112] presents a theory of explanation, which breaks the psychological processes used to offer explanations into two distinct groups, outlined in Figure 8:
1. Information processes: processes for devising and assembling explanations. The present section will present related work on this topic.

2. Impression management processes: processes for governing the social interaction of explanation. Section 5 will present related work on this topic.
Malle [112] further splits these two dimensions into two further dimensions, which refer to the tools for constructing and giving explanations, and the explainer's perspective or knowledge about the explanation.
Taking the two dimensions, there are four items:
1. Information requirements: what is required to give an adequate explanation; for example, one must know the causes of the explanandum, such as the desires and beliefs of an actor, or the mechanistic laws for a physical cause.

2. Information access: what information the explainer has to give the explanation, such as the causes, the desires, etc. Such information can be lacking; for example, the explainer does not know the intentions or beliefs of an actor in order to explain their behaviour.

3. Pragmatic goals: the goal of the explanation, such as transferring knowledge to the explainee, making an actor look irrational, or generating trust with the explainee.

4. Functional capacities: each explanatory tool has functional capacities that constrain or dictate what goals can be achieved with that tool.

Figure 8: Malle's process model for behaviour explanation, linking information processes (information requirements, information access) and impression management processes (the functional capacities of the explanatory tool, the explainer's pragmatic goals) to the explanation; reproduced from Malle [114, p. 320, Figure 6.6].
Malle et al. [117] argue that this theory accounts for apparent paradoxes observed in attribution theory, most specifically the actor-observer asymmetries, in which actors and observers offer different explanations for the same action taken by an actor. They hypothesise that this is due to information asymmetry; e.g. an observer cannot access the intentions of an actor, so the intentions must be inferred from the actor's behaviour. In this section, we first look specifically at processes related to the explainer: information access and pragmatic goals. When requested for an explanation, people typically do not have direct access to the causes, but infer them from observations and prior knowledge. Then, they select some of those causes as the explanation, based on the goal of the explanation. These two processes are known as causal connection (or causal inference), which is the process of identifying the key causal connections to the fact; and explanation selection (or causal selection), which is the process of selecting a subset of those causes to provide as an explanation.
This paper separates causal connection into two parts: (1) abductive reasoning, the cognitive process in which people try to infer causes that explain events by making assumptions about hypotheses and testing these; and (2) simulation, which is the cognitive
process of simulating through counterfactuals to derive a good explanation. These processes overlap, but can be somewhat different. For example, the former requires the reasoner to make assumptions and test the validity of observations with respect to these assumptions, while in the latter, the reasoner could have complete knowledge of the causal rules and environment, but use simulation of counterfactual cases to derive an explanation. From the perspective of explainable AI, an explanatory agent explaining its decision would not require abductive reasoning as it is certain of the causes of its decisions. An explanatory agent trying to explain some observed events not under its control, such as the behaviour of another agent, may require abductive reasoning to find a plausible set of causes.
Finally, when explainees receive explanations, they go through the process of explanation evaluation, through which they determine whether the explanation is satisfactory or not. A primary criterion is that the explanation allows the explainee to understand the cause; however, people's cognitive biases mean that they prefer certain types of explanation over others.
# 4.2. Causal Connection: Abductive Reasoning
The relationship between explanation and abductive reasoning is introduced in Section 2.1.4. This section surveys work in cognitive science that looks at the process of abduction. Of particular interest to XAI (and artificial intelligence in general) is work demonstrating the link between explanation and learning, but also other processes that people use to simplify the abductive reasoning process for explanation generation, and to switch modes of reasoning to correspond with types of explanation.
4.2.1. Abductive Reasoning and Causal Types
Rehder [154] looked specifically at categorical or formal explanations. He presents the causal model theory, which states that people infer categories of objects by both their features and the causal relationships between features. His experiments show that people categorise objects based on their perception that the observed properties were generated by the underlying causal mechanisms. Rehder gives the example that people not only know that birds can fly and birds have wings, but that birds can fly because they have wings. In addition, Rehder shows that people use combinations of features as evidence when assigning objects to categories, especially for features that seem incompatible based on the underlying causal mechanisms. For example, when categorising an animal that cannot fly, yet builds a nest in trees, most people would consider it implausible to categorise it as a bird because it is difficult to build a nest in a tree if one cannot fly. However, people are more likely to categorise an animal that does not fly and builds nests on the ground as a bird (e.g. an ostrich or emu), as this is more plausible; even though the first example has more features in common with a bird (building nests in trees).
Rehder [155] extended this work to study how people generalise properties based on the explanations received. When his participants were asked to infer their own explanations using abduction, they were more likely to generalise a property from a source object to a target object if the two had more features that were similar; e.g. generalise a property from one species of bird to another, but not from a species of bird to a species of plant. However, given an explanation based on features, this relationship is almost completely eliminated: the generalisation was only done if the features detailed in the explanation
were shared between the source and target objects; e.g. bird species A and mammal B both eat the same food, which is explained as the cause for an illness, for example. Thus, the abductive reasoning process used to infer explanations was also used to generalise properties; a parallel seen in machine learning [133].
However, Williams et al. [189] demonstrate that, at least for categorisation in abductive reasoning, the properties of generalisation that support learning can in fact weaken learning by overgeneralising. They gave experimental participants a categorisation task to perform by training themselves on exemplars. They asked one group to explain the categorisations as part of the training, and another to just "think aloud" about their task. The results showed that the explanation group more accurately categorised features that had similar patterns to the training examples, but less accurately categorised exceptional cases and those with unique features. Williams et al. argue that explaining (which forces people to think more systematically about the abduction process) is good for fostering generalisations, but this comes at a cost of over-generalisation.
Chin-Parker and Cantelon [28] provide support for the contrastive account of explanation (see Section 2.3) in categorisation/classification tasks. They hypothesise that contrast classes (foils) are key to providing the context to explanation. They distinguish between prototypical features of categorisation, which are those features that are typical of a particular category, and diagnostic features, which are those features that are relevant for a contrastive explanation. Participants in their study were asked to either describe particular robots or explain why robots were of a particular category, and then follow up on transfer learning tasks. The results demonstrated that participants in the description group mentioned significantly more features in general, while participants in the explanation group selectively targeted contrastive features. These results provide empirical support for contrastive explanation in category learning.
# 4.2.2. Background and Discounting
Hilton [73] discusses the complementary processes of backgrounding and discounting that affect the abductive reasoning process. Discounting is when a hypothesis is deemed less likely as a cause because additional contextual information is added to a competing hypothesis as part of causal connection: it is actually discounted as a cause of the event. Backgrounding involves pushing a possible cause to the background because it is not relevant to the goal, or new contextual information has been presented that makes it no longer a good explanation (but still a cause). That is, while it is a cause of an event, it is not relevant to the explanation because, e.g., the contrastive foil also has this cause. As noted by Hilton [73], discounting occurs in the context of multiple possible causes (there are several possible causes and the person is trying to determine which causes the fact), while backgrounding occurs in the context of multiple necessary events (a subset of necessary causes is selected as the explanation). Thus, discounting is part of causal connection, while backgrounding is part of explanation selection.
# 4.2.3. Explanatory Modes
As outlined in Section 2.4, philosophers and psychologists accept that different types of explanations exist; for example, Aristotle's model: material, formal, efficient, and final. However, theories of causality have typically argued for only one type of cause, with the two most prominent being dependence theories and transference theories.
Lombrozo [107] argues that both dependence theories and transference theories are at least psychologically real, even if only one (or neither) is the true theory. She hypothesises that people employ different modes of abductive reasoning for different modes of cognition, and thus both forms of explanation are valid: functional (final) explanations are better for phenomena that people consider to have dependence relations, while mechanistic (efficient) explanations are better for physical phenomena.
Lombrozo [107] gave experimental participants scenarios in which the explanatory mode was manipulated and isolated using a mix of intentional and accidental/incidental human action, and in a second set of experiments, using biological traits that provide a particular function, or simply cause certain events incidentally. Participants were asked to evaluate different causal claims. The results of these experiments show that when events were interpreted in a functional manner, counterfactual dependence was important, but physical connections were not. However, when events were interpreted in a mechanistic manner, counterfactual dependence and physical dependence were both deemed important. This implies that there is a link between functional causation and dependence theories on the one hand, and between mechanistic explanation and transference theories on the other. The participants also rated the functional explanation stronger in the case that the causal dependence was intentional, as opposed to accidental. Lombrozo [106] studied the same issue of functional vs. mechanistic explanations for inference in categorisation tasks specifically. She presented participants with tasks similar to the following (text in square brackets added):
There is a kind of flower called a holing. Holings typically have brom compounds in their stems and they typically bend over as they grow. Scientists have discovered that having brom compounds in their stems is what usually causes holings to bend over as they grow [mechanistic cause]. By bending over, the holing's pollen can brush against the fur of field mice, and spread to neighboring areas [functional cause].
Explanation prompt: Why do holings typically bend over?
They then gave participants a list of questions about flowers; for example: Suppose a flower has brom compounds in its stem. How likely do you think it is that it bends over? Their results showed that participants who provided a mechanistic explanation from the first prompt were more likely to think that the flower would bend over, and vice-versa for functional causes. Their findings show that giving explanations influences the inference process, changing the importance of different features in the understanding of category membership, and that the importance of features in explanations can impact the categorisation of that feature. In an extension of this work, Lombrozo and Gwynne [109] argue that people generalise better from functional than mechanistic explanations.
4.2.4. Inherent and Extrinsic Features
Prasada and Dillingham [149] and Prasada [148] discuss how people's abductive reasoning process prioritises certain factors in the formal mode. Prasada contends that "Identifying something as an instance of a kind and explaining some of its properties in terms of its being the kind of thing it is are not two distinct activities, but a single cognitive activity." [148, p. 2]
Prasada and Dillingham [149] note that people represent relationships between the kinds of things and the properties that they possess. This description conforms with Overton's model of the structure of explanation [139] (see Section 2.6.5). Prasada and Dillingham's experiments showed that people distinguish between two types of properties for a kind: k-properties, which are the inherent properties of a thing that are due to its kind, and which they call principled connections; and t-properties, which are the extrinsic properties of a thing that are not due to its kind, which they call factual connections. Statistical correlations are examples of factual connections. For instance, a queen bee has a stinger and six legs because it is a bee (k-property), but the painted mark seen on almost all domesticated queen bees is because a bee keeper has marked it for ease of identification (t-property). K-properties have both principled and factual connections to their kind, whereas t-properties have mere factual connections. They note that k-properties have a normative aspect, in that it is expected that instances of kinds will have their k-properties, and when they do not, they are considered abnormal; for instance, a bee without a stinger.
In their experiments, they presented participants with explanations using different combinations of k-properties and t-properties to explain categorisations; for example, "why is this a dog?" Their results showed that for formal modes, explanations involving k-properties were considered much better than explanations involving t-properties, and further, that using a thing's kind to explain why it has a particular property was considered better for explaining k-properties than for explaining t-properties.
Using ï¬ndings from previous studies, Cimpian and Salomon [30] argue that, when asked to explain a phenomenon, such as a feature of an object, peopleâs cognitive biases make them more likely to use inherent features (k-properties) about the object to explain the phenomenon, rather than extrinsic features (t-properties), such as historical factors. An inherent feature is one that characterises âhow an object is constitutedâ [30, p. 465], and therefore they tend to be stable and enduring features. For example, âspiders have eight legsâ is inherent, while âhis parents are scared of spidersâ is not. Asked to explain why they ï¬nd spiders scary, people are more likely to refer to the âlegginessâ of spiders rather than the fact that their parents have arachnophobia, even though studies show that people with arachnophobia are more likely to have family members who ï¬nd spiders scary [33]. Cimpian and Salomon argue that, even if extrinsic information is known, it is not readily accessible by the mental shotgun [82] that people use to retrieve information. For example, looking at spiders, you can see their legs, but not your familyâs fear of them. Therefore, this leads to people biasing explanations towards inherent features rather than extrinsic. This is similar to the correspondence bias discussed in Section 3.2, in which people are more likely to describe peopleâs behaviour on personality traits rather than beliefs, desires, and intentions, because the latter are not readily accessible while the former are stable and enduring. The bias towards inherence is aï¬ected by many factors, such as prior knowledge, cognitive ability, expertise, culture, and age.
4.3. Causal Connection: Counterfactuals and Mutability
To determine the causes of anything other than a trivial event, it is not possible for a person to simulate back through all possible events and evaluate their counterfactual cases. Instead, people apply heuristics to select just some events to mutate. However, this process is not arbitrary. This section looks at several biases used to assess the mutability of events; that is, the degree to which an event can be "undone" to consider counterfactual cases. It shows that abnormality (including social abnormality), intention, time and controllability of events are key criteria.
# 4.3.1. Abnormality
Kahneman and Tversky [83] performed seminal work in this field, proposing the simulation heuristic. They hypothesise that when answering questions about past events, people perform a mental simulation of counterfactual cases. In particular, they show that abnormal events are mutable: they are the common events that people undo when judging causality. In their experiments, they asked people to identify primary causes in causal chains using vignettes of a car accident causing the fatality of Mr. Jones, which had multiple necessary causes, including Mr. Jones going through a yellow light, and the teenage driver of the truck that hit Mr. Jones' car being under the influence of drugs. They used two vignettes: one in which Mr. Jones took an unusual route home to enjoy the view along the beach (the route version); and one in which he took the normal route home but left a bit early (the time version). Participants were asked to complete an "if only" sentence that undid the fatal accident, imagining that they were a family member of Mr. Jones. Participants in the route group undid the event in which Mr. Jones took the unusual route home more often than those in the time version, while those in the time version undid the event of leaving early more often than those in the route version. That is, the participants tended to focus more on abnormal causes. In particular, Kahneman and Tversky note that people did not simply undo the event with the lowest prior probability in the scenario.
In their second study, Kahneman and Tversky [83] asked the participants to empathise with the family of the teenager driving the truck instead of with Mr. Jones. They found that people more often undid events of the teenage driver, rather than of Mr. Jones. Thus, the perspective or the focus is important in what types of events people undo.
# 4.3.2. Temporality
Miller and Gunasegaram [131] show that the temporality of events is important, in particular that people undo more recent events than more distal events. For instance, in one of their studies, they asked participants to play the role of a teacher selecting exam questions for a task. In one group, the teacher-first group, the participants were told that the students had not yet studied for their exam, while those in the other group, the teacher-second group, were told that the students had already studied for the exam. Those in the teacher-second group selected easier questions than those in the first, showing that participants perceived that the degree of blame they would be given for hard questions depends on the temporal order of the tasks. This supports the hypothesis that earlier events are considered less mutable than later events.
4.3.3. Controllability and Intent
Girotto et al. [54] investigated mutability in causal chains with respect to controllability. They hypothesised that actions controllable by deliberative actors are more mutable than events that occur as a result of environmental effects. They provided participants with a vignette about Mr. Bianchi, who arrived late home from work to find his wife unconscious on the floor. His wife subsequently died. Four different events caused Mr. Bianchi's lateness: his decision to stop at a bar for a drink on the way home, plus three non-intentional causes, such as delays caused by abnormal traffic. Different questionnaires were given out with the events in different orders. When asked to undo events, participants overwhelmingly selected the intentional event as the one to undo, demonstrating that people mentally undo controllable events over uncontrollable events, irrespective of the controllable event's position in the sequence or whether the event was normal or abnormal. In another experiment, they varied whether the deliberative actions were constrained or unconstrained, where an action is considered constrained when it is somewhat enforced by other conditions; for example, Mr. Bianchi going to the bar (more controllable) vs. stopping due to an asthma attack (less controllable). The results of this experiment show that unconstrained actions are more mutable than constrained actions.
# 4.3.4. Social Norms
McCloy and Byrne [121] investigated the mutability of controllable events further, looking at the perceived appropriateness (or the socially normative perception) of the events. They presented a vignette similar to that of Girotto et al. [54], but with several controllable events, such as the main actor stopping to visit his parents, buy a newspaper, and stopping at a fast-food chain to get a burger. Participants were asked to provide causes as well as rate the âappropriatenessâ of the behaviour. The results showed that participants were more likely to indicate inappropriate events as causal; e.g. stopping to buy a burger. In a second similar study, they showed that inappropriate events are traced through both normal and other exceptional events when identifying cause.
# 4.4. Explanation Selection
Similar to causal connection, people do not typically provide all causes for an event as an explanation. Instead, they select what they believe are the most relevant causes. Hilton [70] argues that explanation selection is used for cognitive reasons: causal chains are often too large to comprehend. He provides an example [70, p. 43, Figure 7] showing the causal chain for the story of the fatal car accident involving "Mr. Jones" from Kahneman and Tversky [83]. For a simple story of a few paragraphs, the causal chain consists of over 20 events and 30 causes, all relevant to the accident. However, only a small number of these are selected as explanations [172].
In this section, we overview key work that investigates the criteria people use for explanation selection. Perhaps unsurprisingly, the criteria for selection look similar to those for mutability, with temporality (proximal events preferred over distal events), abnormality, and intention being important, but also the features that differ between fact and foil.
4.4.1. Facts and Foils
As noted in Section 2, why-questions are contrastive between a fact and a foil. Research shows that the contrast between the two is the primary way that people select explanations. In particular, to select an explanation from a set of causes, people look at the difference between the cases of the fact and foil.
Mackie [110] is one of the earliest to argue for explanation selection based on contrastive criteria; however, the first crisp definition of contrastive explanation seems to come from Hesslow [69]:
This theory rests on two ideas. The first is that the effect or the explanandum, i.e. the event to be explained, should be construed, not as an object's having a certain property, but as a difference between objects with regard to a certain property. The second idea is that selection and weighting of causes is determined by explanatory relevance. [Emphasis from the original source] – Hesslow [69, p. 24]
Hesslow [69] argues that criteria for selecting explanations are clearly not arbitrary, because people seem to select explanations in similar ways to each other. He defines an explanans as a relation containing an object a (the fact in our terminology), a set of comparison objects R, called the reference class (the foils), and a property E, which a has but the objects in reference class R do not. For example, a = Spider, R = Beetle, and E = eight legs. Hesslow argues that the contrast between the fact and foil is the primary criterion for explanation selection, and that the explanation with the highest explanatory power should be the one that highlights the greatest number of differences in the attributes between the target and reference objects.
Lipton [102], building on earlier work in philosophy from Lewis [99], derived similar thoughts to Hesslow [69], without seeming to be aware of his work. He proposed a definition of contrastive explanation based on what he calls the Difference Condition:
To explain why P rather than Q, we must cite a causal difference between P and not-Q, consisting of a cause of P and the absence of a corresponding event in the history of not-Q. – Lipton [102, p. 256]
From an experimental perspective, Hilton and Slugoski [77] were the ï¬rst researchers to both identify the limitations of covariation, and instead propose that contrastive ex- planation is best described as the diï¬erences between the two events (discussed further in Section 4.4.2). More recent research in cognitive science from Rehder [154, 155] supports the theory that people perform causal inference, explanation, and generalisation based on contrastive cases.
Returning to our arthropod example, for the why-question between image J categorised as a fly and image K a beetle, image J having six legs is correctly determined to have no explanatory relevance, because it does not cause K to be categorised as a beetle instead of a fly. Instead, the explanation would cite some other cause, which according to Table 1, would be that the arthropod in image J has five eyes, consistent with a fly, while the one in image K has two, consistent with a beetle.
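As a rough illustration of how the Difference Condition might be operationalised, the following sketch selects, from the causes of the fact, those that have no corresponding cause in the foil's history. The function name, the attribute-value representation of causes, and the feature values are illustrative assumptions layered on the example above, not part of the cited work.

```python
# Minimal sketch: explanation selection via Lipton's Difference Condition.
# Causes are represented as attribute-value pairs (an illustrative simplification).

def contrastive_causes(fact_causes: dict, foil_causes: dict) -> dict:
    """Return causes of the fact with no corresponding cause in the foil's history."""
    return {attr: val for attr, val in fact_causes.items()
            if foil_causes.get(attr) != val}

# Why is image J classified as a fly rather than a beetle (as image K is)?
fly_causes = {"legs": 6, "eyes": 5}      # causes behind the fact
beetle_causes = {"legs": 6, "eyes": 2}   # corresponding history of the foil

print(contrastive_causes(fly_causes, beetle_causes))  # {'eyes': 5}: the shared legs are not cited
```

Shared attributes such as the six legs drop out of the explanation, mirroring the reasoning above.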
# 4.4.2. Abnormality
Related to the idea of contrastive explanation, Hilton and Slugoski [77] propose the abnormal conditions model, based on observations from legal theorists Hart and Honoré [64]. Hilton and Slugoski argue that abnormal events play a key role in causal explanation. They argue that, while statistical notions of co-variance are not the only method employed in everyday explanations, the basic idea that people select unusual events to explain is valid. Their theory states that explainers use their perceived background knowledge shared with explainees to select those conditions that are considered abnormal. They give the example of asking why the Challenger shuttle exploded in 1986 (rather than not exploding, or perhaps why most other shuttles do not explode). The explanation that it exploded "because of faulty seals" seems like a better explanation than "there was oxygen in the atmosphere". The abnormal conditions model accounts for this by noting that an explainer will reason that oxygen is present in the atmosphere when all shuttles launch, so this is not an abnormal condition. On the other hand, most shuttles do not have faulty seals, so this contributing factor was a necessary yet abnormal event in the Challenger disaster.
The abnormal conditions model has been backed up by subsequent experimental studies, such as those by McClure and Hilton [125], McClure et al. [126], and Hilton et al. [76], and more recently, Samland and Waldmann [161], who show that a variety of non-statistical measures are valid foils.
4.4.3. Intentionality and Functionality
Other features of causal chains have been demonstrated to be more important than abnormality.
Hilton et al. [76] investigate the claim from legal theorists Hart and Honoré [64] that intentional action takes priority over non-intentional action in opportunity chains. Their perspective builds on the abnormal conditions model, noting that there are two important contrasts in explanation selection: (1) normal vs. abnormal; and (2) intentional vs. non-intentional. They argue further that causes will be "traced through" a proximal (more recent) abnormal condition if there is a more distal (less recent) event that is intentional. For example, to explain why someone died, one would explain that the poison they ingested as part of a meal was the cause of death; but if the poison was shown to have been deliberately placed in an attempt to murder the victim, the intention of someone to murder the victim receives priority. In their experiments, they gave participants different opportunity chains in which a proximal abnormal cause was an intentional human action, an unintentional human action, or a natural event, depending on the condition to which they were assigned. For example, a cause of an accident was ice on the road, which was enabled by either someone deliberately spraying the road, someone unintentionally placing water on the road, or water from a storm. Participants were asked to rate the explanations. Their results showed that: (1) participants rated intentional action as a better explanation than the other two causes, and non-intentional action better than natural causes; and (2) in opportunity chains, there is little preference for proximal over distal events if the two events are of the same type (e.g. both are natural events); both are seen as necessary.
Lombrozo [107] argues further that this holds for functional explanations in general; not just intentional action. For instance, citing the functional reason that an object exists is preferred to mechanistic explanations.
4.4.4. Necessity, Sufficiency and Robustness
Several authors [102, 107, 192] argue that necessity and sufficiency are strong criteria for preferred explanatory causes. Lipton [102] argues that necessary causes are preferred to sufficient causes. For example, consider mutations in the DNA of a particular species of beetle that cause its wings to grow longer than normal when kept in certain temperatures. Now, consider that there are two such mutations, M1 and M2, and either is sufficient (in combination with the temperature) to cause the longer wings. To contrast with a beetle whose wings would not change, the explanation of temperature is preferred to either of the mutations M1 or M2, because neither M1 nor M2 is individually necessary for the observed event; what is necessary is merely that either M1 or M2 is present. In contrast, the temperature is necessary, and is preferred, even if we know that the cause was M1.
Woodward [192] argues that sufficiency is another strong criterion, in that people prefer causes that bring about the effect without any other cause. This should not be confused with sufficiency in the example above, in which either mutation M1 or M2 is sufficient in combination with temperature. Woodward's argument applies to uniquely sufficient causes, rather than cases in which there are multiple sufficient causes. For example, if it were found that a third mutation M3 could cause longer wings irrespective of the temperature, this would be preferred over temperature plus another mutation. This is related to the notion of simplicity discussed in Section 4.5.1.
Finally, several authors [107, 192] argue that robustness is also a criterion for explanation selection, in which the extent to which a cause C is considered robust is whether the effect E would still have occurred if conditions other than C were somewhat different. Thus, a cause C1 that holds only in specific situations has less explanatory value than a cause C2 that holds in many other situations.
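As a loose sketch of this robustness criterion, one could score a candidate cause by how often the effect still follows from it when the surrounding background conditions are varied. The boolean toy models, the enumeration over background settings, and the function name below are illustrative assumptions, not a method proposed in the cited work.

```python
# Rough sketch: score robustness as the fraction of perturbed background
# conditions under which the effect still occurs given the cause.
from itertools import product

def robustness(effect_model, n_background: int) -> float:
    """effect_model(background) -> bool, with the candidate cause held present."""
    settings = list(product([False, True], repeat=n_background))
    return sum(bool(effect_model(bg)) for bg in settings) / len(settings)

# C1 yields the effect only under one specific background setting; C2 always does.
effect_given_c1 = lambda bg: bg == (True, False)
effect_given_c2 = lambda bg: True

print(robustness(effect_given_c1, 2))  # 0.25 -> low explanatory value
print(robustness(effect_given_c2, 2))  # 1.0  -> robust, preferred
```

Under this reading, C2 would be preferred over C1 even if both actually brought about the effect in the situation at hand.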
# 4.4.5. Responsibility
The notions of responsibility and blame are relevant to causal selection, in that an event considered more responsible for an outcome is likely to be judged as a better explanation than other causes. In fact, responsibility relates closely to necessity, as it aims to place a measure of the "degree of necessity" of causes. An event that is fully responsible for an outcome is a necessary cause.
Chockler and Halpern [29] modified the structural equation model proposed by Halpern and Pearl [58] (see Section 2.1.1) to define responsibility for an outcome. Informally, they define the responsibility of a cause C for an event E under a situation based on the minimal number of changes required to the situation to make event E no longer occur. If N is the minimal number of changes required, then the responsibility of C for E is 1/(N+1). If N = 0, then C is fully responsible. Thus, one can see that an event that is considered more responsible than another requires fewer changes to prevent E than the other.
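A minimal sketch of this measure, using the informal reading above (the function name and example values are illustrative only):

```python
# Responsibility in the sense of Chockler and Halpern: 1/(N+1), where N is the
# minimal number of changes to the situation required (as described informally above).

def responsibility(n_changes: int) -> float:
    return 1.0 / (n_changes + 1)

print(responsibility(0))  # 1.0    -> fully responsible (a necessary cause)
print(responsibility(2))  # ~0.33  -> two other changes would be needed first
```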
While several different cognitive models of responsibility attribution have been proposed (c.f. [74, 92]), I focus on the model of Chockler and Halpern [29] because, as far as I am aware, experimental evaluation of the model shows it to be stronger than existing models [48], and because it is a formal model that is more readily adopted in artificial intelligence.
The structural model approach defines the responsibility of events, rather than individuals or groups, but one can see that it can be used in group models as well. Gerstenberg and Lagnado [48] show that the model has strong predictive power at attributing responsibility to individuals in groups. They ran a set of experiments in which participants played a simple game in teams, in which each individual was asked to count the number of triangles in an image, and teams won or lost depending on how accurate their collective counts were. After the game, participants rated the responsibility of each player for the outcome. Their results showed that the modified structural equation model of Chockler and Halpern [29] was more accurate at predicting participants' outcomes than a simple counterfactual model and the so-called Matching Model, in which the responsibility is defined as the degree of deviation from the outcome; in the triangle counting game, this would be how far off the individual was from the actual number of triangles.
4.4.6. Preconditions, Failure, and Intentions
An early study into explanation selection in cases of more than one cause was undertaken by Leddo et al. [96]. They conducted studies asking people to rate the probability of different factors as causes of events. As predicted by the intention/goal-based theory, goals were considered better explanations than relevant preconditions. However, people also rated conjunctions of preconditions and goals as better explanations of why the event occurred. For example, for the action "Fred went to the restaurant", participants rated explanations such as "Fred was hungry" more likely than "Fred had money in his pocket", but further rated "Fred was hungry and had money in his pocket" as an even more likely explanation, despite the fact that the conjoined cause itself is less likely (the conjunction of the two probabilities). This is consistent with the well-known conjunction fallacy [173], which shows that people sometimes estimate the probability of the conjunction of two facts higher than either of the individual facts if those two facts are representative of prior beliefs.
However, Leddo et al. [96] further showed that for failed or uncompleted actions, just one cause (goal or precondition) was considered a better explanation, indicating that failed actions are explained diï¬erently. This is consistent with physical causality explanations [106]. Leddo et al. argue that to explain an action, people combine their knowledge of the particular situation with a more general understanding about causal relations. Lombrozo [107] argues similarly that this is because failed actions are not goal-directed, because people do not intend to fail. Thus, people prefer mechanistic explanations for failed actions, rather than explanations that cite intentions.
McClure and Hilton [123] and McClure et al. [124] found that people tend to assign a higher probability to the conjoined goal and precondition for a successful action, even though they prefer the goal as the best explanation, except in extreme/unlikely situations; that is, when the precondition is unlikely to be true. They argue that this is largely due to the (lack of) controllability of unlikely actions. That is, extreme/unlikely events are judged to be harder to control, and thus actors would be less likely to intentionally select that action unless the unlikely opportunity presented itself. However, for normal and expected actions, participants preferred the goal alone as an explanation instead of the goal and precondition.
In a follow-up study, McClure and Hilton [125] looked at explanations of obstructed vs. unobstructed events, in which an event is obstructed by its precondition being false; for example, âFred wanted a coï¬ee, but did not have enough money to buy oneâ as an explanation for why Fred failed to get a coï¬ee. They showed that while goals are important to both, for obstructed events, the precondition becomes more important than for unobstructed events.
# 4.5. Explanation Evaluation
In this section, we look at work that has investigated the criteria that people use to evaluate explanations. The most important of these are: probability, simplicity, generality, and coherence with prior beliefs.
4.5.1. Coherence, Simplicity, and Generality
Thagard [171] argues that coherence is a primary criterion for explanation. He proposes the Theory of Explanatory Coherence, which specifies seven principles of how explanations relate to prior belief. He argues that these principles are foundational principles that explanations must observe to be acceptable. They capture properties such as: if some set of properties P explains some other property Q, then all properties in P must be coherent with Q; that is, people will be more likely to accept explanations if they are consistent with their prior beliefs. Further, he contends that, all things being equal, simpler explanations (those that cite fewer causes) and more general explanations (those that explain more events) are better explanations. The model has been demonstrated to align with how humans make judgements on explanations [151].
Read and Marcus-Newhall [153] tested the hypotheses from Thagard's theory of explanatory coherence [171] that people prefer simpler and more general explanations. Participants were asked to rate the probability and the "quality" of explanations with different numbers of causes. They were given stories containing several events to be explained, and several different explanations. For example, one story was about Cheryl, who is suffering from three medical problems: (1) weight gain; (2) fatigue; and (3) nausea. Different participant groups were given one of three types of explanations: (1) narrow: Cheryl has stopped exercising (explains weight gain), has mononucleosis (explains fatigue), or has a stomach virus (explains nausea); (2) broad: Cheryl is pregnant (explains all three); or (3) conjunctive: all three from item 1 at the same time. As predicted, participants preferred simple explanations (pregnancy) with fewer causes over more complex ones (all three in conjunction), and participants preferred explanations that explained more events.
# 4.5.2. Truth and Probability
Probability has two facets in explanation: the probability of the explanation being true; and the use of probability in an explanation. Neither has as much importance as one may expect.
The use of statistical relationships to explain events is considered to be unsatisfying on its own. This is because people desire causes to explain events, not associative relationships. Josephson and Josephson [81] give the example of a bag full of red balls. When selecting a ball randomly from the bag, it must be red, and one can ask: "Why is this ball red?". The answer that uses the statistical generalisation "Because all balls in the bag are red" is not a good explanation, because it does not explain why that particular ball is red. A better explanation is that someone painted it red. However, for the question "Why did we observe a red ball coming out of the bag?", it is a good explanation, because having only red balls in the bag does cause us to select a red one. Josephson and Josephson highlight the difference between explaining the fact observed (the ball is red) and explaining the event of observing the fact (a red ball was selected). To explain instances via statistical generalisations, we need to explain the causes of those generalisations too, not the generalisations themselves. If the reader is not convinced, consider my own example: a student coming to their teacher to ask why they only received 50% on an exam. An explanation that most students scored around 50% is not going to satisfy the student. Adding a cause for why most students only scored 50% would be an improvement. Explaining to the student why they specifically received 50% is even better, as it explains the cause of the instance itself.
The truth or likelihood of an explanation is considered an important criterion of a good explanation. However, Hilton [73] shows that the most likely or "true" cause is not necessarily the best explanation. Truth conditions4 are a necessary but not sufficient criterion for the generation of explanations.
4We use the term truth condition to refer to facts that are either true or considered likely by the explainee.
While a true or likely cause is one attribute of a good explanation, tacitly implying that the most probable cause is always the best explanation is incorrect. As an example, consider again the explosion of the Challenger shuttle (Section 4.4.2), in which a faulty seal was argued to be a better explanation than oxygen in the atmosphere. This is despite the fact that the "seal" explanation is a likely but not known cause, while the "oxygen" explanation is a known cause. Hilton argues that this is because the fact that there is oxygen in the atmosphere is presupposed; that is, the explainer assumes that the explainee already knows this.
McClure [122] also challenges the idea of probability as a criterion for explanations. Their studies found that people tend not to judge the quality of explanations around their probability, but instead around their so-called pragmatic influences of causal behaviour. That is, people judge explanations on their usefulness, relevance, etc., including via Grice's maxims of conversation [56] (see Section 5.1.1 for a more detailed discussion of this). This is supported by experiments such as those of Read and Marcus-Newhall [153] cited above, and the work from Tversky and Kahneman [173] on the conjunction fallacy.
Lombrozo [105] notes that the experiments on generality and simplicity performed by Read and Marcus-Newhall [153] cannot rule out that participants selected simple explanations because they did not have probability or frequency information for events. Lombrozo argues that if participants assumed that the events of stopping exercising, having mononucleosis, having a stomach virus, and being pregnant are all equally likely, then the probability of the conjunction of any three is much lower than that of any one alone. To counter this, she investigated the influence that probability has on explanation evaluation, in particular, when simpler explanations are less probable than more complex ones. Based on a similar experimental setup to that of Read and Marcus-Newhall [153], Lombrozo presented experimental participants with information about a patient with several symptoms that could be explained by one cause or several separate causes. In some setups, base-rate information about each disease was provided, in which the conjunction of the separate causes was more likely than the single (simpler) cause. Without base-rate information, participants selected the most simple (less likely) explanations. When base-rate information was included, this still occurred, but the difference was less pronounced; the conjunctive scenario had to be significantly more likely for it to be chosen. Lombrozo's final experiment showed that this effect was reduced again if participants were explicitly provided with the joint probability of the two events, rather than, as in the earlier experiments, having the probabilities provided separately.
Preston and Epley [150] show that the value that people assign to their own beliefs, both in terms of probability and personal relevance, corresponds with the explanatory power of those beliefs. Participants were each given a particular "belief" that is generally accepted by psychologists, but mostly unknown in the general public, and were then allocated to three conditions: (1) the applications condition, who were asked to list observations that the belief could explain; (2) the explanations condition, who were asked to list observations that could explain the belief (the inverse of the previous condition); and (3) a control condition who did neither. Participants were then asked to consider the probability of that belief being true, and to assign their perceived value of the belief to themselves and society in general. Their results show that people in the applications and explanations conditions both assigned a higher probability to the belief being true, demonstrating that if people link beliefs to certain situations, the perceived probability increased. However, for value, the results were different: those in the applications condition assigned a higher value than the other two conditions, and those in the explanations condition assigned a lower value than the other two conditions. This indicates that people assign higher values to beliefs that explain observations, but a lower value to beliefs that can be explained by other observations.
Kulesza et al. [90] investigate the balance between soundness and completeness of explanation. They investigated explanatory debugging of machine learning algorithms making personalised song recommendations. By using progressively simpler models with less features, they trained a recommender system to give less correct recommendations. Participants were given recommendations for songs on a music social media site, based on their listening history, and were placed into one of several treatments. Participants in each treatment would be given a diï¬erent combination of soundness and completeness, where soundness means that the explanation is correct and completeness means that all of the underlying causes are identiï¬ed. For example, one treatment had low soundness but high completeness, while another had medium soundness and medium completeness. Participants were given a list of recommended songs to listen to, along with the (possibly unsound and incomplete) explanations, and were subsequently asked why the song had been recommended. The participantsâ mental models were measured. The results show that sound and complete models were the best for building a correct mental model, but at the expense of cost/beneï¬t. Complete but unsound explanations improved the partic- ipantsâ mental models more than soundness, and gave a better perception of cost/beneï¬t, but reduced trust. Sound but incomplete explanations were the least preferred, resulting in higher costs and more requests for clariï¬cation. Overall, Kulesza et al. concluded that completeness was more important than soundness. From these results, Kulesza et al. [89] list three principles for explainability: (1) Be sound ; (2) Be complete; but (3) Donât over- whelm. Clearly, principles 1 and 2 are at odds with principle 3, indicating that careful design must be put into explanatory debugging systems.
4.5.3. Goals and Explanatory Mode
Vasilyeva et al. [177] show that the goal of the explainer is key to how explanations are evaluated, in particular, in relation to the mode of explanation used (i.e. material, formal, efficient, final). In their experiments, they gave participants different tasks with varying goals. For instance, some participants were asked to assess the causes behind some organisms having certain traits (efficient), others were asked to categorise organisms into groups (formal), and a third group were asked for what reason organisms would have those traits (functional). They provided explanations using different modes for parts of the tasks and then asked participants to rate the "goodness" of an explanation provided to them. Their results showed that the goals not only shifted the focus of the questions asked by participants, but also that participants preferred modes of explanation that were more congruent with the goal of their task. This is further evidence that being clear about the question being asked is important in explanation.
# 4.6. Cognitive Processes and XAI
This section presents some ideas on how the work on the cognitive processes of explanation affects researchers and practitioners in XAI.
The idea of explanation selection is not new in XAI. Particularly in machine learning, in which models have many features, the problem is salient. Existing work has primarily looked at selecting which features in the model were important for a decision, mostly built on local explanations [158, 6, 157] or on information gain [90, 89]. However, as far as the authors are aware, there are currently no studies that look at the cognitive biases of humans as a way to select explanations from a set of causes.
# 4.6.1. Abductive Reasoning
Using abductive reasoning to generate explanations has a long history in artificial intelligence [97], aiming to solve problems such as fault diagnosis [144], plan/intention recognition [24], and generalisation in learning [133]. Findings from such work have parallels with many of the results from cognitive science/psychology outlined in this section. Leake [95] provides an excellent overview of the challenges of abduction for everyday explanation, and summarises work that addresses these. He notes that three of the main tasks an abductive reasoner must perform are: (1) determining what to explain about a given situation (determining the question); (2) generating explanations (abductive reasoning); and (3) evaluating the "best" explanation (explanation selection and evaluation). He stresses that determining the goal of the explanation is key to providing a good explanation, echoing the social scientists' view that the explainee's question is important, and that such questions are typically focused on anomalies or surprising observations.
The work from Rehder [154, 155] and Lombrozo [108] shows that explanation is good for learning and generalisation. This is interesting and relevant for XAI, because it shows that individual users should require less explanation the more they interact with a system. First, because they will construct a better mental model of the system and be able to generalise its behaviour (effectively learning its model). Second, as they see more cases, they should become less surprised by abnormal phenomena, which, as noted in Section 4.4.2, is a primary trigger for requesting explanations. An intelligent agent that presents, unprompted, an explanation alongside every decision runs a risk of providing explanations that become less needed and more distracting over time.
The work on inherent vs. extrinsic features (Section 4.2.4) is relevant for many AI applications, in particular classiï¬cation tasks. In preliminary work, Bekele et al. [7] use the inherence bias [30] to explain person identiï¬cation in images. Their re-identiï¬cation system is tasked with determining whether two images contain the same person, and uses inherent features such as age, gender, and hair colour, as well as extrinsic features such as clothing or wearing a backpack. Their explanations use the inherence bias with the aim of improving the acceptability of the explanation. In particular, when the image is deemed to be of the same person, extrinsic properties are used, while for diï¬erent people, intrinsic properties are used. This work is preliminary and has not yet been evaluated, but it is an excellent example of using cognitive biases to improve explanations.
4.6.2. Mutability and Computation
Section 4.3 studies the heuristics that people use to discount some events over others during mental simulation of causes. This is relevant to some areas of explainable AI because, in the same way that people apply these heuristics to more efficiently search through a causal chain, so too can these heuristics be used to more efficiently find causes, while still identifying causes that a human explainee would expect.
The notions of causal temporality and responsibility would be reasonably straightforward to capture in many models; however, if one can capture concepts such as abnormality, responsibility, intention, or controllability in models, this provides further opportunities.
4.6.3. Abnormality
Abnormality clearly plays a role in explanation and interpretability. For explanation, it serves as a trigger for explanation, and is a useful criterion for explanation selection. For interpretability, it is clear that "normal" behaviour will, on aggregate, be judged more explainable than abnormal behaviour.
Abnormality is a key criterion for explanation selection, and as such, the ability to identify abnormal events in causal chains could improve the explanations that can be supplied by an explanatory agent. While for some models, such as those used for probabilistic reasoning, identifying abnormal events would be straightforward, and for others, such as normative systems, they are "built in", for other types of models, identifying abnormal events could prove difficult but valuable.
One important note to make is regarding abnormality and its application to "non-contrastive" why-questions. As noted in Section 2.6.2, questions of the form "Why P?" may have an implicit foil, and determining this can improve explanation. In some cases, normality could be used to mitigate this problem. That is, in the case of "Why P?", we can interpret this as "Why P rather than the normal case Q?" [72]. For example, consider the application of assessing the risk of glaucoma [22]. Instead of asking why a person was given a positive diagnosis rather than a negative diagnosis, the explanatory agent could provide one or more default foils, which would be "stereotypical" examples of people who were not diagnosed and whose symptoms were more regular with respect to the general population. Then, the question becomes why the person was diagnosed with glaucoma compared to these default stereotypical cases without glaucoma.
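One rough way to construct such a default foil is sketched below under purely illustrative assumptions: the feature representation, the data, and the choice of the negative example nearest the class mean are hypothetical, not taken from the cited glaucoma work.

```python
# Sketch: pick a stereotypical negative case as the default foil for "Why P?".
import numpy as np

def default_foil(negative_cases: np.ndarray) -> np.ndarray:
    """Return the negative case closest to the mean of the negative class."""
    mean = negative_cases.mean(axis=0)
    return negative_cases[np.argmin(np.linalg.norm(negative_cases - mean, axis=1))]

# Rows are undiagnosed patients; columns are hypothetical measurements the model uses.
negatives = np.array([[14.0, 0.20], [15.5, 0.30], [13.8, 0.25]])
foil = default_foil(negatives)
# The contrastive question then becomes: why this patient's measurements rather than the foil's?
print(foil)
```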
# 4.6.4. Intentionality and Functionality
The work discussed in Section 4.4.3 demonstrates the importance of intentionality and functionality in selecting explanations. As discussed in Section 3.6.1, these concepts are highly relevant to deliberative AI systems, in which concepts such as goals and intentions are ï¬rst-class citizens. However, the importance of this to explanation selection rather than social attribution must be drawn out. In social attribution, folk psychological concepts such as intentions are attributed to agents to identify causes and explanations, while in this section, intentions are used as part of the cognitive process of selecting explanations from a causal chain. Thus, even for a non-deliberative system, labelling causes as intentional could be useful. For instance, consider a predictive model in which some features represent that an intentional event has occurred. Prioritising these may lead to more intuitive explanations.
4.6.5. Perspectives and Controllability
The ï¬nding from Kahneman and Tversky [83] that perspectives change the events people mutate, discussed in Section 4.3, is important in multi-agent contexts. This implies that when explaining a particular agentâs decisions or behaviour, the explanatory agent could focus on undoing actions of that particular agent, rather than others. This is also consistent with the research on controllability discussed in Section 4.3, in that, from the perspective of the agent in question, they can only control their own actions.
In generating explainable behaviour, all other things being equal, agents could select actions that lead to future actions being more constrained, as the subsequent actions are less likely to have counterfactuals undone by the observer.
4.6.6. Evaluation of Explanations
Likelihood is not everything. While likely causes are part of good explanations, they do not strongly correlate with explanations that people find useful. The work outlined in this section provides three criteria that are at least as important: simplicity, generality, and coherence.
For explanation, if the goal of an explanatory agent is to provide the most likely causes of an event, then these three criteria can be used to prioritise among the most likely events. However, if the goal of an explanatory agent is to generate trust between itself and its human observers, these criteria should be considered as first-class criteria in explanation generation beside or even above likelihood. For example, providing simpler explanations that increase the likelihood that the observer both understands and accepts the explanation may increase trust better than giving more likely explanations.
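A toy sketch of treating these criteria as first-class citizens alongside likelihood is given below; the way each criterion is quantified and the weights are hypothetical choices for illustration, not a validated model of human explanation evaluation.

```python
# Hypothetical scoring of candidate explanations by likelihood, simplicity,
# generality, and coherence, rather than by likelihood alone.

def score(likelihood: float, n_causes: int, n_events_explained: int,
          coherence: float, weights=(1.0, 1.0, 1.0, 1.0)) -> float:
    simplicity = 1.0 / n_causes          # fewer cited causes -> simpler
    generality = float(n_events_explained)
    w_l, w_s, w_g, w_c = weights
    return w_l * likelihood + w_s * simplicity + w_g * generality + w_c * coherence

# A less likely but simpler, more general, more coherent candidate can win out.
print(score(0.3, n_causes=1, n_events_explained=3, coherence=0.9))  # 5.2
print(score(0.6, n_causes=3, n_events_explained=1, coherence=0.5))  # ~2.43
```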
For interpretability, similarly, these three criteria can form part of decision-making algorithms; for example, a deliberative agent may opt to select an action that is less likely to achieve its goal, if the action helps towards other goals that the observer knows about, and has a smaller number of causes to refer to.
The selection and evaluation of explanations in artiï¬cial intelligence has been studied in some detail, going back to early work on abductive reasoning, in which explanations with structural simplicity, coherence, or minimality are preferred (e.g. [156, 97]) and the concept of explanatory power of a set of hypotheses is deï¬ned as the set of manifestations those hypotheses account for [1]. Other approaches use probability as the deï¬ning factor to determine the most likely explanation (e.g. [59]). In addition to the cognitive biases of people to discount probability, the probabilistic approaches have the problem that such ï¬ne-grained probabilities are not always available [95]. These selection mechanisms are context-independent and do not account for the explanations as being relevant to the question nor the explainee.
Leake [94], on the other hand, argues for goal-directed explanations in abductive reasoning that explicitly aim to reduce knowledge gaps; speciï¬cally to explain why an observed event is âreasonableâ and to help identify faulty reasoning processes that led to it being surprising. He proposes nine evaluation dimensions for explanations: timeliness, knowability, distinctiveness, predictive power, causal force, independence, repairability, blockability, and desirability. Some of these correspond to evaluation criteria outlined in Section 4.5; for example, distinctiveness notes that a cause that is surprising is of good explanatory value, which equates to the criteria of abnormality.
# 5. Social Explanation â How Do People Communicate Explanations?
Causal explanation is first and foremost a form of social interaction. One speaks of giving causal explanations, but not attributions, perceptions, comprehensions, categorizations, or memories. The verb to explain is a three-place predicate: Someone explains something to someone. Causal explanation takes the form of conversation and is thus subject to the rules of conversation. [Emphasis original] – Hilton [72]
This final section looks at the communication problem in explanation, something that has been studied little in explainable AI so far. The work outlined in this section asserts that the explanation process does not stop at just selecting an explanation, but considers that an explanation is an interaction between two roles: explainer and explainee (perhaps the same person/agent playing both roles), and that there are certain "rules" that govern this interaction.
# 5.1. Explanation as Conversation
Hilton [72] presents the most seminal article on the social aspects of explanation, proposing a conversational model of explanation based on foundational work undertaken by both himself and others. The primary argument of Hilton is that explanation is a conversation, and this is how it differs from causal attribution. He argues that there are two stages: the diagnosis of causality, in which the explainer determines why an action/event occurred; and the explanation, which is the social process of conveying this to someone. The problem is then to "resolve a puzzle in the explainee's mind about why the event happened by closing a gap in his or her knowledge" [72, p. 66].
The conversational model argues that good social explanations must be relevant. This means that they must answer the question that is asked: merely identifying causes does not provide good explanations, because many of the causes will not be relevant to the question; or worse still, if the "most probable" causes are selected to present to the explainee, they will not be relevant to the question asked. The information that is communicated between explainer and explainee should conform to the general rules of cooperative conversation [56], including being relevant to the explainee themselves, and what they already know.
Hilton [72] terms the second stage explanation presentation, and argues that when an explainer presents an explanation to an explainee, they are engaged in a conversation. As such, they tend to follow basic rules of conversation, which Hilton argues are captured by Griceâs maxims of conversation [56]: (a) quality; (b) quantity; (c) relation; and (d) manner. Coarsely, these respectively mean: only say what you believe; only say as much as is necessary; only say what is relevant; and say it in a nice way.
These maxims imply that the shared knowledge between explainer and explainee are presuppositions of the explanations, and the other factors are the causes that should be explained; in short, the explainer should not explain any causes they think the explainee already knows (epistemic explanation selection).
Previous sections have presented the relevant literature about causal connection (Sec- tions 3 and 4) and explanation selection (Sections 4). In the remainder of this subsection, we describe Griceâs model and present related research that analyses how people select explanations relative to subjective (or social) viewpoints, and present work that supports Hiltonâs conversational model of explanation [72].
5.1.1. Logic and Conversation
Grice's maxims [56] (or the Gricean maxims) are a model for how people engage in cooperative conversation. Grice observes that conversational statements do not occur in isolation: they are often linked together, forming a cooperative effort to achieve some goal of information exchange or some social goal, such as social bonding. He notes then that a general principle one should adhere to in conversation is the cooperative principle: "Make your conversational contribution as much as is required, at the stage at which it occurs, by the accepted purpose or direction of the talk exchange in which you are engaged" [56, p. 45].
For this, Grice [56] distinguishes four categories of maxims that would help to achieve the cooperative principle:
1. Quality: Make sure that the information is of high quality; try to make your contribution one that is true. This contains two maxims: (a) do not say things that you believe to be false; and (b) do not say things for which you do not have sufficient evidence.
2. Quantity: Provide the right quantity of information. This contains two maxims: (a) make your contribution as informative as is required; and (b) do not make it more informative than is required.
3. Relation: Only provide information that is related to the conversation. This consists of a single maxim: (a) Be relevant. This maxim can be interpreted as a strategy for achieving the maxim of quantity.
4. Manner: Relating to how one provides information, rather than what is provided. This consists of the "supermaxim" of "Be perspicuous", but according to Grice, is broken into "various" maxims such as: (a) avoid obscurity of expression; (b) avoid ambiguity; (c) be brief (avoid unnecessary prolixity); and (d) be orderly.
Grice [56] argues that for cooperative conversation, one should obey these maxims, and that people learn such maxims as part of their life experience. He further links these maxims to implicature, and shows that it is possible to violate some maxims while still being cooperative, in order to either not violate one of the other maxims, or to achieve some particular goal, such as to implicate something else without saying it. Irony and metaphors are examples of violating the quality maxims, but other examples, such as: Person A: âWhat did you think of the food they served? â; Person B: âWell, it was certainly healthyâ, violates the maxim of manner, but is implying perhaps that Person B did not enjoy the food, without them actually saying so.
Following from the claim that explanations are conversations, Hilton [72] argues that explanations should follow these maxims. The quality and quantity categories present logical characterisations of the explanations themselves, while the relation and manner categories define how the explanations should be given.
5.1.2. Relation & Relevance in Explanation Selection
Of particular interest here is research to support these Gricean maxims; in particular, the related maxims of quantity and relevance, which together state that the speaker should only say what is necessary and relevant. In social explanation, research has shown that people select explanations to adhere to these maxims by considering the particular question being asked by the explainee, but also by giving explanations that the explainee does not already accept as being true. To quote Hesslow:
What are being selected are essentially questions, and the causal selection that follows from this is determined by the straightforward criterion of explanatory relevance. – [69, p. 30]
In Section 4.4.1, we saw evidence to suggest that the differences between the fact and foil for contrastive why-questions are the relevant causes for explanation. In this section, we review work on the social aspects of explanation selection and evaluation.
Epistemic Relevance. Slugoski et al. [165] present evidence of Gricean maxims in explanation, and of support for the idea of explanation as conversation. They argue that the form of explanation must take into account its function as an answer to a specified why-question, and that this should take place within a conversational framework, including the context of the explainee. They gave experimental participants information in the form of a police report about an individual named George who had been charged with assault after a school fight. This information contained information about George himself, and about the circumstances of the fight. Participants were then paired with another "participant" (played by a researcher), were told that the other participant had either: (a) information about George; (b) the circumstances of the fight; or (c) neither; and were asked to answer why George had assaulted the other person. The results showed that participants provided explanations tailored to their expectations of what the hearer already knows, selecting single causes based on abnormal factors of which they believe the explainee is unaware; and that participants change their explanations of the same event when presenting to explainees with differing background knowledge.
Jaspars and Hilton [80] and Hilton [73] both argue that such results demonstrate that, as well as being true or likely, a good explanation must be relevant to both the question and to the mental model of the explainee. Byrne [16] offers a similar argument in her computational model of explanation selection, noting that humans are model-based, not proof-based, so explanations must be relevant to a model.
Halpern and Pearl [59] present an elegant formal model of explanation selection based on epistemic relevance. This model extends their work on structural causal models [58], discussed in Section 2.1.1. They define an explanation as a fact that, if found to be true, would constitute an actual cause of a specific event.
Recall from Section 2.1.1 that structural causal models [58] contain variables and functions between these variables. A situation is a unique assignment from variables to values. Halpern and Pearl [59] then define an epistemic state as a set of situations, one for each situation that the explainee considers possible. Explaining the causes of an event then becomes providing the values for those variables that remove some situations from the epistemic state such that the cause of the event can be uniquely identified. They then further show how to provide explanations that describe the structural model itself, rather than just the values of variables, and how to reason when provided with probability distributions over events. Given a probabilistic model, Halpern and Pearl formally define the explanatory power of partial explanations. Informally, this states that explanation C1 has more explanatory power than explanation C2 for explanandum E if and only if providing C1 to the explainee increases the prior probability of E being true more than providing C2 does.
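To make the selection mechanism concrete, the following is a minimal Python sketch of how providing variable values prunes an epistemic state until a cause is uniquely identified. It is not Halpern and Pearl's formalism: the situation encoding, the cause_identified test, and the lightning/match example are illustrative assumptions only.

```python
from itertools import product

# An epistemic state is the set of situations (complete variable assignments)
# that the explainee considers possible.
epistemic_state = [
    {"lightning": l, "match": m, "fire": l or m}
    for l, m in product([False, True], repeat=2)
]

def explain(epistemic_state, explanation):
    """Keep only situations consistent with the explanation (a partial assignment)."""
    return [s for s in epistemic_state
            if all(s[var] == val for var, val in explanation.items())]

def cause_identified(epistemic_state, candidate_causes, event):
    """True if, among situations where the event holds, exactly one candidate
    cause is active in all of them (a crude stand-in for 'actual cause')."""
    relevant = [s for s in epistemic_state if s[event]]
    active = [c for c in candidate_causes
              if relevant and all(s[c] for s in relevant)]
    return len(active) == 1

# Before the explanation, the explainee cannot tell whether lightning or the
# match caused the fire.
print(cause_identified(epistemic_state, ["lightning", "match"], "fire"))  # False

# Telling the explainee "no match was dropped" removes those situations,
# after which lightning is uniquely identified as the cause.
pruned = explain(epistemic_state, {"match": False})
print(cause_identified(pruned, ["lightning", "match"], "fire"))  # True
```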
Dodd and Bradshaw [38] demonstrate that the perceived intention of a speaker is important in implicature. Just as leading questions in eyewitness reports can have an
effect on the judgement of the eyewitness, so too can it affect explanation. They showed that the meaning and presuppositions that people infer from conversational implicatures depend heavily on the perceived intent or bias of the speaker. In their experiments, they asked participants to assess, among other things, the causes of a vehicle accident, with the account of the accident being given by different parties: a neutral bystander vs. the driver of the vehicle. Their results show that the bystander's information is more trusted, but also that incorrect presuppositions are recalled as "facts" by the participants if the account was provided by the neutral source, but not the biased source, even if they observed the correct facts to begin with. Dodd and Bradshaw argue that this is because the participants filtered the information relative to their perceived intention of the person providing the account.
The Dilution Effect. Tetlock and Boettger [169] investigated the effect of implicature with respect to the information presented, particularly its relevance, showing that when presented with additional, irrelevant information, people's implicatures are diluted. They performed a series of controlled experiments in which participants were presented with information about an individual, David, and were asked to make predictions about David's future; for example, what his grade point average (GPA) would be. There were two control groups and two test groups. In the control groups, people were told David spent either 3 or 31 hours studying each week (which we will call groups C3 and C31), while in the diluted test groups, subjects were also provided with additional irrelevant information about David (groups T3 and T31). The results showed that those in the diluted T3 group predicted a higher GPA than those in the undiluted C3 group, while those in the diluted T31 group predicted a lower GPA than those in the undiluted C31 group. Tetlock and Boettger argued that this is because participants assumed the irrelevant information may have indeed been relevant, but its lack of support for the prediction led to less extreme predictions. This study and the studies on which it built demonstrate the importance of relevance in explanation.
In a further study, Tetlock et al. [170] explicitly controlled for conversational maxims by informing one set of participants that the information displayed to them was chosen at random from the history of the individual. Their results showed that the dilution effect disappeared when conversational maxims were deactivated in this way, providing further evidence that the dilution effect is a conversational phenomenon.
Together, these bodies of work and those on which they build demonstrate that Grice's maxims are indeed important in explanation, for several reasons; notably, they are a good model for how people expect conversation to happen. Further, providing more information than necessary not only increases the cognitive load of the explainee, but also dilutes the effect of the information that is truly important.
# 5.1.3. Argumentation and Explanation
Antaki and Leudar [3] extend Hilton's conversational model [72] from dialogues to arguments. Their research shows that a majority of statements made in explanations are actually argumentative claim-backings; that is, justifying that a particular cause indeed did hold (or was thought to have held) when a statement is made. Thus, explanations are used not only to report causes, but also to back claims, which is an argument rather than just a question-answer model. They extend the conversational model to a wider class of contrast cases. As well as explaining causes, one must be prepared to defend a particular
claim made in a causal explanation. Thus, explanations extend not just to the state of affairs external to the dialogue, but also to the internal attributes of the dialogue itself. An example of the distinction between explanation and argument provided by Antaki and Leudar [3, p. 186] is "The water is hot because the central heating is on". The distinction lies in whether the speaker believes that the hearer believes that the water is hot or not. If the speaker believes that the hearer believes the water is hot, then the central heating being on offers an explanation: it contrasts with a case in which the water is not hot. If the speaker believes that the hearer does not believe the water is hot, then this is an argument that the water should indeed be hot; particularly if the speaker believes that the hearer believes that the central heating is on. The speaker is thus trying to persuade the hearer that the water is hot. However, the distinction is not always so clear, because explanations can have argumentative functions.
# 5.1.4. Linguistic structure
Malle et al. [116] argue that the linguistic structure of explanations plays an important role in interpersonal explanation. They hypothesise that some linguistic devices are used not to change the reason, but to indicate perspective and to manage impressions. They asked experimental participants to select three negative and three positive intentional actions that they did recently that were outside of their normal routine. They then asked participants to explain why they did this, and coded the answers. Their results showed several interesting findings.
First, explanations for reasons can be provided in two different ways: marked or unmarked. An unmarked reason is a direct reason, while a marked reason has a mental state marker attached. For example, to answer the question "Why did she go back into the house?", the explanations "The key is still in the house" and "She thinks the key is still in the house" both give the same reason, but with different constructs that are used to give different impressions: the second explanation gives an impression that the explainer may not be in agreement with the actor.
Second, people use belief markers and desire markers; for example, "She thinks the key is in the house" and "She wants the key to be in her pocket" respectively. In general, dropping first-person markings, that is, a speaker dropping "I/we believe", is common in conversation, and listeners automatically infer that this is a belief of the speaker. For example, "The key is in the house" indicates a belief on behalf of the speaker and is inferred to mean "I believe the key is in the house" [116]5.
However, for the third-person perspective, this is not the case. The unmarked versions of explanations, especially of beliefs, generally imply some sort of agreement from the explainer: "She went back in because the key is in the house" invites the explainee to infer that the actor and the explainer share the belief that the key is in the house. Whereas "She went back in because she believes the key is in the house" is ambiguous: it does not (necessarily) indicate the belief of the speaker. The reason "She went back in because she mistakenly believes the key is in the house" offers no ambiguity about the speaker's belief.
Malle [112, p. 169, Table 6.3] argues that different markers sit on a scale from distancing to embracing. For example, "she mistakenly believes" is more
5Malle [112, Chapter 4] also briefly discusses valuings as markers, such as "She likes", but notes that
these are rarely dropped in reasons.
distancing than "she jumped to the conclusion", while "she realises" is embracing. Such constructs aim not to provide different reasons, but merely to allow the speaker to manage impressions about themselves and the actor.
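As an illustration only, an explanatory agent could treat such markers as a presentational layer on top of an already-selected reason. The sketch below is a simplifying assumption, not Malle's model; the marker phrases and their ordering merely echo the examples above.

```python
# Belief markers ordered roughly from distancing to embracing (illustrative only).
BELIEF_MARKERS = ["mistakenly believes", "jumped to the conclusion that",
                  "thinks", "realises"]

def render_reason(reason, marker=None, actor="she"):
    """Render the same reason either unmarked or with a mental-state marker.

    The reason itself does not change; only the impression the speaker gives
    of their own stance towards the actor's belief changes.
    """
    if marker is None:
        # Unmarked: invites the hearer to infer that the speaker shares the belief.
        return f"{actor.capitalize()} went back in because {reason}."
    return f"{actor.capitalize()} went back in because {actor} {marker} {reason}."

print(render_reason("the key is in the house"))
print(render_reason("the key is in the house", marker="mistakenly believes"))
```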
# 5.2. Explanatory Dialogue
If we accept the model of explanation as conversation, then we may ask whether there are particular dialogue structures for explanation. There is a collection of such articles, ranging from dialogues for pragmatic explanation [176] to definitions based on the transfer of understanding [179]. However, the most relevant for the problem of explanation in AI is a body of work led largely by Walton.
Walton [180] proposed a dialectical theory of explanation, putting forward ideas similar to those of Antaki and Leudar [3] in that some parts of an explanatory dialogue require the explainer to provide backing arguments for claims. In particular, he argues that such an approach is more suited to "everyday" or interpersonal explanation than models based on scientific explanation. He further argues that such models should be combined with ideas of explanation as understanding, meaning that social explanation is about transferring knowledge from explainer to explainee. He proposes a series of conditions on the dialogue and its interactions as to when and how an explainer should transfer knowledge to an explainee.
In a follow-on paper, Walton [182] proposes a formal dialogue model called CE, based on an earlier persuasion dialogue [184], which defines the conditions on how an explanatory dialogue commences, rules governing the locutions in the dialogue, rules governing the structure or sequence of the dialogue, success rules, and termination rules.
Extending this work further [182], Walton [183] describes an improved formal dialogue system for explanation, including a set of speech act rules for practical explanation, consisting of an opening stage, exploration stage, and closing stage. In particular, this paper focuses on the closing stage to answer the question: how do we know that an explanation has "finished"? Scriven [162] argues that to test someone's understanding of a topic, merely asking them to recall facts that have been told to them is insufficient; they should also be able to answer new questions that demonstrate generalisation of and inference from what has been learnt: an examination.
To overcome this, Walton proposes the use of examination dialogues [181] as a method for the explainer to determine whether the explainee has correctly understood the explanation; that is, whether the explainee has a real understanding, not merely a perceived (or claimed) understanding. Walton proposes several rules for the closing stage of the examination dialogue, including a rule for terminating due to "practical reasons", which aims to solve the problem of the failure cycle, in which repeated explanations are requested and thus the dialogue does not terminate.
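A minimal sketch of how such a staged dialogue could be scoped computationally is given below. The stage names follow Walton's description, but the state machine, the examination callback, and the attempt cap that terminates the dialogue "for practical reasons" are simplifying assumptions rather than his formal rules.

```python
from enum import Enum, auto

class Stage(Enum):
    OPENING = auto()      # explainee poses the why-question
    EXPLORATION = auto()  # explainer transfers understanding, possibly over several turns
    CLOSING = auto()      # examination: test whether understanding was really transferred

def run_explanation_dialogue(question, give_explanation, examine, max_attempts=3):
    """Drive a staged explanation dialogue.

    give_explanation(question, attempt) -> explanation text
    examine(explanation) -> True if the explainee passes the examination questions
    The max_attempts cap terminates the dialogue for practical reasons,
    avoiding an endless failure cycle of repeated explanation requests.
    """
    stage = Stage.OPENING
    attempt = 0
    explanation = None
    while True:
        if stage is Stage.OPENING:
            stage = Stage.EXPLORATION
        elif stage is Stage.EXPLORATION:
            explanation = give_explanation(question, attempt)
            attempt += 1
            stage = Stage.CLOSING
        elif stage is Stage.CLOSING:
            if examine(explanation) or attempt >= max_attempts:
                return explanation          # success, or terminated for practical reasons
            stage = Stage.EXPLORATION       # understanding not demonstrated: re-explain
```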
Arioua and Croitoru [4] formalise Walton's work on explanation dialogue [183], grounding it in a well-known argumentation framework [147]. In addition, they provide formalisms of commitment stores and understanding stores for maintaining what each party in the dialogue is committed to, and what they already understand. This is necessary to prevent circular arguments. They further define how to shift between different dialogues in order to enable nested explanations, in which an explanation produces a new why-question, but also to shift from an explanation to an argumentation dialogue, which supports nested argument due to a challenge from an explainee, as noted by Antaki and
Leudar [3]. The rules define when this dialectical shift can happen, when it can return to the explanation, and what the transfer of states is between these; that is, how the explanation state is updated after a nested argument dialogue.
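The bookkeeping that makes such shifts well-behaved can be pictured with two simple stores per participant. The following is only a sketch under assumed semantics, not Arioua and Croitoru's formalism:

```python
from dataclasses import dataclass, field

@dataclass
class Party:
    """Per-participant state in the spirit of commitment and understanding stores."""
    commitments: set = field(default_factory=set)  # claims this party stands behind
    understood: set = field(default_factory=set)   # explanations already understood

def assert_claim(explainer: Party, explainee: Party, claim: str, challenged: bool) -> str:
    """Make a claim within an explanation; shift dialogues if it is challenged.

    Checking the stores before shifting is what blocks circular moves: a claim the
    explainee is already committed to cannot trigger a fresh argumentation dialogue.
    """
    explainer.commitments.add(claim)
    if challenged and claim not in explainee.commitments:
        return "shift-to-argumentation"   # nested argument dialogue over this claim
    explainee.understood.add(claim)
    return "continue-explanation"         # return to (or stay in) the explanation
```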
5.3. Social Explanation and XAI
This section presents some ideas on how research from social explanation affects researchers and practitioners in XAI.
# 5.3.1. Conversational Model
The conversational model of explanation according to Hilton [72], and its subsequent extension by Antaki and Leudar [3] to consider argumentation, are appealing and useful models for explanation in AI. In particular, they are appealing because of their generality: they can be used to explain human or agent actions, emotions, physical events, algorithmic decisions, etc. They abstract away from the cognitive processes of causal attribution and explanation selection, and therefore do not commit to any particular model of decision making, of how causes are determined, how explanations are selected, or even any particular mode of interaction.
One may argue that in digital systems, many explanations would be better done in a visual manner rather than a conversational manner. However, the models of Hilton [72], Antaki and Leudar [3], and Walton [183] are all independent of language. They define interactions based on questions and answers, but these need not be verbal. Questions could be asked by interacting with a visual object, and answers could similarly be provided in a visual way. While Grice's maxims are about conversation, they apply just as well to other modes of interaction. For instance, a good visual explanation would display only quality explanations that are relevant and relate to the question; these are exactly Grice's maxims.
I argue that, if we are to design and implement agents that can truly explain themselves, in many scenarios the explanation will have to be interactive and adhere to maxims of communication, irrespective of the medium used. For example, what should an explanatory agent do if the explainee does not accept a selected explanation?
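One possible answer, sketched below purely as an assumption about how the interaction could be scoped, is to treat explanation as a loop that offers the most relevant cause first (relation and quantity maxims) and falls back to further candidates, or to an argument, when the offered explanation is not accepted:

```python
def interactive_explanation(question, candidate_causes, relevance, accepts):
    """Offer explanations one at a time, most relevant first.

    relevance(cause, question) -> score used to rank candidates (relation maxim);
    accepts(cause) -> whether the explainee accepts the offered explanation.
    Both callables are placeholders for whatever models the agent actually has.
    """
    ranked = sorted(candidate_causes, key=lambda c: relevance(c, question), reverse=True)
    for cause in ranked:
        if accepts(cause):
            return cause            # stop here: say no more than is needed (quantity)
    return None                     # nothing accepted: escalate, e.g. shift to argument
```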
# 5.3.2. Dialogue
Walton's explanation dialogues [180, 182, 183], which build on well-accepted models from argumentation, are closer to the notion of computational models than those of Hilton [72] or Antaki and Leudar [3]. While Walton also abstracts away from the cognitive processes of causal attribution and explanation selection, his dialogues are more idealised ways of how explanation can occur, and thus make certain assumptions that may be reasonable for a model but, of course, do not account for all possible interactions. However, this is appealing from an explainable AI perspective because it is clear that the interactions between an explanatory agent and an explainee will need to be scoped to be computationally tractable. Walton's models provide a nice step towards implementing Hilton's conversational model.
Arioua and Croitoru's formal model for explanation [4] not only brings us one step closer to a computational model, but also nicely brings together the models of Hilton [72] and Antaki and Leudar [3] by allowing arguments over claims in explanations. Such formal models of explanation could work together with concepts such as conversation policies [55] to implement explanations.
The idea of interactive dialogue for XAI is not new. In particular, a body of work by Cawsey [17, 18, 19] describes EDGE: a system that generates natural-language dialogues for explaining complex principles. Cawsey's work was novel because it was the first to investigate discourse within an explanation, rather than discourse more generally. Due to the complexity of explanation, Cawsey advocates context-specific, incremental explanation, interleaving the planning and execution of an explanation dialogue. EDGE separates content planning (what to say) from dialogue planning (organisation of the interaction). Interruptions attract their own sub-dialogue. The flow of the dialogue is context dependent, in which context is given by: (1) the current state of the discourse relative to the goal/sub-goal hierarchy; (2) the current focus of the explanation, such as which components of a device are currently under discussion; and (3) assumptions about the user's knowledge. Both the content and the dialogue are influenced by the context. The dialogue is planned using a rule-based system that breaks explanatory goals into sub-goals and utterances. Evaluation of EDGE [19] is anecdotal, based on a small set of people, and with no formal evaluation or comparison.
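The context that drives such a dialogue can be pictured as a small record updated after every utterance. The sketch below mirrors the three ingredients listed above, but the data layout and the goal-expansion step are assumptions made for illustration, not Cawsey's rules:

```python
from dataclasses import dataclass, field

@dataclass
class ExplanationContext:
    goal_stack: list = field(default_factory=list)  # discourse state: goal/sub-goal hierarchy
    focus: str = ""                                  # component currently under discussion
    user_knows: set = field(default_factory=set)     # assumptions about the user's knowledge

def next_utterance(ctx: ExplanationContext, subgoals: dict):
    """Interleave planning and execution: expand one explanatory goal per turn.

    Goals the user is assumed to know are skipped, so the same plan yields
    different dialogues for different users (context-specific explanation).
    """
    while ctx.goal_stack:
        goal = ctx.goal_stack.pop()
        if goal in ctx.user_knows:
            continue                                   # nothing to say about this goal
        ctx.focus = goal
        ctx.goal_stack.extend(subgoals.get(goal, []))  # break the goal into sub-goals
        return f"explain({goal})"
    return None                                        # explanation complete
```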
At a similar time, Moore and Paris [134] devised a system for explanatory text generation within dialogues that also considers context. They explicitly reject the notion that schemata can be used to generate explanations, because they are too rigid and lack the intentional structure to recover from failures or misunderstandings in the dialogue. Like Cawsey's EDGE system, Moore and Paris explicitly represent the user's knowledge, and plan dialogues incrementally. The two primary differences from EDGE are that Moore and Paris's system explicitly models the effects that utterances can have on the hearer's mental state, providing flexibility that allows recovery from failure and misunderstanding; and that the EDGE system follows an extended explanatory plan, including probing questions, which are deemed less appropriate in Moore and Paris's application area of advisory dialogues. The focus of Cawsey's and Moore and Paris's work is on applications such as intelligent tutoring, rather than on AI that explains itself, but many of the lessons and ideas generalise.
EDGE and other related research on interactive explanation consider only verbal dialogue. As noted above, abstract models of dialogue such as those proposed by Walton [183] may serve as a good starting point for multi-modal interactive explanations.
# 5.3.3. Theory of Mind
A model of the explainee is required to provide meaningful explanations. However, for social explanation, a Theory of Mind is also required. Clearly, as part of a dialogue, an explanatory agent should at least keep track of what has already been explained, which is a simple model of the other and forms part of the explanatory context. However, if an intelligent agent is operating with a human explainee in a particular environment, it may have access to more complete models of the other, such as the other's capabilities and their current beliefs or knowledge, and even the explainee's model of the explanatory agent itself. If it has such a model, the explanatory agent can exploit this by tailoring the explanation to the human observer. Halpern and Pearl [59] already consider a simplified idea of this in their model of explanation, but other work on epistemic reasoning and planning [42, 135] and planning for interactive dialogue [143] can play a part here. These techniques will be made more powerful if they are aligned with user modelling techniques used in HCI [44].
While the idea of Theory of Mind in AI is not new (see, for example, [178, 37]), its application to explanation has not been adequately explored. Early work on XAI took the idea of dialogue and user modelling seriously. For example, Cawsey's EDGE system, described in Section 5.3.2, contains a specific user model to provide better context for interactive explanations [20]. Cawsey argues that the user model must be integrated closely with the explanation model to provide more natural dialogue. The EDGE user model consists of two parts: (1) the knowledge that the user has about a phenomenon; and (2) their "level of expertise"; both of which can be updated during the dialogue. EDGE uses dialogue questions to build a user model, either explicitly, using questions such as "Do you know X?" or "What is the value of Y?", or implicitly, such as when a user asks for clarification. EDGE tries to guess other indirect knowledge using logical inference from this direct knowledge. This knowledge is then used to tailor the explanation to the specific person, which is an example of using epistemic relevance to select explanations. Cawsey was not the first to consider user knowledge; for example, Weiner's BLAH system [185] for incremental explanation also had a simple user model of knowledge that is used to tailor explanation, and Weiner refers to Grice's maxim of quality to justify this.
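A toy version of such a two-part user model, with explicit and implicit updates and a crude stand-in for inference over indirect knowledge, could look as follows. All names and the inference rule are assumptions for illustration, not Cawsey's implementation:

```python
class UserModel:
    """What the user is believed to know, plus a coarse level of expertise."""

    def __init__(self, expertise="novice"):
        self.known = set()
        self.expertise = expertise

    def record_answer(self, proposition, knew_it):
        """Explicit update, e.g. after asking "Do you know X?"."""
        if knew_it:
            self.known.add(proposition)
        else:
            self.known.discard(proposition)

    def record_clarification_request(self, proposition):
        """Implicit update: asking for clarification signals the user does not know it."""
        self.known.discard(proposition)

    def needs_explaining(self, proposition, implied_by=()):
        """Epistemic relevance: skip what is known or trivially inferable.

        implied_by lists known facts any one of which would let the user infer
        the proposition; a crude stand-in for a logical inference step.
        """
        if proposition in self.known:
            return False
        return not any(fact in self.known for fact in implied_by)
```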
More recently, Chakraborti et al. [21] discuss preliminary work in this area for explaining plans. Their problem definition consists of two planning models: the explainer's and the explainee's; and the task is to align the two models by minimising some criterion, for example, the number of changes. This is an example of using epistemic relevance to tailor an explanation. Chakraborti et al. class this as contrastive explanation, because the explanation contrasts two models. However, this is not the same use of the term "contrastive" as in the social science literature (see Section 2.3), in which the contrast is an explicit foil provided by the explainee as part of a question.
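A toy sketch of the reconciliation idea, with both models flattened to dictionaries of named assumptions, is shown below. This is an assumed representation and a naive diff, not Chakraborti et al.'s planning formalism or algorithm:

```python
def model_reconciliation(explainer_model: dict, explainee_model: dict) -> dict:
    """Return the edits that bring the explainee's model in line with the explainer's.

    With this flat representation, the minimal set of changes is simply the
    difference between the two dictionaries; the edits themselves serve as
    the explanation of why the explainer's plan makes sense.
    """
    edits = {}
    for key, value in explainer_model.items():
        if explainee_model.get(key) != value:
            edits[key] = value        # add or correct an assumption
    for key in explainee_model:
        if key not in explainer_model:
            edits[key] = None         # retract an assumption the explainer does not hold
    return edits

# Example: the explainee does not know that unlocking the door requires the key.
print(model_reconciliation(
    {"unlock-needs-key": True, "door-locked": True},
    {"door-locked": True, "door-jammed": True},
))  # {'unlock-needs-key': True, 'door-jammed': None}
```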
# 5.3.4. Implicature
It is clear that in some settings, implicature can play an important role. Reasoning about the implications of what the explainee says could support more succinct explanations, but just as importantly, those designing explanatory agents must also keep in mind what people could infer from the literal explanations, both correctly and incorrectly.
Further to this, as noted by Dodd and Bradshaw [38], people interpret explanations relative to the intent of the explainer. This is important for explainable AI because one of the main goals of explanation is to establish people's trust, and as such, explainees will be aware of this goal. It is clear that we should quite often assume from the outset that trust levels are low. If explainees are sceptical of the decisions made by a system, it is not difficult to imagine that they will also be sceptical of the explanations provided, and could interpret explanations as biased.
# 5.3.5. Dilution
Finally, it is important to focus on dilution. As noted in the introduction of this paper, much of the work in explainable AI is focused on causal attributions. The work outlined in Section 4 shows that this is only part of the problem. While presenting a causal chain may allow an explainee to fill in the gaps of their own knowledge, there is still a risk that the less relevant parts of the chain will dilute those parts that are crucial to the particular question asked by the explainee. Thus, this again emphasises the importance of explanation selection and relevance.
5.3.6. Social and Interactive Explanation
The recent surge in explainable AI has not (yet) truly adopted the concept of socially-interactive explanation, at least relative to the first wave of explainable AI systems such as those by Cawsey [20] and Moore and Paris [134]. I hypothesise that this is largely due to the nature of the task being explained. Most recent research is concerned with explainable machine learning, whereas early work explained symbolic models such as expert systems and logic programs. This influences the research in two ways: (1) recent research focuses on how to abstract and simplify uninterpretable models such as neural nets, whereas symbolic approaches are relatively more interpretable and need less abstraction in general; and (2) an interactive explanation is a goal-based endeavour, which lends itself more naturally to symbolic approaches. Given that early work on XAI was to explain symbolic approaches, the authors of such work would have more intuitively seen the link to interaction. Despite this, others in the AI community have recently re-discovered the importance of social interaction for explanation, for example [186, 163], and have noted that this is a problem that requires collaboration with HCI researchers.
# 6. Conclusions
In this paper, I have argued that explainable AI can benefit from existing models of how people define, generate, select, present, and evaluate explanations. I have reviewed what I believe are some of the most relevant and important findings from social science research on human explanation, and have provided some insight into how this work can be used in explainable AI.
In particular, we should take the four major findings noted in the introduction into account in our explainable AI models: (1) why-questions are contrastive; (2) explanations are selected (in a biased manner); (3) explanations are social; and (4) probabilities are not as important as causal links. I acknowledge that incorporating these ideas is not feasible for all applications, but in many cases, they have the potential to improve explanatory agents. I hope and expect that readers will also find other useful ideas in this survey. It is clear that adopting this work into explainable AI is not a straightforward step. From a social science viewpoint, these models will need to be refined and extended to provide good explanatory agents, which requires researchers in explainable AI to work closely with researchers from philosophy, psychology, cognitive science, and human-computer interaction. Already, projects of this type are underway, with impressive results; for example, see [91, 89, 157].
# Acknowledgements
The author would like to thank Denis Hilton for his review of an earlier draft of this paper, pointers to several pieces of related work, and for his many insightful discussions on the link between explanation in the social sciences and artificial intelligence. The author would also like to thank several others for critical input on an earlier draft: Natasha Goss, Michael Winikoff, Gary Klein, Robert Hoffman, and the anonymous reviewers; and Darryn Reid for his discussions on the link between self, trust, and explanation.
This work was undertaken while the author was on sabbatical at the Université de Toulouse Capitole, and was partially funded by Australian Research Council DP160104083
Catering for individuals' emotions in technology development and a Sponsored Research Collaboration grant from the Commonwealth of Australia Defence Science and Technology Group and the Defence Science Institute, an initiative of the State Government of Victoria.
# References
[1] D. Allemang, M. C. Tanner, T. Bylander, J. R. Josephson, Computational Complexity of Hypoth- esis Assembly, in: IJCAI, vol. 87, 1112â1117, 1987.
[2] J. Angwin, J. Larson, S. Mattu, L. Kirchner, Machine bias, ProPublica, May 23. [3] C. Antaki, I. Leudar, Explaining in conversation: Towards an argument model, European Journal
of Social Psychology 22 (2) (1992) 181â194.
[4] A. Arioua, M. Croitoru, Formalizing explanatory dialogues, in: International Conference on Scal- able Uncertainty Management, Springer, 282â297, 2015.
[5] J. L. Aronson, On the grammar of âcauseâ, Synthese 22 (3) (1971) 414â430. [6] D. Baehrens, T. Schroeter, S. Harmeling, M. Kawanabe, K. Hansen, K.-R. M ËAËzller, How to explain individual classiï¬cation decisions, Journal of Machine Learning Research 11 (Jun) (2010) 1803â1831.
[7] E. Bekele, W. E. Lawson, Z. Horne, S. Khemlani, Human-level explanatory biases for person re-identification.
[8] P. Besnard, A. Hunter, Elements of argumentation, vol. 47, MIT press Cambridge, 2008. [9] O. Biran, C. Cotton, Explanation and justiï¬cation in machine learning: A survey, in: IJCAI 2017
Workshop on Explainable Artiï¬cial Intelligence (XAI), 8â13, 2017.
[10] A. Boonzaier, J. McClure, R. M. Sutton, Distinguishing the eï¬ects of beliefs and preconditions: The folk psychology of goals and actions, European Journal of Social Psychology 35 (6) (2005) 725â740.
[11] R. I. Brafman, C. Domshlak, From One to Many: Planning for Loosely Coupled Multi-Agent Systems., in: International Conference on Automated Planning and Scheduling, 28â35, 2008. [12] J. Broekens, M. Harbers, K. Hindriks, K. Van Den Bosch, C. Jonker, J.-J. Meyer, Do you get it? User-evaluated explainable BDI agents, in: German Conference on Multiagent System Technolo- gies, Springer, 28â39, 2010.
[13] S. Bromberger, Whyâquestions, in: R. G. Colodny (Ed.), Mind and Cosmos: Essays in Contem- porary Science and Philosophy, Pittsburgh University Press, Pittsburgh, 68â111, 1966.
[14] B. Buchanan, E. Shortliï¬e, Rule-based expert systems: the MYCIN experiments of the Stanford Heuristic Programming Project, Addison-Wesley, 1984.
[15] A. Burguet, D. Hilton, Eï¬ets de contexte sur lâexplication causale, in: M. B. et A. Trognon (Ed.), Psychologie Sociale et Communication, Paris: Dunod, 219â228, 2004.
[16] R. M. Byrne, The Construction of Explanations, in: AI and Cognitive Scienceâ90, Springer, 337â 351, 1991.
[17] A. Cawsey, Generating Interactive Explanations., in: AAAI, 86â91, 1991. [18] A. Cawsey, Explanation and interaction: the computer generation of explanatory dialogues, MIT
press, 1992.
[19] A. Cawsey, Planning interactive explanations, International Journal of Man-Machine Studies 38 (2) (1993) 169â199.
[20] A. Cawsey, User modelling in interactive explanations, User Modeling and User-Adapted Interac- tion 3 (1993) 221â247.
[21] T. Chakraborti, S. Sreedharan, Y. Zhang, S. Kambhampati, Plan explanations as model rec- onciliation: Moving beyond explanation as soliloquy, in: Proceedings of IJCAI, URL https: //www.ijcai.org/proceedings/2017/0023.pdf, 2017.
[22] K. Chan, T.-W. Lee, P. A. Sample, M. H. Goldbaum, R. N. Weinreb, T. J. Sejnowski, Compar- ison of machine learning and traditional classiï¬ers in glaucoma diagnosis, IEEE Transactions on Biomedical Engineering 49 (9) (2002) 963â974.
[23] B. Chandrasekaran, M. C. Tanner, J. R. Josephson, Explaining control strategies in problem solving, IEEE Expert 4 (1) (1989) 9â15.
[24] E. Charniak, R. Goldman, A probabilistic model of plan recognition, in: Proceedings of the ninth National conference on Artiï¬cial intelligence-Volume 1, AAAI Press, 160â165, 1991.
[25] J. Y. Chen, K. Procci, M. Boyce, J. Wright, A. Garcia, M. Barnes, Situation awareness-based agent transparency, Tech. Rep. ARL-TR-6905, U.S. Army Research Laboratory, 2014.
[26] Y. Chevaleyre, U. Endriss, J. Lang, N. Maudet, A short introduction to computational social International Conference on Current Trends in Theory and Practice of Computer choice, in: Science, Springer, 51â69, 2007.
[27] S. Chin-Parker, A. Bradner, Background shifts aï¬ect explanatory style: how a pragmatic theory of explanation accounts for background eï¬ects in the generation of explanations, Cognitive Processing 11 (3) (2010) 227â249.
[28] S. Chin-Parker, J. Cantelon, Contrastive Constraints Guide Explanation-Based Category Learning, Cognitive science 41 (6) (2017) 1645â1655.
[29] H. Chockler, J. Y. Halpern, Responsibility and blame: A structural-model approach, Journal of Artiï¬cial Intelligence Research 22 (2004) 93â115.
[30] A. Cimpian, E. Salomon, The inherence heuristic: An intuitive means of making sense of the world, and a potential precursor to psychological essentialism, Behavioral and Brain Sciences 37 (5) (2014) 461â480.
[31] A. Cooper, The inmates are running the asylum: Why high-tech products drive us crazy and how to restore the sanity, Sams Indianapolis, IN, USA, 2004.
[32] DARPA, Explainable Artiï¬cial Intelligence (XAI) Program, http://www.darpa.mil/program/ explainable-artificial-intelligence, full solicitation at http://www.darpa.mil/attachments/ DARPA-BAA-16-53.pdf, 2016.
[33] G. C. Davey, Characteristics of individuals with fear of spiders, Anxiety Research 4 (4) (1991) 299â314.
[34] M. M. de Graaf, B. F. Malle, How People Explain Action (and Autonomous Intelligent Systems Should Too), in: AAAI Fall Symposium on Artiï¬cial Intelligence for Human-Robot Interaction, 2017.
[35] D. C. Dennett, The intentional stance, MIT press, 1989. [36] D. C. Dennett, From bacteria to Bach and back: The evolution of minds, WW Norton & Company,
2017.
[37] F. Dignum, R. Prada, G. J. Hofstede, From autistic to social agents, in: Proceedings of the 2014 international conference on Autonomous agents and multi-agent systems, IFAAMAS, 1161â1164, 2014.
[38] D. H. Dodd, J. M. Bradshaw, Leading questions and memory: Pragmatic constraints, Journal of Memory and Language 19 (6) (1980) 695.
[39] P. Dowe, Wesley Salmonâs process theory of causality and the conserved quantity theory, Philos- ophy of Science 59 (2) (1992) 195â216.
[40] T. Eiter, T. Lukasiewicz, Complexity results for structure-based causality, Artiï¬cial Intelligence 142 (1) (2002) 53â89.
[41] T. Eiter, T. Lukasiewicz, Causes and explanations in the structural-model approach: Tractable cases, Artiï¬cial Intelligence 170 (6-7) (2006) 542â580.
[42] R. Fagin, J. Halpern, Y. Moses, M. Vardi, Reasoning about knowledge, vol. 4, MIT press Cam- bridge, 1995.
[43] D. Fair, Causation and the Flow of Energy, Erkenntnis 14 (3) (1979) 219â250. [44] G. Fischer, User modeling in humanâcomputer interaction, User modeling and user-adapted in-
teraction 11 (1-2) (2001) 65â86.
[45] J. Fox, D. Glasspool, D. Grecu, S. Modgil, M. South, V. Patkar, Argumentation-based inference and decision makingâA medical perspective, IEEE intelligent systems 22 (6).
[46] M. Fox, D. Long, D. Magazzeni, Explainable Planning, in: IJCAI 2017 Workshop on Explainable Artiï¬cial Intelligence (XAI), URL https://arxiv.org/pdf/1709.10256, 2017.
[47] N. Frosst, G. Hinton, Distilling a Neural Network Into a Soft Decision Tree, arXiv e-prints 1711.09784, URL https://arxiv.org/abs/1711.09784.
[48] T. Gerstenberg, D. A. Lagnado, Spreading the blame: The allocation of responsibility amongst multiple agents, Cognition 115 (1) (2010) 166â171.
[49] T. Gerstenberg, M. F. Peterson, N. D. Goodman, D. A. Lagnado, J. B. Tenenbaum, Eye-tracking causality, Psychological science 28 (12) (2017) 1731â1744.
[50] M. Ghallab, D. Nau, P. Traverso, Automated Planning: theory and practice, Elsevier, 2004. [51] D. T. Gilbert, P. S. Malone, The correspondence bias, Psychological bulletin 117 (1) (1995) 21. [52] C. Ginet, In defense of a non-causal account of reasons explanations, The Journal of Ethics 12 (3-4)
(2008) 229â237.
[53] L. Giordano, C. Schwind, Conditional logic of actions and causation, Artificial Intelligence 157 (1-2) (2004) 239–279.
[54] V. Girotto, P. Legrenzi, A. Rizzo, Event controllability in counterfactual thinking, Acta Psycho- logica 78 (1) (1991) 111â133.
[55] M. Greaves, H. Holmback, J. Bradshaw, What is a conversation policy?, in: Issues in Agent Communication, Springer, 118â131, 2000.
[56] H. P. Grice, Logic and conversation, in: Syntax and semantics 3: Speech arts, New York: Academic Press, 41â58, 1975.
[57] J. Y. Halpern, Axiomatizing causal reasoning, Journal of Artiï¬cial Intelligence Research 12 (2000) 317â337.
[58] J. Y. Halpern, J. Pearl, Causes and explanations: A structural-model approach. Part I: Causes, The British Journal for the Philosophy of Science 56 (4) (2005) 843â887.
[59] J. Y. Halpern, J. Pearl, Causes and explanations: A structural-model approach. Part II: Explana- tions, The British Journal for the Philosophy of Science 56 (4) (2005) 889â911.
[60] R. J. Hankinson, Cause and explanation in ancient Greek thought, Oxford University Press, 2001. [61] N. R. Hanson, Patterns of discovery: An inquiry into the conceptual foundations of science, CUP
Archive, 1965.
[62] G. H. Harman, The inference to the best explanation, The philosophical review 74 (1) (1965) 88â95.
[63] M. Harradon, J. Druce, B. Ruttenberg, Causal Learning and Explanation of Deep Neural Net- works via Autoencoded Activations, arXiv e-prints 1802.00541, URL https://arxiv.org/abs/ 1802.00541.
[64] H. L. A. Hart, T. Honor´e, Causation in the Law, OUP Oxford, 1985. [65] B. Hayes, J. A. Shah, Improving Robot Controller Transparency Through Autonomous Policy Explanation, in: Proceedings of the 12th ACM/IEEE International Conference on Human-Robot Interaction (HRI 2017), 2017.
[66] F. Heider, The psychology of interpersonal relations, New York: Wiley, 1958. [67] F. Heider, M. Simmel, An experimental study of apparent behavior, The American Journal of
Psychology 57 (2) (1944) 243â259.
[68] C. G. Hempel, P. Oppenheim, Studies in the Logic of Explanation, Philosophy of Science 15 (2) (1948) 135â175.
[69] G. Hesslow, The problem of causal selection, Contemporary science and natural explanation: Commonsense conceptions of causality (1988) 11â32.
[70] D. Hilton, Social Attribution and Explanation, in: Oxford Handbook of Causal Reasoning, Oxford University Press, 645â676, 2017.
[71] D. J. Hilton, Logic and causal attribution, in: Contemporary science and natural explanation: Commonsense conceptions of causality, New York University Press, 33â65, 1988.
[72] D. J. Hilton, Conversational processes and causal explanation, Psychological Bulletin 107 (1) (1990) 65â81.
[73] D. J. Hilton, Mental models and causal explanation: Judgements of probable cause and explanatory relevance, Thinking & Reasoning 2 (4) (1996) 273â308.
[74] D. J. Hilton, J. McClure, B. Slugoski, Counterfactuals, conditionals and causality: A social psy- chological perspective, in: D. R. Mande, D. J. Hilton, P. Catellani (Eds.), The psychology of counterfactual thinking, London: Routledge, 44â60, 2005.
[75] D. J. Hilton, J. McClure, R. M. Sutton, Selecting explanations from causal chains: Do statistical principles explain preferences for voluntary causes?, European Journal of Social Psychology 40 (3) (2010) 383â400.
[76] D. J. Hilton, J. L. McClure, R. Slugoski, Ben, The Course of Events: Counterfactuals, Causal Sequences and Explanation, in: D. R. Mandel, D. J. Hilton, P. Catellani (Eds.), The Psychology of Counterfactual Thinking, Routledge, 2005.
[77] D. J. Hilton, B. R. Slugoski, Knowledge-based causal attribution: The abnormal conditions focus model, Psychological review 93 (1) (1986) 75.
[78] R. R. Hoï¬man, G. Klein, Explaining explanation, part 1: theoretical foundations, IEEE Intelligent Systems 32 (3) (2017) 68â73.
[79] D. Hume, An enquiry concerning human understanding: A critical edition, vol. 3, Oxford Univer- sity Press, 2000.
[80] J. M. Jaspars, D. J. Hilton, Mental models of causal reasoning, in: The social psychology of knowledge, Cambridge University Press, 335â358, 1988.
[81] J. R. Josephson, S. G. Josephson, Abductive inference: Computation, philosophy, technology, Cambridge University Press, 1996.
[82] D. Kahneman, Thinking, fast and slow, Macmillan, 2011. [83] D. Kahneman, A. Tversky, The simulation heuristic, in: P. S. D. Kahneman, A. Tversky (Eds.), Judgment under Uncertainty: Heuristics and Biases, New York: Cambridge University Press, 1982.
[84] Y. Kashima, A. McKintyre, P. Cliï¬ord, The category of the mind: Folk psychology of belief, desire, and intention, Asian Journal of Social Psychology 1 (3) (1998) 289â313.
[85] A. Kass, D. Leake, Types of Explanations, Tech. Rep. ADA183253, DTIC Document, 1987. [86] H. H. Kelley, Attribution theory in social psychology, in: Nebraska symposium on motivation,
University of Nebraska Press, 192â238, 1967.
[87] H. H. Kelley, Causal schemata and the attribution process, General Learning Press, Morristown, NJ, 1972.
[88] J. Knobe, Intentional action and side eï¬ects in ordinary language, Analysis 63 (279) (2003) 190â 194.
[89] T. Kulesza, M. Burnett, W.-K. Wong, S. Stumpf, Principles of explanatory debugging to per- sonalize interactive machine learning, in: Proceedings of the 20th International Conference on Intelligent User Interfaces, ACM, 126â137, 2015.
[90] T. Kulesza, S. Stumpf, M. Burnett, S. Yang, I. Kwan, W.-K. Wong, Too much, too little, or just right? Ways explanations impact end usersâ mental models, in: Visual Languages and Human- Centric Computing (VL/HCC), 2013 IEEE Symposium on, IEEE, 3â10, 2013.
[91] T. Kulesza, S. Stumpf, W.-K. Wong, M. M. Burnett, S. Perona, A. Ko, I. Oberst, Why-oriented end-user debugging of naive Bayes text classiï¬cation, ACM Transactions on Interactive Intelligent Systems (TiiS) 1 (1) (2011) 2.
[92] D. A. Lagnado, S. Channon, Judgments of cause and blame: The eï¬ects of intentionality and foreseeability, Cognition 108 (3) (2008) 754â770.
[93] P. Langley, B. Meadows, M. Sridharan, D. Choi, Explainable Agency for Intelligent Autonomous Systems, in: Proceedings of the Twenty-Ninth Annual Conference on Innovative Applications of Artiï¬cial Intelligence, AAAI Press, 2017.
[94] D. B. Leake, Goal-Based Explanation Evaluation, Cognitive Science 15 (4) (1991) 509â545. [95] D. B. Leake, Abduction, experience, and goals: A model of everyday abductive explanation,
Journal of Experimental & Theoretical Artiï¬cial Intelligence 7 (4) (1995) 407â428.
[96] J. Leddo, R. P. Abelson, P. H. Gross, Conjunctive explanations: When two reasons are better than one, Journal of Personality and Social Psychology 47 (5) (1984) 933.
[97] H. J. Levesque, A knowledge-level account of abduction, in: IJCAI, 1061â1067, 1989. [98] D. Lewis, Causation, The Journal of Philosophy 70 (17) (1974) 556â567. [99] D. Lewis, Causal explanation, Philosophical Papers 2 (1986) 214â240.
[100] B. Y. Lim, A. K. Dey, Assessing demand for intelligibility in context-aware applications, in: Pro- ceedings of the 11th international conference on Ubiquitous computing, ACM, 195â204, 2009.
[101] M. P. Linegang, H. A. Stoner, M. J. Patterson, B. D. Seppelt, J. D. Hoï¬man, Z. B. Crittendon, J. D. Lee, Human-automation collaboration in dynamic mission planning: A challenge requiring an ecological approach, Proceedings of the Human Factors and Ergonomics Society Annual Meeting 50 (23) (2006) 2482â2486.
[102] P. Lipton, Contrastive explanation, Royal Institute of Philosophy Supplement 27 (1990) 247â266. [103] Z. C. Lipton, The mythos of model interpretability, arXiv preprint arXiv:1606.03490 . [104] T. Lombrozo, The structure and function of explanations, Trends in Cognitive Sciences 10 (10)
(2006) 464â470.
[105] T. Lombrozo, Simplicity and probability in causal explanation, Cognitive psychology 55 (3) (2007) 232â257.
[106] T. Lombrozo, Explanation and categorization: How âwhy?â informs âwhat?â, Cognition 110 (2) (2009) 248â253.
[107] T. Lombrozo, Causalâexplanatory pluralism: How intentions, functions, and mechanisms inï¬uence causal ascriptions, Cognitive Psychology 61 (4) (2010) 303â332.
[108] T. Lombrozo, Explanation and abductive inference, Oxford handbook of thinking and reasoning (2012) 260â276.
[109] T. Lombrozo, N. Z. Gwynne, Explanation and inference: mechanistic and functional explanations guide property generalization, Frontiers in human neuroscience 8 (2014) 700.
[110] J. L. Mackie, The cement of the universe, Oxford, 1980. [111] B. F. Malle, How people explain behavior: A new theoretical framework, Personality and Social
Psychology Review 3 (1) (1999) 23â48.
[112] B. F. Malle, How the mind explains behavior: Folk explanations, meaning, and social interaction,
MIT Press, 2004.
[113] B. F. Malle, Attribution theories: How people make sense of behavior, Theories in Social Psychol- ogy (2011) 72â95.
[114] B. F. Malle, Time to Give Up the Dogmas of Attribution: An Alternative Theory of Behavior Explanation, Advances in Experimental Social Psychology 44 (1) (2011) 297â311.
[115] B. F. Malle, J. Knobe, The folk concept of intentionality, Journal of Experimental Social Psychol- ogy 33 (2) (1997) 101â121.
[116] B. F. Malle, J. Knobe, M. J. OâLaughlin, G. E. Pearce, S. E. Nelson, Conceptual structure and social functions of behavior explanations: Beyond personâsituation attributions, Journal of Per- sonality and Social Psychology 79 (3) (2000) 309.
[117] B. F. Malle, J. M. Knobe, S. E. Nelson, Actor-observer asymmetries in explanations of behavior: new answers to an old question, Journal of Personality and Social Psychology 93 (4) (2007) 491. [118] B. F. Malle, G. E. Pearce, Attention to behavioral events during interaction: Two actor-observer gaps and three attempts to close them, Journal of Personality and Social Psychology 81 (2) (2001) 278â294.
[119] D. Marr, Vision: A computational investigation into the human representation and processing of visual information, Inc., New York, NY, 1982.
[120] D. Marr, T. Poggio, From understanding computation to understanding neural circuitry, AI Memos AIM-357, MIT, 1976.
[121] R. McCloy, R. M. Byrne, Counterfactual thinking about controllable events, Memory & Cognition 28 (6) (2000) 1071â1078.
[122] J. McClure, Goal-based explanations of actions and outcomes, European Review of Social Psy- chology 12 (1) (2002) 201â235.
[123] J. McClure, D. Hilton, For you canât always get what you want: When preconditions are better explanations than goals, British Journal of Social Psychology 36 (2) (1997) 223â240.
[124] J. McClure, D. Hilton, J. Cowan, L. Ishida, M. Wilson, When rich or poor people buy expensive Is the question how or why?, Journal of Language and Social Psychology 20 (2001) objects: 229â257.
[125] J. McClure, D. J. Hilton, Are goals or preconditions better explanations? It depends on the question, European Journal of Social Psychology 28 (6) (1998) 897â911.
[126] J. L. McClure, R. M. Sutton, D. J. Hilton, The Role of Goal-Based Explanations, in: Social judgments: Implicit and explicit processes, vol. 5, Cambridge University Press, 306, 2003. [127] A. L. McGill, J. G. Klein, Contrastive and counterfactual reasoning in causal judgment, Journal
of Personality and Social Psychology 64 (6) (1993) 897.
[128] P. Menzies, H. Price, Causation as a secondary quality, The British Journal for the Philosophy of Science 44 (2) (1993) 187â203.
[129] J. E. Mercado, M. A. Rupp, J. Y. Chen, M. J. Barnes, D. Barber, K. Procci, Intelligent agent transparency in humanâagent teaming for Multi-UxV management, Human Factors 58 (3) (2016) 401â415.
[130] J. S. Mill, A system of logic: The collected works of John Stuart Mill, vol. III, 1973. [131] D. T. Miller, S. Gunasegaram, Temporal order and the perceived mutability of events: Implications
for blame assignment, Journal of personality and social psychology 59 (6) (1990) 1111.
[132] T. Miller, P. Howe, L. Sonenberg, Explainable AI: Beware of Inmates Running the Asylum, in: IJCAI 2017 Workshop on Explainable Artiï¬cial Intelligence (XAI), 36â42, URL http://people. eng.unimelb.edu.au/tmiller/pubs/explanation-inmates.pdf, 2017.
[133] T. M. Mitchell, R. M. Keller, S. T. Kedar-Cabelli, Explanation-based generalization: A unifying view, Machine learning 1 (1) (1986) 47â80.
[134] J. D. Moore, C. L. Paris, Planning text for advisory dialogues: Capturing intentional and rhetorical information, Computational linguistics 19 (4) (1993) 651â694.
[135] C. Muise, V. Belle, P. Felli, S. McIlraith, T. Miller, A. R. Pearce, L. Sonenberg, Planning Over Multi-Agent Epistemic States: A Classical Planning Approach, in: B. Bonet, S. Koenig (Eds.), Proceedings of AAAI 2015, 1â8, 2015.
[136] G. Nott, âExplainable Artiï¬cial Intelligenceâ: Cracking open the black box of AI, Computer World https://www.computerworld.com.au/article/617359/.
[137] M. J. OâLaughlin, B. F. Malle, How people explain actions performed by groups and individuals, Journal of Personality and Social Psychology 82 (1) (2002) 33.
[138] in: D. B. L. Thomas Roth-Berghofer, Nava Tintarev (Ed.), Proceedings of the 6th International Explanation-Aware Computing (ExaCt) workshop, 41–50, 2011.
[139] J. A. Overton, Explanation in Science, Ph.D. thesis, The University of Western Ontario, 2012. [140] J. A. Overton, âExplainâ in scientiï¬c discourse, Synthese 190 (8) (2013) 1383â1405. [141] J. Pearl, D. Mackenzie, The Book of Why: The New Science of Cause and Eï¬ect, Hachette UK,
2018.
[142] C. S. Peirce, Harvard lectures on pragmatism, Collected Papers v. 5, 1903. [143] R. Petrick, M. E. Foster, Using General-Purpose Planning for Action Selection in Human-Robot Interaction, in: AAAI 2016 Fall Symposium on Artiï¬cial Intelligence for Human-Robot Interaction, 2016.
[144] D. Poole, Normality and Faults in logic-based diagnosis., in: IJCAI, vol. 89, 1304â1310, 1989. [145] H. E. Pople, On the mechanization of abductive logic, in: IJCAI, vol. 73, 147â152, 1973. [146] K. Popper, The logic of scientiï¬c discovery, Routledge, 2005. [147] H. Prakken, Formal systems for persuasion dialogue, The Knowledge Engineering Review 21 (02)
(2006) 163â188.
[148] S. Prasada, The scope of formal explanation, Psychonomic Bulletin & Review (2017) 1â10. [149] S. Prasada, E. M. Dillingham, Principled and statistical connections in common sense conception,
Cognition 99 (1) (2006) 73â112.
[150] J. Preston, N. Epley, Explanations versus applications: The explanatory power of valuable beliefs, Psychological Science 16 (10) (2005) 826â832.
[151] M. Ranney, P. Thagard, Explanatory coherence and belief revision in naive physics, in: Proceedings of the Tenth Annual Conference of the Cognitive Science Society, 426â432, 1988.
[152] A. S. Rao, M. P. Georgeï¬, BDI agents: From theory to practice., in: ICMAS, vol. 95, 312â319, 1995.
[153] S. J. Read, A. Marcus-Newhall, Explanatory coherence in social explanations: A parallel dis- tributed processing account, Journal of Personality and Social Psychology 65 (3) (1993) 429. [154] B. Rehder, A causal-model theory of conceptual representation and categorization, Journal of
Experimental Psychology: Learning, Memory, and Cognition 29 (6) (2003) 1141.
[155] B. Rehder, When similarity and causality compete in category-based property generalization, Memory & Cognition 34 (1) (2006) 3â16.
[156] R. Reiter, A theory of diagnosis from ï¬rst principles, Artiï¬cial intelligence 32 (1) (1987) 57â95. [157] M. T. Ribeiro, S. Singh, C. Guestrin, Why Should I Trust You?: Explaining the Predictions of Any Classiï¬er, in: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ACM, 1135â1144, 2016.
[158] M. Robnik-Šikonja, I. Kononenko, Explaining classifications for individual instances, IEEE Transactions on Knowledge and Data Engineering 20 (5) (2008) 589–600.
[159] W. C. Salmon, Four decades of scientiï¬c explanation, University of Pittsburgh press, 2006. [160] J. Samland, M. Josephs, M. R. Waldmann, H. Rakoczy, The role of prescriptive norms and knowledge in childrenâs and adultsâ causal selection, Journal of Experimental Psychology: General 145 (2) (2016) 125.
[161] in: P. Bello, M. Guarini, M. McShane, B. Scassellati (Eds.), Proceedings of the 36th Annual Conference of the Cognitive Science Society, Cognitive Science Society, 1359–1364, 2014.
[162] M. Scriven, The concept of comprehension: From semantics to software, in: J. B. Carroll, R. O. Freedle (Eds.), Language comprehension and the acquisition of knowledge, Washington: W. H. Winston & Sons, 31â39, 1972.
[163] Z. Shams, M. de Vos, N. Oren, J. Padget, Normative Practical Reasoning via Argumentation and Dialogue, in: Proceedings of the 25th International Joint Conference on Artiï¬cial Intelligence (IJCAI-16), AAAI Press, 2016.
[164] R. Singh, T. Miller, J. Newn, L. Sonenberg, E. Velloso, F. Vetere, Combining Planning with Gaze for Online Human Intention Recognition, in: Proceedings of the 17th International Conference on Autonomous Agents and Multiagent Systems, 2018.
[165] B. R. Slugoski, M. Lalljee, R. Lamb, G. P. Ginsburg, Attribution in conversational context: Eï¬ect of mutual knowledge on explanation-giving, European Journal of Social Psychology 23 (3) (1993) 219â238.
[166] K. Stubbs, P. Hinds, D. Wettergreen, Autonomy and common ground in human-robot interaction: A ï¬eld study, IEEE Intelligent Systems 22 (2) (2007) 42â50.
[167] J. Susskind, K. Maurer, V. Thakkar, D. L. Hamilton, J. W. Sherman, Perceiving individuals and groups: expectancies, dispositional inferences, and causal attributions, Journal of Personality and Social Psychology 76 (2) (1999) 181.
[168] W. R. Swartout, J. D. Moore, Explanation in second generation expert systems, in: Second
Generation Expert Systems, Springer, 543â585, 1993.
[169] P. E. Tetlock, R. Boettger, Accountability: a social magniï¬er of the dilution eï¬ect, Journal of Personality and Social Psychology 57 (3) (1989) 388.
[170] P. E. Tetlock, J. S. Learner, R. Boettger, The dilution eï¬ect: judgemental bias, conversational convention, or a bit of both?, European Journal of Social Psychology 26 (1996) 915â934.
[171] P. Thagard, Explanatory coherence, Behavioral and Brain Sciences 12 (03) (1989) 435â467. [172] T. Trabasso, J. Bartolone, Story understanding and counterfactual reasoning, Journal of Experi-
mental Psychology: Learning, Memory, and Cognition 29 (5) (2003) 904.
[173] A. Tversky, D. Kahneman, Extensional versus intuitive reasoning: The conjunction fallacy in probability judgment, Psychological Review 90 (4) (1983) 293.
[174] K. Uttich, T. Lombrozo, Norms inform mental state ascriptions: A rational explanation for the side-eï¬ect eï¬ect, Cognition 116 (1) (2010) 87â100.
[175] J. Van Bouwel, E. Weber, Remote causes, bad explanations?, Journal for the Theory of Social Behaviour 32 (4) (2002) 437â449.
[176] B. C. Van Fraassen, The pragmatics of explanation, American Philosophical Quarterly 14 (2) (1977) 143â150.
[177] N. Vasilyeva, D. A. Wilkenfeld, T. Lombrozo, Goals Aï¬ect the Perceived Quality of Explanations., in: D. C. Noelle, R. Dale, A. S. Warlaumont, J. Yoshimi, T. Matlock, C. D. Jennings, P. P. Maglio (Eds.), Proceedings of the 37th Annual Conference of the Cognitive Science Society, Cognitive Science Society, 2469â2474, 2015.
[178] F. B. von der Osten, M. Kirley, T. Miller, The minds of many: opponent modelling in a stochastic game, in: Proceedings of the 25th International Joint Conference on Artiï¬cial Intelligence (IJCAI), AAAI Press, 3845â3851, 2017.
[179] G. H. Von Wright, Explanation and understanding, Cornell University Press, 1971. [180] D. Walton, A new dialectical theory of explanation, Philosophical Explorations 7 (1) (2004) 71â89. [181] D. Walton, Examination dialogue: An argumentation framework for critically questioning an
expert opinion, Journal of Pragmatics 38 (5) (2006) 745â777.
[182] D. Walton, Dialogical Models of Explanation, in: Proceedings of the International Explanation- Aware Computing (ExaCt) workshop, 1â9, 2007.
[183] D. Walton, A dialogue system speciï¬cation for explanation, Synthese 182 (3) (2011) 349â374. [184] D. N. Walton, Logical Dialogue â Games and Fallacies, University Press of America, Lanham,
Maryland, 1984.
[185] J. Weiner, BLAH, a system which explains its reasoning, Artiï¬cial intelligence 15 (1-2) (1980) 19â48.
[186] D. S. Weld, G. Bansal, Intelligible Artiï¬cial Intelligence, arXiv e-prints 1803.04263, URL https: //arxiv.org/pdf/1803.04263.pdf.
[187] A. Wendt, On constitution and causation in international relations, Review of International Studies 24 (05) (1998) 101â118.
[188] D. A. Wilkenfeld, T. Lombrozo, Inference to the best explanation (IBE) versus explaining for the best inference (EBI), Science & Education 24 (9-10) (2015) 1059â1077.
[189] J. J. Williams, T. Lombrozo, B. Rehder, The hazards of explanation: Overgeneralization in the face of exceptions, Journal of Experimental Psychology: General 142 (4) (2013) 1006.
[190] M. Winikoï¬, Debugging Agent Programs with Why?: Questions, in: Proceedings of the 16th Conference on Autonomous Agents and MultiAgent Systems, AAMAS â17, IFAAMAS, 251â259, 2017.
[191] J. Woodward, Making things happen: A theory of causal explanation, Oxford University Press, 2005.
[192] J. Woodward, Sensitive and insensitive causation, The Philosophical Review 115 (1) (2006) 1â50.
| {
"id": "1606.03490"
} |
1706.06905 | Learnable pooling with Context Gating for video classification | Current methods for video analysis often extract frame-level features using
pre-trained convolutional neural networks (CNNs). Such features are then
aggregated over time e.g., by simple temporal averaging or more sophisticated
recurrent neural networks such as long short-term memory (LSTM) or gated
recurrent units (GRU). In this work we revise existing video representations
and study alternative methods for temporal aggregation. We first explore
clustering-based aggregation layers and propose a two-stream architecture
aggregating audio and visual features. We then introduce a learnable non-linear
unit, named Context Gating, aiming to model interdependencies among network
activations. Our experimental results show the advantage of both improvements
for the task of video classification. In particular, we evaluate our method on
the large-scale multi-modal Youtube-8M v2 dataset and outperform all other
methods in the Youtube 8M Large-Scale Video Understanding challenge. | http://arxiv.org/pdf/1706.06905 | Antoine Miech, Ivan Laptev, Josef Sivic | cs.CV | Presented at Youtube 8M CVPR17 Workshop. Kaggle Winning model. Under
review for TPAMI | null | cs.CV | 20170621 | 20180305 | # Learnable pooling with Context Gating for video classification
# Antoine Miech, Ivan Laptev and Josef Sivic https://github.com/antoine77340/LOUPE
AbstractâCurrent methods for video analysis often extract frame-level features using pre-trained convolutional neural networks (CNNs). Such features are then aggregated over time e.g., by simple temporal averaging or more sophisticated recurrent neural networks such as long short-term memory (LSTM) or gated recurrent units (GRU). In this work we revise existing video representations and study alternative methods for temporal aggregation. We ï¬rst explore clustering-based aggregation layers and propose a two-stream architecture aggregating audio and visual features. We then introduce a learnable non-linear unit, named Context Gating, aiming to model interdependencies among network activations. Our experimental results show the advantage of both improvements for the task of video classiï¬cation. In particular, we evaluate our method on the large-scale multi-modal Youtube-8M v2 dataset and outperform all other methods in the Youtube 8M Large-Scale Video Understanding challenge.
Index TermsâMachine learning, Computer vision, Neural networks, Video analysis.
# 1 INTRODUCTION
Understanding and recognizing video content is a major chal- lenge for numerous applications including surveillance, personal assistance, smart homes, autonomous driving, stock footage search and sports video analysis. In this work, we address the problem of multi-label video classiï¬cation for user-generated videos on the Internet. The analysis of such data involves several challenges. Internet videos have a great variability in terms of content and quality (see Figure 1). Moreover, user-generated labels are typi- cally incomplete, ambiguous and may contain errors.
Current approaches for video analysis typically represent videos by features extracted from consecutive frames, followed by feature aggregation over time. Example methods for feature extraction include deep convolutional neural networks (CNNs) pre-trained on static images [1], [2], [3], [4]. Representations of motion and appearance can be obtained from CNNs pre-trained for video frames and short video clips [5], [6], as well as hand-crafted video features [7], [8], [9]. Other more advanced models employ hierarchical spatio-temporal convolutional architectures [5], [10], [11], [12], [13], [14] to both extract and temporally aggregate video features at the same time.
Common methods for temporal feature aggregation include simple averaging or maximum pooling as well as more sophis- ticated pooling techniques such as VLAD [15] or more recently recurrent models (LSTM [16] and GRU [17]). These techniques, however, may be suboptimal. Indeed, simple techniques such as average or maximum pooling may become inaccurate for long sequences. Recurrent models are frequently used for temporal aggregation of variable-length sequences [18], [19] and often outperform simpler aggregation methods, however, their training remains cumbersome. As we show in Section 5, training recurrent
Fig. 1: Two example videos from the Youtube-8M V2 dataset together with the ground truth and top predicted labels. Predictions colored as green are labels from the groundtruth annotation.
models requires relatively large amount of data. Moreover, re- current models can be sub-optimal for processing of long video sequences during GPU training. It is also not clear if current models for sequential aggregation are well-adapted for video representation. Indeed, our experiments with training recurrent models using temporally-ordered and randomly-ordered video frames show similar results.
A. Miech, I. Laptev and J. Sivic are with Inria, WILLOW, Département d'Informatique de l'École Normale Supérieure, PSL Research University, ENS/INRIA/CNRS UMR 8548, Paris, France. E-mail: {antoine.miech, ivan.laptev, josef.sivic}@inria.fr. J. Sivic is also with the Czech Institute of Informatics, Robotics and Cybernetics, Czech Technical University in Prague.
Another research direction is to exploit traditional orderless aggregation techniques based on clustering approaches such as Bag-of-visual-words [20], [21], Vector of Locally aggregated
Descriptors (VLAD) [15] or Fisher Vectors [22]. It has been recently shown that integrating VLAD as a differentiable module in a neural network can signiï¬cantly improve the aggregated rep- resentation for the task of place retrieval [23]. This has motivated us to integrate and enhance such clustering-based aggregation techniques for the task of video representation and classiï¬cation.
Contributions. In this work we make the following contributions: (i) we introduce a new state-of-the-art architecture aggregating video and audio features for video classification, (ii) we introduce the Context Gating layer, an efficient non-linear unit for modeling interdependencies among network activations, and (iii) we experimentally demonstrate benefits of clustering-based aggregation techniques over LSTM and GRU approaches for the task of video classification.
Results. We evaluate our method on the large-scale multi-modal Youtube-8M V2 dataset containing about 8M videos and 4716 unique tags. We use pre-extracted visual and audio features provided with the dataset [19] and demonstrate improvements obtained with the Context Gating as well as by the combination of learnable poolings. Our method obtains top performance, out of more than 650 teams, in the Youtube-8M Large-Scale Video Understanding challenge1. Compared to the common recurrent models, our models are faster to train and require less training data. Figure 1 illustrates some qualitative results of our method.
# 2 RELATED WORK
This work is related to previous methods for video feature extraction, aggregation and gating reviewed below.
# 2.1 Feature extraction
Successful hand-crafted representations [7], [8], [9] are based on local histograms of image and motion gradient orientations extracted along dense trajectories [9], [24]. More recent methods extract deep convolutional neural network activations computed from individual frames or blocks of frames using spatial [6], [25], [26], [27] or spatio-temporal [5], [10], [11], [12], [13], [14] convolutions. Convolutional neural networks can be also applied separately on the appearance channel and the pre-computed mo- tion ï¬eld channel resulting in the, so called, two-stream represen- tations [6], [11], [14], [26], [28]. As our work is motivated by the Youtube-8M large-scale video understanding challenge [19], we will assume for the rest of the paper that features are provided (more details are provided in Section 5). This work mainly focuses on the temporal aggregation of given features.
# 2.2 Feature aggregation
Video features are typically extracted from individual frames or short video clips. The remaining question is: how to aggregate video features over the entire and potentially long video? One way to achieve this is to employ recurrent neural networks, such as long short-term memory (LSTM) [16] or gated recurrent unit (GRU) [17]), on top of the extracted frame-level features to capture the temporal structure of video into a single representation [18], [29], [30], [31], [32]. Hierarchical spatio-temporal convolution architectures [5], [10], [11], [12], [13], [14] can also be viewed
1. https://www.kaggle.com/c/youtube8m
Fig. 2: Overview of our network architecture for video classiï¬ca- tion (the âLate Concatâ variant). FC denotes a Fully-Connected layer. MoE denotes the Mixture-of-Experts classiï¬er [19].
as a way to both extract and aggregate temporal features at the same time. Other methods capture only the distribution of features in the video, not explicitly modeling their temporal ordering. The simplest form of this approach is the average or maximum pooling of video features [33] over time. Other commonly used methods include bag-of-visual-words [20], [21], Vector of Locally aggre- gated Descriptors (VLAD) [15] or Fisher Vector [22] encoding. Application of these techniques to video include [7], [8], [9], [34], [35]. Typically, these methods [31], [36] rely on an unsupervised learning of the codebook. However, the codebook can also be learned in a discriminative manner [34], [37], [38] or the entire encoding module can be included within the convolutional neural network architecture and trained in the end-to-end manner [23]. This type of end-to-end trainable orderless aggregation has been recently applied to video frames in [26]. Here we extend this work by aggregating visual and audio inputs, and also investigate multiple orderless aggregations.
# 2.3 Gating
Gating mechanisms allow multiplicative interaction between a given input feature X and a gate vector with values in between 0 and 1. They are commonly used in recurrent neural network models such as LSTM [16] and GRU [17] but have so far not been exploited in conjunction with other non-temporal aggrega- tion strategies such as Fisher Vectors (FV), Vector of Locally Aggregated Descriptors (VLAD) or bag-of-visual-words (BoW). Our work aims to ï¬ll this gap and designs a video classiï¬ca- tion architecture combining non-temporal aggregation with gating mechanisms. One of the motivations for this choice is the recent Gated Linear Unit (GLU) [39], which has demonstrated signiï¬cant improvements in natural language processing tasks.
Our gating mechanism initially reported in [40] is also related to the parallel work on Squeeze-and-Excitation architectures [41], that has suggested gated blocks for image classiï¬cation tasks and have demonstrated excellent performance on the ILSVRC 2017 image classiï¬cation challenge.
# 3 NETWORK ARCHITECTURE
Our architecture for video classiï¬cation is illustrated in Fig- ure 2 and contains three main modules. First, the input features are extracted from video and audio signals. Next, the pooling module aggregates the extracted features into a single compact (e.g. 1024-dimensional) representation for the entire video. This
pooling module has a two-stream architecture treating visual and audio features separately. The aggregated representation is then enhanced by the Context Gating layer (section 3.1). Finally, the classiï¬cation module takes the resulting representation as input and outputs scores for a pre-deï¬ned set of labels. The classiï¬cation module adopts the Mixture-of-Experts [42] classiï¬er as described in [19], followed by another Context Gating layer.
# 3.1 Context Gating
The Context Gating (CG) module transforms the input feature representation X into a new representation Y as
Y = σ(W X + b) ◦ X,   (1)

where X ∈ R^n is the input feature vector, σ is the element-wise sigmoid activation and ◦ is the element-wise multiplication. W ∈ R^{n×n} and b ∈ R^n are trainable parameters. The vector of weights σ(W X + b) ∈ [0, 1] represents a set of learned gates applied to the individual dimensions of the input feature X.
The motivation behind this transformation is two-fold. First, we wish to introduce non-linear interactions among activations of the input representation. Second, we wish to recalibrate the strengths of different activations of the input representation through a self-gating mechanism. The form of the Context Gating layer is inspired by the Gated Linear Unit (GLU) introduced re- cently for language modeling [39] that considers a more complex class of transformations given by Ï(W1X + b1) ⦠(W2X + b2), with two sets of learnable parameters W1, b1 and W2, b2. Compared to the the Gated Linear Unit [39], our Context Gating in (1) (i) reduces the number of learned parameters as only one set of weights is learnt, and (ii) re-weights directly the input vector X (instead of its linear transformation) and hence is suitable for situations where X has a speciï¬c meaning, such the score of a class label, that is preserved by the layer. As shown in Figure 2, we use Context Gating in the feature pooling and classiï¬cation modules. First, we use CG to transform the feature vector before passing it to the classiï¬cation module. Second, we use CG after the classiï¬cation layer to capture the prior structure of the output label space. Details are provided below.
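As a concrete illustration, below is a minimal NumPy sketch of the Context Gating transform of Eq. (1). The toy dimension n = 4 and the random weights are our own illustrative assumptions; in the model the gate parameters W and b are learned end-to-end.

```python
import numpy as np

def context_gating(x, W, b):
    """Context Gating (Eq. 1): re-weight every dimension of the input x
    by a sigmoid gate computed from x itself."""
    gates = 1.0 / (1.0 + np.exp(-(W @ x + b)))  # element-wise sigmoid, values in (0, 1)
    return gates * x                            # element-wise product with the input

# Toy usage with an illustrative size n = 4 (in practice n is the pooled feature size).
rng = np.random.default_rng(0)
n = 4
x = rng.normal(size=n)
W = rng.normal(scale=0.1, size=(n, n))
b = np.zeros(n)
print(context_gating(x, W, b).shape)  # (4,)
```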
# 3.2 Relation to residual connections
Residual connections have been introduced in [1]. They demon- strate faster and better training of deep convolutional neural networks as well as better performance for a variety of tasks. Residual connections can be formulated as
Y = f (W X + b) + X, (2)
where X are the input features, (W, b) the learnable parameters of the linear mapping (or it can be a convolution), f is a non- linearity (typically Rectiï¬er Linear Unit as expressed in [1]). One advantage of residual connections is the possibility of gradient propagation directly into X during training, avoiding the vanish- ing gradient problem. To show this, the gradient of the residual connection can be written as:
∇Y = ∇(f (W X + b)) + ∇X.   (3)
One can notice that the gradient âY is the sum of the gradient of the previous layer âX and the gradient â(f (W X + b)). The
Fig. 3: Illustration of Context Gating that down-weights visual activations of Tree for a skiing scene.
vanishing gradient problem is overcome thanks to the term âX, which allows the gradient to backpropagate directly from Y to X without decreasing in the norm. A similar effect is observed with Context Gating which has the following gradient equation:
∇Y = ∇(σ(W X + b)) ◦ X + σ(W X + b) ◦ ∇X.
In this case, the term âX is weighted by activations Ï(W X + b). Hence, for dimensions where Ï(W X +b) are close to 1, gradients are directly propagated from Y to X. In contrast, for values close to 0 the gradient propagation is vanished. This property is valuable as it allows to stack several non-linear layers and avoid vanishing gradient problems.
# 3.3 Motivation for Context Gating
Our goal is to predict human-generated tags for a video. Such tags typically represent only a subset of objects and events which are most relevant to the context of the video. To mimic this behavior and to suppress irrelevant labels, we introduce the Context Gating module both to re-weight the features and the output labels of our architecture.
Capturing dependencies among features. Context Gating can help creating dependencies between visual activations. Take an example of a skiing video showing a skiing person, snow and trees. While network activations for the Tree features might be high, trees might be less important in the context of skiing where people are more likely to comment about the snow and skiing rather than the forest. Context Gating can learn to down-weight visual activations for Tree when it co-occurs with visual activations for Ski and Snow as illustrated in Figure 3.
Capturing prior structure of the output space. Context Gating can also create dependencies among output class scores when applied to the classiï¬cation layer of the network. This makes it possible to learn a prior structure on the output probability space, which can be useful in modeling biases in label annotations.
# 4 LEARNABLE POOLING METHODS
Within our video classification architecture described above, we investigate several types of learnable pooling models, which we describe next. Previous successful approaches [18], [19] employed recurrent neural networks such as LSTM or GRU for the encoding of the sequential features. We chose to focus on non-recurrent aggregation techniques. This is motivated by several factors: first, recurrent models are computationally demanding for long temporal sequences as it is not possible to parallelize the sequential computation. Moreover, it is not clear if treating the aggregation problem as a sequence modeling problem is necessary. As we show in our experiments, there is almost no change in performance if we shuffle the frames in a random order as almost all of the
relevant signal relies on the static visual cues. All we actually need to do is to ï¬nd a way to efï¬ciently remember all of the relevant visual cues. We will ï¬rst review the NetVLAD [23] aggregation module and then explain how we can exploit the same idea to imitate Fisher Vector and Bag-of-visual-Words aggregation scheme.
# 4.1 NetVLAD aggregation
The NetVLAD [23] architecture has been proposed for place recognition to reproduce the VLAD encoding [15], but in a differ- entiable manner, where the clusters are tuned via backpropagation instead of using k-means clustering. It was then extended to action recognition in video [26]. The main idea behind NetVLAD is to write the descriptor xi hard assignment to the cluster k as a soft assignment:
a_k(x_i) = exp(w_k^T x_i + b_k) / Σ_{k'} exp(w_{k'}^T x_i + b_{k'}),   (5)
where (wj)j and (bj)j are learnable parameters. In other words, the soft assignment ak(xi) of descriptor xi to cluster k measures on a scale from 0 to 1 how close the descriptor xi is to cluster k. In the hard assignment way, ak(xi) would be equal to 1 if xi closest cluster is cluster k and 0 otherwise. For the rest of the paper, ak(xi) will deï¬ne soft assignment of descriptor xi to cluster k. If we write cj, j â [1, K] the j-th learnable cluster, the NetVLAD descriptor can be written as
VLAD(j, k) = Σ_{i=1}^{N} a_k(x_i) (x_i(j) − c_k(j)),   (6)
which computes the weighted sum of residuals xi â ck of descrip- tors xi from learnable anchor point ck in cluster k.
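To make Eqs. (5) and (6) concrete, the following is a small NumPy sketch that computes the soft assignments and the NetVLAD matrix for one set of frame descriptors. Shapes, variable names and the random initialization are illustrative assumptions; in the model W, b and the anchors c_k are trained by backpropagation.

```python
import numpy as np

def netvlad(X, W, b, C):
    """X: (N, D) frame descriptors; W: (D, K) and b: (K,) soft-assignment
    parameters; C: (K, D) cluster anchors. Returns the (K, D) NetVLAD matrix."""
    logits = X @ W + b                           # (N, K)
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    a = np.exp(logits)
    a /= a.sum(axis=1, keepdims=True)            # Eq. (5): soft assignments
    # Eq. (6): per-cluster weighted sum of residuals x_i - c_k
    return np.einsum('nk,nd->kd', a, X) - a.sum(axis=0)[:, None] * C

rng = np.random.default_rng(0)
N, D, K = 30, 8, 4                               # illustrative sizes
X = rng.normal(size=(N, D))
vlad = netvlad(X, rng.normal(size=(D, K)), np.zeros(K), rng.normal(size=(K, D)))
print(vlad.shape)  # (4, 8)
```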
# 4.2 Beyond NetVLAD aggregation
By exploiting the same cluster soft-assignment idea, we can also imitate similar operations as the traditional Bag-of-visual- words [20], [21] and Fisher Vectors [22] in a differentiable manner. For bag-of-visual-words (BOW) encoding, we use soft- assignment of descriptors to visual word clusters [23], [43] to obtain a differentiable representation. The differentiable BOW representation can be written as:
BOW(k) = Σ_{i=1}^{N} a_k(x_i).   (7)
Notice that the exact bag-of-visual-words formulation is repro- duced if we replace the soft assignment values by its hard assignment equivalent. This formulation is closely related to the Neural BoF formulation [44], but differs in the way of computing the soft assignment. In detail, [44] performs a softmax operation over the computed L2 distances between the descriptors and the cluster centers, whereas we use soft-assignment given by eq. (5) where parameters w are learnable without explicit relation to computing L2 distance to cluster centers. It also relates to [45] that uses a recurrent neural network to perform the aggregation. The advantage of BOW aggregation over NetVLAD is that it aggregates a list of feature descriptors into a much more compact representation, given a ï¬xed number of clusters. The drawback is that signiï¬cantly more clusters are needed to obtain a rich representation of the aggregated descriptors.
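The same soft assignment can be reused for the differentiable bag-of-words of Eq. (7); a minimal NumPy sketch with toy sizes:

```python
import numpy as np

def soft_bow(X, W, b):
    """Differentiable BoW (Eq. 7): sum the soft assignments of all N
    descriptors over the K clusters, giving a (K,) soft histogram."""
    logits = X @ W + b
    logits -= logits.max(axis=1, keepdims=True)
    a = np.exp(logits)
    a /= a.sum(axis=1, keepdims=True)  # same soft assignment as Eq. (5)
    return a.sum(axis=0)               # BOW(k) = sum_i a_k(x_i)

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 8))           # 30 descriptors of dimension 8 (toy values)
hist = soft_bow(X, rng.normal(size=(8, 4)), np.zeros(4))
print(hist.shape, round(float(hist.sum()), 3))  # (4,) 30.0 -- total mass = number of descriptors
```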
Inspired by Fisher Vector [22] encoding, we also experimented with modifying the NetVLAD architecture to allow learning of second order feature statistics within the clusters. We will denote this representation as NetFV (for Net Fisher Vectors) as it aims at imitating the standard Fisher Vector encoding [22]. Reusing the previously established soft assignment notation, we can write the NetFV representation as
FV1(j, k) = Σ_{i=1}^{N} a_k(x_i) ((x_i(j) − c_k(j)) / σ_k(j)),   (8)

FV2(j, k) = Σ_{i=1}^{N} a_k(x_i) (((x_i(j) − c_k(j)) / σ_k(j))^2 − 1),   (9)
where FV1 is capturing the first-order statistics, FV2 is capturing the second-order statistics, c_k, k ∈ [1, K] are the learnable clusters and σ_k, k ∈ [1, K] are the clusters' diagonal covariances. To define σ_k, k ∈ [1, K] as positive, we first randomly initialize their value with Gaussian noise with unit mean and small variance and then take the square of the values during training so that they stay positive. In the same manner as NetVLAD, c_k and σ_k are learnt independently from the parameters of the soft-assignment a_k. This formulation differs from [38], [46] as we are not exactly reproducing the original Fisher Vectors. Indeed the parameters a_k(x_i), c_k and σ_k are decoupled from each other. As opposed to [38], [46], these parameters are not related to a Gaussian Mixture Model but instead are trained in a discriminative manner.
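A NumPy sketch of the first- and second-order NetFV statistics of Eqs. (8) and (9), assuming the soft assignments a have already been computed as in Eq. (5); all sizes are toy values used only for illustration.

```python
import numpy as np

def netfv(X, a, C, sigma):
    """X: (N, D) descriptors, a: (N, K) soft assignments, C: (K, D) cluster
    centers, sigma: (K, D) positive per-cluster scales. Returns FV1 and FV2."""
    r = (X[:, None, :] - C[None, :, :]) / sigma[None, :, :]  # scaled residuals, (N, K, D)
    fv1 = np.einsum('nk,nkd->kd', a, r)                      # Eq. (8): first-order statistics
    fv2 = np.einsum('nk,nkd->kd', a, r ** 2 - 1.0)           # Eq. (9): second-order statistics
    return fv1, fv2

rng = np.random.default_rng(0)
N, D, K = 30, 8, 4
X = rng.normal(size=(N, D))
a = rng.random((N, K)); a /= a.sum(axis=1, keepdims=True)
fv1, fv2 = netfv(X, a, rng.normal(size=(K, D)), np.abs(rng.normal(size=(K, D))) + 0.1)
print(fv1.shape, fv2.shape)  # (4, 8) (4, 8)
```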
Finally, we have also investigated a simpliï¬cation of the original NetVLAD architecture that averages the actual descriptors instead of residuals, as ï¬rst proposed by [47]. We call this variant NetRVLAD (for Residual-less VLAD). This simpliï¬cation requires less parameters and computing operations (about half in both cases). The NetRVLAD descriptor can be written as
RVLAD(j, k) = Σ_{i=1}^{N} a_k(x_i) x_i(j).   (10)
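In code, NetRVLAD (Eq. 10) reduces to a soft-assignment-weighted sum of the descriptors themselves, e.g. (NumPy sketch, toy sizes):

```python
import numpy as np

def netrvlad(X, a):
    """NetRVLAD (Eq. 10): assignment-weighted aggregation of descriptors,
    without subtracting cluster anchors. X: (N, D), a: (N, K) -> (K, D)."""
    return np.einsum('nk,nd->kd', a, X)

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 8))
a = rng.random((30, 4)); a /= a.sum(axis=1, keepdims=True)
print(netrvlad(X, a).shape)  # (4, 8)
```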
More information about our Tensorï¬ow [48] implementation of these different aggregation models can be found at: https://github. com/antoine77340/LOUPE
# 5 EXPERIMENTS
This section evaluates alternative architectures for video aggregation and presents results on the Youtube-8M [19] dataset.
# 5.1 Youtube-8M Dataset
The Youtube-8M dataset [19] is composed of approximately 8 millions videos. Because of the large scale of the dataset, visual and audio features are pre-extracted and provided with the dataset. Each video is labeled with one or multiple tags referring to the main topic of the video. Figure 5 illustrates examples of videos with their annotations. The original dataset is divided into training, validation and test subsets with 70%, 20% and 10% of videos, respectively. In this work we keep around 20K videos for the validation, the remaining samples from the original training and validation subsets are used for training. This choice was made to obtain a larger training set and to decrease the validation time. We have noticed that the performance on our validation set was comparable (0.2%-0.3% higher) to the test performance evaluated on the Kaggle platform. As we have no access to
| Method | GAP |
| --- | --- |
| Average pooling + Logistic Regression | 71.4% |
| Average pooling + MoE + CG | 74.1% |
| LSTM (2 Layers) | 81.7% |
| GRU (2 Layers) | 82.0% |
| BoW (4096 Clusters) | 81.6% |
| NetFV (128 Clusters) | 82.2% |
| NetRVLAD (256 Clusters) | 82.3% |
| NetVLAD (256 Clusters) | 82.4% |
| Gated BoW (4096 Clusters) | 82.0% |
| Gated NetFV (128 Clusters) | 83.0% |
| Gated NetRVLAD (256 Clusters) | 83.1% |
| Gated NetVLAD (256 Clusters) | 83.2% |
TABLE 1: Performance comparison for individual aggregation schemes. Clustering-based methods are compared with and with- out Context Gating.
the test labels, most results in this section are reported for our validation set. We report evaluation using the Global Average Precision (GAP) metric at top 20 as used in the Youtube-8M Kaggle competition (more details about the metric can be found at: https://www.kaggle.com/c/youtube8m#evaluation).
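For reference, a hedged NumPy sketch of how such a GAP@20 score can be computed: the top-20 predictions of every video are pooled, sorted globally by confidence, and an average precision is taken over the pooled list. This follows our reading of the Kaggle evaluation page linked above; the official implementation may differ in details such as per-video caps on the number of positives.

```python
import numpy as np

def gap_at_k(scores, labels, k=20):
    """scores, labels: (num_videos, num_classes) arrays, labels are 0/1.
    Pools the top-k predictions of every video, sorts them globally by
    confidence and returns the average precision of the pooled list."""
    confidences, hits = [], []
    for s, y in zip(scores, labels):
        top = np.argsort(-s)[:k]            # top-k classes for this video
        confidences.extend(s[top])
        hits.extend(y[top])
    order = np.argsort(-np.asarray(confidences))
    hits = np.asarray(hits, dtype=float)[order]
    precisions = np.cumsum(hits) / (np.arange(len(hits)) + 1)
    return float((precisions * hits).sum() / labels.sum())  # assumes at least one positive label

scores = np.array([[0.9, 0.1, 0.8], [0.2, 0.7, 0.3]])
labels = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
print(round(gap_at_k(scores, labels, k=2), 3))  # ~0.833 on this toy example
```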
# 5.2 Implementation details
In the Youtube 8M competition dataset [19] video and audio features are provided for every second of the input video. The visual features consist of ReLU activations of the last fully- connected layer from a publicly available2 Inception network trained on Imagenet. The audio features are extracted from a CNN architecture trained for audio classiï¬cation [49]. PCA and whitening are then applied to reduce the dimension to 1024 for the visual features and 128 for the audio features. More details on feature extraction are available in [19].
All of our models are trained using the Adam algorithm [50] and mini-batches with data from around 100 videos. The learning rate is initially set to 0.0002 and is then decreased exponentially with the factor of 0.8 every 4M samples. We use gradient clipping and batch normalization [51] before each non-linear layer.
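The learning-rate schedule above can be written in one line; the staircase form (the factor 0.8 applied once per 4M-sample interval) is an assumption on our side, a smooth exponential behaves almost identically.

```python
def learning_rate(samples_seen, base_lr=0.0002, decay=0.8, every=4_000_000):
    """Exponential decay described in Section 5.2: multiply the learning rate
    by `decay` after every `every` training samples (staircase form assumed)."""
    return base_lr * decay ** (samples_seen // every)

for samples in [0, 4_000_000, 8_000_000, 20_000_000]:
    print(samples, learning_rate(samples))
```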
For the clustering-based pooling models, i.e. BoW, NetVLAD, NetRVLAD and NetFV, we randomly sample N features with replacement from each video. N is ï¬xed for all videos at training and testing. As opposed to the original version of NetVLAD [23], we did not pre-train the codebook with a k-means initialization as we did not notice any improvement by doing so. For training of recurrent models, i.e. LSTM and GRU, we process features in the temporal order. We have also experimented with the random sam- pling of frames for LSTM and GRU which performs surprisingly similarly.
All our models are trained with the cross entropy loss. Our im- plementation uses the TensorFlow framework [48]. Each training is performed on a single NVIDIA TITAN X (12Gb) GPU.
# 5.3 Model evaluation
We evaluate the performance of individual models in Table 1. To enable a fair comparison, all pooled representations have the same size of 1024 dimensions. The âGatedâ versions for the clustering-based pooling methods include CG layers as described
2. https://www.tensorï¬ow.org/tutorials/image recognition
| After pooling | After MoE | GAP |
| --- | --- | --- |
| - | - | 82.2% |
| Gated Linear Unit | - | 82.4% |
| Context Gating | - | 82.7% |
| Gated Linear Unit | Context Gating | 82.7% |
| Context Gating | Context Gating | 83.0% |
TABLE 2: Context Gating ablation study. There is no GLU layer after MoE as GLU does not output probabilities.
| Method | NetVLAD | NetFV | GRU | LSTM |
| --- | --- | --- | --- | --- |
| Early Concat | 81.9% | 81.2% | 82.2% | 81.7% |
| Late Concat | 82.4% | 82.2% | 82.1% | 81.1% |
TABLE 3: Evaluation of audio-video fusion methods (Early and Late Concat).
in Section 3.1. Using CG layers together with GRU and LSTM has decreased the performance in our experiments.
From Table 1 we can observe a signiï¬cant increase of perfor- mance provided by all learnt aggregation schemes compared to the Average pooling baselines. Interestingly, the NetVLAD and NetFV representations based on the temporally-shufï¬ed feature pooling outperforms the temporal models (GRU and LSTM). Finally, we can note a consistent increase in performance provided by the Context Gating for all clustering-based pooling methods.
# 5.4 Context Gating ablation study
Table 2 reports an ablation study evaluating the effect of Context Gating on the NetVLAD aggregation with 128 clusters. The addi- tion of CG layers in the feature pooling and classiï¬cation modules gives a signiï¬cant increase in GAP. We have observed a similar behavior for NetVLAD with 256 clusters. We also experimented with replacing the Context Gating by the GLU [39] after pooling. To make the comparison fair, we added a Context Gating layer just after the MoE. Despite being less complex than GLU, we observe that CG also performs better. We note that the improvement of 0.8% provided by CG is similar to the improvement of the best non-gated model (NetVLAD) over LSTM in Table 1.
# 5.5 Video-Audio fusion
In addition to the late fusion of audio and video streams (Late Concat) described in Section 3, we have also experimented with a simple concatenation of original audio and video features into a single vector, followed by the pooling and classiï¬cation modules in a âsingle stream mannerâ (Early Concat). Results in Table 3 illustrate the effect of the two fusion schemes for different pooling methods. The two-stream audio-visual architecture with the late fusion improves performance for the clustering-based pooling methods (NetVLAD and NetFV). On the other hand, the early fusion scheme seems to work better for GRU and LSTM aggrega- tions. We have also experimented with replacing the concatenation fusion of audio-video features by their outer product. We found this did not work well compared to the concatenation mainly due to the high dimensionality of the resulting output. To alleviate this issue, we tried to reduce the output dimension using the multi-modal compact bilinear pooling approach [52] but found the resulting models underï¬tting the data.
Fig. 4: The GAP performance of the different main models when varying the dataset size.
# 5.6 Generalization
One valuable feature of the Youtube-8M dataset is the large scale of annotated data (almost 10 millions videos). More common annotated video datasets usually have sizes several orders of mag- nitude lower, ranging from 10k to 100k samples. With the large- scale dataset at hand we evaluate the inï¬uence of the amount of training data on the performance of different models. To this end, we experimented with training different models: Gated NetVLAD, NetVLAD, LSTM and average pooling based model on multiple randomly sampled subsets of the Youtube 8M dataset. We have experimented with subsets of 70K, 150K, 380K and 1150K samples. For each subset size, we have trained models using three non-overlapping training subsets and measured the variance in performance. Figure 4 illustrates the GAP performance of each model when varying the training size. The error bars represent the variance observed when training the models on the three different training subsets. We have observed low and consistent GAP variance for different models and training sizes. Despite the LSTM model has less parameters (around 40M) compared to NetVLAD (around 160M) and Gated NetVLAD (around 180M), NetVLAD and Gated NetVLAD models demonstrate better generalization than LSTM when trained from a lower number of samples. The Context Gating module still helps generalizing better the basic NetVLAD based architecture when having sufï¬cient number of samples (at least 100k samples). We did not show results with smaller dataset sizes as the results for all models were drastically dropping down. This is mainly due to the fact that the task is a multi-label prediction problem with a large pool of roughly 5000 labels. As these labels have a long-tail distribution, decreasing the dataset size to less than 30k samples would leave many labels with no single training example. Thus, it would not be clear if the drop of performance is due to the aggregation technique or a lack of training samples for rare classes.
# 5.7 Ensembling
We explore the complementarity of different models and con- sider their combination through ensembling. Our ensemble con- sists of several independently trained models. The ensembling
| Approach | Ensemble size | GAP |
| --- | --- | --- |
| Ours (Full) | 25 | 85.0 |
| Ours (Light) | 7 | 84.7 |
| Wang et al. [53] | 75 | 84.6 |
| Li et al. [54] | 57 | 84.5 |
| Chen et al. [55] | 134 | 84.2 |
| Skalic et al. [56] | 75 | 84.2 |
TABLE 4: Ensemble model sizes of the top ranked teams (out of 655) from the Youtube 8M kaggle competition.
averages label prediction scores of selected models. We have observed the increased effect of ensembling when combining diverse models. To choose models, we follow a simple greedy approach: we start with the best performing model and choose the next model by maximizing the GAP of the ensemble on the validation set. Our ï¬nal ensemble used in the Youtube 8M challenge contains 25 models. A seven models ensemble is enough to reach the ï¬rst place with a GAP on the private test set of 84.688. These seven models correspond to: Gated NetVLAD (256 clusters), Gated NetFV (128 clusters), Gated BoW (4096 Clusters), BoW (8000 Clusters), Gated NetRVLAD (256 Clus- ters), GRU (2 layers, hidden size: 1200) and LSTM (2 layers, hidden size: 1024). Our code to reproduce this ensemble is avail- able at: https://github.com/antoine77340/Youtube-8M-WILLOW. To obtain more diverse models for the ï¬nal 25 ensemble, we also added all the non-Gated models, varied the number of clusters or varied the size of the pooled representation.
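The greedy selection can be sketched as follows; `evaluate_gap` is a placeholder for the validation metric (for instance the GAP sketch from Section 5.1) and `model_preds` holds the per-model score matrices on the validation set, so the function is a schematic illustration rather than the exact challenge code.

```python
import numpy as np

def greedy_ensemble(model_preds, labels, evaluate_gap, max_models=7):
    """Greedy forward selection: repeatedly add the model whose inclusion
    maximises the validation GAP of the averaged prediction scores."""
    selected, best_gap = [], -1.0
    while len(selected) < max_models:
        best_candidate, best_candidate_gap = None, best_gap
        for i, _ in enumerate(model_preds):
            if i in selected:
                continue
            avg = np.mean([model_preds[j] for j in selected + [i]], axis=0)
            gap = evaluate_gap(avg, labels)
            if gap > best_candidate_gap:
                best_candidate, best_candidate_gap = i, gap
        if best_candidate is None:        # no remaining model improves the ensemble
            break
        selected.append(best_candidate)
        best_gap = best_candidate_gap
    return selected, best_gap
```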
Table 4 shows the ensemble size of the other top ranked approaches, out of 655 teams, from the Youtube-8M kaggle chal- lenge. Besides showing the best performance at the competition, we also designed a smaller set of models that ensemble more efï¬- ciently than others. Indeed, we need much less models in our en- semble than the other top performing approaches. Full ranking can be found at: https://www.kaggle.com/c/youtube8m/leaderboard.
# 6 CONCLUSIONS
We have addressed the problem of large-scale video tagging and explored trainable variants of classical pooling methods (BoW, VLAD, FV) for the temporal aggregation of audio and visual features. In this context we have observed NetVLAD, NetFV and BoW to outperform more common temporal models such as LSTM and GRU. We have also introduced the Context Gating mechanism and have shown its benefit for the trainable versions of BoW, VLAD and FV. The ensemble of our individual models has been shown to improve the performance further, enabling our method to win the Youtube 8M Large-Scale Video Understanding challenge. Our TensorFlow toolbox LOUPE is available for download from [57] and includes implementations of the Context Gating as well as learnable pooling modules used in this work.
# ACKNOWLEDGMENTS
The authors would like to thank Jean-Baptiste Alayrac and Relja Arandjelović for valuable discussions as well as the Google team for providing the Youtube-8M Tensorflow Starter Code. This work has also been partly supported by ERC grants ACTIVIA (no. 307574) and LEAP (no. 336845), CIFAR Learning in Machines & Brains program, ESIF, OP Research, development and education IMPACT No. CZ.02.1.01/0.0/0.0/15 003/0000468 Project and a Google Research Award.
Fig. 5: Qualitative results from our best single model (Gated NetVLAD). We show both groundtruth labels (in green) from the Youtube 8M dataset and the top predictions of the Gated NetVLAD model.
# REFERENCES
[1] K. He, X. Zhang, S. Ren, and J. Sun, âDeep Residual Learning for Image Recognition,â in CVPR, 2016.
[2] A. Krizhevsky, I. Sutskever, and G. E. Hinton, âImagenet classiï¬cation with deep convolutional neural networks,â in NIPS, 2012.
[3] K. Simonyan and A. Zisserman, âVery deep convolutional networks for large-scale image recognition,â in ICLR, 2015.
[4] C. Szegedy, S. Ioffe, and V. Vanhoucke, âInception-v4, inception- connections on learning,â resnet and the arXiv:1602.07261v1, 2016. impact of residual
[5] D. Tran, L. Bourdev, R. Fergus, L. Torresani, and M. Paluri, "Learning spatiotemporal features with 3d convolutional networks," in ICCV, 2015.
[6] C. Feichtenhofer, A. Pinz, and A. Zisserman, "Convolutional two-stream network fusion for video action recognition," in CVPR, 2016.
[7] I. Laptev, M. Marszalek, C. Schmid, and B. Rozenfeld, "Learning realistic human actions from movies," in CVPR, 2008.
[8] C. Sch¨uldt, I. Laptev, and B. Caputo, âRecognizing human actions: a local svm approach,â in ICPR, 2004.
[9] H. Wang and C. Schmid, âAction Recognition with Improved Trajecto- ries,â in ICCV, 2013.
[10] M. Baccouche, F. Mamalet, C. Wolf, C. Garcia, and A. Baskurt, âSe- quential deep learning for human action recognition,â Human Behavior Understanding, pp. 29â39, 2011.
[11] J. Carreira and A. Zisserman, âQuo vadis, action recognition? a new model and the kinetics dataset,â in CVPR, 2017.
[12] C. Feichtenhofer, A. Pinz, and R. P. Wildes, âSpatiotemporal multiplier networks for video action recognition,â in CVPR, 2017.
[13] S. Ji, W. Xu, M. Yang, and K. Yu, â3D Convolutional Neural Networks for Human Action Recognition,â in PAMI, 2013.
[14] G. Varol, I. Laptev, and C. Schmid, âLong-term Temporal Convolutions for Action Recognition,â PAMI, 2017.
[15] H. Jegou, M. Douze, C. Schmid, and P. Perez, âAggregating local descriptors into a compact image representation,â in CVPR, 2010. [16] S. Hochreiter and J. Schmidhuber, âLong short-term memory.â in Neural
Computing, 1997.
[17] K. Cho, B. van Merrienboer, D. Bahdanau, and Y. Bengio, âOn the Prop- erties of Neural Machine Translation: Encoder-Decoder Approaches,â arXiv preprint arXiv:1409.1259, 2014.
[18] J. Donahue, L. A. Hendricks, S. Guadarrama, M. Rohrbach, S. Venu- gopalan, K. Saenko, and T. Darrell, âLong-term recurrent convolu- tional networks for visual recognition and description,â arXiv preprint arXiv:1411.4389, 2014.
[19] S. Abu-El-Haija, N. Kothari, J. Lee, P. Natsev, G. Toderici, B. Varadara- jan, and S. Vijayanarasimhan, âYoutube-8m: A large-scale video classi- ï¬cation benchmark,â arXiv preprint arXiv:1609.08675, 2016.
[20] G. Csurka, C. Dance, L. Fan, J. Willamowski, and C. Bray, âVisual categorization with bags of keypoints,â in ECCV Workshop, 2004. [21] J. Sivic and A. Zisserman, âVideo google: A text retrieval approach to
object matching in videos,â in ICCV, 2003.
[22] F. Perronnin and C. Dance, âFisher kernels on visual vocabularies for image categorization,â in CVPR, 2007.
[23] R. Arandjelovic, P. Gronat, A. Torii, T. Pajdla, and J. Sivic, âNetVLAD: CNN architecture for weakly supervised place recognition,â in CVPR, 2016.
[24] C. R. de Souza, A. Gaidon, E. Vig, and A. M. L´opez, âSympathy for the details: Dense trajectories and hybrid classiï¬cation architectures for action recognition,â in ECCV, 2016.
[25] A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and L. Fei-Fei, âLarge-scale video classiï¬cation with convolutional neural networks,â in CVPR, 2014, pp. 1725â1732.
[26] R. Girdhar, D. Ramanan, A. Gupta, J. Sivic, and B. Russell, âActionvlad: Learning spatio-temporal aggregation for action classiï¬cation,â in CVPR, 2017.
[27] L. Wang, Y. Qiao, and X. Tang, âAction recognition with trajectory- pooled deep-convolutional descriptors,â in CVPR, 2015, pp. 4305â4314. [28] K. Simonyan and A. Zisserman, âTwo-stream convolutional networks for
action recognition in videos,â in ICLR, 2014, pp. 568â576.
[29] F. Basura, E. Gavves, J. M. Oramas, A. Ghodrati, and T. Tuytelaars, âModeling video evolution for action recognition,â in CVPR, 2015. [30] M. Ibrahim, S. Muralidharan, Z. Deng, A. Vahdat, and M. Greg, âA Hierarchical Deep Temporal Model for Group Activity Recognition,â in CVPR, 2016.
[31] G. Lev, G. Sadeh, B. Klein, and L. Wolf, âRnn ï¬sher vectors for action recognition and image annotation,â in ECCV, 2016.
[32] J. Yue-Hei Ng, M. Hausknecht, S. Vijayanarasimhan, O. Vinyals, R. Monga, and G. Toderici, âBeyond short snippets: Deep networks for video classiï¬cation,â in CVPR, 2015.
[33] L. Wang, Y. Xiong, Y. Qiao, D. Lin, X. Tang, and L. Van Gool, âTemporal segment networks: Towards good practices for deep action recognition,â in ECCV, 2016.
[34] X. Peng, L. Wang, Y. Qiao, and Q. Peng, âBoosting VLAD with Supervised Dictionary Learning and High-Order Statistics,â in ECCV, 2014.
[35] Z. Xu, Y. Yang, and A. G. Hauptmann, âA Discriminative CNN Video Representation for Event Detection,â in CVPR, 2015.
[36] F. Perronnin and D. Larlus, âFisher Vectors Meet Neural Networks: A Hybrid Classiï¬cation Architecture,â in CVPR, 2015.
[37] X. Peng, C. Zou, Y. Qiao, and Q. Peng, âAction recognition with stacked ï¬sher vectors,â in ECCV, 2014.
[38] V. Sydorov, M. Sakurada, and C. H. Lampert, âDeep ï¬sher kernels and end to end learning of the Fisher kernel GMM parameters,â in CVPR, 2014.
[39] Y. N. Dauphin, F. Angela, M. Auli, and D. Grangier, âLanguage modeling with gated convolutional networks,â in arXiv preprint arXiv:1612.08083, 2016.
[40] A. Miech, I. Laptev, and J. Sivic, âLearnable pooling with context gating for video classiï¬cation,â arXiv preprint arXiv:1706.06905, 2017. [41] J. Hu, L. Shen, and G. Sun, âSqueeze-and-excitation networks,â arXiv
preprint arXiv:1709.01507, 2017.
[42] M. I. Jordan, âHierarchical mixtures of experts and the em algorithm,â Neural Computation, 1994.
[43] J. Philbin, O. Chum, M. Isard, J. Sivic, and A. Zisserman, âLost in quantization: Improving particular object retrieval in large scale image databases,â in CVPR, 2008.
[44] N. Passalis and A. Tefas, âLearning neural bag-of-features for large scale image retrieval,â IEEE Trans. Cybernetics, 2017.
[45] A. Richard and J. Gall, âA bag-of-words equivalent recurrent neural network for action recognition,â in BMVC, 2015.
[46] K. Simonyan, A. Vedaldi, and A. Zisserman, âDeep ï¬sher networks for large-scale image classiï¬cation,â in NIPS, 2013.
[47] M. Douze, J. Revaud, C. Schmid, and H. J´egou, âStable hyper-pooling and query expansion for event detection,â in ICCV, 2013.
[48] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mane, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viegas, O. Vinyals, P. Warden, M. Wat- tenberg, M. Wicke, Y. Yu, and X. Zheng, âTensorï¬ow: Large-scale machine learning on heterogeneous distributed systems,â arXiv preprint arXiv:1603.04467, 2015.
[49] S. Hershey, S. Chaudhuri, D. P. W. Ellis, J. F. Gemmeke, A. Jansen, C. Moore, M. Plakal, D. Platt, R. A. Saurous, B. Seybold, M. Slaney, R. Weiss, and K. Wilson, âCNN architectures for large-scale audio classiï¬cation,â in International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2017.
[50] D. P. Kingma and J. Ba, âAdam: A method for stochastic optimization,â in ICLR, 2015.
[51] S. Ioffe and C. Szegedy, âBatch normalization: Accelerating deep network training by reducing internal covariate,â arXiv preprint arXiv:1502.03167, 2015.
[52] Y. Gao, O. Beijbom, N. Zhang, and T. Darrell, âCompact bilinear pooling,â in CVPR, 2016.
[53] H.-D. Wang, T. Zhang, and J. Wu, âThe monkeytyping solution to the youtube-8m video understanding challenge,â arXiv preprint arXiv:1706.05150, 2017.
[54] F. Li, C. Gan, X. Liu, Y. Bian, X. Long, Y. Li, Z. Li, J. Zhou, and S. Wen, âTemporal modeling approaches for large-scale youtube-8m video understanding,â arXiv preprint arXiv:1707.04555, 2017.
[55] S. Chen, X. Wang, Y. Tang, X. Chen, Z. Wu, and Y.-G. Jiang, âAg- gregating frame-level features for large-scale video classiï¬cation,â arXiv preprint arXiv:1707.00803, 2017.
[56] M. Skalic, M. Pekalski, and X. E. Pan, âDeep learning methods for efï¬cient large scale video labeling,â arXiv preprint arXiv:1706.04572, 2017.
[57] A. Miech, âLOUPE tensorï¬ow toolbox for learnable pooling module,â https://github.com/antoine77340/LOUPE, 2017.
| {
"id": "1502.03167"
} |
1706.06978 | Deep Interest Network for Click-Through Rate Prediction | Click-through rate prediction is an essential task in industrial
applications, such as online advertising. Recently deep learning based models
have been proposed, which follow a similar Embedding\&MLP paradigm. In these
methods large scale sparse input features are first mapped into low dimensional
embedding vectors, and then transformed into fixed-length vectors in a
group-wise manner, and finally concatenated together and fed into a multilayer
perceptron (MLP) to learn the nonlinear relations among features. In this way,
user features are compressed into a fixed-length representation vector,
regardless of what the candidate ads are. The use of a fixed-length vector will be a
bottleneck, which brings difficulty for Embedding\&MLP methods to capture
user's diverse interests effectively from rich historical behaviors. In this
paper, we propose a novel model: Deep Interest Network (DIN) which tackles this
challenge by designing a local activation unit to adaptively learn the
representation of user interests from historical behaviors with respect to a
certain ad. This representation vector varies over different ads, improving the
expressive ability of model greatly. Besides, we develop two techniques:
mini-batch aware regularization and data adaptive activation function which can
help training industrial deep networks with hundreds of millions of parameters.
Experiments on two public datasets as well as an Alibaba real production
dataset with over 2 billion samples demonstrate the effectiveness of proposed
approaches, which achieve superior performance compared with state-of-the-art
methods. DIN now has been successfully deployed in the online display
advertising system in Alibaba, serving the main traffic. | http://arxiv.org/pdf/1706.06978 | Guorui Zhou, Chengru Song, Xiaoqiang Zhu, Ying Fan, Han Zhu, Xiao Ma, Yanghui Yan, Junqi Jin, Han Li, Kun Gai | stat.ML, cs.LG, I.2.6; H.3.2 | Accepted by KDD 2018 | null | stat.ML | 20170621 | 20180913 | 8 1 0 2
# Deep Interest Network for Click-Through Rate Prediction
Guorui Zhou, Chengru Song, Xiaoqiang Zhu Ying Fan, Han Zhu, Xiao Ma, Yanghui Yan, Junqi Jin, Han Li, Kun Gai Alibaba Group {guorui.xgr,chengru.scr,xiaoqiang.zxq,zhuhan.zh,fanying.fy,maxiao.ma,yanghui.yyh,junqi.jjq,lihan.hl,jingshi.gk}@ alibaba-inc.com
ABSTRACT Click-through rate prediction is an essential task in industrial applications, such as online advertising. Recently deep learning based models have been proposed, which follow a similar Embed- ding&MLP paradigm. In these methods large scale sparse input features are first mapped into low dimensional embedding vectors, and then transformed into fixed-length vectors in a group-wise manner, finally concatenated together to fed into a multilayer per- ceptron (MLP) to learn the nonlinear relations among features. In this way, user features are compressed into a fixed-length repre- sentation vector, in regardless of what candidate ads are. The use of fixed-length vector will be a bottleneck, which brings difficulty for Embedding&MLP methods to capture userâs diverse interests effectively from rich historical behaviors. In this paper, we propose a novel model: Deep Interest Network (DIN) which tackles this chal- lenge by designing a local activation unit to adaptively learn the representation of user interests from historical behaviors with re- spect to a certain ad. This representation vector varies over different ads, improving the expressive ability of model greatly. Besides, we develop two techniques: mini-batch aware regularization and data adaptive activation function which can help training industrial deep networks with hundreds of millions of parameters. Experiments on two public datasets as well as an Alibaba real production dataset with over 2 billion samples demonstrate the effectiveness of pro- posed approaches, which achieve superior performance compared with state-of-the-art methods. DIN now has been successfully de- ployed in the online display advertising system in Alibaba, serving the main traffic.
# CCS CONCEPTS
• Information systems → Display advertising; Recommender systems;
# KEYWORDS
Click-Through Rate Prediction, Display Advertising, E-commerce
# 1 INTRODUCTION
In cost-per-click (CPC) advertising systems, advertisements are ranked by eCPM (effective cost per mille), which is the product of the bid price and the CTR (click-through rate), so the CTR needs to be predicted by the system. Hence, the performance of the CTR prediction model has a direct impact on the final revenue and plays a key role in the advertising system. Modeling CTR prediction has received much attention from both the research and industry communities.
These methods follow a similar Embedding&MLP paradigm: large scale sparse input features are first mapped into low dimensional embedding vectors, then transformed into fixed-length vectors in a group-wise manner, and finally concatenated together and fed into fully connected layers (also known as a multilayer perceptron, MLP) to learn the nonlinear relations among features. Compared with the commonly used logistic regression model [19], these deep learning methods can reduce a lot of feature engineering work and greatly enhance model capability. For simplicity, we name these methods Embedding&MLP in this paper; they have now become popular for the CTR prediction task.
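To fix ideas, here is a minimal NumPy sketch of such an Embedding&MLP forward pass; the group sizes, the sum pooling of multi-hot groups and the two-layer MLP are illustrative choices of ours, not the production configuration.

```python
import numpy as np

def embedding_mlp_forward(feature_ids, embedding_tables, mlp_weights):
    """Per-group embedding lookup, sum pooling into fixed-length vectors,
    concatenation, then a ReLU MLP with a sigmoid output as the predicted CTR."""
    pooled = [embedding_tables[g][ids].sum(axis=0)       # group-wise sum pooling
              for g, ids in enumerate(feature_ids)]
    h = np.concatenate(pooled)
    for i, (W, b) in enumerate(mlp_weights):
        h = W @ h + b
        if i < len(mlp_weights) - 1:
            h = np.maximum(h, 0.0)                       # ReLU on hidden layers
    return 1.0 / (1.0 + np.exp(-h[0]))                   # click probability

rng = np.random.default_rng(0)
tables = [rng.normal(size=(100, 8)) for _ in range(3)]    # 3 feature groups (toy sizes)
ids = [np.array([5]), np.array([17, 42]), np.array([7])]  # one-hot, multi-hot, one-hot
mlp = [(rng.normal(scale=0.1, size=(16, 24)), np.zeros(16)),
       (rng.normal(scale=0.1, size=(1, 16)), np.zeros(1))]
print(embedding_mlp_forward(ids, tables, mlp))
```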
However, the user representation vector with a limited dimen- sion in Embedding&MLP methods will be a bottleneck to express userâs diverse interests. Take display advertising in e-commerce site as an example. Users might be interested in different kinds of goods simultaneously when visiting the e-commerce site. That is to say, user interests are diverse. When it comes to CTR prediction task, user interests are usually captured from user behavior data. Embedding&MLP methods learn the representation of all interests for a certain user by transforming the embedding vectors of user behaviors into a fixed-length vector, which is in an euclidean space where all usersâ representation vectors are. In other words, diverse interests of the user are compressed into a fixed-length vector, which limits the expressive ability of Embedding&MLP methods. To make the representation capable enough for expressing userâs diverse interests, the dimension of the fixed-length vector needs to be largely expanded. Unfortunately, it will dramatically enlarge the size of learning parameters and aggravate the risk of overfitting under limited data. Besides, it adds the burden of computation and storage, which may not be tolerated for an industrial online system. On the other hand, it is not necessary to compress all the diverse interests of a certain user into the same vector when predicting a candidate ad because only part of userâs interests will influence his/her action (to click or not to click). For example, a female swim- mer will click a recommended goggle mostly due to the bought of bathing suit rather than the shoes in her last weekâs shopping list. Motivated by this, we propose a novel model: Deep Interest Network (DIN), which adaptively calculates the representation vec- tor of user interests by taking into consideration the relevance of historical behaviors given a candidate ad. By introducing a local activation unit, DIN pays attentions to the related user interests by soft-searching for relevant parts of historical behaviors and takes a weighted sum pooling to obtain the representation of user interests with respect to the candidate ad. Behaviors with higher relevance to the candidate ad get higher activated weights and dominate the representation of user interests. We visualize this phenomenon in the experiment section. In this way, the representation vector of
Recently, inspired by the success of deep learning in computer vision [14] and natural language processing [1], deep learning based methods have been proposed for CTR prediction task [3, 4, 21, 26].
user interests varies over different ads, which improves the expres- sive ability of model under limited dimension and enables DIN to better capture userâs diverse interests.
Training industrial deep networks with large scale sparse fea- tures is of great challenge. For example, SGD based optimization methods only update those parameters of sparse features appearing in each mini-batch. However, adding with traditional â2 regular- ization, the computation turns to be unacceptable, which needs to calculate L2-norm over the whole parameters (with size scaling up to billions in our situation) for each mini-batch. In this paper, we develop a novel mini-batch aware regularization where only parameters of non-zero features appearing in each mini-batch par- ticipate in the calculation of L2-norm, making the computation acceptable. Besides, we design a data adaptive activation function, which generalizes commonly used PReLU[12] by adaptively adjust- ing the rectified point w.r.t. distribution of inputs and is shown to be helpful for training industrial networks with sparse features. The contributions of this paper are summarized as follows:
⢠We point out the limit of using fixed-length vector to express userâs diverse interests and design a novel deep interest network (DIN) which introduces a local activation unit to adaptively learn the representation of user interests from historical behaviors w.r.t. given ads. DIN can improve the expressive ability of model greatly and better capture the diversity characteristic of user interests.
⢠We develop two novel techniques to help training industrial deep networks: i) a mini-batch aware regularizer, which saves heavy computation of regularization on deep networks with huge number of parameters and is helpful for avoiding overfitting, ii) a data adaptive activation function, which generalizes PReLU by considering the distribution of inputs and shows well performance.
⢠We conduct extensive experiments on both public and Al- ibaba datasets. Results verify the effectiveness of proposed DIN and training techniques. Our code1 is publicly avail- able. The proposed approaches have been deployed in the commercial display advertising system in Alibaba, one of worldâs largest advertising platform, contributing significant improvement to the business.
In this paper we focus on the CTR prediction modeling in the scenario of display advertising in e-commerce industry. Methods discussed here can be applied in similar scenarios with rich user behaviors, such as personalized recommendation in e-commerce sites, feeds ranking in social networks etc.
The rest of the paper is organized as follows. We discuss related work in section 2 and introduce the background about characteristic of user behavior data in display advertising system of e-commerce site in section 3. Section 4 and 5 describe in detail the design of DIN model as well as two proposed training techniques. We present experiments in section 6 and conclude in section 7.
# 2 RELATED WORK
The structure of CTR prediction models has evolved from shallow to deep. At the same time, the number of samples and the dimension
1. Experiment code on two public datasets is available on GitHub: https://github.com/zhougr1993/DeepInterestNetwork
of features used in CTR model have become larger and larger. In order to better extract feature relations to improve performance, several works pay attention to the design of model structure.
As a pioneer work, NNLM [2] learns distributed representation for each word, aiming to avoid curse of dimension in language modeling. This method, often referred to as embedding, has inspired many natural language models and CTR prediction models that need to handle large-scale sparse inputs.
LS-PLM [9] and FM [20] models can be viewed as a class of net- works with one hidden layer, which first employs embedding layer on sparse inputs and then imposes specially designed transforma- tion functions for target fitting, aiming to capture the combination relations among features.
Deep Crossing [21], Wide&Deep Learning [4] and YouTube Rec- ommendation CTR model [3] extend LS-PLM and FM by replacing the transformation function with complex MLP network, which enhances the model capability greatly. PNN[5] tries to capture high-order feature interactions by involving a product layer after embedding layer. DeepFM[10] imposes a factorization machines as "wide" module in Wide&Deep [4] with no need of feature engineer- ing. Overall, these methods follow a similar model structure with combination of embedding layer (for learning the dense represen- tation of sparse features) and MLP (for learning the combination relations of features automatically). This kind of CTR prediction model reduces the manual feature engineering jobs greatly. Our base model follows this kind of model structure. However in appli- cations with rich user behaviors, features are often contained with variable-length list of ids, e.g., searched terms or watched videos in YouTube recommender system [3]. These models often transform corresponding list of embedding vectors into a fixed-length vector via sum/average pooling, which causes loss of information. The proposed DIN tackles it by adaptively learning the representation vector w.r.t. given ad, improving the expressive ability of model.
Attention mechanism originates from the Neural Machine Translation (NMT) field [1]. NMT takes a weighted sum of all the annotations to get an expected annotation and focuses only on information relevant to the generation of the next target word. A recent work, DeepIntent [26], applies attention in the context of search advertising. Similar to NMT, they use an RNN [24] to model text, then learn one global hidden vector to help pay attention to the key words in each query. It is shown that the use of attention can help capture the main intent of a query or ad. DIN designs a local activation unit to soft-search for relevant user behaviors and takes a weighted sum pooling to obtain the adaptive representation of user interests with respect to a given ad. The user representation vector varies over different ads, which is different from DeepIntent, in which there is no interaction between ad and user.
We make our code publicly available, and further show how to successfully deploy DIN in one of the world's largest advertising systems with novel techniques developed for training large scale deep networks with hundreds of millions of parameters.
3 BACKGROUND In e-commerce sites, such as Alibaba, advertisements are natural goods. In the rest of this paper, without special declaration, we re- gard ads as goods. Figure 1 briefly illustrates the running procedure
Figure 1: Illustration of running procedure of display adver- tising system in Alibaba, in which user behavior data plays important roles.
of the display advertising system in Alibaba, which consists of two main stages: i) the matching stage, which generates a list of candidate ads relevant to the visiting user via methods like collaborative filtering, and ii) the ranking stage, which predicts CTR for each given ad and then selects the top ranked ones. Every day, hundreds of millions of users visit the e-commerce site, leaving us with a large amount of user behavior data which contributes critically to building the matching and ranking models. It is worth mentioning that users with rich historical behaviors have diverse interests. For example, a young mother has recently browsed goods including a woolen coat, T-shirts, earrings, a tote bag, a leather handbag and a children's coat. These behavior data give us hints about her shopping interests. When she visits the e-commerce site, the system displays a suitable ad to her, for example a new handbag. Obviously the displayed ad only matches or activates part of the interests of this mother. In summary, the interests of a user with rich behaviors are diverse and could be locally activated given certain ads. We show later in this paper that making use of these characteristics plays an important role in building the CTR prediction model.
4 DEEP INTEREST NETWORK Different from sponsored search, users come into display adver- tising system without explicitly expressed intentions. Effective ap- proaches are required to extract user interests from rich historical behaviors when building the CTR prediction model. Features that depict users and ads are the basic elements in the CTR modeling of advertisement system. Making use of these features reasonably and mining information from them are critical.
4.1 Feature Representation
Data in industrial CTR prediction tasks is mostly in a multi-group categorical form, for example, [weekday=Friday, gender=Female, visited_cate_ids={Bag,Book}, ad_cate_id=Book], which is normally transformed into high-dimensional sparse binary features via encoding [4, 19, 21]. Mathematically, the encoding vector of the $i$-th feature group is formularized as $t_i \in \mathbb{R}^{K_i}$, where $K_i$ denotes the dimensionality of feature group $i$, i.e., feature group $i$ contains $K_i$ unique ids. $t_i[j]$ is the $j$-th element of $t_i$, with $t_i[j] \in \{0, 1\}$ and $\sum_{j=1}^{K_i} t_i[j] = k$. A vector $t_i$ with $k = 1$ refers to one-hot encoding and $k > 1$ refers to multi-hot encoding. One instance can then be represented as $x = [t_1^T, t_2^T, \ldots, t_M^T]^T$ in a group-wise manner, where $M$ is the number of feature groups and $\sum_{i=1}^{M} K_i = K$, with $K$ the dimensionality of the entire feature space. In this way, the aforementioned instance with
Table 1: Statistics of feature sets used in the display adver- tising system in Alibaba. Features are composed of sparse binary vectors in the group-wise manner.
| Category | Feature Group | Dimensionality | Type | #Nonzero Ids per Instance |
| --- | --- | --- | --- | --- |
| User Profile Features | gender | 2 | one-hot | 1 |
| | age_level | ~10 | one-hot | 1 |
| | ... | ... | ... | ... |
| User Behavior Features | visited_goods_ids | ~10^9 | multi-hot | ~10^3 |
| | visited_shop_ids | ~10^7 | multi-hot | ~10^3 |
| | visited_cate_ids | ~10^4 | multi-hot | ~10^2 |
| Ad Features | goods_id | ~10^7 | one-hot | 1 |
| | shop_id | ~10^5 | one-hot | 1 |
| | cate_id | ~10^4 | one-hot | 1 |
| | ... | ... | ... | ... |
| Context Features | pid | ~10 | one-hot | 1 |
| | time | ~10 | one-hot | 1 |
| | ... | ... | ... | ... |
four groups of features is illustrated as:

[0,0,0,0,1,0,0]   [0,1]   [0,...,1,...,1,...,0]   [0,...,1,...,0]
weekday=Friday    gender=Female    visited_cate_ids={Bag,Book}    ad_cate_id=Book
The whole feature set used in our system is described in Table 1. It is composed of four categories, among which user behavior features are typically multi-hot encoding vectors and contain rich information of user interests. Note that in our setting, there are no combination features. We capture the interaction of features with deep neural network.
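To make the encoding scheme above concrete, here is a minimal sketch in plain NumPy. The tiny vocabularies are hypothetical, not the feature spaces used in the production system; they only illustrate how one-hot and multi-hot group encodings are concatenated group-wise into one sparse binary instance.

```python
import numpy as np

def encode(ids, vocab):
    """One-/multi-hot encode a list of ids against a feature-group vocabulary."""
    t = np.zeros(len(vocab), dtype=np.float32)
    for x in ids:
        t[vocab[x]] = 1.0
    return t

# Hypothetical (tiny) vocabularies for the feature groups of the example instance.
weekday_vocab = {d: i for i, d in enumerate(
    ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"])}
gender_vocab = {"Male": 0, "Female": 1}
cate_vocab = {"Bag": 0, "Book": 1, "Shoes": 2, "Phone": 3}

# [weekday=Friday, gender=Female, visited_cate_ids={Bag,Book}, ad_cate_id=Book]
x = np.concatenate([
    encode(["Fri"], weekday_vocab),        # one-hot, k = 1
    encode(["Female"], gender_vocab),      # one-hot, k = 1
    encode(["Bag", "Book"], cate_vocab),   # multi-hot, k = 2
    encode(["Book"], cate_vocab),          # one-hot, k = 1
])
print(x)  # group-wise concatenation of sparse binary vectors
```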
4.2 Base Model(Embedding&MLP) Most of the popular model structures [3, 4, 21] share a similar Embedding&MLP paradigm, which we refer to as base model, as shown in the left of Fig.2. It consists of several parts:
Embedding layer. As the inputs are high dimensional binary vectors, an embedding layer is used to transform them into low dimensional dense representations. For the $i$-th feature group $t_i$, let $W^i = [w^i_1, \ldots, w^i_j, \ldots, w^i_{K_i}] \in \mathbb{R}^{D \times K_i}$ represent the $i$-th embedding dictionary, where $w^i_j \in \mathbb{R}^D$ is an embedding vector with dimensionality $D$. The embedding operation follows a table lookup mechanism, as illustrated in Fig.2.
• If $t_i$ is a one-hot vector with $j$-th element $t_i[j] = 1$, the embedded representation of $t_i$ is a single embedding vector $e_i = w^i_j$.
• If $t_i$ is a multi-hot vector with $t_i[j] = 1$ for $j \in \{i_1, i_2, \ldots, i_k\}$, the embedded representation of $t_i$ is a list of embedding vectors: $\{e_{i_1}, e_{i_2}, \ldots, e_{i_k}\} = \{w^i_{i_1}, w^i_{i_2}, \ldots, w^i_{i_k}\}$.
Pooling layer and Concat layer. Notice that different users have different numbers of behaviors. Thus the number of non-zero values for multi-hot behavioral feature vector ti varies across in- stances, causing the lengths of the corresponding list of embedding vectors to be variable. As fully connected networks can only handle fixed-length inputs, it is a common practice [3, 4] to transform the list of embedding vectors via a pooling layer to get a fixed-length vector:
$e_i = \mathrm{pooling}(e_{i_1}, e_{i_2}, \ldots, e_{i_k}).$   (1)
Figure 2: Network Architecture. The left part illustrates the network of base model (Embedding&MLP). Embeddings of cate_id, shop_id and goods_id belong to one goods are concatenated to represent one visited goods in userâs behaviors. Right part is our proposed DIN model. It introduces a local activation unit, with which the representation of user interests varies adaptively given different candidate ads.
Two most commonly used pooling layers are sum pooling and average pooling, which apply element-wise sum/average operations to the list of embedding vectors.
Both embedding and pooling layers operate in a group-wise manner, mapping the original sparse features into multiple fixed- length representation vectors. Then all the vectors are concatenated together to obtain the overall representation vector for the instance.
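The following sketch (NumPy only, with illustrative dictionary sizes and embedding dimensionality, not the values used in the paper) shows the embedding lookup, sum pooling and group-wise concatenation that produce the fixed-length input of the MLP.

```python
import numpy as np

D = 4                                     # embedding dimensionality (illustrative)
rng = np.random.default_rng(0)
W_cate = rng.normal(size=(D, 10))         # embedding dictionary W^i in R^{D x K_i}
W_weekday = rng.normal(size=(D, 7))

def embed(W, ids):
    """Table lookup: list of embedding vectors for the active ids of one group."""
    return [W[:, j] for j in ids]

def sum_pool(vectors):
    """Map a variable-length list of embeddings to one fixed-length vector."""
    return np.sum(vectors, axis=0)

# multi-hot behaviour feature -> pooled vector; one-hot features -> single vectors
e_behavior = sum_pool(embed(W_cate, [0, 1]))   # visited_cate_ids={Bag,Book}
e_weekday = embed(W_weekday, [4])[0]           # weekday=Friday
e_ad = embed(W_cate, [1])[0]                   # ad_cate_id=Book

representation = np.concatenate([e_weekday, e_behavior, e_ad])  # input to the MLP
print(representation.shape)  # (3 * D,)
```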
MLP. Given the concatenated dense representation vector, fully connected layers are used to learn the combination of features auto- matically. Recently developed methods [4, 5, 10] focus on designing structures of MLP for better information extraction.
Loss. The objective function used in base model is the negative log-likelihood function defined as:
$L = -\frac{1}{N} \sum_{(x,y) \in S} \big( y \log p(x) + (1-y)\log(1-p(x)) \big),$   (2)

where $S$ is the training set of size $N$, with $x$ as the input of the network and $y \in \{0, 1\}$ as the label, and $p(x)$ is the output of the network after the softmax layer, representing the predicted probability of sample $x$ being clicked.

4.3 The structure of Deep Interest Network
Among all the features of Table 1, user behavior features are critically important and play key roles in modeling user interests in the scenario of e-commerce applications.

The base model obtains a fixed-length representation vector of user interests by pooling all the embedding vectors over the user behavior feature group, as in Eq.(1). This representation vector stays the same for a given user, regardless of which candidate ads are shown. In this way, a user representation vector with limited dimension becomes a bottleneck for expressing the user's diverse interests. An easy way to make it expressive enough is to expand the dimension of the embedding vectors, which unfortunately increases the number of learned parameters heavily. That leads to overfitting under limited training data and adds to the burden of computation and storage, which may not be tolerable for an industrial online system.

Is there an elegant way to represent a user's diverse interests in one vector under a limited dimension? The local activation characteristic of user interests inspires us to design a novel model named deep interest network (DIN). Imagine that the young mother mentioned in section 3 visits the e-commerce site, finds the displayed new handbag cute and clicks it. Let's dissect the driving force behind this click. The displayed ad hits the related interests of this young mother by soft-searching her historical behaviors and finding that she had recently browsed similar goods such as a tote bag and a leather handbag. In other words, behaviors related to the displayed ad contribute greatly to the click action. DIN simulates this process by paying attention to the representation of locally activated interests w.r.t. the given ad. Instead of expressing all of a user's diverse interests with the same vector, DIN adaptively calculates the representation vector of user interests by taking into consideration the relevance of historical behaviors w.r.t. the candidate ad. This representation vector varies over different ads.

The right part of Fig.2 illustrates the architecture of DIN. Compared with the base model, DIN introduces a novel local activation unit and keeps the other structures the same. Specifically, activation units are applied to the user behavior features, performing a weighted sum pooling that adaptively calculates the user representation $v_U$ given a candidate ad $A$, as shown in Eq.(3):

$v_U(A) = f(v_A, e_1, e_2, \ldots, e_H) = \sum_{j=1}^{H} a(e_j, v_A)\, e_j = \sum_{j=1}^{H} w_j e_j,$   (3)

where $\{e_1, e_2, \ldots, e_H\}$ is the list of embedding vectors of behaviors of user $U$ with length $H$, and $v_A$ is the embedding vector of ad $A$. In this way, $v_U(A)$ varies over different ads. $a(\cdot)$ is a feed-forward network whose output is the activation weight, as illustrated in Fig.2. Apart from the two input embedding vectors, $a(\cdot)$ also feeds their outer product into the subsequent network, as explicit knowledge to help relevance modeling.
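A minimal NumPy sketch of the local activation unit of Eq.(3) is given below. The layer sizes are illustrative and a plain ReLU stands in for the PReLU/Dice activations used in the deployed model; only the structure (outer product as extra input, unnormalised weights, weighted sum pooling) follows the description above.

```python
import numpy as np

rng = np.random.default_rng(0)
D, H, HIDDEN = 4, 3, 8                 # embed dim, #behaviours, hidden width (illustrative)

# parameters of the activation unit a(.)
W1 = rng.normal(size=(HIDDEN, 2 * D + D * D))   # input: [e_j, v_A, outer(e_j, v_A)]
b1 = np.zeros(HIDDEN)
W2 = rng.normal(size=(1, HIDDEN))

def activation_unit(e_j, v_A):
    """Feed-forward net a(e_j, v_A) -> scalar weight (no softmax normalisation)."""
    z = np.concatenate([e_j, v_A, np.outer(e_j, v_A).ravel()])
    h = np.maximum(W1 @ z + b1, 0.0)    # ReLU stand-in for PReLU/Dice
    return float((W2 @ h)[0])

def din_user_representation(behaviors, v_A):
    """Eq.(3): v_U(A) = sum_j a(e_j, v_A) * e_j."""
    weights = np.array([activation_unit(e_j, v_A) for e_j in behaviors])
    return (weights[:, None] * behaviors).sum(axis=0), weights

E = rng.normal(size=(H, D))     # behaviour embeddings {e_1..e_H}
v_A = rng.normal(size=D)        # candidate ad embedding
v_U, w = din_user_representation(E, v_A)
print(v_U.shape, w)             # the user representation varies with the candidate ad
```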
The local activation unit of Eq.(3) shares similar ideas with attention methods developed for the NMT task [1]. However, different from traditional attention methods, the constraint of $\sum_i w_i = 1$ is relaxed in Eq.(3), aiming to preserve the intensity of user interests. That is, normalization with softmax on the output of $a(\cdot)$ is abandoned. Instead, the value of $\sum_i w_i$ is treated as an approximation of the intensity of activated user interests to some degree. For example, suppose one user's historical behaviors contain 90% clothes and 10% electronics. Given two candidate ads of a T-shirt and a phone, the T-shirt activates most of the historical behaviors belonging to clothes and may get a larger value of $v_U(A)$ (higher intensity of interest) than the phone. Traditional attention methods lose the resolution on the numerical scale of $v_U(A)$ by normalizing the output of $a(\cdot)$.
We have tried LSTM to model user historical behavior data in the sequential manner. But it shows no improvement. Different from text which is under the constraint of grammar in NLP task, the sequence of user historical behaviors may contain multiple concurrent interests. Rapid jumping and sudden ending over these interests causes the sequence data of user behaviors to seem to be noisy. A possible direction is to design special structures to model such data in a sequence way. We leave it for future research.
5 TRAINING TECHNIQUES In the advertising system in Alibaba, numbers of goods and users scale up to hundreds of millions. Practically, training industrial deep networks with large scale sparse input features is of great challenge. In this section, we introduce two important techniques which are proven to be helpful in practice.
5.1 Mini-batch Aware Regularization Overfitting is a critical challenge for training industrial networks. For example, with the addition of fine-grained features, such as features of goods_ids with dimensionality of 0.6 billion (including visited_goods_ids of the user and goods_id of the ad as described in Table 1), model performance falls rapidly after the first epoch during training without regularization, as the dark green line shown in Fig.4 in later section 6.5. It is not practical to directly apply traditional regularization methods, such as $\ell_2$ and $\ell_1$ regularization, when training networks with sparse inputs and hundreds of millions of parameters. Take $\ell_2$ regularization as an example. Only the parameters of non-zero sparse features appearing in each mini-batch need to be updated in SGD based optimization without regularization. However, when adding $\ell_2$ regularization, the L2-norm has to be calculated over the whole parameter set for each mini-batch, which leads to extremely heavy computation and is unacceptable with parameters scaling up to hundreds of millions.
Figure 3: Control function of PReLU and Dice.
In this paper, we introduce an efficient mini-batch aware regularizer, which only calculates the L2-norm over the parameters of sparse features appearing in each mini-batch and thus makes the computation feasible. In fact, it is the embedding dictionary that contributes most of the parameters of CTR networks and gives rise to the heavy computation. Let $W \in \mathbb{R}^{D \times K}$ denote the parameters of the whole embedding dictionary, with $D$ the dimensionality of the embedding vectors and $K$ the dimensionality of the feature space. Expanding the $\ell_2$ regularization on $W$ over samples gives
$L_2(W) = \lVert W \rVert_2^2 = \sum_{j=1}^{K} \lVert w_j \rVert_2^2 = \sum_{(x,y) \in S} \sum_{j=1}^{K} \frac{I(x_j \neq 0)}{n_j} \lVert w_j \rVert_2^2,$   (4)
where $w_j \in \mathbb{R}^D$ is the $j$-th embedding vector, $I(x_j \neq 0)$ denotes whether the instance $x$ has feature id $j$, and $n_j$ denotes the number of occurrences of feature id $j$ in all samples. Eq.(4) can be transformed into Eq.(5) in a mini-batch aware manner:
$L_2(W) = \sum_{j=1}^{K} \sum_{m=1}^{B} \sum_{(x,y) \in \mathcal{B}_m} \frac{I(x_j \neq 0)}{n_j} \lVert w_j \rVert_2^2,$   (5)

where $B$ denotes the number of mini-batches and $\mathcal{B}_m$ denotes the $m$-th mini-batch. Let $\alpha_{mj} = \max_{(x,y) \in \mathcal{B}_m} I(x_j \neq 0)$ denote whether there is at least one instance having feature id $j$ in mini-batch $\mathcal{B}_m$. Then Eq.(5) can be approximated by
$L_2(W) \approx \sum_{j=1}^{K} \sum_{m=1}^{B} \frac{\alpha_{mj}}{n_j} \lVert w_j \rVert_2^2.$   (6)
In this way, we derive an approximated mini-batch aware version of $\ell_2$ regularization. For the $m$-th mini-batch, the gradient w.r.t. the embedding weights of feature $j$ is
$w_j \leftarrow w_j - \eta \left[ \frac{1}{|\mathcal{B}_m|} \sum_{(x,y) \in \mathcal{B}_m} \frac{\partial L(p(x), y)}{\partial w_j} + \lambda \frac{\alpha_{mj}}{n_j} w_j \right],$   (7)
in which only parameters of features appearing in m-th mini-batch participate in the computation of regularization.
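A sketch of an update in the spirit of Eq.(7), in plain NumPy. It assumes the gradient of the data loss has already been averaged over the mini-batch and represents the embedding dictionary with rows indexed by feature id; only ids that occur in the batch ($\alpha_{mj} = 1$) receive the penalty, scaled by the global occurrence counts $n_j$.

```python
import numpy as np

def mba_step(W, grad_loss, batch_feature_ids, n_occurrence, lr=0.01, lam=0.01):
    """One SGD step with mini-batch aware l2 regularization (Eq.(7)-style sketch).

    W                 : (K, D) embedding dictionary, one row per feature id
    grad_loss         : (K, D) gradient of the data loss, averaged over the batch
    batch_feature_ids : set of ids j appearing in this mini-batch (alpha_mj = 1)
    n_occurrence      : (K,) number of occurrences of each feature id in all samples
    """
    W = W - lr * grad_loss
    for j in batch_feature_ids:              # only ids seen in the batch are penalised
        W[j] -= lr * lam * W[j] / max(n_occurrence[j], 1)
    return W

# toy usage
K, D = 6, 3
W = np.random.default_rng(0).normal(size=(K, D))
grad = np.zeros((K, D))
W = mba_step(W, grad, batch_feature_ids={1, 4},
             n_occurrence=np.array([5, 2, 9, 1, 7, 3]))
```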
5.2 Data Adaptive Activation Function PReLU [12] is a commonly used activation function
$f(s) = \begin{cases} s & \text{if } s > 0 \\ \alpha s & \text{if } s \le 0 \end{cases} \; = \; p(s) \cdot s + (1 - p(s)) \cdot \alpha s,$   (8)
where $s$ is one dimension of the input of the activation function $f(\cdot)$ and $p(s) = I(s > 0)$ is an indicator function which controls $f(s)$ to switch between the two channels $f(s) = s$ and $f(s) = \alpha s$. $\alpha$ in the second channel is a learned parameter. Here we refer to $p(s)$ as the control function. The left part of Fig.3 plots the control function of PReLU. PReLU takes a hard rectified point at the value 0, which may not be suitable when the inputs of each layer follow different distributions. Taking this into consideration, we design a novel data adaptive activation function named Dice,
$f(s) = p(s) \cdot s + (1 - p(s)) \cdot \alpha s, \qquad p(s) = \frac{1}{1 + e^{-\frac{s - E[s]}{\sqrt{Var[s] + \epsilon}}}},$   (9)
with the control function plotted in the right part of Fig.3. In the training phase, $E[s]$ and $Var[s]$ are the mean and variance of the input in each mini-batch. In the testing phase, $E[s]$ and $Var[s]$ are calculated by moving averages of $E[s]$ and $Var[s]$ over the data. $\epsilon$ is a small constant, set to $10^{-8}$ in our practice.
Dice can be viewed as a generalization of PReLU. The key idea of Dice is to adaptively adjust the rectified point according to the distribution of the input data, setting it to the mean of the input. Besides, Dice switches smoothly between the two channels. When $E[s] = 0$ and $Var[s] = 0$, Dice degenerates into PReLU.
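A minimal sketch of Dice (Eq.(9)) for one mini-batch, in NumPy. The mini-batch statistics are used directly; the moving-average statistics applied at test time are omitted for brevity.

```python
import numpy as np

def dice(s, alpha=0.1, eps=1e-8):
    """Data adaptive activation: p(s)*s + (1-p(s))*alpha*s, with
    p(s) = sigmoid((s - E[s]) / sqrt(Var[s] + eps)) computed per mini-batch."""
    mean = s.mean(axis=0)
    var = s.var(axis=0)
    p = 1.0 / (1.0 + np.exp(-(s - mean) / np.sqrt(var + eps)))
    return p * s + (1.0 - p) * alpha * s

batch = np.random.default_rng(0).normal(loc=2.0, size=(32, 8))  # inputs of one layer
out = dice(batch)   # the rectified point adapts to the input mean instead of 0
```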
6 EXPERIMENTS In this section, we present our experiments in detail, including datasets, evaluation metric, experimental setup, model comparison and the corresponding analysis. Experiments on two public datasets with user behaviors as well as a dataset collected from the display advertising system in Alibaba demonstrate the effectiveness of proposed approach which outperforms state-of-the-art methods on the CTR prediction task. Both the public datasets and experiment codes are made available1.
6.1 Datasets and Experimental Setup Amazon Dataset2. Amazon Dataset contains product reviews and metadata from Amazon, and is used as a benchmark dataset [13, 18, 23]. We conduct experiments on a subset named Electronics, which contains 192,403 users, 63,001 goods, 801 categories and 1,689,188 samples. User behaviors in this dataset are rich, with more than 5 reviews for each user and each goods item. Features include goods_id, cate_id, user reviewed goods_id_list and cate_id_list. Let all behaviors of a user be (b1, b2, ..., bk, ..., bn); the task is to predict the (k+1)-th reviewed goods by making use of the first k reviewed goods. The training dataset is generated with k = 1, 2, ..., n - 2 for each user. In the test set, we predict the last one given the first n - 1 reviewed goods. For all models, we use SGD as the optimizer with exponential decay, in which the learning rate starts at 1 and the decay rate is set to 0.1. The mini-batch size is set to 32.
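A short sketch of how training and test instances can be derived from one user's ordered review behaviors, as described above; the goods ids are placeholders.

```python
def build_instances(behaviors):
    """behaviors: ordered list of reviewed goods ids (b_1, ..., b_n) of one user.

    Training: for k = 1..n-2, use the first k goods to predict the (k+1)-th.
    Testing : use the first n-1 goods to predict the last one.
    """
    n = len(behaviors)
    train = [(behaviors[:k], behaviors[k]) for k in range(1, n - 1)]
    test = (behaviors[: n - 1], behaviors[n - 1])
    return train, test

train, test = build_instances(["g1", "g2", "g3", "g4", "g5"])
# train: ([g1], g2), ([g1, g2], g3), ([g1, g2, g3], g4);  test: ([g1..g4], g5)
```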
MovieLens Dataset3. MovieLens data [11] contains 138,493 users, 27,278 movies, 21 categories and 20,000,263 samples. To make it suitable for the CTR prediction task, we transform it into binary classification data. The original user rating of the movies is a value ranging from 0 to 5. We label the samples with rating 4 or 5 as positive and the rest as negative. We segment the data into training and testing datasets based on userID. Among all 138,493 users, 100,000 are randomly selected into the training set (about 14,470,000 samples) and the remaining 38,493 into the test set (about 5,530,000 samples). The task is to predict whether a user will rate a given movie above 3 (positive label) based on historical behaviors. Features include movie_id, movie_cate_id and user rated movie_id_list, movie_cate_id_list. We use the same optimizer, learning rate and mini-batch size as described for the Amazon Dataset. Alibaba Dataset. We collected traffic logs from the online display advertising system in Alibaba, of which two weeks' samples are used for training and samples of the following day for testing. The sizes of the training and testing sets are about 2 billion and 0.14 billion respectively. For all the deep models, the dimensionality of the embedding vector is 12 for the whole 16 groups of features. The MLP layer sizes are set to 192 x 200 x 80 x 2. Due to the huge size of the data, we set the mini-batch size to 5000 and use Adam [15] as the optimizer. We
2 http://jmcauley.ucsd.edu/data/amazon/
3 https://grouplens.org/datasets/movielens/20m/
# Table 2: Statistics of datasets used in this paper.
| Dataset | Users | Goods^a | Categories | Samples |
| --- | --- | --- | --- | --- |
| Amazon(Electro). | 192,403 | 63,001 | 801 | 1,689,188 |
| MovieLens. | 138,493 | 27,278 | 21 | 20,000,263 |
| Alibaba. | 60 million | 0.6 billion | 100,000 | 2.14 billion |
a For MovieLens dataset, goods refer to be movies.
apply exponential decay, in which learning rate starts at 0.001 and decay rate is set to 0.9.
The statistics of all the above datasets is shown in Table 2. Volume of Alibaba Dataset is much larger than both Amazon and MovieLens, which brings more challenges.
# 6.2 Competitors
• LR [19]. Logistic regression (LR) was a widely used shallow model for the CTR prediction task before deep networks. We implement it as a weak baseline.
• BaseModel. As introduced in section 4.2, BaseModel follows the Embedding&MLP architecture and is the base of most subsequently developed deep networks for CTR modeling. It acts as a strong baseline for our model comparison.
• Wide&Deep [4]. In real industrial applications, the Wide&Deep model has been widely accepted. It consists of two parts: i) a wide model, which handles the manually designed cross-product features, and ii) a deep model, which automatically extracts nonlinear relations among features and is identical to the BaseModel. Wide&Deep needs expert feature engineering on the input of the "wide" module. We follow the practice in [10] to take the cross-product of user behaviors and candidates as wide inputs. For example, in the MovieLens dataset, this refers to the cross-product of user rated movies and candidate movies.
• PNN [5]. PNN can be viewed as an improved version of BaseModel, introducing a product layer after the embedding layer to capture high-order feature interactions.
• DeepFM [10]. It uses a factorization machine as the "wide" module in Wide&Deep, saving the feature engineering effort.
6.3 Metrics In the CTR prediction field, AUC is a widely used metric [8]. It measures the goodness of order by ranking all the ads with predicted CTR, including intra-user and inter-user orders. A variation of user-weighted AUC is introduced in [7, 13], which measures the goodness of intra-user order by averaging AUC over users and is shown to be more relevant to online performance in display advertising systems. We adopt this metric in our experiments. For simplicity, we still refer to it as AUC. It is calculated as follows:
$\mathrm{AUC} = \frac{\sum_{i=1}^{n} \#\mathrm{impression}_i \times \mathrm{AUC}_i}{\sum_{i=1}^{n} \#\mathrm{impression}_i},$   (10)
where n is the number of users, #impressioni and AUCi are the number of impressions and AUC corresponding to the i-th user.
Besides, we follow [25] and introduce the RelaImpr metric to measure relative improvement over models. For a random guesser, the value of AUC is 0.5. Hence RelaImpr is defined as:

$\mathrm{RelaImpr} = \left( \frac{\mathrm{AUC}(\text{measured model}) - 0.5}{\mathrm{AUC}(\text{base model}) - 0.5} - 1 \right) \times 100\%.$   (11)
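A small sketch of both metrics, assuming the per-user AUCs have already been computed elsewhere; the numeric example reuses the DIN and BaseModel AUCs reported later in Table 5.

```python
def weighted_auc(user_impressions, user_aucs):
    """Eq.(10): impression-weighted average of per-user AUCs."""
    total = sum(user_impressions)
    return sum(n * auc for n, auc in zip(user_impressions, user_aucs)) / total

def rela_impr(measured_auc, base_auc):
    """Eq.(11): relative improvement over the base model (0.5 = random guessing)."""
    return ((measured_auc - 0.5) / (base_auc - 0.5) - 1.0) * 100.0

auc = weighted_auc([100, 300, 50], [0.61, 0.58, 0.66])
print(round(auc, 4), round(rela_impr(0.6083, 0.5970), 2))  # ~11.65% for DIN vs. BaseModel
```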
Figure 4: Performances of BaseModel with different regularizations on Alibaba Dataset. Training with fine-grained goods_ids features without regularization encounters serious overfitting after the first epoch. All the regularizations show improvement, among which our proposed mini-batch aware regularization performs best. Besides, a well trained model with goods_ids features gets higher AUC than one without them. This comes from the richer information that fine-grained features contain.
Table 3: Model Comparison on Amazon Dataset and MovieLens Dataset. All the lines calculate RelaImpr by comparing with BaseModel on each dataset respectively.
Table 4: Best AUCs of BaseModel with different regular- izations on Alibaba Dataset corresponding to Fig.4. All the other lines calculate RelaImpr by comparing with first line.
| Model | MovieLens AUC | MovieLens RelaImpr | Amazon(Electro) AUC | Amazon(Electro) RelaImpr |
| --- | --- | --- | --- | --- |
| LR | 0.7263 | -1.61% | 0.7742 | -24.34% |
| BaseModel | 0.7300 | 0.00% | 0.8624 | 0.00% |
| Wide&Deep | 0.7304 | 0.17% | 0.8637 | 0.36% |
| PNN | 0.7321 | 0.91% | 0.8679 | 1.52% |
| DeepFM | 0.7324 | 1.04% | 0.8683 | 1.63% |
| DIN | 0.7337 | 1.61% | 0.8818 | 5.35% |
| DIN with Dice^a | 0.7348 | 2.09% | 0.8871 | 6.82% |
| Regularization | AUC | RelaImpr |
| --- | --- | --- |
| Without goods_ids feature and Reg. | 0.5940 | 0.00% |
| With goods_ids feature, without Reg. | 0.5959 | 2.02% |
| With goods_ids feature and Dropout Reg. | 0.5970 | 3.19% |
| With goods_ids feature and Filter Reg. | 0.5983 | 4.57% |
| With goods_ids feature and DiFacto Reg. | 0.5954 | 1.49% |
| With goods_ids feature and MBA Reg. | 0.6031 | 9.68% |
# a Other lines except LR use PReLU as activation function.
# 6.4 Result from model comparison on Amazon Dataset and MovieLens Dataset
Table 3 shows the results on the Amazon dataset and the MovieLens dataset. All experiments are repeated 5 times and averaged results are reported. The influence of random initialization on AUC is less than 0.0002. Obviously, all the deep networks beat the LR model significantly, which indeed demonstrates the power of deep learning. PNN and DeepFM, with specially designed structures, perform better than Wide&Deep. DIN performs best among all the competitors. Especially on the Amazon Dataset with rich user behaviors, DIN stands out significantly. We attribute this to the design of the local activation unit structure in DIN. DIN pays attention to the locally related user interests by soft-searching for the parts of user behaviors that are relevant to the candidate ad. With this mechanism, DIN obtains an adaptively varying representation of user interests, greatly improving the expressive ability of the model compared with other deep networks. Besides, DIN with Dice brings further improvement over DIN, which verifies the effectiveness of the proposed data adaptive activation function Dice.
6.5 Performance of regularization As the dimension of features in both the Amazon Dataset and the MovieLens Dataset is not high (about 0.1 million), all the deep models, including our proposed DIN, do not suffer from severe overfitting. However, when it comes to the Alibaba dataset from the online advertising system, which contains higher dimensional sparse features, overfitting becomes a big challenge. For example, when training deep models with fine-grained features (e.g., features of goods_ids with dimension of 0.6 billion in Table 1), serious overfitting occurs after the first epoch without any regularization, which causes the model performance to drop rapidly, as the dark green line shown in Fig.4. For this reason, we conduct careful experiments to check the performance of several commonly used regularizations. • Dropout [22]. Randomly discard 50% of the feature ids in each
sample.
• Filter. Filter visited goods_ids by occurrence frequency in samples and keep only the most frequent ones. In our setting, the top 20 million goods_ids are kept.
• Regularization in DiFacto [16]. Parameters associated with frequent features are less over-regularized.
• MBA. Our proposed mini-batch aware regularization method (Eq.(4)). The regularization parameter λ for both DiFacto and MBA is searched and set to 0.01.
Fig.4 and Table 4 give the comparison results. Focusing on the detail of Fig.4, the model trained with fine-grained goods_ids features
brings a large improvement in the test AUC performance in the first epoch, compared with training without it. However, overfitting occurs rapidly in the case of training without regularization (dark green line). Dropout prevents quick overfitting but causes slower convergence. The frequency filter relieves overfitting to a degree. Regularization in DiFacto sets a greater penalty on goods_ids with high frequency and performs worse than the frequency filter. Our proposed mini-batch aware (MBA) regularization performs best among all the compared methods and prevents overfitting significantly.
Besides, well trained models with goods_ids features show better AUC performance than models without them. This is due to the richer information that fine-grained features contain. Considering this, although the frequency filter performs slightly better than dropout, it throws away most of the low-frequency ids and may leave less room for models to make better use of fine-grained features.
# 6.6 Result from model comparison on Alibaba Dataset
Table 5 shows the experimental results on Alibaba dataset with full feature sets as shown in Table 1. As expected, LR is proven to be much weaker than deep models. Making comparisons among deep models, we report several conclusions. First, under the same activa- tion function and regularization, DIN itself has achieved superior performance compared with all the other deep networks including BaseModel, Wide&Deep, PNN and DeepFM. DIN achieves 0.0059 absolute AUC gain and 6.08% RelaImpr over BaseModel. It validates again the useful design of local activation unit structure. Second, ablation study based on DIN demonstrates the effectiveness of our proposed training techniques. Training DIN with mini-batch aware regularizer brings additional 0.0031 absolute AUC gain over dropout. Besides, DIN with Dice brings additional 0.0015 absolute AUC gain over PReLU.
Taken together, DIN with MBA regularization and Dice achieves total 11.65% RelaImpr and 0.0113 absolute AUC gain over Base- Model. Even compared with competitor DeepFM which performs best on this dataset, DIN still achieves 0.009 absolute AUC gain. It is notable that in commercial advertising systems with hundreds of millions of traffics, 0.001 absolute AUC gain is significant and worthy of model deployment empirically. DIN shows great superi- ority to better understand and make use of the characteristics of user behavior data. Besides, the two proposed techniques further improve model performance and provide powerful help for training large scale industrial deep networks.
6.7 Result from online A/B testing Careful online A/B testing in the display advertising system in Alibaba was conducted from 2017-05 to 2017-06. During almost a month's testing, DIN trained with the proposed regularizer and activation function contributed up to 10.0% CTR and 3.8% RPM (Revenue Per Mille) promotion4 compared with the introduced BaseModel, the last version of our online-serving model. This is a significant improvement and demonstrates the effectiveness of our proposed approaches. Now DIN has been deployed online and serves the main traffic.
4In our real advertising system, ads are ranked by CTRα · bid-price with α > 1.0, which controls the balance of promotion of CTR and RPM.
Table 5: Model Comparison on Alibaba Dataset with full feature sets. All the lines calculate RelaImpr by comparing with BaseModel. DIN significantly outperforms all the other competitors. Besides, training DIN with our proposed mini- batch aware regularizer and Dice activation function brings further improvements.
| Model | AUC | RelaImpr |
| --- | --- | --- |
| LR | 0.5738 | -23.92% |
| BaseModel^{a,b} | 0.5970 | 0.00% |
| Wide&Deep^{a,b} | 0.5977 | 0.72% |
| PNN^{a,b} | 0.5983 | 1.34% |
| DeepFM^{a,b} | 0.5993 | 2.37% |
| DIN Model^{a,b} | 0.6029 | 6.08% |
| DIN with MBA Reg.^a | 0.6060 | 9.28% |
| DIN with Dice^b | 0.6044 | 7.63% |
| DIN with MBA Reg. and Dice | 0.6083 | 11.65% |
a These lines are trained with PReLU as the activation function. b These lines are trained with dropout regularization.
It is worth mentioning that online serving of industrial deep networks is not an easy job with hundreds of millions of users vis- iting our system everyday. Even worse, at traffic peak our system serves more than 1 million users per second. It is required to make realtime CTR predictions with high throughput and low latency. For example, in our real system we need to predict hundreds of ads for each visitor in less than 10 milliseconds. In our practice, several important techniques are deployed for accelerating online serving of industrial deep networks under the CPU-GPU archi- tecture: i) request batching which merges adjacent requests from CPU to take advantage of GPU power, ii) GPU memory optimization which improves the access pattern to reduce wasted transactions in GPU memory, iii) concurrent kernel computation which allows execution of matrix computations to be processed with multiple CUDA kernels concurrently. In all, optimization of these techniques doubles the QPS (Query Per Second) capacity of a single machine practically. Online serving of DIN also benefits from this.
6.8 Visualization of DIN Finally we conduct a case study to reveal the inner structure of DIN on the Alibaba dataset. We first examine the effectiveness of the local activation unit. Fig.5 illustrates the activation intensity of user behaviors with respect to a candidate ad. As expected, behaviors with high relevance to the candidate ad are weighted high.
We then visualize the learned embedding vectors. Taking the young mother mentioned before as an example, we randomly select 9 categories (dress, sport shoes, bags, etc.) and 100 goods of each category as the candidate ads for her. Fig.6 shows the visualization of the embedding vectors of goods with t-SNE [17] learned by DIN, in which points with the same shape correspond to the same category. We can see that goods of the same category almost belong to one cluster, which shows the clustering property of DIN embeddings clearly. Besides, we color the points that represent candidate ads by the prediction value. Fig.6 is thus also a heat map of this mother's interest density distribution for potential candidates in the embedding space. It shows DIN can form a multimodal interest density distribution in
Figure 5: Illustration of adaptive activation in DIN. Behav- iors with high relevance to candidate ad get high activation weight.
Figure 6: Visualization of embeddings of goods in DIN. Shape of points represents category of goods. Color of points corresponds to CTR prediction value.
candidates' embedding space for a certain user to capture his/her diverse interests.
7 CONCLUSIONS In this paper, we focus on the task of CTR prediction modeling in the scenario of display advertising in the e-commerce industry with rich user behavior data. The use of fixed-length representations in traditional deep CTR models is a bottleneck for capturing the diversity of user interests. To improve the expressive ability of the model, a novel approach named DIN is designed to activate related user behaviors and obtain an adaptive representation vector for user interests which varies over different ads. Besides, two novel techniques are introduced to help train industrial deep networks and further improve the performance of DIN. They can be easily generalized to other industrial deep learning tasks. DIN has now been deployed in the online display advertising system in Alibaba.
REFERENCES [1] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural Machine Translation by Jointly Learning to Align and Translate. In Proceedings of the 3rd International Conference on Learning Representations.
[2] Yoshua Bengio, Réjean Ducharme, et al. 2003. A neural probabilistic language model. Journal of Machine Learning Research (2003), 1137-1155.
[3] Paul Covington, Jay Adams, and Emre Sargin. 2016. Deep neural networks for youtube recommendations. In Proceedings of the 10th ACM Conference on Recommender Systems. ACM, 191-198.
[4] Cheng H. et al. 2016. Wide & deep learning for recommender systems. In Pro- ceedings of the 1st Workshop on Deep Learning for Recommender Systems. ACM. [5] Qu Y. et al. 2016. Product-Based Neural Networks for User Response Prediction.
In Proceedings of the 16th International Conference on Data Mining.
[6] Wang H. et al. 2018. DKN: Deep Knowledge-Aware Network for News Recommendation. In Proceedings of the 26th International World Wide Web Conference.
[7] Zhu H. et al. 2017. Optimized Cost per Click in Taobao Display Advertising. In Proceedings of the 23rd International Conference on Knowledge Discovery and Data Mining. ACM, 2191-2200.
[8] Tom Fawcett. 2006. An introduction to ROC analysis. Pattern Recognition Letters 27, 8 (2006), 861-874.
[9] Kun Gai, Xiaoqiang Zhu, et al. 2017. Learning Piece-wise Linear Models from Large Scale Data for Ad Click Prediction. arXiv preprint arXiv:1704.05194 (2017).
[10] Huifeng Guo, Ruiming Tang, et al. 2017. DeepFM: A Factorization-Machine based Neural Network for CTR Prediction. In Proceedings of the 26th International Joint Conference on Artificial Intelligence. 1725-1731.
[11] F. Maxwell Harper and Joseph A. Konstan. 2015. The MovieLens Datasets: History and Context. ACM Transactions on Interactive Intelligent Systems 5, 4 (2015).
[12] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2015. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proceedings of the IEEE International Conference on Computer Vision. 1026-1034.
[13] Ruining He and Julian McAuley. 2016. Ups and Downs: Modeling the Visual Evolution of Fashion Trends with One-Class Collaborative Filtering. In Proceedings of the 25th International Conference on World Wide Web. 507-517. https://doi.org/10.1145/2872427.2883037
[14] Gao Huang, Zhuang Liu, Laurens van der Maaten, and Kilian Q. Weinberger. Densely connected convolutional networks.
[15] Diederik Kingma and Jimmy Ba. 2015. Adam: A Method for Stochastic Optimization. In Proceedings of the 3rd International Conference on Learning Representations.
[16] Mu Li, Ziqi Liu, Alexander J Smola, and Yu-Xiang Wang. 2016. DiFacto: Distributed factorization machines. In Proceedings of the 9th ACM International Conference on Web Search and Data Mining. 377-386.
[17] Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. Journal of Machine Learning Research 9, Nov (2008), 2579-2605.
[18] Julian McAuley, Christopher Targett, Qinfeng Shi, and Anton Van Den Hengel. Image-Based Recommendations on Styles and Substitutes. In Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval. 43-52.
[19] H. Brendan McMahan, Gary Holt, et al. 2014. Ad Click Prediction: a View from the Trenches. In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 1222-1230.
[20] Steffen Rendle. 2010. Factorization machines. In Proceedings of the 10th International Conference on Data Mining. IEEE, 995-1000.
[21] Ying Shan, T Ryan Hoens, Jian Jiao, Haijing Wang, Dong Yu, and JC Mao. Deep Crossing: Web-scale modeling without manually crafted combinatorial features.
[22] Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research 15, 1 (2014), 1929-1958.
[23] Andreas Veit, Balazs Kovacs, et al. 2015. Learning Visual Clothing Style With Heterogeneous Dyadic Co-Occurrences. In Proceedings of the IEEE International Conference on Computer Vision.
[24] Ronald J Williams and David Zipser. 1989. A learning algorithm for continually running fully recurrent neural networks. Neural Computation (1989), 270-280.
[25] Ling Yan, Wu-jun Li, Gui-Rong Xue, and Dingyi Han. 2014. Coupled group lasso for web-scale ctr prediction in display advertising. In Proceedings of the 31st International Conference on Machine Learning. 802-810.
[26] Shuangfei Zhai, Keng-hao Chang, Ruofei Zhang, and Zhongfei Mark Zhang. 2016. Deepintent: Learning attentions for online advertising with recurrent neural networks. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 1295-1304.
[27] Chang Zhou, Jinze Bai, Junshuai Song, et al. 2018. ATRank: An Attention-Based User Behavior Modeling Framework for Recommendation. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence. | {
"id": "1704.05194"
} |
1706.06551 | Grounded Language Learning in a Simulated 3D World | We are increasingly surrounded by artificially intelligent technology that
takes decisions and executes actions on our behalf. This creates a pressing
need for general means to communicate with, instruct and guide artificial
agents, with human language the most compelling means for such communication.
To achieve this in a scalable fashion, agents must be able to relate language
to the world and to actions; that is, their understanding of language must be
grounded and embodied. However, learning grounded language is a notoriously
challenging problem in artificial intelligence research. Here we present an
agent that learns to interpret language in a simulated 3D environment where it
is rewarded for the successful execution of written instructions. Trained via a
combination of reinforcement and unsupervised learning, and beginning with
minimal prior knowledge, the agent learns to relate linguistic symbols to
emergent perceptual representations of its physical surroundings and to
pertinent sequences of actions. The agent's comprehension of language extends
beyond its prior experience, enabling it to apply familiar language to
unfamiliar situations and to interpret entirely novel instructions. Moreover,
the speed with which this agent learns new words increases as its semantic
knowledge grows. This facility for generalising and bootstrapping semantic
knowledge indicates the potential of the present approach for reconciling
ambiguous natural language with the complexity of the physical world. | http://arxiv.org/pdf/1706.06551 | Karl Moritz Hermann, Felix Hill, Simon Green, Fumin Wang, Ryan Faulkner, Hubert Soyer, David Szepesvari, Wojciech Marian Czarnecki, Max Jaderberg, Denis Teplyashin, Marcus Wainwright, Chris Apps, Demis Hassabis, Phil Blunsom | cs.CL, cs.LG, stat.ML | 16 pages, 8 figures | null | cs.CL | 20170620 | 20170626 | 7 1 0 2 n u J 6 2 ] L C . s c [
# Grounded Language Learning in a Simulated 3D World
Karl Moritz Hermann*†, Felix Hill*, Simon Green, Fumin Wang, Ryan Faulkner, Hubert Soyer, David Szepesvari, Wojciech Marian Czarnecki, Max Jaderberg, Denis Teplyashin, Marcus Wainwright, Chris Apps, Demis Hassabis and Phil Blunsom†
# Deepmind London, UK
# Abstract
We are increasingly surrounded by artificially intelligent technology that takes decisions and executes actions on our behalf. This creates a pressing need for general means to communicate with, instruct and guide artificial agents, with human language the most compelling means for such communication. To achieve this in a scalable fashion, agents must be able to relate language to the world and to actions; that is, their understanding of language must be grounded and embodied. However, learning grounded language is a notoriously challenging problem in artificial intelligence research. Here we present an agent that learns to interpret language in a simulated 3D environment where it is rewarded for the successful execution of written instructions. Trained via a combination of reinforcement and unsupervised learning, and beginning with minimal prior knowledge, the agent learns to relate linguistic symbols to emergent perceptual representations of its physical surroundings and to pertinent sequences of actions. The agent's comprehension of language extends beyond its prior experience, enabling it to apply familiar language to unfamiliar situations and to interpret entirely novel instructions. Moreover, the speed with which this agent learns new words increases as its semantic knowledge grows. This facility for generalising and bootstrapping semantic knowledge indicates the potential of the present approach for reconciling ambiguous natural language with the complexity of the physical world.
# 1. Introduction
Endowing machines with the ability to relate language to the physical world is a long-standing challenge for the development of Artificial Intelligence. As situated intelligent technology becomes ubiquitous, the development of computational approaches to understanding grounded language has become critical to human-AI interaction. Beginning with Winograd (1972), early attempts to ground language understanding in a physical world were constrained by their reliance on the laborious hard-coding of linguistic and physical rules. Modern devices with voice control may appear more competent but suffer from the same limitation in that their language understanding components are mostly rule-based and do not generalise or scale beyond their programmed domains.
*. These authors contributed equally to this work. †. Corresponding authors: kmh@google.com and pblunsom@google.com.
This work presents a novel paradigm for simulating language learning and understanding. The approach differs from conventional computational language learning in that the learning and understanding take place with respect to a continuous, situated environment. Simultaneously, we go beyond rule-based approaches to situated language understanding as our paradigm requires agents to learn end-to-end the grounding for linguistic expressions in the context of using language to complete tasks given only pixel-level visual input.
The initial experiments presented in this paper take place in an extended version of the DeepMind Lab (Beattie et al., 2016) environment, where agents are tasked with finding and picking up objects based on a textual description of each task. While the paradigm outlined gives rise to a large number of possible learning tasks, even the simple setup of object retrieval presents challenges for conventional machine learning approaches. Critically, we find that language learning is contingent on a combination of reinforcement (reward-based) and unsupervised learning. By combining these techniques, our agents learn to connect words and phrases with emergent representations of the visible surroundings and embodied experience. We show that the semantic knowledge acquired during this process generalises both with respect to new situations and new language. Our agents exhibit zero-shot comprehension of novel instructions, and the speed at which they acquire new words accelerates as their semantic knowledge grows. Further, by employing a curriculum training regime, we train a single agent to execute phrasal instructions pertaining to multiple tasks requiring distinct action policies as well as lexical semantic and object knowledge.1
# 2. Related work
Learning semantic grounding without prior knowledge is notoriously difficult, given the limitless possible referents for each linguistic expression (Quine, 1960). A learner must discover correlations in a stream of low level inputs, relate these correlations to both its own actions and to linguistic expressions and retain these relationships in memory. Perhaps unsurprisingly, the few systems that attempt to learn language grounding in artificial agents do so with respect to environments that are far simpler than the continuous, noisy sensory experience encountered by humans (Steels, 2008; Roy and Pentland, 2002; Krening et al., 2016; Yu et al., 2017).
The idea of programming computers to understand how to relate language to a simulated physical environment was pioneered in the seminal work of Winograd (1972). His SHRDLU system was programmed to understand user generated language containing a small number of words and predicates, to execute corresponding actions or to ask questions requesting more information. While initially impressive, this system required that all of the syntax and semantics (in terms of the physical world) of each word be hard coded a priori, and thus it was unable to learn new concepts or actions. Such rule-based approaches to language understanding have come to be considered too brittle to scale to the full complexities of natural language. Since this early work, research on language grounding has taken place across a number of disciplines, primarily in robotics, computer vision and computational linguistics. Research in both natural language processing and computer vision has pointed to the importance of cross modal approaches to grounded concept learning. For instance, it was shown that learnt concept representation spaces more faithfully reï¬ect human semantic
1. See https://youtu.be/wJjdu1bPJ04 for a video of the trained agents.
intuitions if induced from information about the perceptible properties of objects as well as from raw text (Silberer and Lapata, 2012).
Semantic parsing, as pursued the ï¬eld of natural language processing, has predominantly focussed on building a compositional mapping from natural language to formal semantic representations that are then grounded in a database or knowledge graph (Zettlemoyer and Collins, 2005; Berant et al., 2013). The focus of this direction of work is on the compositional mapping between the two abstract modalities, natural language and logical form, where the grounding is usually discrete and high level. This is in contrast to the work presented in this paper where we focus on learning to ground language in low level perception and actions. Siskind (1995) represents an early attempt to ground language in perception by seeking to link objects and events in stick-ï¬gure animations to language. Broadly this can be seen as a precursor to more recent work on mapping language to actions in video and similar modalities (Siskind, 2001; Chen and Mooney, 2008; Yu and Siskind, 2013). In a similar vein, the work of Roy and Pentland (2002) applies machine learning to aspects of grounded language learning, connecting speech or text input with images, videos or even robotic controllers. These systems consisted of modular pipelines in which machine learning was used to optimise individual components while complementing hard-coded representations of the input data. Within robotics, there has been interest in using language to facilitate human-robot communication, as part of which it is necessary to devise mechanisms for grounding a perceptible environment with language (Hemachandra et al., 2014; Walter et al., 2014). In general, the amount of actual learning in these prior works is heavily constrained, either through the extensive use of hand-written grammars and mechanisms to support the grounding, or through simpliï¬cation in terms of the setup and environment. Other related work focuses on language grounding from the perspective of human- machine communication (Thomason et al., 2015; Wang et al., 2016; Arumugam et al., 2017). The key diï¬erence between these approaches and our work is that here again language is grounded to highly structured environments, as opposed to the continuous perceptible input our learning environment provides.
In the field of computer vision, image classification (Krizhevsky et al., 2012) can be interpreted as aligning visual data and semantic or lexical concepts. Moreover, neural networks can effectively map image or video representations from these classification networks to human-written image captions. These mappings can also yield plausible descriptions of visual scenes that were not observed during training (Xu et al., 2015; Vendrov et al., 2015). However, unlike our approach, these captioning models typically learn visual and linguistic processing and representation from fixed datasets as part of two separate, independent optimisations. Moreover, they do not model the grounding of linguistic symbols in actions or a visual stimuli that constantly change based on the exploration policy of the agent.
The idea that reinforcement-style learning could play a role in language learning has been considered for decades (Chomsky, 1959). Recently, however, RL agents controlled by deep neural nets have been trained to solve tasks in both 2D (Mnih et al., 2015) and 3D (Mnih et al., 2016) environments. Our language learning agents build on these approaches and algorithms, but with an agent architecture and auxiliary unsupervised objectives that are specific to our multi-modal learning task. Other recently-proposed frameworks for interactive language learning involve unimodal (text-only) settings (Narasimhan et al., 2015; Mikolov et al., 2015).
# 3. The 3D language learning environment
To conduct our language learning experiments we integrated a language channel into a 3D simulated world (DeepMind Lab, Beattie et al. (2016)). In this environment, an agent perceives its surroundings via a constant stream of continuous visual input and a textual instruction. It perceives the world actively, controlling what it sees via movement of its visual field and exploration of its surroundings. One can specify the general configuration of layouts and possible objects in this environment together with the form of language instructions that describe how the agent can obtain rewards. While the high-level configuration of these simulations is customisable, the precise world experienced by the agent is chosen at random from billions of possibilities, corresponding to different instantiations of objects, their colours, surface patterns, relative positions and the overall layout of the 3D world.
To illustrate this setup, consider a very simple environment comprising two connected rooms, each containing two objects. To train the agent to understand simple referring expressions, the environment could be configured to issue an instruction of the form pick the X in each episode. During training, the agent experiences multiple episodes with the shape, colour and pattern of the objects themselves differing in accordance with the instruction. Thus, when the instruction is pick the pink striped ladder, the environment might contain, in random positions, a pink striped ladder (with positive reward), an entirely pink ladder, a pink striped chair and a blue striped hairbrush (all with negative reward).
It is important to emphasise the complexity of the learning challenge faced by the agent, even for a simple reference task such as this. To obtain positive rewards across multiple training episodes, the agent must learn to efficiently explore the environment and inspect candidate objects (requiring the execution of hundreds of inter-dependent actions) while simultaneously learning the (compositional) meanings of multi-word expressions and how they pertain to visual features of different objects (Figure 1).
We also construct more complex tasks pertaining to other characteristics of human language understanding, such as the generalisation of linguistic predicates to novel objects, the productive composition of words and short phrases to interpret unfamiliar instructions and the grounding of language in relations and actions as well as concrete objects.
# 4. Agent design
Our agent consists of four inter-connected modules optimised as a single neural network. At each time step t, the visual input $v_t$ is encoded by the convolutional vision module V and a recurrent (LSTM, Hochreiter and Schmidhuber (1997)) language module L encodes the instruction string $l_t$. A mixing module M determines how these signals are combined before they are passed to a two-layer LSTM action module A. The hidden state $s_t$ of the upper LSTM in A is fed to a policy function, which computes a probability distribution over possible motor actions $\pi(a_t|s_t)$, and a state-value function approximator $\mathrm{Val}(s_t)$, which computes a scalar estimate of the agent value function for optimisation. To learn from the scalar rewards that can be issued by the environment, the agent employs an actor-critic algorithm (Mnih et al., 2016).
The policy $\pi$ is a distribution over a discrete set of actions. The baseline function $\mathrm{Val}$ estimates the expected discounted future return following the state the agent is currently in.
Top down view Agent view d object nex
Figure 1: In this example, the agent begins in position 1 and immediately receives the instruction pick the red object next to the green object. It explores the two-room layout, viewing objects and their relative positions before retrieving the object that best conforms to the instruction. This exploration and selection behaviour emerges entirely from the reward-driven learning and is not preprogrammed. When training on a task such as this, there are billions of possible episodes that the agent can experience, containing diï¬erent objects in diï¬erent positions across diï¬erent room layouts.
Figure 2: Schematic organisation of the network modules (grey) supplemented with auxil- iary learning objectives (coloured components)
In other words, it approximates the state-value function Val_π(s) = E_π[ Σ_{k≥0} λ^k r_{t+k+1} | s_t = s ], where s_t is the state of the environment at time t when following policy π and r_t is the reward received following the action performed at time t. λ ∈ [0, 1] is a discount parameter. The agent's primary objective is to find a policy which maximizes the expected discounted return E_π[ Σ_{t≥0} λ^t r_t ]. We apply the Advantage Actor Critic algorithm (Mnih et al., 2016) to optimize the policy, a Softmax multinomial distribution parametrized by the agent's network, towards higher discounted returns.
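To make the optimisation concrete, the following is a minimal sketch of an advantage actor-critic update of the kind described above, written in PyTorch. The function and variable names (a3c_loss, policy_logits, bootstrap_value) and the loss weightings are illustrative assumptions, not the paper's implementation.

```python
# Minimal advantage actor-critic loss sketch (illustrative, not the paper's code).
import torch
import torch.nn.functional as F

def a3c_loss(policy_logits, values, actions, rewards, bootstrap_value,
             discount=0.99, value_coef=0.5, entropy_coef=0.001):
    """policy_logits: (T, num_actions); values, rewards: (T,); actions: (T,) long."""
    T = rewards.shape[0]
    # Discounted returns computed backwards from the bootstrapped value estimate.
    returns = torch.zeros(T)
    running = bootstrap_value
    for t in reversed(range(T)):
        running = rewards[t] + discount * running
        returns[t] = running

    log_probs = F.log_softmax(policy_logits, dim=-1)
    probs = log_probs.exp()
    chosen_log_probs = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)

    advantages = returns - values
    # Policy gradient term: advantages act as constants (detached).
    policy_loss = -(chosen_log_probs * advantages.detach()).mean()
    # Baseline (value) regression towards the discounted return.
    value_loss = value_coef * advantages.pow(2).mean()
    # Entropy bonus encourages exploration.
    entropy = -(probs * log_probs).sum(dim=-1).mean()
    return policy_loss + value_loss - entropy_coef * entropy
```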
Parameters are updated according to the RMSProp update rule (Tieleman and Hinton, 2012). We share a single parameter vector across 32 asynchronous threads. This conï¬gu- ration oï¬ers a suitable trade-oï¬ between increased speed and loss of accuracy due to the asynchronous updates (Mnih et al., 2016).
Importantly, early simulation results revealed that this initial design does not learn to solve even comparably simple tasks in our setup. As described thus far, the agent can learn only from comparatively infrequent object selection rewards, without exploiting the stream of potentially useful perceptual feedback available at each time step when exploring the environment. We address this by endowing the agent with ways to learn in an unsupervised manner from its immediate surroundings, by means of auto-regressive objectives that are applied concurrently with the reward-based learning and involve predicting or modelling aspects of the agentâs surroundings (Jaderberg et al., 2016).
Temporal autoencoding The temporal autoencoder auxiliary task tAE is designed to elicit intuitions in our agent about how the perceptible world will change as a consequence of its actions. The objective is to predict the visual environment vt+1 conditioned on the prior
visual input vt and the action at (Oh et al., 2015). Our implementation reuses the standard visual module V and combines the representation of vt with an embedded representation of at. The combined representation is passed to a deconvolutional network to predict vt+1. As well as providing a means to ï¬ne-tune the visual system V, the tAE auxiliary task results in additional training of the action-policy network, since the action representations can be shared between tAE and the policy network Ï.
Language prediction To strengthen the ability of the agent to reconcile visual and linguistic modalities we design a word prediction objective LP that estimates instruction words lt given the visual observation vt, using model parameters shared with both V and L. The LP network can also serve to make the behaviour of trained agents more interpretable, as the agent emits words that it considers to best describe what it is currently observing.
The tAE and LP auxiliary networks were optimised with mini-batch gradient descent based on the mean-squared error and negative-log-likelihood respectively. We also experi- mented with reward prediction (RP) and value replay (VR) as additional auxiliary tasks to stabilise reinforcement based training (Jaderberg et al., 2016).
Figure 2 gives a schematic organisation of the agent with all the above auxiliary learning objectives. Precise implementation details of the agent are given in Appendix A.
# 5. Experiments
In evaluating the agent, we constructed tasks designed to test its capacity to cope with various challenges inherent in language learning and understanding. We first test its ability to efficiently acquire a varied vocabulary of words pertaining to physically observable aspects of the environment. We then examine whether the agent can combine this lexical knowledge to interpret both familiar and unfamiliar word combinations (phrases). This analysis includes phrases whose meaning is dependent on word order, and cases in which the agent must induce and re-use lexical knowledge directly from (potentially ambiguous) phrases. Finally, we test the agent's ability to learn less concrete aspects of language, including instructions referring to relational concepts (Doumas et al., 2008) and phrases referring to actions and behaviours.
# 5.1 Role of unsupervised learning
Our ï¬rst experiment explored the eï¬ect of the auxiliary objectives on the ability of the agent to acquire a vocabulary of diï¬erent concrete words (and associated lexical concepts). Training consisted of multiple episodes in a single room containing two objects. For each episode, at time t = 0, the agent was spawned in a position equidistant from the two objects, and received a single-word instruction that unambiguously referred to one of the two objects. It received a reward of 1 if it walked over to and selected the correct referent object and â1 if it picked the incorrect object. A new episode began immediately after an object was selected, or if the agent had not selected either object after 300 steps. Objects and instructions were sampled at random from the full set of factors available in the simulation environment.2 We trained 16 replicas for each agent conï¬guration (Figure 3) with ï¬xed hyperparameters from
2. See Appendix B for a complete list.
the standard settings and random hyperparameters sampled uniformly from the standard ranges.3
Figure 3: Unsupervised learning via auxiliary prediction objectives facilitates word learning. Learning curves for a vocabulary acquisition task. The agent is situated in a single room faced with two objects and must select the object that correctly matches the textual instruction. A total of 59 diï¬erent words were used as instructions during training, referring to either the shape, colours, relative size (larger, smaller), relative shade (lighter, darker) or surface pattern (striped, spotted, etc.) of the target object. RP: reward predic- tion, VR: value replay, LP: language prediction, tAE: temporal autoencoder. Data show mean and conï¬dence bands (CB) across best ï¬ve of 16 hyperparameter settings sampled at random from ranges speciï¬ed in the appendix. Training episodes counts individual levels seen during training.
As shown in Figure 3, when relying on reinforcement learning alone, the agent exhibited no learning even after millions of training episodes. The fastest learning was exhibited by an agent applying both temporal auto-encoding and language prediction in conjunction with value replay and reward prediction. These results demonstrate that auto-regressive objectives can extract information that is critical for language learning from the perceptible environment, even when explicit reinforcement is not available.
Figure 4: Word learning is much faster once some words are already known The rate at which agents learned a vocabulary of 20 shape words was measured in agents in three conditions. In one condition, the agent had prior knowledge of 20 shapes and their names outside of the training data used here. In the second condition, the agent had prior knowledge of two shape words outside of the target vocabulary (same number of pre-training steps). In the third condition, the agent was trained from scratch. All agents used RP, VR, LP and tAE auxiliary objectives. Data show mean and conï¬dence bands across best ï¬ve of 16 hyperparameter settings in each condition, sampled at random from ranges speciï¬ed in Appendix C.
# 5.2 Word learning speed experiment
Before it can exhibit any lexical knowledge, the agent must learn various skills and capacities that are independent of the specifics of any particular language instruction. These include an awareness of objects as distinct from floors or walls; some capacity to sense ways in which those objects differ; and the ability to both look and move in the same direction. In addition, the agent must infer that the solution to tasks is always contingent on both visual and linguistic input, without any prior programming or explicit teaching of the importance of inter-modal interaction. Given the complexity of this learning challenge, it is perhaps unsurprising that the agent requires thousands of training episodes before evidence of word learning emerges.
To establish the importance of this âpre-linguisticâ learning, we compared the speed of vocabulary acquisition in agents with diï¬erent degrees of prior knowledge. The training set consisted of instructions (and corresponding environments) from the twenty shape terms banana, cherries, cow, ï¬ower, fork, fridge, hammer, jug, knife, pig, pincer, plant, saxo- phone, shoe, spoon, tennis-racket, tomato, tree, wine-glass and zebra. The agent with most prior knowledge was trained in advance (in a single room setting with two objects) on the remaining twenty shapes from the full environment. The agent with minimal prior knowl- edge was trained only on the two terms ball and tv. Both regimes of advanced training were stopped once the agent reached an average reward of 9.5/10 across 1,000 episodes. The agent with no prior knowledge began learning directly on the training set.
The comparison presented in Figure 4 demonstrates that much of the initial learning in an agent trained from scratch involves acquiring visual and motor, rather than expressly linguistic, capabilities. An agent already knowing two words (and therefore exhibiting rudimentary motor and visual skills) learned new words at a notably faster rate than an agent trained from scratch. Moreover, the speed of word learning appeared to accelerate as more words were learned. This shows that the acquisition of new words is supported not only by general-purpose motor-skills and perception, but also existing lexical or semantic knowledge. In other words, the agent is able to bootstrap its existing semantic knowledge to enable the acquisition of new semantic knowledge.
# 5.3 One-shot learning experiments
Two important facets of natural language understanding are the ability to compose the meanings of known words to interpret otherwise unfamiliar phrases, and the ability to generalise linguistic knowledge learned in one setting to make sense of new situations. To examine these capacities in our agent, we trained it in settings where its (linguistic or visual) experience was constrained to a training set, and simultaneously as it learned from the training set, tested the performance of the agent on situations outside of this set (Figure 5). In the colour-shape composition experiment, the training instructions were either unigrams or bigrams. Possible unigrams were the 40 shape and the 13 colour terms listed in Appendix B. The possible bigrams were any colour-shape combination except those containing the shapes ice lolly, ladder, mug, pencil, suitcase or the colours red, magenta, grey, purple (subsets selected randomly). The test instructions consisted of all possible bigrams excluded from the training set. In each training episode, the target object was
3. See Appendix C for details.
rendered to match the instruction (in colour, shape or both) and the confounding object did not correspond to any of the bigrams in the test set. Similarly, in each test episode, both the target object and the confounding object corresponded to bigrams in the test instructions. These constraints ensured that the agent could not interpret test instructions by excluding other objects or terms that it had seen in the training set.
The colour-shape decomposition / composition experiment is similar in design to the colour-shape composition experiment. The test tasks were identical, but the possible training instructions consisted only of the bigram instructions from the colour-shape com- position training set. To achieve above chance performance on the test set, the agent must therefore isolate aspects of the world that correspond to each of the constituent words in the bigram instructions (decomposition), and then build an interpretation of novel bigrams using these constituent concepts.
The relative size and relative shade experiments were designed to test the generality of agents' representation of relational concepts (in this case larger, smaller, lighter and darker). Training and testing episodes again took place in a single room with two objects. The relative size experiment involved the 16 shapes in our environment whose size could be varied while preserving their shape. The possible instructions in both training and test episodes were simply the unigrams larger and smaller. The agent was required to choose between two objects of the same shape but different size (and possibly different colour) according to the instruction. All training episodes involved target and confounding objects whose shape was either a tv, ball, balloon, cake, can, cassette, chair, guitar, hairbrush or hat. All test episodes involved objects whose shape was either an ice lolly, ladder, mug, pencil or toothbrush.
The relative shade experiment followed the same design, but the agent was presented with two objects of possibly diï¬ering shape that diï¬ered only in the shade of their colouring (e.g. one light blue and one dark blue). The training colours were green, blue, cyan, yellow, pink, brown and orange. The test colours were red, magenta, grey and purple.
When trained on colour and shape unigrams together with a limited number of colour-shape bigrams, the agent naturally understood additional colour-shape bigrams if it was familiar with both constituent words. Moreover, this ability to productively compose known words to interpret novel phrases was not contingent on explicit training of those words in isolation. When exposed only to bigram phrases during training, the agent inferred the constituent lexical concepts and reapplied these concepts to novel combinations at test time. Indeed, in this condition (the decomposition/composition case), the agent learned to generalise after fewer training instances than in the apparently simpler composition case. This can be explained by the fact that episodes involving bigram instructions convey greater information content, such that the latter condition avails the agent of more information per training episode. Critically, the agent's ability to decompose phrases into constituent (emergent) lexical concepts reflects an ability that may be essential for human-like language learning in naturalistic environments, since linguistic stimuli rarely contain words in isolation.
Another key requirement for linguistic generalisation is the ability to extend category terms beyond the speciï¬c exemplars from which those concepts were learned (Quinn et al., 1993; Rogers and McClelland, 2004). This capacity was also observed in our agent; when trained on the relational concepts larger and smaller in the context of particular shapes
Figure 5: Semantic knowledge generalises to unfamiliar language and objects. Composition (A): training covered all shape and colour unigrams and â¼ 90% of possible colour-shape bigrams, such as blue ladder. Agents were periodically tested on the remaining 10% of bigrams without updating parameters. Decomposition-composition (B): the same regime as in A, but without any training on unigram descriptors. Lighter / darker (C): agents were trained to interpret the terms lighter and darker applied to a set of colours, and tested on the terms in the context of a set of diï¬erent colours. Relative size (D): agents were trained to interpret the terms larger and smaller applied to a set of shapes, and tested on the terms in the context of a set of diï¬erent shapes. Data show mean and CB across best ï¬ve of 16 randomly sampled hyperparameter settings in each condition. See Appendix B for hyperparameter ranges and exact train/test stimuli.
it naturally applied them to novel shapes with almost perfect accuracy. In contrast, the ability to generalise lighter and darker to unfamiliar colours was signiï¬cantly above chance but less than perfect. This may be because it is particularly diï¬cult to infer the mapping corresponding to lighter and darker (as understood by humans) in an RGB colour space from the small number of examples observed during training.
Taken together, these instances of generalisation demonstrate that our agent does not simply ground language in hard coded features of the environment such as pixel activa- tions or speciï¬c action sequences, but rather learns to ground meaning in more abstract semantic representations. More practically, these results also suggest how artiï¬cial agents that are necessarily exposed to ï¬nite training regimes may ultimately come to exhibit the productivity characteristic of human language understanding.
# 5.4 Extending learning via a curriculum
A consequence of the agentâs facility for re-using its acquired knowledge for further learning is the potential to train the agent on more complex language and tasks via exposure to a curriculum of levels. Figure 6 shows an example for the successful application of such a curriculum, here applied to the task of selecting an object based on the ï¬oor colour of the room it is located in.
We also applied a curriculum to train an agent on a range of multi-word referring instructions of the form pick the X, where X represents a string consisting of either a single noun (a shape term, such as chair), an adjective and a noun (a colour term, pattern term or shade term, followed by a shape term, such as striped ladder), or two adjectives and a noun (a shade term or a pattern term, followed by a colour term, followed by a shape term, such as dark purple toothbrush). The latter two cases were also possible with the generic term "object" in place of a shape term. In each case, the training episode involved one object that coincided with the instruction and some number of distractors that did not. Learning curves for this "referring expression agent" are illustrated in Figure 7.
# 5.5 Multi-task learning
Language is typically used to refer to actions and behaviours as much as to objects and entities. To test the ability of our agents to ground such words in corresponding proce- dures, we trained a single agent to follow instructions pertaining to three dissociable tasks. We constructed these tasks using a two-room world with both ï¬oor colourings and object properties sampled at random.
In this environment, the Selection task involved instructions of the form pick the X object or pick all X, where X denotes a colour term. The Next to task involved instructions of the form pick the X object next to the Y object, where X and Y refer to objects. Finally, the In room task involved instructions of the form pick the X in the Y room, where Y referred to the colour of the floor in the target room. Both the Next to and the In room tasks employed large degrees of ambiguity, i.e. a given Next to level may contain several objects X and Y, but in a constellation such that only one X would be located next to a Y.
The agent was exposed to instances of each task with equal probability during training. The possible values for variables X and Y in these instructions were red, blue, green, yellow,
Figure 6: Curriculum learning is necessary for solving more complex tasks. For the agent to learn to retrieve an object in a particular room as instructed, a four-lesson training curriculum was required. Each lesson involved a more complex layout or a wider selection of objects and words, and was only solved by an agent that had successfully solved the previous lesson. The schematic layout and vocabulary scope for each lesson is shown above the training curves for that lesson. The initial (spawn) position of this agent varies randomly during training among the locations marked x, as do the position of the four possible objects among the positions marked with a white diamond. Data show mean and CB across best ï¬ve of 16 randomly sampled hyperparameter settings in each condition.
Figure 7: Learning curve for the referring expression agent. The trained agent is able to select the correct object in a two-object setup when described using a compositional expression. This ability transfers to more complex environments with a larger number of confounding objects.
Figure 8: Multi-task learning via an eï¬cient curriculum of two steps. A single agent can learn to solve a number of diï¬erent tasks following a two-lesson training curricu- lum. The diï¬erent tasks cannot be distinguished based on visual information alone, but require the agent to use the language input to identify the task in question.
cyan and magenta. The shape of all objects in the environment was selected randomly from 40 possibilities.
As previously, a curriculum was required to achieve the best possible agent performance on these tasks (see Figure 8). When trained from scratch, the agent learned to solve all three types of task in a single room where the colour of the ï¬oor was used as a proxy for a diï¬erent room. However, it was unable to achieve the same learning in a larger layout with two distinct rooms separated by a corridor. When the agent trained in a single room was transferred to the larger environment, it continued learning and eventually was able to solve the more diï¬cult task.4
By learning these tasks, this agent demonstrates an ability to ground language referring not only to single (concrete) objects, but also to (more abstract) sequences of actions, plans and inter-entity relationships. Moreover, in mastering the Next to and In room tasks, the agent exhibits sensitivity to a critical facet of many natural languages, namely the dependence of utterance meaning on word order. The ability to solve more complex tasks by curriculum training emphasises the generality of the emergent semantic representations acquired by the agent, allowing it to transfer learning from one scenario to a related but more complex environment.
# 6. Conclusion
An artiï¬cial agent capable of relating natural languages to the physical world would trans- form everyday interactions between humans and technology. We have taken an important step towards this goal by describing an agent that learns to execute a large number of multi-word instructions in a simulated three-dimensional world, with no pre-programming or hard-coded knowledge. The agent learns simple language by making predictions about the world in which that language occurs, and by discovering which combinations of words, perceptual cues and action decisions result in positive outcomes. Its knowledge is distributed across language, vision and policy networks, and pertains to modiï¬ers, relational concepts and actions, as well as concrete objects. Its semantic representations enable the agent to productively interpret novel word combinations, to apply known relations and modiï¬ers to unfamiliar objects and to re-use knowledge pertinent to the concepts it already has in the process of acquiring new concepts.
While our simulations focus on language, the outcomes are relevant to machine learn- ing in a more general sense. In particular, the agent exhibits active, multi-modal concept induction, the ability to transfer its learning and apply its knowledge representations in un- familiar settings, a facility for learning multiple, distinct tasks, and the eï¬ective synthesis of unsupervised and reinforcement learning. At the same time, learning in the agent reï¬ects various eï¬ects that are characteristic of human development, such as rapidly accelerating rates of vocabulary growth, the ability to learn from both rewarded interactions and pre- dictions about the world, a natural tendency to generalise and re-use semantic knowledge, and improved outcomes when learning is moderated by curricula (Vosniadou and Brewer, 1992; Smith et al., 1996; Pinker, 1987, 2009). Taken together, these contributions open many avenues for future investigations of language learning, and learning more generally, in both humans and artiï¬cial agents.
4. See https://youtu.be/wJjdu1bPJ04 for a video of the ï¬nal trained agent.
# References
Dilip Arumugam, Siddharth Karamcheti, Nakul Gopalan, Lawson L. S. Wong, and Ste- fanie Tellex. Accurately and eï¬ciently interpreting human-robot instructions of varying granularities. CoRR, abs/1704.06616, 2017. URL http://arxiv.org/abs/1704.06616.
Charles Beattie, Joel Z. Leibo, Denis Teplyashin, Tom Ward, Marcus Wainwright, Heinrich Küttler, Andrew Lefrancq, Simon Green, Víctor Valdés, Amir Sadik, Julian Schrittwieser, Keith Anderson, Sarah York, Max Cant, Adam Cain, Adrian Bolton, Stephen Gaffney, Helen King, Demis Hassabis, Shane Legg, and Stig Petersen. DeepMind Lab. CoRR, abs/1612.03801, 2016. URL http://arxiv.org/abs/1612.03801.
Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. Semantic parsing on free- base from question-answer pairs. In EMNLP, pages 1533â1544. ACL, 2013. ISBN 978- 1-937284-97-8. URL http://dblp.uni-trier.de/db/conf/emnlp/emnlp2013.html# BerantCFL13.
David L. Chen and Raymond J. Mooney. Learning to sportscast: A test of grounded language acquisition. In Proceedings of the 25th International Conference on Machine Learning (ICML), Helsinki, Finland, July 2008. URL http://www.cs.utexas.edu/ users/ai-lab/?chen:icml08.
Noam Chomsky. A review of BF Skinnerâs Verbal Behavior. Language, 35(1):26â58, 1959.
Leonidas AA Doumas, John E Hummel, and Catherine M Sandhofer. A theory of the discovery and predication of relational concepts. Psychological review, 115(1):1, 2008.
Sachithra Hemachandra, Matthew R. Walter, Stefanie Tellex, and Seth Teller. Learning spatially-semantic representations from natural language descriptions and scene classiï¬ca- tions. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, May 2014.
Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8):1735–1780, 1997.
Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z Leibo, David Silver, and Koray Kavukcuoglu. Reinforcement learning with unsupervised auxil- iary tasks. In International Conference on Learning Representations, 2016.
Samantha Krening, Brent Harrison, Karen M Feigh, Charles Isbell, Mark Riedl, and Andrea Thomaz. Learning from explanations using sentiment and advice in RL. In 2016 IEEE Transactions on Cognitive and Developmental Systems. IEEE, 2016.
Alex Krizhevsky, Ilya Sutskever, and Geoï¬rey E Hinton. Imagenet classiï¬cation with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097â1105, 2012.
Yann LeCun, Bernhard Boser, John S Denker, Donnie Henderson, Richard E Howard, Wayne Hubbard, and Lawrence D Jackel. Backpropagation applied to handwritten zip code recognition. Neural computation, 1(4):541â551, 1989.
Tomas Mikolov, Armand Joulin, and Marco Baroni. A roadmap towards machine intelli- gence. arXiv preprint arXiv:1511.08130, 2015.
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529â533, 2015.
Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy P Lil- licrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In International Conference on Machine Learning, 2016.
Karthik Narasimhan, Tejas Kulkarni, and Regina Barzilay. Language understanding for text-based games using deep reinforcement learning. Proceedings of the Conference on Empirical Methods in Natural Language Processing, 2015.
Junhyuk Oh, Xiaoxiao Guo, Honglak Lee, Richard L Lewis, and Satinder Singh. Action- conditional video prediction using deep networks in Atari games. In Advances in Neural Information Processing Systems 28, 2015.
Steven Pinker. The bootstrapping problem in language acquisition. Mechanisms of language acquisition, pages 399â441, 1987.
Steven Pinker. Language learnability and language development, volume 7. Harvard Uni- versity Press, 2009.
W. V. O. Quine. Word & Object. MIT Press, 1960.
Paul C Quinn, Peter D Eimas, and Stacey L Rosenkrantz. Evidence for representa- tions of perceptually similar natural categories by 3-month-old and 4-month-old infants. Perception, 22(4):463â475, 1993.
Timothy T Rogers and James L McClelland. Semantic cognition: A parallel distributed processing approach. MIT press, 2004.
Deb K Roy and Alex P Pentland. Learning words from sights and sounds: A computational model. Cognitive science, 26(1):113â146, 2002.
In Proceedings of the Conference on Empirical Methods in Natural Language Processing, 2012.
Jeffrey Mark Siskind. Grounding Language in Perception, pages 207–227. Springer Netherlands, Dordrecht, 1995. ISBN 978-94-011-0273-5. doi: 10.1007/978-94-011-0273-5_12. URL http://dx.doi.org/10.1007/978-94-011-0273-5_12.
Jeffrey Mark Siskind. Grounding the lexical semantics of verbs in visual perception using force dynamics and event logic. J. Artif. Intell. Res. (JAIR), 15:31–90, 2001. doi: 10.1613/jair.790. URL https://doi.org/10.1613/jair.790.
Linda B Smith, Susan S Jones, and Barbara Landau. Naming in young children: A dumb attentional mechanism? Cognition, 60(2):143â171, 1996.
Luc Steels. The symbol grounding problem has been solved. so whatâs next. Symbols and embodiment: Debates on meaning and cognition, pages 223â244, 2008.
Jesse Thomason, Shiqi Zhang, Raymond Mooney, and Peter Stone. Learning to interpret natural language commands through human-robot dialog. In Proceedings of the 24th International Conference on Artificial Intelligence, IJCAI'15, pages 1923–1929. AAAI Press, 2015. ISBN 978-1-57735-738-4. URL http://dl.acm.org/citation.cfm?id=2832415.2832516.
Tijmen Tieleman and Geoï¬rey Hinton. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural networks for machine learning, 4(2), 2012.
Ivan Vendrov, Ryan Kiros, Sanja Fidler, and Raquel Urtasun. Order-embeddings of images and language. CoRR, abs/1511.06361, 2015. URL http://arxiv.org/abs/1511.06361.
Stella Vosniadou and William F Brewer. Mental models of the earth: A study of conceptual change in childhood. Cognitive psychology, 24(4):535â585, 1992.
Matthew R. Walter, Sachithra Hemachandra, Bianca Homberg, Stefanie Tellex, and Seth Teller. A framework for learning semantic maps from grounded natural language de- scriptions. The International Journal of Robotics Research, 33(9):1167â1190, 2014. doi: 10.1177/0278364914537359. URL http://dx.doi.org/10.1177/0278364914537359.
S. I. Wang, P. Liang, and C. Manning. Learning language games through interaction. In Association for Computational Linguistics (ACL), 2016.
Terry Winograd. Understanding natural language. Cognitive psychology, 3(1):1â191, 1972.
Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdi- nov, Richard S Zemel, and Yoshua Bengio. Show, attend and tell: Neural image caption generation with visual attention. International Conference of Machine Learning, 2(3):5, 2015.
Haonan Yu and Jeï¬rey Mark Siskind. Grounded language learning from video described with sentences. In ACL, pages 53â63. The Association for Computer Linguistics, 2013. ISBN 978-1-937284-50-3.
Haonan Yu, Haichao Zhang, and Wei Xu. A deep compositional framework for human- like language acquisition in virtual environment. CoRR, abs/1703.09831, 2017. URL https://arxiv.org/abs/1703.09831.
Luke S. Zettlemoyer and Michael Collins. Learning to map sentences to logical form: Structured classiï¬cation with probabilistic categorial grammars. In Proceedings of the Twenty-First Conference on Uncertainty in Artiï¬cial Intelligence, UAIâ05, pages 658â 666, Arlington, Virginia, United States, 2005. AUAI Press. ISBN 0-9749039-1-4. URL http://dl.acm.org/citation.cfm?id=3020336.3020416.
# Appendix A. Agent details
# A.1 Agent core
At every time-step t the vision module V receives an 84 × 84 pixel RGB representation of the agent's (first person) view of the environment (x^v_t ∈ R^{3×84×84}), which is then processed with a three-layer convolutional neural network (LeCun et al., 1989) to emit an output representation v_t ∈ R^{64×7×7}. The first layer of the convolutional network contains 8 × 8 kernels applied at stride width 4, resulting in 32 (20 × 20) output channels. The second layer applies 4 × 4 kernels at stride width 2, yielding 64 (9 × 9) output channels. The third layer applies 3 × 3 kernels at stride width 1, resulting again in 64 (7 × 7) output channels.
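As a rough illustration, the convolutional encoder described above can be written as follows. This is a sketch under stated assumptions: the kernel sizes 8, 4 and 3 are inferred from the reported strides and output resolutions, and the ReLU nonlinearities and PyTorch module names are illustrative rather than taken from the original implementation.

```python
# Sketch of the vision module V (three conv layers, 84x84x3 input, 64x7x7 output).
import torch
import torch.nn as nn

class VisionModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=8, stride=4),   # 84x84 -> 20x20, 32 channels
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2),  # 20x20 -> 9x9, 64 channels
            nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1),  # 9x9 -> 7x7, 64 channels
            nn.ReLU(),
        )

    def forward(self, x):  # x: (batch, 3, 84, 84)
        return self.net(x)  # v_t: (batch, 64, 7, 7)

v_t = VisionModule()(torch.zeros(1, 3, 84, 84))
assert v_t.shape == (1, 64, 7, 7)
```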
The language module L receives the instruction x^l_t ∈ N^s, where s is the maximum instruction length, with words represented as indices in a dictionary. For tasks that require sensitivity to the order of words in the language instruction, the language module L encodes x^l_t with a recurrent (LSTM) architecture (Hochreiter and Schmidhuber, 1997). For other tasks, we applied a simpler bag-of-words (BOW) encoder, in which an instruction is represented as the sum of the embeddings of its constituent words, as this resulted in faster training. Both the LSTM and BOW encoders use word embeddings of dimension 128, and the hidden layer of the LSTM is also of dimension 128, resulting in both cases in an output representation l_t ∈ R^128.
In the mixing module M, outputs v_t and l_t are combined by flattening v_t into a single vector and concatenating the two resultant vectors into a shared representation m_t. The output from M at each time-step is fed to the action module A which maintains the agent state h_t ∈ R^d. h_t is updated using an LSTM network combining output m_t from M and h_{t-1} from the previous time-step. By default we set d = 256 in all our experiments.
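A minimal sketch of the BOW language encoder, the mixing module M and the agent-state update might look as follows; the vocabulary size is a placeholder, and a single LSTM cell stands in for the two-layer action module for brevity.

```python
# Sketch of BOW encoding, mixing (concatenation) and the core LSTM update.
import torch
import torch.nn as nn

class BowMixCore(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=128, d=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # m_t = [flatten(v_t); l_t] has size 64*7*7 + 128
        self.core = nn.LSTMCell(64 * 7 * 7 + embed_dim, d)

    def forward(self, v_t, word_ids, state):
        l_t = self.embed(word_ids).sum(dim=1)            # BOW: sum of word embeddings
        m_t = torch.cat([v_t.flatten(start_dim=1), l_t], dim=1)
        h_t, c_t = self.core(m_t, state)                 # agent state update
        return h_t, (h_t, c_t)
```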
# A.2 Auxiliary networks
Temporal Autoencoder The temporal autoencoder auxiliary network tAE samples sequences containing two data points x_i, x_{i+1} as well as a one-hot action representation a_i ∈ N^a. It encodes x^v_i using the convolutional network defined by V into y ∈ R^{64×7×7}. The feature representation is then transformed using the action a_i,
ŷ = W_b (W_a a_i ⊙ W_v y),
with ŷ ∈ R^{64×7×7}. The weight matrix W_b shares its weights with the final layer of the perceptron computing π in the core policy head. The transformed visual encoding ŷ is passed into a deconvolutional network (mirroring the configuration of the convolutional encoder) to emit a predicted input w ∈ R^{3×84×84}. The tAE module is optimised on the mean-squared loss between w and x^v_{i+1}.
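A possible reading of this computation, under the reconstruction of the equation above (whose matrix names are uncertain), is sketched below; the decoder configuration simply mirrors the encoder and all layer names are illustrative assumptions.

```python
# Sketch of the tAE gating and decoding; trained with MSE against the next frame.
import torch
import torch.nn as nn

class TemporalAutoencoder(nn.Module):
    def __init__(self, num_actions, feat_dim=64 * 7 * 7):
        super().__init__()
        self.action_embed = nn.Linear(num_actions, feat_dim, bias=False)  # W_a (assumed)
        self.visual_embed = nn.Linear(feat_dim, feat_dim, bias=False)     # W_v (assumed)
        self.combine = nn.Linear(feat_dim, feat_dim, bias=False)          # W_b (assumed)
        self.decoder = nn.Sequential(                                     # mirrors the encoder
            nn.ConvTranspose2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),  # 7 -> 9
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2), nn.ReLU(),  # 9 -> 20
            nn.ConvTranspose2d(32, 3, kernel_size=8, stride=4),              # 20 -> 84
        )

    def forward(self, y, action_onehot):
        y_flat = y.flatten(start_dim=1)
        gated = self.action_embed(action_onehot) * self.visual_embed(y_flat)
        y_hat = self.combine(gated).view(-1, 64, 7, 7)
        return self.decoder(y_hat)  # predicted next frame, (batch, 3, 84, 84)

mse = nn.MSELoss()  # applied between the prediction and the observed next frame
```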
Language Prediction At each time-step t, the language prediction auxiliary network LP applies a replica of V (with shared weights) to encode v_t. A linear layer followed by a rectified linear activation function is applied to transform this representation from size 64 × 7 × 7 to a flat vector of dimension 128 (the same size as the word embedding dimension in L). This representation is then transformed to an output layer with the same number of units as the agent's vocabulary. The weights in this final layer are shared with the initial layer (word embedding) weights from L. The output activations are fed through a Softmax
activation function to yield a probability distribution over words in the vocabulary, and the negative log likelihood of the instruction word lt is computed as the loss. Note that this objective requires a single meaningful word to be extracted from the instruction as the target.
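The word-prediction head can be sketched as follows; tying the output layer to the word-embedding matrix via a matrix product is one way to realise the weight sharing described above, and the exact mechanism used here is an assumption.

```python
# Sketch of the language prediction (LP) auxiliary head with tied embeddings.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LanguagePredictionHead(nn.Module):
    def __init__(self, word_embedding: nn.Embedding):
        super().__init__()
        self.proj = nn.Linear(64 * 7 * 7, 128)
        self.word_embedding = word_embedding  # shared with the language module L

    def forward(self, v_t, target_word):
        h = F.relu(self.proj(v_t.flatten(start_dim=1)))
        logits = F.linear(h, self.word_embedding.weight)  # (batch, vocab_size)
        return F.cross_entropy(logits, target_word)       # NLL of the instruction word
```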
# Appendix B. Environment details
The environment can contain any number of rooms connected through corridors. A level in the simulated 3D world is described by a map (a combination of rooms), object speciï¬ers, language and a reward function. Objects in the world are drawn from a ï¬xed inventory and can be described using a combination of ï¬ve factors.
Shapes (40) tv, ball, balloon, cake, can, cassette, chair, guitar, hairbrush, hat, ice lolly, ladder, mug, pencil, suitcase, toothbrush, key, bottle, car, cherries, fork, fridge, ham- mer, knife, spoon, apple, banana, cow, ï¬ower, jug, pig, pincer, plant, saxophone, shoe, tennis racket, tomato, tree, wine glass, zebra.
Colours (13) red, blue, white, grey, cyan, pink, orange, black, green, magenta, brown, purple, yellow.
Patterns (9) plain, chequered, crosses, stripes, discs, hex, pinstripe, spots, swirls.
Shades (3) light, dark, neutral.
Sizes (3) small, large, medium.
Within an environment, agent spawn points and object locations can be speciï¬ed or randomly sampled. The environment itself is subdivided into multiple rooms which can be distinguished through randomly sampled (unique) ï¬oor colours. We use up to seven factors to describe a particular object: the ï¬ve object-internal factors, the room it is placed in and its proximity to another object, which can itself be described by its ï¬ve internal factors.
In all simulations presented here, reward is attached to picking up a particular object. Reward is scaled to be in [-10, 10] and, where possible, balanced so that a random agent would have an expected reward of 0. This prevents agents from learning degenerate strategies that could otherwise allow them to perform well in a given task without needing to learn to ground the textual instructions.
# Appendix C. Hyperparameters
Tables 1 and 2 show the parameter settings used throughout the experiments presented in this paper. We report results with confidence bands (CB) equivalent to ± one standard deviation on the mean, assuming a normal distribution.
Hyperparameter | Value | Description

- train steps | 640m | Theoretical maximum number of time steps (across all episodes) for which the agent will be trained.
- env steps per core step | 4 | Number of time steps between each action decision (action smoothing).
- num workers | 32 | Number of independent workers running replicas of the environment with asynchronous updating.
- unroll length | 50 | Number of time steps through which error is backpropagated in the core LSTM action module.

Auxiliary networks:
- vr batch size | 1 | Aggregated time steps processed by value replay auxiliary for each weight update.
- rp batch size | 10 | Aggregated time steps processed by reward prediction auxiliary for each weight update.
- lp batch size | 10 | Aggregated time steps processed by language prediction auxiliary for each weight update.
- tae batch size | 10 | Aggregated time steps processed by temporal AE auxiliary for each weight update.

Language encoder:
- encoder type | BOW | Whether the language encoder uses an additive bag-of-words (BOW) or an LSTM architecture.

Cost calculation:
- additional discounting | 0.99 | Discount used to compute the long-term return R_t in the A3C objective.
- cost base | 0.5 | Multiplicative scaling of all computed gradients on the backward pass in the network.

Optimisation:
- clip grad norm | 100 | Limit on the norm of the gradient across all agent network parameters (if above, scale down).
- decay | 0.99 | Decay term in RMSprop gradient averaging function.
- epsilon | 0.1 | Epsilon term in RMSprop gradient averaging function.
- learning rate finish | 0 | Learning rate at the end of training, based on which linear annealing is applied.
- momentum | 0 | Momentum parameter in RMSprop gradient averaging function.
Table 1: Agent hyperparameters that are fixed throughout our experimentation but otherwise not specified in the text.
Hyperparameter | Value | Description

Auxiliary networks:
- vr weight | uniform(0.1, 1) | Scalar weighting of value replay auxiliary loss relative to the core (A3C) objective.
- rp weight | uniform(0.1, 1) | Scalar weighting of reward prediction auxiliary loss.
- lp weight | uniform(0.1, 1) | Scalar weighting of language prediction auxiliary loss.
- tae weight | uniform(0.1, 1) | Scalar weighting of temporal autoencoder prediction auxiliary.

Language encoder:
- embed init | uniform(0.5, 1) | Standard deviation of normal distribution (mean = 0) for sampling initial values of word-embedding weights in L.

Optimisation:
- entropy cost | uniform(0.0005, 0.005) | Strength of the (additive) entropy regularisation term in the A3C cost function.
- learning rate start | loguniform(0.0001, 0.002) | Learning rate at the beginning of training, annealed linearly to reach learning rate finish at the end of train steps.
Table 2: Agent hyperparameters that are randomly sampled in order to yield different replicas of our agents for training. uniform(x, y) indicates that values are sampled uniformly from the range [x, y]. loguniform(x, y) indicates that values are sampled from a uniform distribution in log-space (favouring lower values) on the range [x, y].
# Recent Advance in Content-based Image Retrieval: A Literature Survey
Wengang Zhou, Houqiang Li, and Qi Tian, Fellow, IEEE
Abstract: The explosive increase and ubiquitous accessibility of visual data on the Web have led to the prosperity of research activity in image search or retrieval. With the ignorance of visual content as a ranking clue, methods with text search techniques for visual retrieval may suffer inconsistency between the text words and visual content. Content-based image retrieval (CBIR), which makes use of the representation of visual content to identify relevant images, has attracted sustained attention in recent two decades. Such a problem is challenging due to the intention gap and the semantic gap problems. Numerous techniques have been developed for content-based image retrieval in the last decade. The purpose of this paper is to categorize and evaluate those algorithms proposed during the period of 2003 to 2016. We conclude with several promising directions for future research.
Index Terms: content-based image retrieval, visual representation, indexing, similarity measurement, spatial context, search re-ranking.
# 1 INTRODUCTION
With the universal popularity of digital devices embedded with cameras and the fast development of Internet technology, billions of people are connected to the Web, sharing and browsing photos. The ubiquitous access to both digital photos and the Internet sheds bright light on many emerging applications based on image search. Image search aims to retrieve relevant visual documents to a textual or visual query efficiently from a large-scale visual corpus. Although image search has been extensively explored since the early 1990s [1], it still attracts lots of attention from the multimedia and computer vision communities in the past decade, thanks to the attention on the scalability challenge and the emergence of new techniques. Traditional image search engines usually index multimedia visual data based on the surrounding meta data information around images on the Web, such as titles and tags. Since textual information may be inconsistent with the visual content, content-based image retrieval (CBIR) is preferred and has witnessed great advances in recent years.
In content-based visual retrieval, there are two fun- damental challenges, i.e., intention gap and semantic gap. The intention gap refers to the difï¬culty that a user suf- fers to precisely express the expected visual content by a query at hand, such as an example image or a sketch map. The semantic gap originates from the difï¬culty in describing high-level semantic concept with low-level visual feature [2] [3] [4]. To narrow those gaps, extensive efforts have been made from both the academia and industry.
From the early 1990s to the early 2000s, there have been extensive studies on content-based image search. The progress in those years has been comprehensively discussed in existing survey papers [5] [6] [7]. Around the early 2000s, the introduction of some new insights and methods triggers another research trend in CBIR. Specially, two pioneering works have paved the way to the significant advance in content-based visual retrieval on large-scale multimedia databases. The first one is the introduction of the invariant local visual feature SIFT [8]. SIFT is demonstrated with excellent descriptive and discriminative power to capture visual content in a variety of literature. It can well capture the invariance to rotation and scaling transformation and is robust to illumination change. The second work is the introduction of the Bag-of-Visual-Words (BoW) model [9]. Leveraged from information retrieval, the BoW model makes a compact representation of images based on the quantization of the contained local features and is readily adapted to the classic inverted file indexing structure for scalable image retrieval. The last decade has witnessed the emergence of numerous retrieval work on multimedia [9] [10] [11] [12] [13] [14] [15] [16] [17] [18] [19] [20] [21] [22] [23] [24] [25] [26] [27] [28] [29]. Meanwhile, in industry, some commercial engines on content-based image search have been launched with different focuses, such as Tineye1, Ditto2, Snap Fashion3, ViSenze4, Cortica5, etc. Tineye is launched as a billion-scale reverse image search engine in May, 2008. Until January of 2017, the indexed image database size in Tineye has reached up to 17 billion. Different from Tineye, Ditto is specially focused on brand images in the wild. It provides an access to uncover the
⢠Wengang Zhou and Houqiang Li are with the CAS Key Laboratory of Technology in Geo-spatial Information Processing and Application System, Department of Electronic Engineering and Information Science, University of Science and Technology of China, Hefei, 230027, China. E-mail: {zhwg, lihq}@ustc.edu.cn.
⢠Qi Tian is with the Department of Computer Science, University of Texas at San Antonio, San Antonio, TX, 78249, USA. E-mail: qitian@cs.utsa.edu.
1. http://tineye.com/ 2. http://ditto.us.com/ 3. https://www.snapfashion.co.uk/ 4. https://www.visenze.com 5. http://www.cortica.com/
brands inside the shared photos on the public social media web sites.
There are three key issues in content-based image retrieval: image representation, image organization, and image similarity measurement. Existing algorithms can also be categorized based on their contributions to those three key items.
Image representation originates from the fact that the intrinsic problem in content-based visual retrieval is image comparison. For convenience of comparison, an image is transformed to some kind of feature space. The motivation is to achieve an implicit alignment so as to eliminate the impact of background and potential transformations or changes while keeping the intrinsic visual content distinguishable. In fact, how to represent an image is a fundamental problem in computer vision for image understanding. There is a saying that "An image is worth a thousand words". However, it is nontrivial to identify those "words". Usually, images are represented as one or multiple visual features. The representation is expected to be descriptive and discriminative so as to distinguish similar and dissimilar images. More importantly, it is also expected to be invariant to various transformations, such as translation, rotation, resizing, illumination change, etc.
In multimedia retrieval, the visual database is usually very large. It is a nontrivial issue to organize the large scale database to efï¬ciently identify the relevant results of a given query. Inspired by the success of information retrieval, many existing content-based visual retrieval algorithms and systems leverage the classic inverted ï¬le structure to index large scale visual database for scalable retrieval. Mean- while, some hashing based techniques are also proposed for indexing in a similar perspective. To achieve this goal, visual codebook learning and feature quantization on high- dimensional visual features are involved, with spatial con- text embedded to further enrich the discriminative capabil- ity of the visual representation.
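As a toy illustration of this indexing scheme (not the pipeline of any particular cited system), local descriptors can be quantized against a small visual codebook and posted into an inverted file; real systems additionally apply tf-idf style weighting, spatial verification and much larger codebooks.

```python
# Minimal BoW quantization + inverted-file sketch (illustrative scale only).
from collections import defaultdict
from sklearn.cluster import KMeans

def train_codebook(descriptors, k=1024):
    # descriptors: array of shape (num_local_features, dim), e.g. SIFT vectors.
    return KMeans(n_clusters=k, n_init=4, random_state=0).fit(descriptors)

def build_inverted_index(image_descriptors, codebook):
    index = defaultdict(set)  # visual word id -> set of image ids
    for image_id, desc in image_descriptors.items():
        for word in codebook.predict(desc):
            index[int(word)].add(image_id)
    return index

def query(index, codebook, query_desc):
    votes = defaultdict(int)  # image id -> number of matched visual words
    for word in codebook.predict(query_desc):
        for image_id in index.get(int(word), ()):
            votes[image_id] += 1
    return sorted(votes.items(), key=lambda kv: -kv[1])
```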
Ideally, the similarity between images should reï¬ect the relevance in semantics, which, however, is difï¬cult due to the intrinsic âsemantic gapâ problem. Conventionally, the image similarity in content-based retrieval is formulated based on the visual feature matching results with some weighing schemes. Alternatively, the image similarity for- mulations in existing algorithms can also be viewed as different match kernels [30].
In this paper, we focus on the overview over research works in the past decade after 2003. For discussion be- fore and around 2003, we refer readers to previous sur- vey [5] [6][7]. Recently, there have been some surveys related to CBIR [31] [2] [3]. In [31], Zhang et al. surveyed image search in the past 20 years from the perspective of database scaling from thousands to billions. In [3], Li et al. made a review of the state-of-the-art CBIR techniques in the context of social image tagging, with focus on three closed linked problems, including image tag assignment, reï¬nement, and tag-based image retrieval. Another recent related survey is referred in [2]. In this work, we approach the recent advance in CBIR with different insights and emphasize more on the progress in methodology of a generic framework.
In the following sections, we ï¬rst brieï¬y review the generic pipeline of content-based image search. Then, we
discuss ï¬ve key modules of the pipeline, respectively. Af- ter that, we introduce the ground-truth datasets popularly exploited and the evaluation metrics. Finally, we discuss future potential directions and conclude this survey.
# 2 GENERAL FLOWCHART OVERVIEW
Content-based image search or retrieval has been a core problem in the multimedia ï¬eld for over two decades. The general ï¬owchart is illustrated in Fig. 1. Such a visual search framework consists of an off-line stage and an on-line stage. In the off-line stage, the database is built by image crawling and each database image is represented into some vectors and then indexed. In the on-line stage, several modules are involved, including user intention analysis, query forma- tion, image representation, image scoring, search reranking, and retrieval browsing. The image representation module is shared in both the off-line and on-line stages. This paper will not cover image crawling, user intention analysis [32], and retrieval browsing [33], of which the survey can be referred in previous work [6] [34]. In the following, we will focus on the other ï¬ve modules, i.e., query formation, image rep- resentation, database indexing, image scoring, and search reranking.
In the following sections, we make a review of related work in each module, discuss and evaluate a variety of strategies to address the key issues in the corresponding modules.
# 3 QUERY FORMATION
At the beginning of image retrieval, a user expresses his or her imaginary intention into some concrete visual query. The quality of the query has a signiï¬cant impact on the retrieval results. A good and speciï¬c query may sufï¬ciently reduce the retrieval difï¬culty and lead to satisfactory re- trieval results. Generally, there are several kinds of query formation, such as query by example image, query by sketch map, query by color map, query by context map, etc. As illustrated in Fig. 2, different query schemes lead to signiï¬cantly distinguishing results. In the following, we will discuss each of those representative query formations.
The most intuitive query formation is query by example image. That is, a user has an example image at hand and would like to retrieve more or better images about the same or similar semantics. For instance, a picture holder may want to check whether his picture is used in some web pages without his permission; a cybercop may want to check a terrorist logo appearing in the Web images or videos for anti-terrorism. To eliminate the effect of the background, a bounding box may be speciï¬ed in the example image to constrain the region of interest for query. Since the example images are objective without little human involvement, it is convenient to make quantitative analysis based on it so as to guide the design of the corresponding algorithms. Therefore, query by example is the most widely explored query formation style in the research on content-based im- age retrieval [9] [10] [35] [36].
Besides query by example, a user may also express his intention with a sketch map [37] [38]. In this way, the query is a contour image. Since sketch is more close to the semantic
Online Stage
Fig. 1. The general framework of content-based image retrieval. The modules above and below the green dashed line are in the off-line stage and on-line stage, respectively. In this paper, we focus the discussion on ï¬ve components, i.e., query formation, image representation, database indexing, image scoring, and search reranking.
Fig. 2. Illustration of different query schemes (query by keyword, query by example, query by sketch, query by color layout, and query by concept layout) with the corresponding retrieval results.
Since a sketch is closer to the semantic representation, it tends to help retrieve target results in users' minds from the semantic perspective [37]. Initial works on sketch-based retrieval are limited to searching for special artworks, such as clip arts [39] [40] and simple patterns [41]. As a milestone, the representative work on sketch-based retrieval for natural images is the edgel index [42]. Sketch has also been employed in some image search engines, such as Gazopa6 and Retrievr7. However, there are two non-trivial issues with sketch-based query. Firstly, although some simple concepts, such as sun, fish, and flower, can be easily interpreted as simple shapes, in most cases it is difficult for a user to quickly sketch out what he wants to search. Secondly, since the images in the database are usually natural images, special algorithms need to be designed to convert them to sketch maps consistent with user intention.
Another query formation is the color map. A user is allowed to specify the spatial distribution of colors in a given grid-like palette to generate a color map, which is used as a query to retrieve images with similar colors in the relative regions of the image plane [43]. With coarse shape embedded, the color map based query can easily involve user interaction to improve the retrieval results but is limited in the potential concepts to be represented. Besides, color or illumination change is prevalent in image capturing, which poses a severe challenge to the reliability of color-based features.

The above query formations are convenient for users to input but may still be insufficient to express the user's semantic intention. To alleviate this problem, Xu et al. proposed to form the query with concepts by placing text words in some specific layout in the image plane [44] [45]. Such structured object query is also explored in [46] with a latent ranking SVM model. This kind of query is especially suitable for searching generalized objects or scenes with context when the object recognition results are ready for the database images and the queries.
6. http://www.gazopa.com/
7. http://labs.systemone.at/retrievr
It is notable that, in the above query schemes taken by most existing work, the query takes the form of a single image, which may be insufficient to reflect user intention in some situations. If provided with multiple probe images as the query, some new strategies are expected to collaboratively represent the query or fuse the retrieval results of each single probe [47]. That may be an interesting research topic
especially in the case of video retrieval, where the query is a video shot of a temporal sequence.
# 4 IMAGE REPRESENTATION
In content-based image retrieval, the key problem is how to efficiently measure the similarity between images. Since the visual objects or scenes may undergo various changes or transformations, it is infeasible to directly compare images at the pixel level. Usually, visual features are extracted from images and subsequently transformed into a fixed-size vector for image representation. Considering the contradiction between a large-scale image database and the requirement for an efficient query response, it is necessary to "pack" the visual features to facilitate the following indexing and image comparison. To achieve this goal, quantization with visual codebook training is used as a routine encoding process for feature aggregation/pooling. Besides, as an important characteristic of visual data, spatial context has been demonstrated to be vital to improve the distinctiveness of visual representation. Based on the above discussion, we can mathematically formulate the content similarity between two images X and Y as in Eq. 1.
S(\mathcal{X}, \mathcal{Y}) = \sum_{x \in \mathcal{X}} \sum_{y \in \mathcal{Y}} k(x, y) \qquad (1)
= \sum_{x \in \mathcal{X}} \sum_{y \in \mathcal{Y}} \phi(x)^T \phi(y) \qquad (2)
= \Psi(\mathcal{X})^T \Psi(\mathcal{Y}). \qquad (3)
Based on Eq. 1, there emerge three questions.
1) Firstly, how to describe the image content X by a set of visual features {x1, x2, · · · }?
2) Secondly, how to transform feature sets X = {x1, x2, · · · } of various sizes to a fixed-length vector Ψ(X)?
3) Thirdly, how to efficiently compute the similarity between the fixed-length vectors, Ψ(X)^T Ψ(Y)?
The above three questions essentially correspond to feature extraction, feature encoding & aggregation, and database indexing, respectively. As for feature encoding and aggregation, it involves visual codebook learning, spatial context embedding, and quantization. In this section, we discuss the related works on those key issues in image representation, including feature extraction, visual codebook learning, spatial context embedding, quantization, and feature aggregation. Database indexing is left to the next section for discussion.
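To make the relation between Eq. 1 and Eq. 3 concrete, the following minimal sketch (an illustrative assumption: the embedding φ is taken to be the identity, so the kernel is a plain inner product) verifies that summing kernel values over all feature pairs equals a single inner product between the aggregated vectors Ψ(X) and Ψ(Y):

```python
import numpy as np

def pairwise_similarity(X, Y):
    # Naive form of Eq. 1: sum the kernel k(x, y) over all feature pairs.
    # Here k(x, y) is simply the inner product of the (embedded) features.
    return sum(float(x @ y) for x in X for y in Y)

def aggregated_similarity(X, Y):
    # Equivalent form of Eq. 3: aggregate the features first (Psi = sum of phi(x)),
    # then take a single inner product between the two aggregated vectors.
    return float(X.sum(axis=0) @ Y.sum(axis=0))

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))   # 5 hypothetical local features of image X, 8-dim each
Y = rng.normal(size=(7, 8))   # 7 hypothetical local features of image Y
assert np.isclose(pairwise_similarity(X, Y), aggregated_similarity(X, Y))
```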
# 4.1 Feature Extraction
Traditionally, visual features are heuristically designed and can be categorized into local features and global features. Besides those hand-crafted features, recent years have wit- nessed the development of learning-based features. In the following, we will discuss those two kinds of features, respectively.
# 4.1.1 Hand Crafted Feature
In early CBIR algorithms and systems, global features are commonly used to describe image content by color [48] [43], shape [42] [49] [50] [51], texture [52] [53], and structure [54] in a single holistic representation. As one of the representative global features, the GIST feature [55] is biologically plausible with low computational complexity and has been widely applied to evaluate approximate nearest neighbor search algorithms [56] [57] [58] [59]. With compact representation and efficient implementation, global visual features are very suitable for duplicate detection in large-scale image databases [54], but may not work well when the target images involve background clutter. Typically, global features can be used as a complementary part to improve the accuracy of near-duplicate image search based on local features [24].
Since the introduction of the SIFT feature by Lowe [60] [8], local features have been extensively explored as a routine image representation in many works on content-based image retrieval. Generally, local feature extraction involves two key steps, i.e., interest point detection and local region description. In interest point detection, some key points or regions with characteristic scale are detected with high repeatability. The repeatability here means that the interest points can be identified under various transformations or changes. Popular detectors include the Difference of Gaussian (DoG) [8], MSER [61], the Hessian affine detector [62], the Harris-Hessian detector [63], and FAST [64]. In interest point detection, invariance to translation and resizing is achieved. Distinguished from the above methods, it is also possible to obtain the interest points by uniformly and densely sampling the image plane without any explicit detector [65].

After the detection of interest points, a descriptor or multiple descriptors [66] are extracted to describe the visual appearance of the local region centered at the interest point. Usually, the descriptor is designed to be invariant to rotation change and robust to affine distortion, addition of noise, and illumination changes, etc. Besides, it should also be distinctive so as to correctly match a single feature with high probability against a large corpus of features from many images. Such a property is especially emphasized in the scenario of large-scale visual applications. The most popular choice with the above merits is the SIFT feature [8]. As a variant, SURF [67] is demonstrated with comparable performance but better efficiency.
Some improvements or extensions have been made on the basis of SIFT. In [23], Arandjelovic et al. proposed root-SIFT by applying root-normalization to the original SIFT descriptor. Although such an operation is simple, it is demonstrated to significantly improve the image retrieval accuracy and can be readily plugged into many SIFT based image retrieval algorithms [68]. Zhou et al. proposed to generate a binary signature of the SIFT descriptor with two median thresholds determined by the original descriptor itself [36]. The obtained binary SIFT leads to a new indexing scheme for image retrieval [69]. Liu et al. extended the binary SIFT by first generating a binary comparison matrix via dimension-pair comparison and then flexibly dividing the matrix entries into segments, each of which is hashed to a bit [70]. In [21], the SIFT descriptor is transformed to
binary code with principal component analysis (PCA) and simple thresholding operations based on the coefficients' sign. In [71], Affine-SIFT (ASIFT) simulates a set of sample views of the initial images by varying the two camera axis orientation parameters, i.e., the latitude and the longitude angles, and covers effectively all six parameters of the affine transformation, consequently achieving full affine invariance.
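As a rough illustration of the median-thresholding idea behind binary SIFT [36], the sketch below binarizes a descriptor against its own median and compares signatures by Hamming distance; the single-threshold rule and all other details are simplifying assumptions rather than the exact two-threshold scheme of [36].

```python
import numpy as np

def binary_signature(descriptor):
    # Compare each of the 128 SIFT dimensions against the descriptor's own median:
    # one bit per dimension (a simplification of the two-threshold scheme in [36]).
    descriptor = np.asarray(descriptor, dtype=np.float32)
    return (descriptor > np.median(descriptor)).astype(np.uint8)

def hamming_distance(sig_a, sig_b):
    # Matching verification: a small Hamming distance suggests a true correspondence.
    return int(np.count_nonzero(sig_a != sig_b))
```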
SIFT features extracted in regions with weak internal structure suffer from poor distinctiveness and may degrade image retrieval performance. To identify and remove such features, Dong et al. regarded a SIFT descriptor as 128 samples of a discrete random variable ranging from 0 to 255 and used entropy as a measurement metric to filter SIFT features with low entropy [72].

Apart from floating point features like SIFT, binary features are popularly explored and directly extracted from the local region of interest. Recently, the binary feature BRIEF [73] and its variants, such as ORB [74], FREAK [75], and BRISK [76], have been proposed and have attracted a great deal of attention in visual matching applications. Those binary features are computed by some simple intensity difference tests, which are extremely computationally efficient. With the advantage in efficiency from Hamming distance computation, those binary features based on the FAST detector [64] may have potential in large-scale image search. In [77], Zhang et al. proposed a novel ultra-short binary descriptor (USB) extracted from the local regions detected by the DoG detector. The USB achieves fast image matching and indexing. Besides, following the binary SIFT scheme [36], it avoids the expensive codebook training and feature quantization in the BoW model for image retrieval. A comprehensive evaluation of binary descriptors is referred to in [78].

Besides the gradient information in the local regions as in the SIFT feature, edge and color can also be expressed into a compact descriptor, generating Edge-SIFT [79] and color-SIFT [80]. As a binary local feature, Edge-SIFT [79] describes a local region with the extracted Canny edge detection results. Zheng et al. extracted a color name feature from the local regions, which is further transformed to a binary signature to enhance the discrimination of the local SIFT feature [68].
# 4.1.2 Learning-based Feature
Apart from the above handcrafted visual features, it is also possible to learn features in a data-driven manner for image retrieval. Attribute features, originally used for object categorization, can be used to represent the semantic characteristics for image retrieval [81] [82] [83]. Generally, the attribute vocabulary can be manually defined by humans [84] [85] or some ontology [86]. For each attribute, a classifier can be trained with a kernel over multiple low-level visual features based on a labeled training image set and used to predict the attribute score for unseen images [86] [85] [87] [88]. In [89], the attribute feature is adopted as a semantic-aware representation to complement the local SIFT feature for image search. Karayev et al. learned classifiers to predict image styles and applied them to search and rank image collections by style [90]. The advantage of attribute features is that they provide an elegant way to approximate the visual semantics so as to reduce the semantic gap. However, there are two issues with attribute features. Firstly, it is difficult to define a complete set of attribute vocabulary, either manually or in an automatic manner. Thus, the representation with a limited attribute vocabulary may be biased for a large and semantically diverse image database. Secondly, it is usually computationally expensive to extract attribute features due to the necessity of performing classification over thousands of attribute categories [81] [86].
Topic models, such as probabilistic Latent Semantic Analysis (pLSA) model [91] and Latent Dirichlet Alloca- tion (LDA) model [92], are popularly adopted to learn feature representation with semantics embedded for image retrieval [93] [94].
With the explosive research on deep neural networks (DNN) [65] [95] [96], recent years have witnessed the success of learning-based features in multiple areas. With deep architectures, high-level abstractions close to the human cognition process can be learned [97]. As a result, it is feasible to adopt DNNs to extract semantic-aware features from the activations of different layers in the networks. In [98], features are extracted from local patches with a deep restricted Boltzmann machine (DBN) which is refined by using back-propagation. As a typical structure of the DNN family, the deep convolutional neural network (CNN) [99] has demonstrated state-of-the-art performance in various tasks on image recognition and retrieval [100]. In [101], comprehensive studies are conducted on the potential of learned visual features with deep CNNs for various applications including content-based image retrieval. Razavian et al. studied the Alex-Net [99] and VGG-Net [95], and exploited the last convolutional layer's response with max pooling as the image representation for image retrieval [102]. In [103], the activations of the sixth layer of the Alex-Net [99] are taken as a DNN feature for each image, which is fused at the image similarity score level with traditional visual features including the SIFT-based BoW feature, HSV histogram, and GIST.
Besides working as a global description of images, learning-based feature can also be obtained in a manner similar to local features [104]. The local regions of interest are generated by unsupervised object detection algorithms, such as selective search [105], objectness [106], and binarized normed gradients (BING) [107]. Those algorithms generate a number of object proposals in the form of bounding boxes. Then, in each object proposal region, the learning-based feature can be extracted. In [108], Sun et al. adopted the CNN model to extract features from local image regions detected by a general object detector [107], and applied it for image retrieval and demonstrated impressive performance. Considering the fact that object detection is sensitive to rotation transformation, Xie et al. proposed to rotate the test image by four different angles and then conduct object detection. Object proposals with top detection scores are then selected to extract the deep CNN feature [99]. Tolias et al. generate feature vector of regional maximum acti- vation of convolutions (R-MAC) towards geometry-aware re-ranking [109]. To speedup the max-pooling operation, a novel approximation is proposed by extending the idea of integral images. In [110], the R-MAC descriptor is extended by selecting regions with a region-of-interest (ROI) selector based on region proposal network [111].
5
In the above approaches, the learning-based feature is extracted with a deep learning model trained for the classification task. As a result, the learned feature may not well reflect the visual content characteristics of the retrieval images, which may result in limited retrieval performance. Therefore, it is preferable to train the deep learning model directly for the retrieval task, which, however, is difficult since the potential image categories in retrieval are difficult to define or enumerate. To partially address this difficulty, Babenko et al. focused on landmark retrieval and fine-tuned the CNN model pre-trained on ImageNet with classes corresponding to landmarks [112]. After the fine-tuning, promising performance improvement is witnessed on retrieval datasets with similar visual statistics, such as the Oxford Building dataset [11]. To get rid of the dependence on examples or class labels, Paulin et al. proposed to generate patch-level feature representations based on convolutional kernel networks in an unsupervised way [113]. In [114], the supervision takes the form of binary codes, which are obtained by decomposing the similarity matrix of training images. The resulting deep CNN model is therefore ready to generate binary codes for images in an end-to-end way. Further, Lai et al. proposed deep neural networks to hash images into short binary codes with optimization based on a triplet ranking loss [115]. The resulting short binary codes for image representation enable efficient retrieval by Hamming distance and considerable gains in storage.
# 4.2 Visual Codebook Learning
Usually, hundreds or thousands of local features can be extracted from a single image. To achieve a compact representation, high-dimensional local features are quantized to visual words of a pre-trained visual codebook, and based on the quantization results an image with a set of local features can be transformed into a fixed-length vector, by the Bag-of-Visual-Words model [9], VLAD [116], or the Fisher Vector [117]. To generate a visual codebook beforehand, the most intuitive way is to cluster the training feature samples with brute-force k-means [9] [12] and then regard the clustering centers as visual words. Since the local feature dimension is high and the training sample corpus is large, it suffers extremely high computational complexity to train a large, say, million-scale or larger, visual codebook. To address this problem, an alternative is to adopt hierarchical k-means [10], which reduces the computational complexity from linear to logarithmic for large visual codebook generation.
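A minimal sketch of flat codebook training with (mini-batch) k-means is given below; the cluster centers play the role of visual words. The parameter values are placeholders, and for million-scale vocabularies one would switch to hierarchical k-means [10] or approximate k-means [11] as discussed above.

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans

def train_codebook(descriptors, num_words=10000, seed=0):
    # descriptors: (num_samples, feature_dim) array of local descriptors (e.g., SIFT).
    # Returns the visual words as the (num_words, feature_dim) matrix of cluster centers.
    kmeans = MiniBatchKMeans(n_clusters=num_words, random_state=seed)
    kmeans.fit(descriptors)
    return kmeans.cluster_centers_
```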
In the standard k-means, most of the computational overhead is spent on assigning feature samples to the closest cluster center vector, which is implemented by linearly comparing all cluster center vectors. That process can be sped up by replacing the linear scan with approximate nearest neighbor search. With such an observation, Philbin et al. proposed an approximate k-means algorithm by exploiting randomized k-D trees for fast assignment [11]. Instead of using k-means to generate visual words, Li et al. generated hyper-spheres by randomly sampling seed feature points with a predefined radius [118]. Then, those hyper-spheres with the seed features correspond to the visual codebook. In [119], Chu et al. proposed to build the visual vocabulary
based on graph density. It measures the intra-word simi- larity by the feature graph density and derives the visual word by dense feature graph with a Scalable Maximization Estimation (SME) scheme.
In the Bag-of-Visual-Words model, the visual codebook works as a medium to identify the visual word ID, which can be regarded as the quantization or hashing result. In other words, it is feasible to directly transform a visual feature to a visual word ID without explicitly defining the visual word. Following this idea, different from the above codebook generation methods, some other approaches on image retrieval generate a virtual visual codebook without explicit training. Those methods transform a local feature to a binary signature, based on which the visual word ID is heuristically defined. In [21], Zhang et al. proposed a new query-sensitive ranking algorithm to rank PCA-based binary hash codes to search for ε-neighbors for image retrieval. The binary signature is generated with an LSH (locality sensitive hashing) strategy and the top bits are used as the visual word ID to group feature points with the same ID. Zhou et al. [36] proposed to binarize a SIFT descriptor into a 256-bit binary signature. Without training a codebook, this method selects 32 bits from the 256-bit vector as a codeword for indexing and search. The drawback of this approach is that the remaining 224 bits per feature have to be stored in the inverted index lists, which casts a heavy overhead on memory. Similarly, Dong et al. proposed to transform a SIFT descriptor to a 128-bit vector [72] with a sketch embedding technique [120]. Then, the 128-bit vector is divided into 4 non-overlapping blocks, each of which is considered as a key or a visual word for later indexing. In [121], Zhou et al. proposed a codebook-training-free framework based on scalable cascaded hashing. To ensure the recall rate of feature matching, the scalable cascaded hashing (SCH) scheme conducts scalar quantization on the principal components of local descriptors in a cascaded manner.
# 4.3 Spatial Context Embedding
As the representation of structured visual content, visual features are correlated by spatial context in terms of orientation, scale, and key points' distance in the image plane. By including the contextual information, the discriminative capability of the visual codebook can be greatly enhanced [26]. Analogous to the text phrase in information retrieval, it is feasible to generate visual phrases over visual words. In [27] [122], neighboring local features are correlated to generate high-order visual phrases, which are further refined to be more descriptive for content representation.

Many algorithms target modeling the local spatial context among local visual features. Loose spatial consistency from some spatially nearest neighbors can be imposed to filter false visual-word matches. Supports are collected by checking the matched features within the search area defined by 15 nearest neighbors [9]. Such a loose scheme, although efficient, is sensitive to the image noise incurred by editing. Zhang et al. generated a contextual visual codebook by modeling the spatial context of local features in groups with a discriminant group distance metric [28]. Wang et al. proposed descriptor contextual weighting (DCW) and spatial contextual weighting (SCW) of local features in the descriptor domain and spatial domain, respectively, to upgrade
6
the vocabulary tree based approach [123]. DCW down-weights less informative features based on frequencies of descriptor quantization paths on a vocabulary tree, while SCW exploits some efficient spatial contextual statistics to preserve the rich descriptive information of local features. In [124], Liu et al. built a spatial-relationship dictionary by embedding spatial context among local features for image retrieval.

Further, the multi-modal property that multiple different features are extracted at identical key points is discussed and explored for contextual hashing [125]. In [126], geometric min-hashing constructs repeatable hash keys with loosely local geometric information for more discriminative description. In [17], Wu et al. proposed to bundle local features in an MSER region [61]. The MSER regions are defined by an extremal property of the intensity function in the region and on its outer boundary and are detected as stable regions across a threshold range from watershed-based segmentation [61]. Bundled features are compared by the shared visual word amount and the relative ordering of matched visual words. In [63], the ordinal measure (OM) feature [127] is extracted from the spatial neighborhood around local interest points. Then, local spatial consistency verification is conducted by checking whether the OMs of the corresponding features are below a predefined threshold.
Different from the above approaches, Cao et al. modeled the global spatial context by two families of ordered bag-of-features, as a generalization of spatial pyramid matching [128], by linear projection and circular projection, and further refined them to capture the invariance of object translation, rotation, and scaling by simple histogram operations, including calibration, equalization, and decomposition [129].

In the scenario of face retrieval, the above general codebook generation methods are likely to fail to capture the unique facial characteristics. To generate a discriminative visual codebook, Wu et al. proposed to generate an identity-based visual vocabulary with some training persons, each with multiple face examples under various poses, expressions, and illumination conditions [130]. A visual word is defined as a tuple consisting of two components, i.e., person ID and position ID, and is associated with multiple examples.
# 4.4 Feature Quantization
With the visual codebook defined, feature quantization assigns a visual word ID to each feature. To design a suitable assignment function, special consideration should be made to balance quantization accuracy, efficiency, and memory overhead.

The most naive choice is to take nearest neighbor search, so as to find the closest (the most similar) visual word of a given feature by linear scan, which, however, suffers expensive computational cost. Usually, approximate nearest neighbor (ANN) search methods are adopted to speed up the searching process, with some sacrifice of accuracy. In [8], a k-d tree structure [131] is utilized with a best-bin-first modification to find approximate nearest neighbors to the descriptor vector of the query. In [10], based on the hierarchical vocabulary tree, an efficient approximate nearest neighbor search is achieved by propagating the query feature vector from the root node down the tree, comparing the corresponding child nodes and choosing the closest one. In [132], a k-d forest approximation algorithm is proposed with reduced time complexity. Muja and Lowe proposed a novel priority search k-means tree algorithm for scalable nearest neighbor search [133] with the FLANN library8 provided. In [118], the feature quantization is achieved by range-based neighbor search over the random seeding codebook. This random seeding approach, although efficient in implementation, suffers from bias to the training data and achieves limited retrieval accuracy in large-scale image retrieval [134]. Those approaches conduct quantization in a hard manner and inevitably incur severe quantization loss. Considering that the codebook partitions the feature space into some non-overlapping cells, feature quantization works to identify which cell a test feature falls into. When the codebook size is large, which means the feature space is finely partitioned, features proximate to the partition boundary are likely to fall into different cells. On the other hand, with a small codebook and the feature space coarsely partitioned, irrelevant features with large distance may also fall into the same cell. Both cases will incur quantization loss and degrade the recall and precision of feature matching, respectively. A trade-off shall be made on the codebook size to balance the recall and precision from the above two kinds of loss [10], or some constraints are involved to refine the quantization quality.
Some approaches adopt a large visual codebook but take account of soft quantization to reduce the quantization loss. Generally, a descriptor-dependent soft assignment scheme [15] is used to map a feature vector to a weighted combination of multiple visual words. Intuitively, the soft quantization can be performed for both a query feature and the database features. However, it costs several times more memory to store the multiple quantization results for each database feature. As a trade-off, the soft quantization can be constrained to only the query side [35]. In [35], a new quantizer is designed based on a codebook learned by brute-force k-means clustering. It first performs k-means clustering on the pre-trained visual words and generates a two-layer visual vocabulary tree in a bottom-up way. Then, new connections between the two-layer nodes are constructed by quantizing a large feature set with both layers of quantizers. Soft assignment is performed with a criterion based on distance ratio.
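The sketch below illustrates descriptor-dependent soft assignment in the spirit of [15]: a feature votes for its k nearest visual words with Gaussian weights on the squared distance. The bandwidth sigma and the value of k are assumed tuning parameters, not values from the cited work.

```python
import numpy as np

def soft_assign(descriptor, codebook, k=3, sigma=100.0):
    # Squared Euclidean distances from the descriptor to every visual word.
    d2 = np.sum((codebook - descriptor) ** 2, axis=1)
    nearest = np.argsort(d2)[:k]                       # k nearest visual words
    weights = np.exp(-d2[nearest] / (2.0 * sigma ** 2))
    return nearest, weights / weights.sum()            # word IDs and normalized weights
```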
On the other hand, some other approaches keep a relatively small visual codebook but perform further verification to reduce the quantization loss. In [12], Hamming Embedding reduces the dimension of the SIFT descriptors quantized to a visual word, and then trains a median vector by taking the median value in each dimension of the feature samples. After a new feature is quantized to a visual word, it is projected to the low-dimensional space, and then compared with the median vector dimension-wise to generate a binary signature for matching verification [54]. In [135], a variant, i.e., the asymmetric Hamming Embedding scheme, is proposed to exploit the rich information conveyed by the binary signature.
8. http://www.cs.ubc.ca/research/flann/
Zhou et al. adopt a similar verification idea with a different binary signature, which is generated by comparing each element of a feature descriptor to its median [136].
The above approaches rely on a single visual codebook for feature quantization. To correct quantization artifacts and improve recall, typically, multiple vocabularies are generated for feature quantization [137] [138]. Since multiple vocabularies suffer from vocabulary correlation, Zheng et al. proposed a Bayes merging approach to down-weight the indexed features in the intersection set of multiple vocabularies [139]. It models the correlation problem in a probabilistic view and estimates a joint similarity on both the image and feature levels for the indexed features in the intersection set.
The vector quantization of local descriptors is closely related to approximate nearest neighbor search [58]. In the literature, there are many hashing algorithms for approximate nearest neighbor (ANN) search, such as LSH [140] [141], multi-probe LSH [142], kernelized LSH [56], the semi-supervised hashing method (SSH) [143], spectral hashing [57], min-Hashing [16], iterative quantization [144], random grids [145], bucket distance hashing (BDH) [146], query-driven iterated neighborhood graph search [147], and linear distance preserving hashing [148]. These hashing methods, however, are mostly applied to global image features such as GIST or BoW features at the image level, or to feature retrieval only at the local feature level. There is little work dedicated to image-level search based on local feature hashing [22]. The major concern with those hashing methods is that multiple hashing tables are usually involved and each feature needs to be indexed multiple times, which casts a heavy memory burden. Besides, in hashing methods such as LSH [141], multi-probe LSH [142], and kernelized LSH [56], the original database feature vectors need to be kept in memory to compute the exact distance to the query feature, which is infeasible in the scenario of large-scale image search with local features. Moreover, approximate nearest neighbor search usually targets identifying the top-k closest data to the query, which ignores the essence of range-based neighbor search in visual feature matching. That is, given a query feature, the number of target data in the database is query-sensitive and determined by the coverage of the range-based neighborhood of the query.
In [58], a novel product quantization is proposed to generate an exponentially large codebook with low cost in memory and time for approximate nearest neighbor search. It decomposes the feature space into a Cartesian product of low-dimensional subspaces and quantizes each subspace individually. The quantization indices of each subspace are represented as a short code, based on which the Euclidean distance between two feature vectors can be efficiently estimated by looking up a pre-computed table. The product quantization, however, suffers from exhaustive search for identifying target features, which is prohibitive in large-scale image search [58]. As a partial solution to this bottleneck, vector quantization by k-means can be involved to narrow the search scope and allow the product quantization to focus on a small fraction of indexed features [58]. In [149], the product quantization is optimized with respect to the vector space decomposition and the quantization codebook with two solutions from the non-parametric and the parametric
perspectives. Zhou et al. formulated the feature matching as an ε-neighborhood problem and approximated it with a dual-resolution quantization scheme for efficient indexing and querying [134]. It performs scalar quantization in coarse and fine resolutions on each dimension of the data, and cascades the quantization results over all dimensions. The cascaded quantization results in the coarse resolution are used to build the index, while the cascaded quantization results in the fine resolution are transformed to a binary signature for matching verification.
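To illustrate the product quantization of [58] discussed above, the following sketch trains an independent sub-codebook per subspace, encodes a vector as one centroid index per subspace, and estimates distances asymmetrically; the sub-codebook sizes and the absence of a precomputed lookup table are simplifying assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def train_pq(data, num_subspaces=8, num_centroids=256, seed=0):
    # Split the feature space into num_subspaces blocks and learn k-means per block.
    sub_dim = data.shape[1] // num_subspaces
    return [KMeans(n_clusters=num_centroids, n_init=1, random_state=seed)
            .fit(data[:, m * sub_dim:(m + 1) * sub_dim]).cluster_centers_
            for m in range(num_subspaces)]

def pq_encode(x, codebooks):
    # Encode a vector as one centroid index per subspace (a short code).
    sub_dim = codebooks[0].shape[1]
    return [int(np.argmin(np.sum((cb - x[m * sub_dim:(m + 1) * sub_dim]) ** 2, axis=1)))
            for m, cb in enumerate(codebooks)]

def adc_distance(query, code, codebooks):
    # Asymmetric distance: the query stays un-quantized, the database vector is
    # reconstructed from its code; real systems precompute these per-subspace terms.
    sub_dim = codebooks[0].shape[1]
    return sum(float(np.sum((query[m * sub_dim:(m + 1) * sub_dim] - codebooks[m][code[m]]) ** 2))
               for m in range(len(codebooks)))
```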
In [150], the high-dimensional SIFT descriptor space is partitioned into regular lattices. Although demonstrated to work well in image classification, in [15], regular lattice quantization is revealed to work much worse than [10] [15] in large-scale image search applications.
# 4.5 Feature Aggregation
When an image is represented by a set of local features, it is necessary to aggregate those local features into a fixed-length vector representation for convenience of similarity comparison between query and database images. Generally, there are three alternatives to achieve this goal. The first one is the classic Bag-of-Visual-Words representation, which quantizes each local feature to the closest visual word of a pre-trained visual codebook. The quantization result of a single local feature can be regarded as a high-dimensional binary vector, where the non-zero dimension corresponds to the quantized visual word. By pooling the quantization results of all local features in an image, we obtain a BoW vector with dimension equal to the visual codebook size. In this scheme, the involved visual codebook is usually very large in size and the generated BoW vector is very sparse, which facilitates the use of inverted file indexing.
The second popular feature aggregation method is the VLAD (vector of locally aggregated descriptors) approach [116], which adopts k-means based vector quantization, accumulates the quantization residues for features quantized to each visual word, and concatenates those accumulated vectors into a single vector representation. With compact size, the VLAD vector inherits some important properties from the SIFT feature, including invariance to translation, rotation, and scaling. In [151], the VLAD approach is improved by a new intra-normalization scheme and multiple spatial VLAD representations. An in-depth analysis of VLAD is conducted in [152]. In [153], an extension of VLAD is proposed with a triangulation embedding scheme and a democratic aggregation technique. Further, Tolias et al. encompassed the VLAD vector with various matching schemes [30]. To reduce the computational complexity of the democratic aggregation scheme, Gao et al. proposed a fast scheme with comparable retrieval accuracy performance [154]. In [155], sparse coding is adopted to encode the local feature descriptors into sparse vectors, which are further aggregated with a max-pooling strategy. Liu et al. proposed a hierarchical scheme to build the VLAD vector with the SIFT feature [156]. By involving a hidden-layer vocabulary, the distribution of the residue vectors to be aggregated becomes much more uniform, leading to better discrimination for the representation.
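A minimal VLAD aggregation sketch is shown below: residuals to the nearest visual word are accumulated per word, concatenated, and then power- and L2-normalized. The normalization choices follow common practice and are assumptions rather than the exact pipeline of [116] or [151].

```python
import numpy as np

def vlad(descriptors, codebook):
    # descriptors: (n, d) local features; codebook: (k, d) visual words.
    k, d = codebook.shape
    nearest = np.argmin(
        ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2), axis=1)
    v = np.zeros((k, d), dtype=np.float64)
    for x, a in zip(descriptors, nearest):
        v[a] += x - codebook[a]                    # accumulate residuals per visual word
    v = v.reshape(-1)
    v = np.sign(v) * np.sqrt(np.abs(v))            # power (signed square-root) normalization
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v             # final L2 normalization
```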
Since the representation is achieved by global aggregation of all local features in an image, the original VLAD vector sacrifices the flexibility to address partial occlusion and background clutter. To alleviate this problem, Liu et al. [157] grouped local key points by their spatial positions in the image plane and aggregated all local descriptors in each group by the VLAD scheme [116]. As a result, a local aggregation of local features is achieved and promising retrieval accuracy is demonstrated with a tradeoff in memory cost.
Besides the BoW representation and the VLAD, another alternative is the Fisher Vector based representation [117] with the Fisher kernel [158] [159]. As a generative model, given a set of features for an image, the Fisher vector represents them as a fixed-size vector by the gradient of the log-likelihood function with respect to a set of parameter vectors [160]. In [117] [161], a Gaussian Mixture Model (GMM) is adopted as a generative model to aggregate the normalized concatenated gradient vectors of all local descriptors into a uniform Fisher vector with an average pooling scheme. In fact, the Fisher Vector can be regarded as a generalized representation of the BoW representation and VLAD. On one hand, if we keep only the gradient of the log-likelihood function with respect to the weights of the GMM, the Fisher Vector degenerates to a soft version of the BoW vector. On the other hand, if we keep only the gradient of the log-likelihood function with respect to the mean vectors of the GMM, we can derive the VLAD representation [58].

In either the Fisher vector or the VLAD representation, the involved GMM component number or codebook size is relatively small and the obtained aggregated vector is no longer sparse. As a result, it is unsuitable to apply the inverted file indexing scheme to index images based on the aggregated results. To address this dilemma, the aggregated vector is dimensionally reduced and further encoded by product quantization [58] for efficient distance computation.
The above aggregation schemes are based on local hand-crafted feature, such as SIFT feature. Intuitively, such schemes can be directly leveraged to local deep features. Following this idea, Gong et al. [162] extract local CNN features from the local patches sampled regularly at mul- tiple scale levels and pool the CNN features in each scale level with the VLAD scheme [37]. In [163], Babenko et al. interpret the activations from the last convolutional layers of CNNs as local deep features. They reveal that the individual similarity of local deep feature is very discriminative and the simple aggregation with sum pooling over local deep feature yields the best performance.
# 5 DATABASE INDEXING

Image indexing refers to a database organizing structure that assists efficient retrieval of the target images. Since the response time is a key issue in retrieval, the significance of database indexing is becoming increasingly evident as the scale of image databases on the Web explosively grows. Generally, in CBIR, two kinds of indexing techniques are popularly adopted, i.e., inverted file indexing and hashing based indexing. In the following, we will discuss related retrieval algorithms in each category, respectively.
# 5.1 Inverted File Indexing
Inspired by the success of text search engines, inverted file indexing [164] has been successfully used for large-scale image search [9] [11] [18] [14] [10] [12] [17] [165].
In essence, the inverted file structure is a compact column-wise representation of a sparse matrix, where the row and the column denote image and visual word, respectively. In on-line retrieval, only those images sharing common visual words with the query image need to be checked. Therefore, the number of candidate images to be compared is greatly reduced, achieving an efficient response.
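A toy inverted file sketch follows: each visual word maps to the list of images containing it, so only images sharing at least one visual word with the query are scored. The scoring rule (squared IDF times the term frequencies, loosely following the BOF inner product of [35]) is a simplifying assumption.

```python
from collections import defaultdict

def build_inverted_index(database_bows):
    # database_bows: {image_id: {word_id: term_frequency}}
    index = defaultdict(list)
    for image_id, bow in database_bows.items():
        for word_id, tf in bow.items():
            index[word_id].append((image_id, tf))   # one entry per (word, image)
    return index

def query_index(index, query_bow, idf):
    # Accumulate votes only from images sharing visual words with the query.
    scores = defaultdict(float)
    for word_id, q_tf in query_bow.items():
        for image_id, tf in index.get(word_id, []):
            scores[image_id] += q_tf * tf * idf.get(word_id, 1.0) ** 2
    return sorted(scores.items(), key=lambda kv: -kv[1])
```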
In the inverted file structure, each visual word is followed by an inverted file list of entries. Each entry stores the ID of the image where the visual word appears, and some other clues for verification or similarity measurement. For instance, Hamming Embedding [12] generates a 64-bit Hamming code for each feature to verify the descriptor matching. The geometric clues, such as feature position, scale, and orientation, are also stored in the inverted file list for geometric consistency verification [11] [18] [12] [13]. In [17], Wu et al. recorded the feature orders in the horizontal and vertical directions for each bundled feature located in an MSER region. In [123], three spatial statistics, including descriptor density, mean relative log scale, and mean orientation difference, are calculated for each feature and stored in the inverted list after quantization. Zheng et al. modeled the correlation between multiple features with a multi-IDF scheme and coupled the binary signatures of those features into the inverted file to enhance the quality of visual matching [166].

Following the general idea of the inverted file structure, many variants have been proposed. In [42], to adapt to the inverted index structure for sketch-based retrieval, the edge pixels are regularly quantized in the position channel and the orientation channel, and each entry in the edgel dictionary is followed by an inverted list of related images. In [68], Zheng et al. proposed a coupled Multi-Index (c-MI) framework to fuse complementary features at the indexing level. Each dimension of c-MI corresponds to one kind of feature, and the retrieval process votes for images similar in both the SIFT and the color attribute [85] feature spaces. In [70], the image database is cross-indexed in both the binary SIFT space and the original SIFT space. With such a cross-indexing structure, a new searching strategy is designed to find target data for effective feature quantization.

Some methods try to embed semantics into the index structure. In [167], Zhang et al. proposed a new indexing structure by decomposing a document-like representation of an image into two components, one for dimension reduction and the other for residual information preservation. The decomposition is achieved by either a graphical model or a matrix factorization approach. Then, the similarity between images is transferred to measuring the similarities of their components. In [89], Zhang et al. proposed a semantic-aware co-indexing scheme to jointly embed two strong cues, i.e., local SIFT features and semantic attributes, into the inverted indexes. It exploits 1000 semantic attributes to filter out isolated images and insert semantically similar images into the initial inverted index set built based on local SIFT features. As a result, the discriminative capability of the indexed features is significantly enhanced.
To adapt the product quantization [58] to the inverted index idea, the inverted multi-index is proposed to generalize the inverted index by replacing the standard quantization
within inverted indices with product quantization, so as to speed up the approximate nearest neighbor search.
To improve the recall rate of inverted indexing algorithms, the database images are indexed multiple times with multiple quantizers, such as randomized k-d trees [168] [66]. In [137], a joint inverted indexing algorithm is proposed, which jointly optimizes all codewords in all quantizers and demonstrates considerable improvement over methods with multiple independent quantizers. In [23], this goal is achieved by augmenting the features of the database images which are estimated to be visible in a homography in the augmented images.

To speed up the online retrieval process, Zheng et al. proposed a novel Q-Index structure based on the inverted index organization [169]. It defines an impact score for each indexed local SIFT feature based on TF-IDF, scale, saliency, and quantization ambiguity. Then, based on the impact score, it introduces two complementary strategies, i.e., query pruning and early termination, with the former discarding less important features in the query and the latter partially visiting the index lists containing the most important indexed features. The proposed algorithm demonstrates significant speed-up for online querying with competitive retrieval accuracy. In [170], Ji et al. considered the scenario of parallelized image retrieval and proposed to distribute the visual indexing structure over multiple servers. To reduce the search latency across servers, it formulates the index distribution problem as a learning problem by maximizing the uniformity of assigning the words of a given query to multiple servers.
# 5.2 Hashing Based Indexing
When the image representation, for instance the GIST feature or the VLAD feature, is a dense vector with the majority of the coefficients being non-zero, it is unsuitable to directly apply the inverted file structure for indexing. To achieve efficient retrieval of relevant results, hashing techniques are popularly adopted [171] [172] [173] [174] [175]. The most representative hashing scheme is locality sensitive hashing (LSH) [176], which partitions the feature space with multiple hash functions of random projections, with the intuition that for objects which are close to each other, the collision probability is much higher than for those which are far away. Given a query, some candidates are first retrieved based on hashing collision and re-ranked based on the exact distance from the query. In [56], LSH is generalized to accommodate arbitrary kernel functions, with sub-linear time approximate similarity search permitted. The potential concern with those hashing schemes is that, since the raw database representation vectors should be stored in memory for the reranking stage, they are not well scalable to large-scale image databases. In [177], a feature map is proposed by integrating appearance and global geometry, which is further hashed for indexing. This scheme, however, suffers expensive memory cost which is quadratic in the number of local features, which limits its scalability towards large-scale image retrieval. To address this drawback, an extension is made with a feature selection model to replace the hashing approach [178].
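The sketch below shows the basic random-projection (hyperplane) LSH idea for dense global features such as GIST or VLAD: several hash tables each bucket vectors by the sign pattern of random projections, and candidates gathered by bucket collision would then be re-ranked by exact distance. The table and bit counts are arbitrary assumptions.

```python
import numpy as np

class RandomProjectionLSH:
    def __init__(self, dim, num_bits=16, num_tables=4, seed=0):
        rng = np.random.default_rng(seed)
        # One set of random hyperplanes per hash table.
        self.planes = [rng.normal(size=(num_bits, dim)) for _ in range(num_tables)]
        self.tables = [dict() for _ in range(num_tables)]

    def _key(self, planes, x):
        # The bucket key is the sign pattern of the projections.
        return tuple((planes @ x > 0).astype(np.uint8))

    def add(self, item_id, x):
        for planes, table in zip(self.planes, self.tables):
            table.setdefault(self._key(planes, x), []).append(item_id)

    def candidates(self, x):
        # Union of colliding buckets over all tables; exact re-ranking follows outside.
        found = set()
        for planes, table in zip(self.planes, self.tables):
            found.update(table.get(self._key(planes, x), []))
        return found
```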
With the inverted index structure, the memory cost is proportional to the amount of non-zero elements in
the representation vector. To further reduce such memory overhead, Jegou et al. proposed to approximate the original visual word occurrence vector by projecting it onto a set of pre-defined sparse projection functions, generating multiple min-BOF descriptors [179]. Those min-BOF descriptors are further quantized for indexing. With a similar attempt, in [16] [180], min-Hash is proposed to describe images by mapping the visual word occurrence vector to a low-dimensional representation with a group of min-hash functions and to define image similarity as the visual word set overlap. Consequently, only a small constant amount of data per image needs to be stored. The potential concern with min-hashing [16] [180] and its variant [126] is that although high retrieval precision can be achieved, the retrieval recall performance may be limited unless many more hashing tables are involved, which, however, imposes a severe memory burden.
# 6 IMAGE SCORING
In multimedia retrieval, the target results in the indexed image database are assigned a relevance score for ranking and then returned to users. The relevance score can be defined either by measuring the distance between the aggregated feature vectors of the image representations or from the perspective of voting from relevant visual feature matches.
# 6.1 Distance Based Scoring
With feature aggregation, an image is represented as a fixed-size vector. The content relevance between images can be measured based on the Lp-norm distance between their feature aggregation vectors, as shown in Eq. 4.
D(I_q, I_m) = \left( \sum_{i=1}^{N} |q_i - m_i|^p \right)^{\frac{1}{p}} \qquad (4)

where the feature aggregation vectors of images I_q and I_m are denoted as [q_1, q_2, · · · , q_N] and [m_1, m_2, · · · , m_N], respectively, and N denotes the vector dimension. In [10], it is revealed that the L1-norm yields better retrieval accuracy than the L2-norm with the BoW model. Lin et al. extended the above feature distance to measure partial similarity between images with an optimization scheme [181].
When the BoW model is adopted for image representation, the feature aggregation vector is essentially a weighted visual word histogram obtained based on the feature quantization results. To distinguish the significance of visual words in different images, term frequency (TF) and inverted document/image frequency (IDF) are widely applied in many existing state-of-the-art algorithms [10] [12] [9] [15] [17]. Generally, the visual word vector weighted by TF and IDF is Lp-normalized for later distance computation. When the codebook size is much larger than the local feature amount in images, the aggregated feature vector of an image is very sparse and we only need to check those visual words appearing in both images, as illustrated in Eq. 6 [10], which is very efficient in practical implementation.
D(I_q, I_m) = \sum_{i=1}^{N} |q_i - m_i|^p \qquad (5)
= 2 + \sum_{i \,|\, q_i \neq 0, \, m_i \neq 0} \left( |q_i - m_i|^p - q_i^p - m_i^p \right) \qquad (6)
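For Lp-normalized sparse BoW vectors (so that the p-th powers of each vector sum to one), Eq. 6 only requires visiting the visual words present in both images. A minimal sketch for the case p = 1 on dictionary-encoded vectors is given below; the general p simply replaces the absolute differences and weights by their p-th powers.

```python
def sparse_l1_distance(q, m):
    # q and m are sparse BoW vectors as {word_id: weight} dicts, each L1-normalized.
    # Implements Eq. 6 with p = 1: start from 2 and correct only on shared words.
    dist = 2.0
    for word_id, q_w in q.items():
        m_w = m.get(word_id)
        if m_w is not None:
            dist += abs(q_w - m_w) - q_w - m_w
    return dist
```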
However, the dissimilarity measure by the Lp-distance is not optimal. As revealed in [182], there exists the neighborhood reversibility issue, which means that an image is usually not the k-nearest neighbor of its k-nearest neighbor images. Such an issue causes the problem that some images are frequently returned while others are rarely returned when submitting query images. To address this problem, Jegou et al. proposed a novel contextual dissimilarity measure to refine the Euclidean distance based measure [182]. It modifies the neighborhood structure in the BoW space by iteratively estimating distance update terms in the spirit of Sinkhorn's scaling algorithm. Alternatively, in [183], a probabilistic framework is proposed to model the feature-to-feature similarity measure and a query adaptive similarity is derived. Different from the above approaches, in [184], the similarity metric is implicitly learnt with diffusion processes by exploring the affinity graphs to capture the intrinsic manifold of database images.

In [138], Jegou et al. investigated the phenomena of co-missing and co-occurrence in the regular BoW vector representation. The co-missing phenomenon denotes a negative evidence, i.e., a visual word is jointly missing from two BoW vectors. To include this under-estimated evidence for similarity measurement refinement, vectors of images are centered by mean subtraction [138]. On the other hand, the co-occurrence of visual words across BoW vectors will cause over-counting of some visual patterns. To limit this impact, a whitening operation is introduced to the BoW vector to generate a new representation [138]. Such preprocessing also applies to the VLAD vector [116]. Considerable accuracy gain has been demonstrated with the above operations.
# 6.2 Voting Based Scoring
In local feature based image retrieval, the image similarity is intrinsically determined by the feature matches between images. Therefore, it is natural to derive the image similarity score by aggregating votes from the matched features. In this way, the similarity score is not necessarily normalized, which is acceptable considering the nature of visual ranking in image retrieval.
In [13], the relevance score is simply defined by counting how many pairs of local features are matched across two images. In [35], Jegou et al. formulated the scoring function as a cumulation of squared TF-IDF weights on shared visual words, which is essentially a BOF (bag of features) inner product [35]. In [17], the image similarity is defined as the sum of the TF-IDF scores [20], which is further enhanced with a weighting term obtained by matching bundled feature sets. The weighting term consists of a membership term and a geometric term. The former term is defined as the number of shared visual words between two bundled features, while the latter is formulated using relative ordering to penalize geometric inconsistency of the matching between two bundled features. In [185] [186], Zheng et al. proposed a novel Lp-norm IDF to extend the classic IDF weighting scheme.
The context clues in the descriptor space and the spatial domain are important contributors to the similarity score when comparing images. In [123], a contextual weighting scheme is introduced to enhance the original IDF-based voting so as to improve the classic vocabulary tree approach. Two kinds of weighting schemes, i.e., descriptor contextual weighting (DCW) and spatial contextual weighting, are formulated to multiply the basic IDF weight as a new weighting scheme for image scoring. In [187], Shen et al. proposed a spatially-constrained similarity measure based on a certain transformation to formulate the voting score. The transformation space is discretized and a voting map is generated based on the relative locations of matched features to determine the optimal transformation.
In [179], each indexed feature is embedded with a binary signature and the image distance is defined as a summation of the Hamming distances between matched features, where the distance for an unobserved match is set as the statistical expectation of the distance. A similar scoring scheme for the unobserved match is also adopted by Liu et al. [157]. In [63], to tolerate the correspondences of multiple visual objects with different transformations, local similarity of deformations is derived from the peak value in the histogram of pairwise geometric consistency [188]. This similarity score is used as a weighting term for the general voting scores from local correspondences.

In image retrieval with visual word representation, similar to text-based information retrieval [189], there is a phenomenon of visual word burstiness, i.e., some visual element appears much more frequently in an image than the statistical expectation, which undermines the visual similarity measure. To address this problem, Jegou et al. proposed three strategies to penalize the voting scores from the bursting visual words by removing multiple local matches and weakening the influence of intra- and inter-image bursts [190] [191].
# 7 SEARCH RERANKING
The initially returned results can be refined by exploring the visual context [193] or enhancing the original query. Geometric verification [11] [18] [12] [13] [126] [194], query expansion [14] [195], and retrieval fusion [24] are three of the most successful post-processing techniques to boost the accuracy of large-scale image search. In the following, we will review the related literature in each category.
# 7.1 Geometric Context Verification
In image retrieval with local invariant features, the feature correspondences between the query and database images are built based on the proximity of local features in the descriptor space. As a popular criterion, a tentative correspondence is built if the corresponding two local features are quantized to the same visual word of a pre-trained visual vocabulary. However, due to the ambiguity of local descriptors and the quantization loss, false correspondences of irrelevant visual
content are inevitably incurred, which confuse the similarity measurement for images and degrade the retrieval accuracy. Note that, besides the descriptor, local invariant features are characterised by other geometric context, such as the location of key points in the image plane, orientation, scale, and spatial co-occurrences with other local features. Such geometric context is an important clue to suppress or exclude those false matches.
Generally, among the inliers in the correspondence set, there is an underlying transformation model. If the model is uncovered, we can easily distinguish the inliers from the outliers. To model the transformation of a visual object or scene across images, an affine transformation model with six parameters can be used, which estimates the rotation, scaling, translation, and perspective change in a single homography [11]. For some difficult cases, there may exist multiple homographies, which makes the model estimation problem much more challenging.

Some approaches estimate the transformation model in an explicit way to verify the correspondences. Those methods are either based on the RANSAC-like idea [11] [8] [196] [63] or follow the Hough voting strategy [8] [197]. The key idea of RANSAC [198] is to generate hypotheses on random sets of correspondences and identify a geometric model with the maximum inliers. Statistically speaking, the genuine model can be recovered with a sufficient number of correspondence samplings and model evaluations. However, when the rate of inliers is small, the expected number of correspondence samplings is large, which incurs high computational complexity. In [11], by adopting the region shape of the local feature, a hypothesis is generated with a single correspondence, which makes it feasible to enumerate all hypotheses and significantly reduces the computational cost compared with RANSAC. There are two issues with the RANSAC based algorithms. Firstly, it needs a parameter for hypothesis verification, which is usually defined in an ad-hoc way. Secondly, the computational complexity is quadratic with respect to the number of correspondences, which is somewhat expensive.
bin size for the transformation space partition. To address this problem, in [197], motivated by the pyramid matching scheme [200], Tolias et al. proposed a Hough pyramid matching scheme. It approximates affinity by bin size and groups the correspondences based on the affinity in a bottom-up way. Notably, the complexity of this algorithm is linear in the number of correspondences. In [199], the Hough pyramid matching scheme is extended by including the soft assignment for feature quantization on the query image. Different from the above methods, Li et al. proposed a novel pairwise geometric matching method [194] for implicit spatial verification at a significantly reduced computational cost. To reduce the correspondence redundancy, it first builds the initial correspondence set with a one-versus-one matching strategy, which is further refined based on Hough voting in the scaling and rotation transformation space [12]. Based on the reliable correspondence set, a new pairwise weighting method is proposed to measure the matching score between two images.
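To make the voting idea concrete, the following is a minimal sketch of Hough voting on orientation and scale differences in the spirit of [12]; the function name, bin counts, and tolerance are illustrative assumptions rather than the original implementation.

```python
import numpy as np

def filter_matches_by_hough_voting(q_ori, q_scale, d_ori, d_scale,
                                   ori_bins=10, scale_bins=10, tol=1):
    """Keep only the matches whose orientation/scale differences fall near the
    peak bins of two 1-D Hough histograms (weak geometric consistency)."""
    # Orientation difference wrapped to [0, 2*pi); scale difference in log domain.
    d_theta = np.mod(np.asarray(d_ori) - np.asarray(q_ori), 2 * np.pi)
    d_sigma = np.log2(np.asarray(d_scale) / np.asarray(q_scale))

    ori_hist, ori_edges = np.histogram(d_theta, bins=ori_bins, range=(0, 2 * np.pi))
    sca_hist, sca_edges = np.histogram(d_sigma, bins=scale_bins, range=(-4, 4))

    # Bin index of every match and of the histogram peaks.
    ori_idx = np.clip(np.digitize(d_theta, ori_edges) - 1, 0, ori_bins - 1)
    sca_idx = np.clip(np.digitize(d_sigma, sca_edges) - 1, 0, scale_bins - 1)
    ori_peak, sca_peak = np.argmax(ori_hist), np.argmax(sca_hist)

    # Orientation bins wrap around, so measure the circular bin distance.
    ori_dist = np.minimum(np.abs(ori_idx - ori_peak),
                          ori_bins - np.abs(ori_idx - ori_peak))
    keep = (ori_dist <= tol) & (np.abs(sca_idx - sca_peak) <= tol)
    return keep
```

Matches flagged False can then be discarded before computing the image similarity, which is the essential effect of the histogram-peak filtering described above.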
Some other algorithms approach the geometric context verification problem without explicitly handling the transformation model. Sivic et al. adopted the consistency of spatial context in local feature groups to verify correspondences [9]. In [18], a spatial coding scheme is proposed to encode the relative coordinates of matched feature points in the horizontal and vertical directions into two binary maps. Then it recursively removes geometrically inconsistent matches by analyzing those maps. Although the spatial coding map is invariant to image changes in translation and scaling, it cannot handle rotation changes. In [13] [201], Zhou et al. extended the spatial coding by including the characteristic orientation and scale of the SIFT feature and proposed two geometric context coding methods, i.e., geometric square coding and geometric fan coding. The geometric coding algorithms can well handle image changes in translation, rotation, and scaling. In [202], Chu et al. proposed a Combined-Orientation-Position (COP) consistency graph model to measure the relative spatial consistency among the candidate matches of SIFT features with a coarse-to-fine family of evenly sectored polar coordinate systems. Those spatially inconsistent noisy features are effectively identified and rejected by detecting the group of candidate feature matches with the largest average COP consistency.
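As an illustration of how such implicit verification can work, below is a small sketch of building spatial-coding-style binary maps from matched key point coordinates, loosely following the idea in [18]; the interface and the simple per-match violation count are assumptions for exposition, whereas the original scheme removes inconsistent matches recursively.

```python
import numpy as np

def spatial_coding_violations(q_xy, d_xy):
    """q_xy, d_xy: (n, 2) arrays with the coordinates of the n matched key
    points in the query and the database image (row i of both arrays belongs
    to the same correspondence)."""
    def relative_maps(xy):
        x, y = xy[:, 0], xy[:, 1]
        xmap = x[:, None] < x[None, :]   # xmap[i, j]: point i is left of point j
        ymap = y[:, None] < y[None, :]   # ymap[i, j]: point i is below point j
        return xmap, ymap

    qx, qy = relative_maps(np.asarray(q_xy, dtype=float))
    dx, dy = relative_maps(np.asarray(d_xy, dtype=float))

    # Entries where the two images disagree on the relative layout; a large
    # row sum marks a match that is spatially inconsistent with the others.
    return (qx ^ dx).sum(axis=1) + (qy ^ dy).sum(axis=1)
```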
# 7.2 Query Expansion
Query expansion, leveraged from text retrieval, reissues the initially highly-ranked results to generate new queries. Some relevant features, which are not present in the original query, can be used to enrich the original query to further improve the recall performance. Several expansion strategies, such as average query expansion, transitive closure expansion, recursive average query expansion, intra-expansion, and inter-expansion, etc., have been discussed in [14] [195]. In [23], a discriminative query expansion algorithm is proposed. It takes spatially verified images as positive data and images with low tf-idf scores as the negative training data. Then, a classifier is learnt on-the-fly and images are sorted by their signed distances from the decision boundary.
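As a concrete illustration, the following sketch shows average query expansion on L2-normalized global image vectors; the function name and the use of cosine similarity are assumptions, and in practice the top-ranked results would first be spatially verified as described above.

```python
import numpy as np

def average_query_expansion(query_vec, db_vecs, top_k=10):
    """Run an initial search, average the query with its top_k results, and
    reissue the expanded query; returns the new similarity scores."""
    def l2norm(v):
        return v / (np.linalg.norm(v, axis=-1, keepdims=True) + 1e-12)

    db = l2norm(np.asarray(db_vecs, dtype=float))
    q = l2norm(np.asarray(query_vec, dtype=float))

    scores = db @ q                          # initial cosine similarities
    top = np.argsort(-scores)[:top_k]        # indices of the top-ranked images
    expanded = l2norm(q + db[top].sum(axis=0))
    return db @ expanded                     # scores of the reissued query
```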
In [203], Xie et al. constructed a sparse graph by connecting potentially relevant images offline and adopted a query-dependent algorithm, i.e., HITS [204], to rerank images based on affinity propagation. Further, Xie et al. formulated the search process with a heterogeneous graph model and proposed two graph-based re-ranking algorithms to improve the search precision and recall, respectively [205]. It first incrementally identifies the most reliable images from the database to expand the query so as to boost the recall. After that, an image-feature voting scheme is used to iteratively update the scores of images and features to re-rank images. In [206], a contextual query expansion scheme is proposed to explore the common visual patterns. The contextual query expansion is performed at both the visual word level and the image level.
Relevance feedback [1] has been demonstrated to be a successful search re-ranking technique; it was well studied before and has continued to attract attention in recent years [207] [208] [209] [210] [211] [212]. In relevance feedback, the key idea is to learn a query-specific similarity metric based on the relevant and irrelevant examples indicated by users. Some discriminative models are learned with SVM [207] [208] or boosting schemes [213]. Considering that users are usually reluctant or impatient to specify positive or negative images, user click log information can be collected as feedback to implicitly improve the retrieval system [31] [214]. For more discussion on relevance feedback, we refer readers to [215] [216] for a comprehensive survey.
# 7.3 Retrieval Fusion
An image can be represented by different features, based on which different methods can be designed for retrieval. If the retrieval results of different methods are complementary to each other, they can be fused to obtain better results. Most approaches conduct retrieval fusion at the rank level. Fagin et al. proposed a rank aggregation algorithm to combine the image ranking lists of multiple independent retrieval methods or "voters" [217]. In [24], the retrieval fusion is formulated as a graph-based ranking problem. A weighted undirected graph is built based on the retrieval results of one method, and the graphs corresponding to multiple retrieval methods are fused into a single graph, based on which link analysis [218] or weighted density maximization is conducted to identify the relevance scores and rank the retrieval results. In [219], Ye et al. proposed a novel rank minimization method to fuse the confidence scores of multiple different models. It first constructs a comparative relationship matrix based on the predicted confidence scores for each model. With the assumption that the relative score relations are consistent across different models with some sparse deviations, it formulates the score fusion problem as seeking a shared rank-2 matrix and derives a robust score vector.
Different from the above fusion methods, Zheng et al. approached the retrieval fusion at the score level [103]. Motivated by the shape differences in the ranked score curves between good and bad representation features, it normalizes the score curves by reference curves trained on irrelevant data and derives an effectiveness score based
on the area under the normalized score curve. Then, the query similarity measurement is adaptively formulated in a product manner over the feature scores weighted by the effectiveness score.
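A minimal sketch of such score-level fusion is given below, assuming each feature returns a ranked score list over the database; the min-max normalization and the weighted-product form are simplifications of the reference-curve normalization and effectiveness weighting used in [103].

```python
import numpy as np

def fuse_scores(score_lists, weights=None):
    """score_lists: one array of database scores per feature/method.
    Each score curve is min-max normalized and the curves are combined with a
    weighted product, so a feature judged more effective contributes more."""
    n = len(score_lists)
    weights = weights if weights is not None else [1.0 / n] * n
    fused = None
    for scores, w in zip(score_lists, weights):
        s = np.asarray(scores, dtype=float)
        s = (s - s.min()) / (s.max() - s.min() + 1e-12)   # normalize to [0, 1]
        fused = s ** w if fused is None else fused * (s ** w)
    return fused
```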
# 8 DATASET AND PERFORMANCE EVALUATION
To quantitatively demonstrate the effectiveness and efficiency of various image retrieval algorithms, it is indispensable to collect some benchmark datasets and define the evaluation metrics. In this section, we discuss the recent ground-truth datasets and distractor datasets used in experimental studies for image retrieval. Besides, we introduce the key evaluation indicators in CBIR, including accuracy, efficiency, and memory cost.
# 8.1 Recent Dataset for CBIR
Intuitively, a ground-truth dataset should be sufficiently large so as to well demonstrate the scalability of image retrieval algorithms. However, considering the tedious labor in dataset collection, the existing ground-truth datasets are relatively small, but are mixed with random million-scale distractor databases for evaluation of scalability. The existing ground-truth datasets target particular object/scene retrieval or partial-duplicate Web image retrieval. Generally, the ground-truth images contain a specific object or scene and may undergo various changes and be taken under different views or changes in illumination, scale, rotation, partial occlusion, compression rate, etc. Typical ground-truth datasets for this task include the UKBench dataset [10], the Oxford Building dataset [11], and the Holidays dataset [12], etc. MIR Flickr-1M and Flickr1M are two different million-scale databases which are usually used as distractors to evaluate the scalability of image retrieval algorithms. For convenience of comparison and reference, we list the general information of those recent datasets popularly used in CBIR in Table 1. Some sample images from those datasets are shown in Fig. 3.
UKBench dataset It contains 10,200 images from 2,550 categories9. In each category, there are four images taken of the same scene or object from different views or under different illumination conditions. All the 10,200 images are taken as queries and their retrieval performances are averaged.
Holidays dataset There are 1,491 images from 500 groups in the Holidays dataset10. Images in each group are taken of a scene or an object from various viewpoints. The first image in each group is selected as the query for evaluation.

Oxford Building dataset (Oxford-5K) The Oxford Buildings Dataset11 consists of 5,062 images collected from Flickr12 by searching for particular Oxford landmarks. The collection has been manually annotated to generate a comprehensive ground truth for 11 different landmarks, each represented by 5 possible queries. This gives a set of 55 queries over which an object retrieval system can be evaluated. Some junk images are mixed in as distractors.
9. http://www.vis.uky.edu/â¼stewe/ukbench/ 10. http://lear.inrialpes.fr/people/jegou/data.php 11. http://www.robots.ox.ac.uk/â¼vgg/data/oxbuildings/ 12. http://www.ï¬ickr.com/
TABLE 1 General information of the popular retrieval datasets in CBIR. The âmixedâ database type denotes that the corresponding dataset is a ground truth dataset mixed with distractor images.
| Database Name | Database Type | Database Size | Query Number | Category Number | Resolution |
|---|---|---|---|---|---|
| UKBench | Ground Truth | 10,200 | 10,200 | 2,550 | 640 × 480 |
| Holidays | Ground Truth | 1,491 | 500 | 500 | 1024 × 768 |
| Oxford-5K | Mixed | 6,053 | 55 | 11 | 1024 × 768 |
| Paris | Mixed | 6,412 | 500 | 12 | 1024 × 768 |
| DupImage | Ground Truth | 1,104 | 108 | 33 | 460 × 350 (average) |
| FlickrLogos-32 | Mixed | 8,240 | 500 | 32 | 1024 × 768 |
| INSTRE | Ground Truth | 28,543 | N/A | 200 | 1000 × 720 (average) |
| ZuBuD | Ground Truth | 1,005 | 115 | 200 | 320 × 240 |
| SMVS | Ground Truth | 1,200 | 3,300 | 1,200 | 640 × 480 |
| MIR Flickr-1M | Distractor | 1,000,000 | N/A | N/A | 500 × 500 |
| Flickr1M | Distractor | 1,000,000 | N/A | N/A | N/A |
Paris dataset In the Paris dataset13, there are 6,412 images, which are collected from Flickr by searching for 12 text queries of particular Paris landmarks. For this dataset, 500 query images are used for evaluation.
DupImage dataset This dataset contains 1,104 images from 33 groups14. Each group corresponds to a logo, a painting, or an artwork, such as KFC, American Gothic Painting, Mona Lisa, etc. 108 representative query images are selected from those groups for evaluation.
FlickrLogos-32 dataset This dataset15 contains logo images of 32 different brands which are downloaded from Flickr. All logo images in this dataset have an approximately planar structure. The dataset is partitioned into three subsets for evaluation, i.e., a training set, a validation set, and a query set [220]. Of the 8,240 images in the dataset, 6,000 images contain no logos and serve as distractors.
INSTRE As an instance-level benchmark dataset, the INSTRE dataset16 contains two subsets, i.e., INSTRE-S and INSTRE-M [221]. In the former subset, there are 23,070 images, each with a single label from 200 classes. The latter subset contains 5,473 images and each image contains two instances from 100 object categories.
ZuBuD dataset The basic dataset contains 1,005 images of 201 buildings in Zurich, with 5 views for each building17. Besides, there are an additional 115 query images which are not included in the basic dataset. The resolution of those images is uniformly 320 × 240.
Stanford Mobile Visual Search (SMVS) Dataset This dataset18 is targeted at mobile visual search and contains images taken by camera phones of products, CDs, books, outdoor landmarks, business cards, text documents, museum paintings and video clips. It is characterized by rigid objects, widely varying lighting conditions, perspective distortion, foreground and background clutter, and realistic ground-truth reference data [222]. In the dataset, there are 1,200 distinct categories. For each category, one high-quality reference image is collected for evaluation. There are 3,300 query images in total, which are collected from heterogeneous low-end and high-end camera phones.

MIR Flickr-1M This is a distractor dataset19, with one million images randomly downloaded from Flickr and resized to be no larger than 500 by 500.

Flickr1M is another distractor database containing SIFT features20 of one million images arbitrarily retrieved from Flickr. The original images in this database are not available.
13. http://www.robots.ox.ac.uk/â¼vgg/data/parisbuildings/ 14. http://pan.baidu.com/s/1jGETFUm 15. http://www.multimedia-computing.de/ï¬ickrlogos/ 16. http://vipl.ict.ac.cn/isia/instre/ 17. http://www.vision.ee.ethz.ch/showroom/zubud/index.en.html 18. http://purl.stanford.edu/rb470rw0983
# 8.2 Performance Evaluation for CBIR

In the design of a multimedia content-based retrieval system, there are three key indicators which should be carefully considered: accuracy, efficiency, and memory cost. Usually, a retrieval method contributes to improving at least one of those indicators with little sacrifice in the other indicators.
Accuracy To measure the retrieval quality quantitatively, the database images are categorized into different relevance levels and the accuracy score is summarized based on the rank order of the database images. For different relevance levels, there are different accuracy metrics. When there are only two relevance levels, i.e., relevant and irrelevant, average precision (AP) is widely used to evaluate the retrieval quality of a single query's retrieval results. AP takes into consideration both precision and recall. Precision denotes the fraction of retrieved (top k) images that are relevant, while recall denotes the fraction of relevant images that are retrieved (in the top k returned results). Generally, for a retrieval system, precision decreases as either the number of images retrieved increases or recall grows. AP averages the precision values at the rank positions where a relevant image was retrieved, as defined in Eq. 7. To summarize the retrieval quality over multiple query images, the mean average precision (mAP) is usually adopted, which averages the average precision over all queries.
AP = \frac{\sum_{k=1}^{n} P(k) \cdot rel(k)}{R}, \quad (7)
where R denotes the number of relevant results for the current query image, P(k) denotes the precision of the top k retrieval results, rel(k) is a binary indicator function equaling 1 when the k-th retrieved result is relevant to the current query image and 0 otherwise, and n denotes the total number of retrieved results.
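The following snippet is a direct implementation of Eq. 7 and of mAP over multiple queries, assuming each query's results are given as a binary relevance list in rank order.

```python
import numpy as np

def average_precision(relevance, R=None):
    """relevance: binary list over the ranked results (1 = relevant), in rank order."""
    relevance = np.asarray(relevance, dtype=float)
    R = int(relevance.sum()) if R is None else R
    if R == 0:
        return 0.0
    # P(k) for every rank k, then keep only the ranks where a relevant image appears.
    precisions = np.cumsum(relevance) / (np.arange(len(relevance)) + 1)
    return float((precisions * relevance).sum() / R)

def mean_average_precision(relevance_per_query):
    return float(np.mean([average_precision(r) for r in relevance_per_query]))

# Example: relevant items retrieved at ranks 1, 3 and 6, with R = 3.
# AP = (1/1 + 2/3 + 3/6) / 3, approximately 0.722.
print(average_precision([1, 0, 1, 0, 0, 1]))
```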
19. http://medialab.liacs.nl/mirï¬ickr/mirï¬ickr1m/ 20. http://bigimbaz.inrialpes.fr/herve/siftgeo1M/
Fig. 3. Sample images of the existing datasets. First row: UKBench dataset; second row: Holidays dataset; third row: Oxford Building dataset; fourth row: DupImage dataset; fifth row: INSTRE dataset; sixth row: ZuBuD dataset; seventh row: SMVS dataset.
When there are multiple relevance levels, we can resort to the normalized discounted cumulative gain (NDCG) metric defined in Eq. 8 to summarize the ranking results.
NDCG = \frac{1}{N} \left( r_1 + \sum_{k=2}^{n} \frac{f(r_k)}{\log_2(k)} \right), \quad (8)
where n denotes the number of retrieved images, r_k denotes the relevance level of the k-th result, f(·) is a function to tune the contribution of different relevance levels, and N denotes the normalization term which ensures that the NDCG score of the ideal retrieval result is 100%. Popular definitions of f(·) include f(x) = x and f(x) = 2^x - 1, with the latter emphasizing the retrieval of highly relevant images.
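A small sketch of Eq. 8 is given below, where the normalization term N is taken as the value obtained on the ideally ordered result list so that a perfect ranking scores 100%; the default gain function f(x) = x and the example values are assumptions for illustration.

```python
import numpy as np

def ndcg(relevances, f=lambda x: x):
    """relevances: graded relevance levels of the retrieved images, in rank order.
    f can be swapped for lambda x: 2.0 ** x - 1.0 to emphasize highly relevant images."""
    r = np.asarray(relevances, dtype=float)

    def dcg(rels):
        # r_1 is counted directly; later positions are discounted by log2(k), as in Eq. 8.
        tail = f(rels[1:]) / np.log2(np.arange(2, len(rels) + 1))
        return rels[0] + tail.sum()

    ideal = np.sort(r)[::-1]     # the same results in the best possible order
    n_term = dcg(ideal)          # normalization term N
    return dcg(r) / n_term if n_term > 0 else 0.0

# Example: graded relevance levels of the top-4 results.
print(ndcg([2, 3, 0, 1]))      # approximately 0.98
```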
Besides the above measures, some simple measures may be adopted for special datasets. In the public UKBench dataset, considering that there are four relevant images for every query, the N-S score, i.e., 4 times the top-4 precision averaged over the dataset, is used to measure the retrieval accuracy [10].
Computational Efficiency The efficiency of a retrieval system involves the time cost in visual vocabulary (codebook) construction, visual feature indexing, and image querying. The first two items are performed off-line, while
the last one is conducted on-line. Both the off-line and on-line processing are expected to be as fast as possible. In particular, the on-line querying is usually expected to respond in real time.
Memory Cost In a multimedia content-based visual retrieval system, the memory cost usually refers to the memory usage in the on-line query stage. Generally, the memory is mainly spent on the quantizer and the index file of the database, which need to be loaded into the main memory for on-line retrieval. Popular quantizers include tree-based structures, such as the hierarchical vocabulary tree, randomized forests, etc., which usually cost a few hundred megabytes of memory for a codebook containing million-scale visual words. In some binary code based quantization methods [36] [72], the quantizer is a simple hash function with negligible memory overhead. For the index file, the memory cost is proportional to the indexed database size. When the database images are represented by local features and each local feature is indexed locally, the size of the index file is proportional to the number of indexed features and the memory cost per indexed feature.
# 9 FUTURE DIRECTIONS

Despite the extensive research efforts in the past decade, there is still sufficient space to further boost content-based visual search. In the following, we discuss several directions for future research, in which new advances are expected in the next decade.
# 9.1 Ground-Truth Dataset Collection
In the multimedia and computer vision fields, ground-truth datasets are motivated by specific tasks. When such datasets are first constructed, they inspire researchers to update the performance records with their best efforts, leading to many classic ideas and algorithms for the research problem. However, as performance on those datasets advances, the reported break-throughs of some algorithms may suffer from over-fitting to the dataset. Meanwhile, with a deeper understanding and clearer definition of the research problem, the limitations of existing datasets are revealed and new datasets are expected. For content-based image retrieval, we also expect better ground-truth datasets to be collected and released. Generally, the new ground-truth datasets shall be specific enough to eliminate ambiguity in the relevance of image content, as in logo datasets. Meanwhile, the scale of the datasets shall be sufficiently large so as to distinguish the problem of CBIR from image classification.
# 9.2 Intention Oriented Query Formation and Selection
The intention gap is the first and greatest challenge in content-based image retrieval. A simple query in the form of an example image, color map, or sketch map is in most cases still insufficient to reflect the user intention, consequently generating unsatisfactory retrieval results. Besides the traditional query formations, assistance from the user in specifying the concrete expectation will greatly alleviate the difficulty of the subsequent image retrieval process. Considering that end-users may be reluctant to be heavily involved in query formation, it is still possible to design convenient query formation interfaces
to reduce the user involvement as much as possible. For instance, it is easy for a user to specify the region of interest in an example image for retrieval, or to indicate whether the expected results are partial duplicates or just similar in spatial color and texture. It is also possible to predict the potential intentions based on the initial query and confirm them with the end-user. In all, rather than passively inferring the intention behind the query, it is beneficial to actively involve the end-user in the retrieval process.
In image retrieval, the search performance is significantly impacted by the quality of the query. How to select a suitable query towards optimal retrieval is a nontrivial issue. The query quality is related to many factors, including resolution, noise pollution, affine distortion, background clutter, etc. In the scenario of mobile search, the query can be improved by guiding the end user to retake better photos. On the server end, automatic retrieval quality assessment methods [223] [224] can be designed to select potential candidates from the high-precision initial retrieval results.
# 9.3 Deep Learning in CBIR
Despite the advances in content-based visual retrieval, there is still a significant gap towards semantic-aware retrieval from visual content. This is essentially due to the fact that current image representation schemes are hand-crafted and insufficient to capture the semantics. Due to the tremendous diversity and quantity of multimedia visual data, most existing methods are unsupervised. To proceed towards semantic-aware retrieval, scalable supervised or semi-supervised learning is promising for learning semantic-aware representations so as to boost the content-based retrieval quality. The success of deep learning in large-scale visual recognition [99] [96] [95] [225] has already demonstrated such potential.
To adapt those existing deep learning techniques to CBIR, there are several non-trivial issues that deserve research efforts. Firstly, the image representation learned with deep learning shall be flexible and robust to various common changes and transformations, such as rotation and scaling. Since existing deep learning relies on the convolution operation with anisotropic filters to convolve images, the resulting feature maps are sensitive to large translation, rotation, and scaling changes. It is still an open problem whether this can be solved by simply including more training samples with diverse transformations. Secondly, since computational efficiency and memory overhead are emphasized in particular in CBIR, it would be beneficial to consider those constraints in the structure design of deep learning networks. For instance, both compact binary semantic hashing codes [59] [65] and very sparse semantic vector representations are desired to represent images, since the former are efficient in both distance computation and memory storage while the latter are well adapted to the inverted index structure.
# 9.4 Unsupervised Database Mining
In traditional content-based image retrieval algorithms and systems, the database images are processed independently without considering their potential relevance context information. This is primarily due to the fact that there is
usually no label information for the database images and the potential category number is unlimited. Those constraints limit the application of sophisticated supervised learning algorithms in CBIR. However, as long as the database is large, it is likely that there exist some subsets of images in which the images are relevant to each other. Therefore, it is feasible to explore the database images with some unsupervised techniques to uncover those sub-sets in the off-line processing stage. If we regard each database image as a node and the relevance level between images as an edge linking them, the whole image database can be represented as a large graph. Then, the sub-set mining problem can be formulated as a sub-graph discovery problem. On the other hand, in practice, new images may be incrementally included into the graph, which poses a challenge to dynamically uncover those sub-graphs on the fly. The mining results in the off-line stage will be beneficial for the on-line query to yield better retrieval results.
# 9.5 Cross-modal Retrieval
In the above discussion of this survey, we focus on the visual content for image retrieval. However, besides the visual features, there are other very useful clues, such as the textual information around images in Web pages, the click logs of users when using the search engines, the speech information in videos, etc. Those multi-modal clues are complementary to each other and can collaboratively identify the visual content of images and videos. Therefore, it would be beneficial to explore cross-modal retrieval and fuse those multi-modal features with different models. With multi-modal representations, there are still many open research topics in terms of collaborative quantization, indexing, search re-ranking, etc.
# 9.6 End-to-End Retrieval Framework
As discussed in the above sections, the retrieval framework involves multiple modules, including feature extraction, codebook learning, feature quantization, image indexing, etc. Those modules are individually designed and independently optimized for the retrieval task. On the other hand, if we investigate the structure of the convolutional neural network (CNN) in deep learning, we can find a very close analogy between the BoW model and the CNN model. The convolutional filters used in the CNN model work in a similar way as the codewords of the codebook in the BoW model. The convolution results between the image patch and the convolution filter are essentially soft quantization results, with the max-pooling operation similar to the local aggregation in the BoW model. As long as the learned feature vector is sparse, we can also adopt the inverted index structure to efficiently index the image database. Different from the BoW model, the above modules in the CNN model are collaboratively optimized for the task of image classification. Based on the above analogy, similarly, we may also resort to an end-to-end paradigm to design a framework that takes images as input and outputs the index-oriented features directly, with the traditional key retrieval-related modules implicitly and collaboratively optimized.
# 9.7 Social Media Mining with CBIR
Different from the traditional unstructured Web media, the emerging social media of recent years have been characterized by community-based personalized content creation, sharing, and interaction. There are many successful prominent platforms of social media, such as Facebook, Twitter, Wikipedia, LinkedIn, Pinterest, etc. Social media is enriched with tremendous information which dynamically reflects the social and cultural background and trends of the community. Besides, it also reveals personal affections and behavior characteristics. As an important medium of user-created content, the visual data can be used as an entry point, with content-based image retrieval techniques, to uncover and understand the underlying community structure. It would be beneficial to understand the behavior of individual users and conduct recommendation of products and services to users. Moreover, it is feasible to analyze the sentiment of crowds for supervision and forewarning.
# 9.8 Open Grand Challenge
Due to the differences in deployment structure and availability of data, research on content-based image retrieval in academia suffers a gap from real applications in industry. To bridge this gap, it is beneficial to initiate some open grand challenges from industry and involve researchers in academia to investigate the key difficulties in real scenarios. In the past five years, there have been a limited number of open grand challenges, such as the Microsoft Image Grand Challenge on Image Retrieval21 and the Alibaba Large-Scale Image Search Challenge22. In the future, we would expect many more such grand challenges. The open grand challenges will not only advance the research progress in academia, but also benefit industry with more and better practical and feasible solutions to real-world challenges.
# 10 CONCLUSIONS
In this paper, we have investigated the advances in content-based image retrieval in recent years. We focus on the five key modules of the general framework, i.e., query formation, image representation, image indexing, retrieval scoring, and search re-ranking. For each component, we have discussed the key problems and categorized a variety of representative strategies and methods. Further, we have summarized eight potential directions that may boost the advance of content-based image retrieval in the near future.
# REFERENCES
[1] Y. Rui, T. S. Huang, M. Ortega, and S. Mehrotra, "Relevance feedback: a power tool for interactive content-based image retrieval," IEEE Transactions on Circuits and Systems for Video Technology, vol. 8, no. 5, pp. 644–655, 1998.
[2] A. Alzubi, A. Amira, and N. Ramzan, "Semantic content-based image retrieval: A comprehensive study," Journal of Visual Communication and Image Representation, vol. 32, pp. 20–54, 2015.
21. http://acmmm13.org/submissions/call-for-multimedia-grand-challenge-solutions/msr-bing-grand-challenge-on-image-retrieval-scientific-track
22. http://tianchi.aliyun.com/competition/introduction.htm?spm=5176.100069.5678.1.SmufkG&raceId=231510&_lang=en_US
[3] X. Li, T. Uricchio, L. Ballan, M. Bertini, C. G. Snoek, and A. D. Bimbo, "Socializing the semantic gap: A comparative survey on image tag assignment, refinement, and retrieval," ACM Computing Surveys (CSUR), vol. 49, no. 1, p. 14, 2016.
[4] Z. Lin, G. Ding, M. Hu, and J. Wang, "Semantics-preserving hashing for cross-view retrieval," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015, pp. 3864–3872.
[5] A. W. Smeulders, M. Worring, S. Santini, A. Gupta, and R. Jain, "Content-based image retrieval at the end of the early years," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 12, pp. 1349–1380, 2000.
[6] M. S. Lew, N. Sebe, C. Djeraba, and R. Jain, "Content-based multimedia information retrieval: State of the art and challenges," ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM), vol. 2, no. 1, pp. 1–19, 2006.
[7] Y. Liu, D. Zhang, G. Lu, and W.-Y. Ma, "A survey of content-based image retrieval with high-level semantics," Pattern Recognition, vol. 40, no. 1, pp. 262–282, 2007.
[8] D. G. Lowe, "Distinctive image features from scale invariant keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.
[9] J. Sivic and A. Zisserman, "Video Google: A text retrieval approach to object matching in videos," in IEEE Conference on Computer Vision and Pattern Recognition, 2003, pp. 1470–1477.
[10] D. Nister and H. Stewenius, "Scalable recognition with a vocabulary tree," in IEEE Conference on Computer Vision and Pattern Recognition, vol. 2, 2006, pp. 2161–2168.
[11] J. Philbin, O. Chum, M. Isard, J. Sivic, and A. Zisserman, "Object retrieval with large vocabularies and fast spatial matching," in IEEE Conference on Computer Vision and Pattern Recognition, 2007, pp. 1–8.
[12] H. Jegou, M. Douze, and C. Schmid, âHamming embedding and weak geometric consistency for large scale image search,â in European Conference on Computer Vision, 2008, pp. 304â317. [13] W. Zhou, H. Li, Y. Lu, and Q. Tian, âLarge scale image search with geometric coding,â in ACM International Conference on Multimedia, 2011, pp. 1349â1352.
[14] O. Chum, J. Philbin, J. Sivic, M. Isard, and A. Zisserman, âTotal recall: Automatic query expansion with a generative feature model for object retrieval,â in International Conference on Computer Vision, 2007, pp. 1â8. J. Philbin, O. Chum, M. Isard, J. Sivic, and A. Zisserman, âLost in quantization: Improving particular object retrieval in large scale image databases,â in IEEE Conference on Computer Vision and Pattern Recognition, 2008, pp. 1â8.
[16] O. Chum, J. Philbin, and A. Zisserman, âNear duplicate image detection: min-hash and tf-idf weighting,â in British Machine Vision Conference, vol. 3, 2008, p. 4.
[17] Z. Wu, Q. Ke, M. Isard, and J. Sun, âBundling features for large scale partial-duplicate web image search,â in IEEE Conference on Computer Vision and Pattern Recognition, 2009, pp. 25â32.
[18] W. Zhou, Y. Lu, H. Li, Y. Song, and Q. Tian, âSpatial coding for large scale partial-duplicate web image search,â in ACM International Conference on Multimedia, 2010, pp. 511â520.
[19] O. Chum, A. Mikulik, M. Perdoch, and J. Matas, âTotal recall II: Query expansion revisited,â in IEEE Conference on Computer Vision and Pattern Recognition, 2011, pp. 889â896.
[20] Y. Zhang, Z. Jia, and T. Chen, âImage retrieval with geometry- preserving visual phrases,â in IEEE Conference on Computer Vision and Pattern Recognition, 2011, pp. 809â816.
[21] X. Zhang, L. Zhang, and H.-Y. Shum, "Qsrank: Query-sensitive hash code ranking for efficient ε-neighbor search," in IEEE Conference on Computer Vision and Pattern Recognition, 2012, pp. 2058–2065.
[22] J. He, J. Feng, X. Liu, T. Cheng, T.-H. Lin, H. Chung, and S.-F. Chang, "Mobile product search with bag of hash bits and boundary reranking," in IEEE Conference on Computer Vision and Pattern Recognition, 2012, pp. 3005–3012.
[23] R. Arandjelovic and A. Zisserman, "Three things everyone should know to improve object retrieval," in IEEE Conference on Computer Vision and Pattern Recognition, 2012, pp. 2911–2918.
[24] S. Zhang, M. Yang, T. Cour, K. Yu, and D. N. Metaxas, "Query specific fusion for image retrieval," in European Conference on Computer Vision (ECCV), 2012.
[25] Q. Tian, S. Zhang, W. Zhou, R. Ji, B. Ni, and N. Sebe, âBuilding descriptive and discriminative visual codebook for large-scale
image applications,â Multimedia Tools and Applications, vol. 51, no. 2, pp. 441â477, 2011.
[26] W. Zhou, H. Li, Y. Lu, and Q. Tian, âLarge scale partial-duplicate image retrieval with bi-space quantization and geometric consis- tency,â in IEEE International Conference Acoustics Speech and Signal Processing, 2010, pp. 2394â2397. S. Zhang, Q. Tian, G. Hua, Q. Huang, and S. Li, âDescriptive visual words and visual phrases for image applications,â in ACM International Conference on Multimedia, 2009, pp. 75â84. S. Zhang, Q. Huang, G. Hua, S. Jiang, W. Gao, and Q. Tian, âBuilding contextual visual vocabulary for large-scale image ap- plications,â in ACM International Conference on Multimedia, 2010, pp. 501â510.
[27]
[28]
[29] W. Zhou, Q. Tian, Y. Lu, L. Yang, and H. Li, âLatent visual context learning for web image applications,â Pattern Recognition, vol. 44, no. 10, pp. 2263â2273, 2011.
[30] G. Tolias, Y. Avrithis, and H. Jgou, âTo aggregate or not to aggre- gate: selective match kernels for image search,â in International Conference on Computer Vision (ICCV), 2013.
[31] L. Zhang and Y. Rui, "Image search - from thousands to billions in 20 years," ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM), vol. 9, no. 1s, p. 36, 2013.
[32] X. Tang, K. Liu, J. Cui, F. Wen, and X. Wang, "Intentsearch: Capturing user intention for one-click internet image search," IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), vol. 34, no. 7, pp. 1342–1353, 2012.
[33] B. Moghaddam, Q. Tian, N. Lesh, C. Shen, and T. S. Huang, âVisualization and user-modeling for browsing personal photo libraries,â International Journal of Computer Vision (IJCV), vol. 56, no. 1-2, pp. 109â130, 2004.
[34] R. Datta, D. Joshi, J. Li, and J. Z. Wang, âImage retrieval: Ideas, inï¬uences, and trends of the new age,â ACM Computing Surveys (CSUR), vol. 40, no. 2, p. 5, 2008.
[35] H. J´egou, M. Douze, and C. Schmid, âImproving bag-of-features for large scale image search,â International Journal of Computer Vision, vol. 87, no. 3, pp. 316â336, 2010.
[36] W. Zhou, Y. Lu, H. Li, and Q. Tian, âScalar quantization for large scale image search,â in ACM International Conference on Multimedia, 2012, pp. 169â178.
[37] Y. Cao, H. Wang, C. Wang, Z. Li, L. Zhang, and L. Zhang, âMindï¬nder: interactive sketch-based image search on millions of images,â in ACM International Conference on Multimedia (MM), 2010, pp. 1605â1608.
[38] C. Xiao, C. Wang, L. Zhang, and L. Zhang, âSketch-based image retrieval via shape words,â in ACM International Conference on Multimedia Retrieval (ICMR). ACM, 2015, pp. 571â574.
[39] P. Sousa and M. J. Fonseca, âSketch-based retrieval of drawings using spatial proximity,â Journal of Visual Languages & Computing, vol. 21, no. 2, pp. 69â80, 2010.
[40] M. J. Fonseca, A. Ferreira, and J. A. Jorge, âSketch-based retrieval of complex drawings using hierarchical topology and geometry,â Computer-Aided Design, vol. 41, no. 12, pp. 1067â1081, 2009. S. Liang and Z. Sun, âSketch retrieval and relevance feedback with biased svm classiï¬cation,â Pattern Recognition Letters, vol. 29, no. 12, pp. 1733â1741, 2008.
[41]
[42] Y. Cao, C. Wang, L. Zhang, and L. Zhang, âEdgel index for large- scale sketch-based image search,â in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2011, pp. 761â768. J. Wang and X.-S. Hua, âInteractive image search by color map,â ACM Transactions on Intelligent Systems and Technology (TIST), vol. 3, no. 1, p. 12, 2011.
[44] H. Xu, J. Wang, X.-S. Hua, and S. Li, âImage search by concept map,â in International ACM SIGIR Conference on Research and Development in Information Retrieval. ACM, 2010, pp. 275â282.
[45] ââ, âInteractive image search by 2d semantic map,â in Interna- tional Conference on World Wide Web (WWW). ACM, 2010, pp. 1321â1324.
[46] T. Lan, W. Yang, Y. Wang, and G. Mori, âImage retrieval with structured object queries using latent ranking svm,â in European Conference on Computer Vision (ECCV). Springer, 2012, pp. 129â 142.
[47] G. Kim, S. Moon, and L. Sigal, âRanking and retrieval of image sequences from multiple paragraph queries,â in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015, pp. 1993â 2001.
[48] C. Wengert, M. Douze, and H. J´egou, âBag-of-colors for improved image search,â in ACM International Conference on Multimedia. ACM, 2011, pp. 1437â1440. J. Xie, Y. Fang, F. Zhu, and E. Wong, âDeepshape: Deep learned shape descriptor for 3d shape matching and retrieval,â in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015, pp. 1275â1283.
[49]
[50] F. Wang, L. Kang, and Y. Li, âSketch-based 3d shape retrieval using convolutional neural networks,â in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015, pp. 1875â 1883. S. Bai, X. Bai, Z. Zhou, Z. Zhang, and L. Jan Latecki, âGift: A real- time and scalable 3d shape search engine,â in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 5023â 5032.
[52] M. Park, J. S. Jin, and L. S. Wilson, âFast content-based image retrieval using quasi-gabor ï¬lter and reduction of image feature dimension,â in IEEE Southwest Symposium on Image Analysis and Interpretation.
[53] X.-Y. Wang, B.-B. Zhang, and H.-Y. Yang, âContent-based image retrieval by integrating color and texture features,â Multimedia Tools and Applications (MTA), vol. 68, no. 3, pp. 545â569, 2014.
[54] B. Wang, Z. Li, M. Li, and W.-Y. Ma, âLarge-scale duplicate detection for web image search,â in IEEE International Conference on Multimedia and Expo (ICME).
[55] C. Siagian and L. Itti, âRapid biologically-inspired scene clas- siï¬cation using features shared with visual attention,â IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), vol. 29, no. 2, pp. 300â312, 2007.
[56] B. Kulis and K. Grauman, âKernelized locality-sensitive hashing for scalable image search,â in International Conference on Computer Vision, 2009, pp. 2130â2137.
[57] Y. Weiss, A. Torralba, and R. Fergus, âSpectral hashing,â in Advances in Neural Information Processing Systems (NIPS), 2009, pp. 1753â1760.
[58] H. J´egou, M. Douze, and C. Schmid, âProduct quantization for nearest neighbor search,â IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 1, pp. 117â128, 2011.
[59] A. Torralba, R. Fergus, and Y. Weiss, âSmall codes and large image databases for recognition,â in IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[60] D. G. Lowe, âObject recognition from local scale-invariant fea- tures,â in IEEE International Conference on Computer Vision, vol. 2, 1999, pp. 1150â1157. J. Matas, O. Chum, M. Urban, and T. Pajdla, âRobust wide- baseline stereo from maximally stable extremal regions,â Image and Vision Computing, vol. 22, no. 10, pp. 761â767, 2004.
[62] K. Mikolajczyk and C. Schmid, âScale & afï¬ne invariant interest point detectors,â International Journal of Computer Vision, vol. 60, no. 1, pp. 63â86, 2004.
[63] H. Xie, K. Gao, Y. Zhang, S. Tang, J. Li, and Y. Liu, âEfï¬cient feature detection and effective post-veriï¬cation for large scale near-duplicate image search,â IEEE Transactions on Multimedia (TMM), vol. 13, no. 6, pp. 1319â1332, 2011.
[64] E. Rosten, R. Porter, and T. Drummond, âFaster and better: A machine learning approach to corner detection,â IEEE Transac- tions on Pattern Analysis and Machine Intelligence, vol. 32, no. 1, pp. 105â119, 2010.
[65] A. Krizhevsky and G. E. Hinton, âUsing very deep autoencoders for content-based image retrieval,â in ESANN. Citeseer, 2011.
[66] Z. Wu, Q. Ke, J. Sun, and H.-Y. Shum, âA multi-sample, multi- tree approach to bag-of-words image representation for image retrieval,â in IEEE International Conference on Computer Vision. IEEE, 2009, pp. 1992â1999.
[67] H. Bay, T. Tuytelaars, and L. Van Gool, âSURF: Speeded up robust features,â in European Conference on Computer Vision, 2006, pp. 404â417.
[68] L. Zheng, S. Wang, Z. Liu, and Q. Tian, âPacking and padding: Coupled multi-index for accurate image retrieval,â in IEEE Con- ference on Computer Vision and Pattern Recognition, 2014.
[69] W. Zhou, H. Li, R. Hong, Y. Lu, and Q. Tian, âBSIFT: towards data-independent codebook for large scale image search,â IEEE Transactions on Image Processing (TIP), vol. 24, no. 3, pp. 967â979, 2015.
[70] Z. Liu, H. Li, L. Zhang, W. Zhou, and Q. Tian, âCross-indexing of binary SIFT codes for large-scale image search,â IEEE Transactions on Image Processing (TIP), 2014.
[71] G. Yu and J.-M. Morel, âAsift: an algorithm for fully afï¬ne invariant comparison,â Image Processing On Line, vol. 2011, 2011. [72] W. Dong, Z. Wang, M. Charikar, and K. Li, âHigh-conï¬dence near-duplicate image detection,â in ACM International Conference on Multimedia Retrieval (ICMR). ACM, 2012, p. 1.
[73] M. Calonder, V. Lepetit, C. Strecha, and P. Fua, âBrief: binary robust independent elementary features,â in European Conference on Computer Vision (ECCV), 2010, pp. 778â792.
[74] E. Rublee, V. Rabaud, K. Konolige, and G. Bradski, âOrb: an efï¬cient alternative to sift or surf,â in International Conference on Computer Vision, 2011, pp. 2564â2571.
[75] A. Alahi, R. Ortiz, and P. Vandergheynst, âFreak: fast retina keypoint,â in IEEE Conference on Computer Vision and Pattern Recognition, 2012, pp. 510â517. S. Leutenegger, M. Chli, and R. Y. Siegwart, âBrisk: binary ro- bust invariant scalable keypoints,â in International Conference on Computer Vision, 2011, pp. 2548â2555. S. Zhang, Q. Tian, Q. Huang, W. Gao, and Y. Rui, âUSB: Ultra- short binary descriptor for fast visual matching and retrieval,â IEEE Transactions on Image Processing (TIP), vol. 23, no. 8, pp. 3671â3683, 2014. S. Madeo and M. Bober, âFast, compact and discriminative: Evaluation of binary descriptors for mobile applications,â IEEE Transactions on Multimedia, 2016. S. Zhang, Q. Tian, K. Lu, Q. Huang, and W. Gao, âEdge-SIFT: Discriminative binary descriptor for scalable partial-duplicate mobile search,â IEEE Transactions on Image Processing, 2013. [80] K. E. Van De Sande, T. Gevers, and C. G. Snoek, âEvaluating color descriptors for object and scene recognition,â IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), vol. 32, no. 9, pp. 1582â1596, 2010.
[77]
[79]
[81] M. Douze, A. Ramisa, and C. Schmid, âCombining attributes and ï¬sher vectors for efï¬cient image retrieval,â in IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2011, pp. 745â752. S. Zhao, H. Yao, Y. Yang, and Y. Zhang, âAffective image retrieval via multi-graph learning,â in ACM International Conference on Multimedia (MM). ACM, 2014, pp. 1025â1028.
[83] R. Tao, A. W. Smeulders, and S.-F. Chang, âAttributes and cate- gories for generic instance search from one example,â in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015, pp. 177â186.
[84] A. Farhadi, I. Endres, D. Hoiem, and D. Forsyth, âDescribing objects by their attributes,â in IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[85] F. S. Khan, R. M. Anwer, J. van de Weijer, A. D. Bagdanov, M. Van- rell, and A. M. Lopez, âColor attributes for object detection,â in IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[86] L. Torresani, M. Szummer, and A. Fitzgibbon, âEfï¬cient object category recognition using classemes,â in European Conference on Computer Vision (ECCV). Springer, 2010, pp. 776â789. J. Deng, A. C. Berg, and L. Fei-Fei, âHierarchical semantic in- dexing for large scale image retrieval,â in IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2011, pp. 785â792. J. Cai, Z.-J. Zha, M. Wang, S. Zhang, and Q. Tian, âAn attribute- assisted reranking model for web image search,â IEEE Transac- tions on Image Processing (TIP), vol. 24, no. 1, pp. 261â272, 2015. S. Zhang, M. Yang, X. Wang, Y. Lin, and Q. Tian, âSemantic-aware co-indexing for image retrieval,â in IEEE International Conference on Computer Vis, 2013. S. Karayev, M. Trentacoste, H. Han, A. Agarwala, T. Darrell, A. Hertzmann, and H. Winnemoeller, âRecognizing image style,â in British Machine Vision Conference (BMVC), 2014.
[87]
[89]
[90]
[91] T. Hofmann, âUnsupervised learning by probabilistic latent se- mantic analysis,â Machine learning, vol. 42, no. 1-2, pp. 177â196, 2001.
[92] D. M. Blei, A. Y. Ng, and M. I. Jordan, âLatent dirichlet alloca- tion,â Journal of Machine Learning Research, vol. 3, pp. 993â1022, 2003.
[93] E. H¨orster, R. Lienhart, and M. Slaney, âImage retrieval on large- scale image databases,â in ACM International Conference on Image and Video Retrieval, 2007, pp. 17â24.
[94] R. Lienhart and M. Slaney, âpLSA on large scale image databases,â in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), vol. 4, 2007, pp. IVâ1217.
[95] K. Simonyan and A. Zisserman, âVery deep convolutional networks for large-scale image recognition,â arXiv preprint arXiv:1409.1556, 2014.
[96] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, âGoing deeper with convolutions,â in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015.
[97] Y. Bengio, "Learning deep architectures for AI," Foundations and Trends in Machine Learning, vol. 2, no. 1, pp. 1–127, 2009.
[98] E. Hörster and R. Lienhart, "Deep networks for image retrieval on large-scale databases," in ACM International Conference on Multimedia. ACM, 2008, pp. 643–646.
[99] A. Krizhevsky, I. Sutskever, and G. E. Hinton, âImagenet classiï¬- cation with deep convolutional neural networks,â in Advances in Neural Information Processing Systems (NIPS), 2012, pp. 1097â1105. [100] A. Sharif Razavian, H. Azizpour, J. Sullivan, and S. Carlsson, âCNN features off-the-shelf: an astounding baseline for recogni- tion,â in IEEE Conference on Computer Vision and Pattern Recogni- tion (CVPR), 2014.
[101] J. Wan, D. Wang, S. C. H. Hoi, P. Wu, J. Zhu, Y. Zhang, and J. Li, âDeep learning for content-based image retrieval: A comprehen- sive study,â in ACM International Conference on Multimedia (MM). ACM, 2014, pp. 157â166.
[102] A. S. Razavian, J. Sullivan, A. Maki, and S. Carlsson, âVisual instance retrieval with deep convolutional networks,â arXiv preprint arXiv:1412.6574, 2014.
[103] L. Zheng, S. Wang, L. Tian, F. He, Z. Liu, and Q. Tian, âQuery-adaptive late fusion for image search and person re- identiï¬cation,â in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), vol. 1, 2015.
[104] L. Xie, R. Hong, B. Zhang, and Q. Tian, âImage classiï¬cation and retrieval are one,â in ACM International Conference on Multimedia Retrieval (ICMR), 2015.
[105] J. R. Uijlings, K. E. van de Sande, T. Gevers, and A. W. Smeulders, âSelective search for object recognition,â International Journal of Computer Vision (IJCV), vol. 104, no. 2, pp. 154â171, 2013. [106] B. Alexe, T. Deselaers, and V. Ferrari, âMeasuring the objectness of image windows,â IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), vol. 34, no. 11, pp. 2189â2202, 2012. [107] M.-M. Cheng, Z. Zhang, W.-Y. Lin, and P. Torr, âBing: Binarized normed gradients for objectness estimation at 300fps,â in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014.
[108] S. Sun, W. Zhou, Q. Tian, and H. Li, âScalable object retrieval with compact image representation from generic object regions,â ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM), vol. 12, no. 2, p. 29, 2015.
[109] G. Tolias, R. Sicre, and H. J´egou, âParticular object retrieval with integral max-pooling of cnn activations,â International Conference on Learning and Representation (ICLR), 2016.
[110] A. Gordo, J. Almazan, J. Revaud, and D. Larlus, âDeep image retrieval: Learning global representations for image search,â in European Conference on Computer Vision (ECCV), 2016.
[111] S. Ren, K. He, R. Girshick, and J. Sun, âFaster r-cnn: Towards real-time object detection with region proposal networks,â in Advances in Neural Information Processing Systems (NIPS), 2015, pp. 91â99.
[112] A. Babenko, A. Slesarev, A. Chigorin, and V. Lempitsky, âNeural codes for image retrieval,â in European Conference on Computer Vision (ECCV). Springer, 2014, pp. 584â599.
[113] M. Paulin, M. Douze, Z. Harchaoui, J. Mairal, F. Perronin, and C. Schmid, âLocal convolutional features with unsupervised training for image retrieval,â in IEEE International Conference on Computer Vision (ICCV), 2015, pp. 91â99.
[114] R. Xia, Y. Pan, H. Lai, C. Liu, and S. Yan, âSupervised hashing for image retrieval via image representation learning,â in AAAI Conference on Artiï¬cial Intellignece, 2014, pp. 2156â2162.
[115] H. Lai, Y. Pan, Y. Liu, and S. Yan, âSimultaneous feature learning and hash coding with deep neural networks,â arXiv preprint arXiv:1504.03410, 2015.
[116] H. J´egou, M. Douze, C. Schmid, and P. P´erez, âAggregating local descriptors into a compact image representation,â in IEEE Conference on Computer Vision and Pattern Recognition, 2010, pp. 3304â3311.
[117] F. Perronnin, Y. Liu, J. S´anchez, and H. Poirier, âLarge-scale image retrieval with compressed ï¬sher vectors,â in IEEE Conference on
Computer Vision and Pattern Recognition (CVPR). 3384â3391. IEEE, 2010, pp.
[118] F. Li, W. Tong, R. Jin, A. K. Jain, and J.-E. Lee, âAn efï¬cient key point quantization algorithm for large scale image retrieval,â in ACM workshop on Large-scale Multimedia Retrieval and Mining. ACM, 2009, pp. 89â96.
[119] L. Chu, S. Wang, Y. Zhang, S. Jiang, and Q. Huang, âGraph- density-based visual word vocabulary for image retrieval,â in IEEE International Conference on Multimedia and Expo (ICME). IEEE, 2014, pp. 1â6.
[120] W. Dong, Z. Wang, M. Charikar, and K. Li, âEfï¬ciently matching sets of features with random histograms,â in ACM International Conference on Multimedia (MM). ACM, 2008, pp. 179â188. [121] W. Zhou, M. Yang, H. Li, X. Wang, Y. Lin, and Q. Tian, âTo- wards codebook-free: Scalable cascaded hashing for mobile im- age search,â IEEE Transactions on Multimedia, vol. 16, no. 3, pp. 601â611, 2014.
[122] S. Zhang, Q. Tian, G. Hua, Q. Huang, and W. Gao, âGenerating descriptive visual words and visual phrases for large-scale image applications,â IEEE Transactions on Image Processing (TIP), vol. 20, no. 9, pp. 2664â2677, 2011.
[123] X. Wang, M. Yang, T. Cour, S. Zhu, K. Yu, and T. X. Han, âCon- textual weighting for vocabulary tree based image retrieval,â in International Conference on Computer Vision, 2011, pp. 209â216.
[124] Z. Liu, H. Li, W. Zhou, and Q. Tian, âEmbedding spatial context information into inverted ï¬le for large-scale image retrieval,â in ACM International Conference on Multimedia, 2012, pp. 199â208.
[125] Z. Liu, H. Li, W. Zhou, R. Zhao, and Q. Tian, âContextual hashing for large-scale image search,â IEEE Transactions on Image Processing (TIP), vol. 23, no. 4, pp. 1606â1614, 2014.
[126] O. Chum, M. Perdoch, and J. Matas, âGeometric min-hashing: Finding a (thick) needle in a haystack,â in IEEE Conference on Computer Vision and Pattern Recognition, 2009, pp. 17â24.
[127] D. N. Bhat and S. K. Nayar, âOrdinal measures for image cor- respondence,â IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), vol. 20, no. 4, pp. 415â423, 1998.
[128] S. Lazebnik, C. Schmid, and J. Ponce, âBeyond bags of fea- tures: Spatial pyramid matching for recognizing natural scene categories,â in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), vol. 2.
[129] Y. Cao, C. Wang, Z. Li, L. Zhang, and L. Zhang, âSpatial-bag- of-features,â in IEEE Conference on Computer Vision and Pattern Recognition, 2010, pp. 3352â3359.
[130] Z. Wu, Q. Ke, J. Sun, and H.-Y. Shum, âScalable face image retrieval with identity-based quantization and multireference reranking,â IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 10, pp. 1991â2001, 2011.
[131] J. L. Bentley, âK-d trees for semidynamic point sets,â in Annual Symp. Computational Geometry, 1990, pp. 187â197.
[132] C. Silpa-Anan and R. Hartley, âLocalization using an image map,â in Australian Conference on Robotics and Automation, 2004.
[133] M. Muja and D. G. Lowe, âScalable nearest neighbor algorithms for high dimensional data,â IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), vol. 36, 2014.
[134] W. Zhou, M. Yang, X. Wang, H. Li, Y. Lin, and Q. Tian, âScalable feature matching by dual cascaded scalar quantization for im- age retrieval,â IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), vol. 38, no. 1, pp. 159â171, 2016.
[135] M. Jain, H. J´egou, and P. Gros, âAsymmetric hamming embed- ding: taking the best of our bits for large scale image search,â in ACM International Conference on Multimedia, 2011, pp. 1441â1444. [136] W. Zhou, H. Li, Y. Lu, M. Wang, and Q. Tian, âVisual word expansion and BSIFT veriï¬cation for large-scale image search,â Multimedia Systems, vol. 21, no. 3, pp. 245â254, 2013.
[137] Y. Xia, K. He, F. Wen, and J. Sun, âJoint inverted indexing,â in International Conference on Computer Vision, 2013.
[138] H. J´egou and O. Chum, âNegative evidences and co-occurences in image retrieval: The beneï¬t of PCA and whitening,â in Euro- pean Conference on Computer Vision, 2012, pp. 774â787.
[139] L. Zheng, S. Wang, W. Zhou, and Q. Tian, âBayes merging of multiple vocabularies for scalable image retrieval,â in IEEE Conference on Computer Vision and Pattern Recognition, 2014. [140] P. Indyk and R. Motwani, âApproximate nearest neighbors: to- wards removing the curse of dimensionality,â in Annual ACM Symposium Theory of Computing. ACM, 1998, pp. 604â613.
[141] A. Andoni and P. Indyk, âNear-optimal hashing algorithms for approximate nearest neighbor in high dimensions,â in IEEE Symposium Foundations of Computer Science, 2006, pp. 459â468.
[142] Q. Lv, W. Josephson, Z. Wang, M. Charikar, and K. Li, âMulti- probe lsh: efï¬cient indexing for high-dimensional similarity search,â in International Conference Very Large Data Bases, 2007, pp. 950â961.
[143] J. Wang, S. Kumar, and S.-F. Chang, âSemi-supervised hashing for scalable image retrieval,â in IEEE Conference on Computer Vision and Pattern Recognition, 2010, pp. 3424â3431.
[144] Y. Gong and S. Lazebnik, âIterative quantization: A procrustean approach to learning binary codes,â in IEEE Conference on Com- puter Vision and Pattern Recognition, 2011, pp. 817â824.
[145] D. Aiger, E. Kokiopoulou, and E. Rivlin, âRandom grids: Fast approximate nearest neighbors and range searching for image search,â in International Conference on Computer Vision, 2013. [146] M. Iwamura, T. Sato, and K. Kise, âWhat is the most efï¬cient way to select nearest neighbor candidates for fast approximate nearest neighbor search?â in International Conference on Computer Vision, 2013.
[147] J. Wang and S. Li, âQuery-driven iterated neighborhood graph search for large scale indexing,â in ACM International Conference on Multimedia (MM). ACM, 2012, pp. 179â188.
[148] M. Wang, W. Zhou, Q. Tian, Z. Zha, and H. Li, âLinear distance preserving pseudo-supervised and unsupervised hashing,â in ACM International Conference on Multimedia (MM). ACM, 2016, pp. 1257â1266.
[149] T. Ge, K. He, Q. Ke, and J. Sun, âOptimized product quantization for approximate nearest neighbor search,â in IEEE Conference on Computer Vision and Pattern Recognition, 2013.
[150] T. Tuytelaars and C. Schmid, âVector quantizing feature space with a regular lattice,â in International Conference on Computer Vision, 2007, pp. 1â8.
[151] R. Arandjelovic and A. Zisserman, âAll about vlad,â in IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2013, pp. 1578â1585.
I. Kompatsiaris, G. Tsoumakas, and I. Vlahavas, âA comprehensive study over vlad and product quantizationin for large-scale image retrieval,â IEEE Transactions on Multimedia (TMM), 2014.
[153] H. J´egou and A. Zisserman, âTriangulation embedding and democratic aggregation for image search,â in IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2014, pp. 3310â3317.
[154] Z. Gao, J. Xue, W. Zhou, S. Pang, and Q. Tian, âFast democratic aggregation and query fusion for image search,â in ACM Interna- tional Conference on Multimedia Retrieval (ICMR), 2015.
[155] T. Ge, Q. Ke, and J. Sun, âSparse-coded features for image retrieval.â British Machine Vision Conference (BMVC), 2013.
[156] Z. Liu, H. Li, W. Zhou, T. Rui, and Q. Tian, âUniforming residual vector distribution for distinctive image representation,â IEEE Transactions on Circuits and Systems for Video Technology (TCSVT), 2015.
[157] Z. Liu, H. Li, W. Zhou, and Q. Tian, âUniting keypoints: Local visual information fusion for large scale image search,â IEEE Transactions on Multimedia (TMM), 2015.
[158] T. Jaakkola and D. Haussler, âExploring generative model in discriminative classiï¬ers,â in Advances in Neural Information Pro- cessing Systems (NIPS), 1998.
[159] T. Jaakkola, D. Haussler et al., âExploiting generative models in discriminative classiï¬ers,â Advances in Neural Information Process- ing Systems (NIPS), pp. 487â493, 1999.
[160] J. S´anchez, F. Perronnin, T. Mensink, and J. Verbeek, âImage clas- siï¬cation with the ï¬sher vector: theory and practice,â International Journal of Computer Vision (IJCV), vol. 105, no. 3, pp. 222â245, 2013. [161] L.-Y. Duan, F. Gao, J. Chen, J. Lin, and T. Huang, âCompact descriptors for mobile visual search and mpeg cdvs standard- ization,â in IEEE International Symposium on Circuits and Systems (ISCAS).
[162] Y. Gong, L. Wang, R. Guo, and S. Lazebnik, âMulti-scale orderless pooling of deep convolutional activation features,â in European Conference on Computer Vision (ECCV). Springer, 2014, pp. 392â 407.
[163] A. Babenko and V. Lempitsky, âAggregating local deep features for image retrieval,â in IEEE International Conference on Computer Vision (ICCV), 2015, pp. 1269â1277.
[164] R. Baeza-Yates, B. Ribeiro-Neto et al., Modern information retrieval. ACM press New York., 1999, vol. 463.
[165] J. Cai, Q. Liu, F. Chen, D. Joshi, and Q. Tian, âScalable image search with multiple index tables,â in International Conference on Multimedia Retrieval (ICMR). ACM, 2014, p. 407.
[166] L. Zheng, S. Wang, and Q. Tian, âCoupled binary embedding for large-scale image retrieval,â IEEE Transactions on Image Processing (TIP), vol. 23, no. 8, pp. 3368â3380, 2014.
[167] X. Zhang, Z. Li, L. Zhang, W.-Y. Ma, and H.-Y. Shum, âEfï¬cient indexing for large scale visual search,â in IEEE International Conference on Computer Vision.
[168] C. Silpa-Anan and R. Hartley, âOptimised kd-trees for fast image descriptor matching,â in IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[169] L. Zheng, S. Wang, Z. Liu, and Q. Tian, âFast image retrieval: Query pruning and early termination,â IEEE Transactions on Multimedia (TMM), vol. 17, no. 5, pp. 648â659, 2015.
[170] R. Ji, L.-Y. Duan, J. Chen, L. Xie, H. Yao, and W. Gao, âLearning to distribute vocabulary indexing for scalable visual search,â IEEE Transactions on Multimedia (TMM), vol. 15, no. 1, pp. 153â166, 2013.
[171] J.-P. Heo, Y. Lee, J. He, S.-F. Chang, and S.-E. Yoon, âSpherical hashing,â in IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[172] J. Tang, Z. Li, M. Wang, and R. Zhao, âNeighborhood discrim- inant hashing for large-scale image retrieval,â IEEE Transactions on Image Processing (TPI), vol. 24, no. 9, pp. 2827â2840, 2015. [173] L. Wu, K. Zhao, H. Lu, Z. Wei, and B. Lu, âDistance preserv- ing marginal hashing for image retrieval,â in IEEE International Conference on Multimedia and Expo (ICME), 2015, pp. 1â6.
[174] K. Jiang, Q. Que, and B. Kulis, âRevisiting kernelized locality- sensitive hashing for improved large-scale image retrieval,â in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015, pp. 4933â4941.
[175] H. Liu, R. Wang, S. Shan, and X. Chen, âDeep supervised hashing for fast image retrieval,â in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 2064â2072.
[176] M. Datar, N. Immorlica, P. Indyk, and V. S. Mirrokni, âLocality- sensitive hashing scheme based on p-stable distributions,â in Annual Symposium on Computational Geometry. ACM, 2004, pp. 253â262.
[177] Y. Avrithis, G. Tolias, and Y. Kalantidis, âFeature map hashing: sub-linear indexing of appearance and global geometry,â in ACM International Conference on Multimedia (MM). ACM, 2010, pp. 231â240.
[178] G. Tolias, Y. Kalantidis, Y. Avrithis, and S. Kollias, âTowards large-scale geometry indexing by feature selection,â Computer Vision and Image Understanding, vol. 120, pp. 31â45, 2014. [179] H. J´egou, M. Douze, and C. Schmid, âPacking bag-of-features,â in
International Conference on Computer Vision, 2009, pp. 2357â2364.
[180] O. Chum, J. Philbin, M. Isard, and A. Zisserman, âScalable near identical image and shot detection,â in Proceedings of the ACM International Conference on Image and Video Retrieval, 2007, pp. 549â 556.
[181] Z. Lin and J. Brandt, âA local bag-of-features model for large- scale object retrieval,â in European Conference on Computer Vision (ECCV). Springer, 2010, pp. 294â308.
[182] H. Jegou, C. Schmid, H. Harzallah, and J. Verbeek, âAccurate image search using the contextual dissimilarity measure,â IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 1, pp. 2â11, 2010.
[183] D. Qin, C. Wengert, and L. Van Gool, âQuery adaptive similarity for large scale object retrieval,â in IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2013, pp. 1610â1617. [184] M. Donoser and H. Bischof, âDiffusion processes for retrieval revisited,â in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013, pp. 1320â1327.
[185] L. Zheng, S. Wang, Z. Liu, and Q. Tian, âLp-norm IDF for large scale image search,â in IEEE Conference on Computer Vision and Pattern Recognition, 2013.
[186] L. Zheng, S. Wang, and Q. Tian, âLp-norm IDF for scalable image retrieval,â IEEE Transactions on Image Processing, vol. 23, no. 8, pp. 3604â3617, 2014.
[187] X. Shen, Z. Lin, J. Brandt, S. Avidan, and Y. Wu, âObject retrieval and localization with spatially-constrained similarity measure and k-nn re-ranking,â in IEEE Conference on Computer Vision and Pattern Recognition, 2012, pp. 3013â3020.
21
[188] H. Xie, K. Gao, Y. Zhang, J. Li, and Y. Liu, âPairwise weak geometric consistency for large scale image search,â in ACM International Conference on Multimedia Retrieval (ICMR). ACM, 2011, p. 42.
[189] S. M. Katz, âDistribution of content words and phrases in text and language modelling,â Natural Language Engineering, vol. 2, no. 01, pp. 15â59, 1996.
[190] H. J´egou, M. Douze, and C. Schmid, âOn the burstiness of visual elements,â in IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[191] M. Shi, Y. Avrithis, and H. J´egou, âEarly burst detection for memory-efï¬cient image retrieval,â in IEEE Conference on Com- puter Vision and Pattern Recognition (CVPR), 2015.
[192] S. Bai and X. Bai, âSparse contextual activation for efï¬cient visual re-ranking,â IEEE Transactions on Image Processing, vol. 25, no. 3, pp. 1056â1069, 2016.
[193] F. Yang, B. Matei, and L. S. Davis, âRe-ranking by multi-feature fusion with diffusion for image retrieval,â in IEEE Winter Confer- ence on Applications of Computer Vision (WACV). IEEE, 2015, pp. 572â579.
[194] X. Li, M. Larson, and A. Hanjalic, âPairwise geometric matching for large-scale object retrieval,â in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015, pp. 5153â5161. [195] Y.-H. Kuo, K.-T. Chen, C.-H. Chiang, and W. H. Hsu, âQuery expansion for hash-based image object retrieval,â in ACM Inter- national Conference on Multimedia, 2009, pp. 65â74.
[196] O. Chum and J. Matas, âMatching with prosac-progressive sam- ple consensus,â in IEEE Conference on Computer Vision and Pattern Recognition, 2005, pp. 220â226.
[197] G. Tolias and Y. Avrithis, âHough pyramid matching: Speeded- up geometry re-ranking for large scale image retrieval,â in IEEE International Conference on Computer Vision (ICCV), 2011.
[198] M. A. Fischler and R. C. Bolles, âRandom sample consensus: a paradigm for model ï¬tting with applications to image analysis and automated cartography,â Communications of the ACM, vol. 24, no. 6, pp. 381â395, 1981.
[199] Y. Avrithis and G. Tolias, âHough pyramid matching: Speeded- up geometry re-ranking for large scale image retrieval,â Interna- tional Journal of Computer Vision, vol. 107, no. 1, pp. 1â19, 2014.
[200] K. Grauman and T. Darrell, âThe pyramid match kernel: Dis- criminative classiï¬cation with sets of image features,â in IEEE International Conference on Computer Vision (ICCV), vol. 2. IEEE, 2005, pp. 1458â1465.
[201] W. Zhou, H. Li, Y. Lu, and Q. Tian, âSIFT match veriï¬cation by geometric coding for large-scale partial-duplicate web image search,â ACM Transactions on Multimedia Computing, Communica- tions, and Applications (TOMM), vol. 9, no. 1, p. 4, 2013.
[202] L. Chu, S. Jiang, S. Wang, Y. Zhang, and Q. Huang, âRobust spatial consistency graph model for partial duplicate image re- trieval,â IEEE Transactions on Multimedia (TMM), vol. 15, no. 8, pp. 1982â1996, 2013.
[203] L. Xie, Q. Tian, W. Zhou, and B. Zhang, âFast and accurate near-duplicate image search with afï¬nity propagation on the imageweb,â Computer Vision and Image Understanding, vol. 124, pp. 31â41, 2014.
[204] J. M. Kleinberg, âAuthoritative sources in a hyperlinked environ- ment,â Journal of the ACM (JACM), vol. 46, no. 5, pp. 604â632, 1999.
[205] L. Xie, Q. Tian, W. Zhou, and B. Zhang, âHeterogeneous graph propagation for large-scale web image search,â IEEE Transactions on Image Processing (TIP), 2015.
[206] H. Xie, Y. Zhang, J. Tan, L. Guo, and J. Li, âContextual query expansion for image retrieval,â IEEE Transactions on Multimedia (TMM), vol. 16, no. 4, pp. 1104â1114, 2014.
[207] D. Tao and X. Tang, âRandom sampling based svm for relevance feedback image retrieval,â in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2004.
[208] D. Tao, X. Tang, X. Li, and X. Wu, âAsymmetric bagging and random subspace for support vector machines-based relevance feedback in image retrieval,â IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), vol. 28, no. 7, pp. 1088â1099, 2006.
[209] S. C. Hoi, R. Jin, J. Zhu, and M. R. Lyu, âSemi-supervised svm batch mode active learning for image retrieval,â in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2008, pp. 1â7.
[210] M. Arevalillo-Herr´aez and F. J. Ferri, âAn improved distance- based relevance feedback strategy for image retrieval,â Image and Vision Computing (IVC), vol. 31, no. 10, pp. 704â713, 2013. [211] E. Rabinovich, O. Rom, and O. Kurland, âUtilizing relevance feedback in fusion-based retrieval,â in International ACM SIGIR Conference on Research & Development in Information Retrieval (SI- GIR). ACM, 2014, pp. 313â322.
[212] X.-Y. Wang, Y.-W. Li, H.-Y. Yang, and J.-W. Chen, âAn image retrieval scheme with relevance feedback using feature recon- struction and svm reclassiï¬cation,â Neurocomputing, vol. 127, pp. 214â230, 2014.
[213] K. Tieu and P. Viola, âBoosting image retrieval,â International Journal of Computer Vision (IJCV), vol. 56, no. 1-2, pp. 17â36, 2004. [214] J. Yu, D. Tao, M. Wang, and Y. Rui, âLearning to rank using user clicks and visual features for image retrieval,â IEEE Transactions on Cybernetics, vol. 45, no. 4, pp. 767â779, 2015.
[215] X. S. Zhou and T. S. Huang, âRelevance feedback in image retrieval: A comprehensive review,â Multimedia systems, vol. 8, no. 6, pp. 536â544, 2003.
[216] P. B. Patil and M. B. Kokare, âRelevance feedback in content based image retrieval: A review.â Journal of Applied Computer Science & Mathematics, no. 10, 2011.
[217] R. Fagin, R. Kumar, and D. Sivakumar, âEfï¬cient similarity search and classiï¬cation via rank aggregation,â in ACM SIGMOD International Conference on Management of Data. ACM, 2003, pp. 301â312.
[218] L. Page, S. Brin, R. Motwani, and T. Winograd, âThe pagerank citation ranking: bringing order to the web.â 1999.
[219] G. Ye, D. Liu, I.-H. Jhuo, S.-F. Chang et al., âRobust late fusion with rank minimization,â in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2012, pp. 3021â3028.
[220] S. Romberg, L. G. Pueyo, R. Lienhart, and R. Van Zwol, âScalable logo recognition in real-world images,â in ACM International Conference on Multimedia Retrieval (ICMR). ACM, 2011, p. 25.
[221] S. Wang and S. Jiang, âInstre: a new benchmark for instance-level object retrieval and recognition,â ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM), vol. 11, no. 3, p. 37, 2015.
[222] V. R. Chandrasekhar, D. M. Chen, S. S. Tsai, N.-M. Cheung, H. Chen, G. Takacs, Y. Reznik, R. Vedantham, R. Grzeszczuk, J. Bach et al., âThe stanford mobile visual search data set,â in ACM conference on Multimedia Systems. ACM, 2011, pp. 117â122. [223] X. Tian, Y. Lu, L. Yang, and Q. Tian, âLearning to judge image search results,â in ACM International Conference on Multimedia (MM). ACM, 2011, pp. 363â372.
[224] X. Tian, Q. Jia, and T. Mei, âQuery difï¬culty estimation for image search with query reconstruction error,â IEEE Transactions on Multimedia (TMM), vol. 17, no. 1, pp. 79â91, 2015.
[225] K. He, X. Zhang, S. Ren, and J. Sun, âSpatial pyramid pooling in deep convolutional networks for visual recognition,â in European Conference on Computer Vision (ECCV). Springer, 2014, pp. 346â 361.
| {
"id": "1504.03410"
} |
1706.05587 | Rethinking Atrous Convolution for Semantic Image Segmentation | In this work, we revisit atrous convolution, a powerful tool to explicitly
adjust filter's field-of-view as well as control the resolution of feature
responses computed by Deep Convolutional Neural Networks, in the application of
semantic image segmentation. To handle the problem of segmenting objects at
multiple scales, we design modules which employ atrous convolution in cascade
or in parallel to capture multi-scale context by adopting multiple atrous
rates. Furthermore, we propose to augment our previously proposed Atrous
Spatial Pyramid Pooling module, which probes convolutional features at multiple
scales, with image-level features encoding global context and further boost
performance. We also elaborate on implementation details and share our
experience on training our system. The proposed `DeepLabv3' system
significantly improves over our previous DeepLab versions without DenseCRF
post-processing and attains comparable performance with other state-of-art
models on the PASCAL VOC 2012 semantic image segmentation benchmark. | http://arxiv.org/pdf/1706.05587 | Liang-Chieh Chen, George Papandreou, Florian Schroff, Hartwig Adam | cs.CV | Add more experimental results | null | cs.CV | 20170617 | 20171205 | 7 1 0 2 c e D 5
# Rethinking Atrous Convolution for Semantic Image Segmentation
Liang-Chieh Chen George Papandreou Florian Schroff Hartwig Adam Google Inc. {lcchen, gpapan, fschroff, hadam}@google.com
# Abstract
In this work, we revisit atrous convolution, a powerful tool to explicitly adjust ï¬lterâs ï¬eld-of-view as well as control the resolution of feature responses computed by Deep Convolu- tional Neural Networks, in the application of semantic image segmentation. To handle the problem of segmenting objects at multiple scales, we design modules which employ atrous convolution in cascade or in parallel to capture multi-scale context by adopting multiple atrous rates. Furthermore, we propose to augment our previously proposed Atrous Spatial Pyramid Pooling module, which probes convolutional fea- tures at multiple scales, with image-level features encoding global context and further boost performance. We also elab- orate on implementation details and share our experience on training our system. The proposed âDeepLabv3â system signiï¬cantly improves over our previous DeepLab versions without DenseCRF post-processing and attains comparable performance with other state-of-art models on the PASCAL VOC 2012 semantic image segmentation benchmark.
# 1. Introduction
For the task of semantic segmentation [20, 63, 14, 97, 7], we consider two challenges in applying Deep Convolutional Neural Networks (DCNNs) [50]. The ï¬rst one is the reduced feature resolution caused by consecutive pooling operations or convolution striding, which allows DCNNs to learn in- creasingly abstract feature representations. However, this invariance to local image transformation may impede dense prediction tasks, where detailed spatial information is de- sired. To overcome this problem, we advocate the use of atrous convolution [36, 26, 74, 66], which has been shown to be effective for semantic image segmentation [10, 90, 11]. Atrous convolution, also known as dilated convolution, al- lows us to repurpose ImageNet [72] pretrained networks to extract denser feature maps by removing the downsam- pling operations from the last few layers and upsampling the corresponding ï¬lter kernels, equivalent to inserting holes (âtrousâ in French) between ï¬lter weights. With atrous convo- lution, one is able to control the resolution at which feature
Figure 1. Atrous convolution with kernel size 3 × 3 and different rates (1, 6, and 24). Standard convolution corresponds to atrous convolution with rate = 1. Employing a large value of atrous rate enlarges the model's field-of-view, enabling object encoding at multiple scales.
responses are computed within DCNNs without requiring learning extra parameters.
Another difï¬culty comes from the existence of objects at multiple scales. Several methods have been proposed to handle the problem and we mainly consider four categories in this work, as illustrated in Fig. 2. First, the DCNN is applied to an image pyramid to extract features for each scale input [22, 19, 69, 55, 12, 11] where objects at different scales become prominent at different feature maps. Sec- ond, the encoder-decoder structure [3, 71, 25, 54, 70, 68, 39] exploits multi-scale features from the encoder part and re- covers the spatial resolution from the decoder part. Third, extra modules are cascaded on top of the original network for capturing long range information. In particular, DenseCRF [45] is employed to encode pixel-level pairwise similarities [10, 96, 55, 73], while [59, 90] develop several extra convo- lutional layers in cascade to gradually capture long range context. Fourth, spatial pyramid pooling [11, 95] probes an incoming feature map with ï¬lters or pooling operations at multiple rates and multiple effective ï¬eld-of-views, thus capturing objects at multiple scales.
In this work, we revisit applying atrous convolution, which allows us to effectively enlarge the ï¬eld of view of ï¬lters to incorporate multi-scale context, in the framework of both cascaded modules and spatial pyramid pooling. In par- ticular, our proposed module consists of atrous convolution with various rates and batch normalization layers which we
Figure 2. Alternative architectures to capture multi-scale context: (a) image pyramid, (b) encoder-decoder, (c) going deeper with atrous convolution, and (d) spatial pyramid pooling.
found important to be trained as well. We experiment with laying out the modules in cascade or in parallel (speciï¬cally, Atrous Spatial Pyramid Pooling (ASPP) method [11]). We discuss an important practical issue when applying a 3 à 3 atrous convolution with an extremely large rate, which fails to capture long range information due to image boundary effects, effectively simply degenerating to 1 à 1 convolu- tion, and propose to incorporate image-level features into the ASPP module. Furthermore, we elaborate on imple- mentation details and share experience on training the pro- posed models, including a simple yet effective bootstrapping method for handling rare and ï¬nely annotated objects. In the end, our proposed model, âDeepLabv3â improves over our previous works [10, 11] and attains performance of 85.7% on the PASCAL VOC 2012 test set without DenseCRF post- processing.
the encoder where the spatial dimension of feature maps is gradually reduced and thus longer range information is more easily captured in the deeper encoder output, and (b) the decoder where object details and spatial dimension are gradually recovered. For example, [60, 64] employ deconvo- lution [92] to learn the upsampling of low resolution feature responses. SegNet [3] reuses the pooling indices from the encoder and learn extra convolutional layers to densify the feature responses, while U-Net [71] adds skip connections from the encoder features to the corresponding decoder acti- vations, and [25] employs a Laplacian pyramid reconstruc- tion network. More recently, Reï¬neNet [54] and [70, 68, 39] have demonstrated the effectiveness of models based on encoder-decoder structure on several semantic segmentation benchmarks. This type of model is also explored in the context of object detection [56, 77].
# 2. Related Work
It has been shown that global features or contextual in- teractions [33, 76, 43, 48, 27, 89] are beneï¬cial in cor- In rectly classifying pixels for semantic segmentation. this work, we discuss four types of Fully Convolutional Networks (FCNs) [74, 60] (see Fig. 2 for illustration) that exploit context information for semantic segmentation [30, 15, 62, 9, 96, 55, 65, 73, 87].
Image pyramid: The same model, typically with shared weights, is applied to multi-scale inputs. Feature responses from the small scale inputs encode the long-range context, while the large scale inputs preserve the small object details. Typical examples include Farabet et al. [22] who transform the input image through a Laplacian pyramid, feed each scale input to a DCNN and merge the feature maps from all the scales. [19, 69] apply multi-scale inputs sequentially from coarse-to-ï¬ne, while [55, 12, 11] directly resize the input for several scales and fuse the features from all the scales. The main drawback of this type of models is that it does not scale well for larger/deeper DCNNs (e.g., networks like [32, 91, 86]) due to limited GPU memory and thus it is usually applied during the inference stage [16].
Context module: This model contains extra modules laid out in cascade to encode long-range context. One ef- fective method is to incorporate DenseCRF [45] (with efï¬- cient high-dimensional ï¬ltering algorithms [2]) to DCNNs [10, 11]. Furthermore, [96, 55, 73] propose to jointly train both the CRF and DCNN components, while [59, 90] em- ploy several extra convolutional layers on top of the belief maps of DCNNs (belief maps are the ï¬nal DCNN feature maps that contain output channels equal to the number of predicted classes) to capture context information. Recently, [41] proposes to learn a general and sparse high-dimensional convolution (bilateral convolution), and [82, 8] combine Gaussian Conditional Random Fields and DCNNs for se- mantic segmentation.
Spatial pyramid pooling: This model employs spatial pyramid pooling [28, 49] to capture context at several ranges. The image-level features are exploited in ParseNet [58] for global context information. DeepLabv2 [11] proposes atrous spatial pyramid pooling (ASPP), where parallel atrous con- volution layers with different rates capture multi-scale infor- mation. Recently, Pyramid Scene Parsing Net (PSP) [95] performs spatial pooling at several grid scales and demon- strates outstanding performance on several semantic segmen- tation benchmarks. There are other methods based on LSTM
Encoder-decoder: This model consists of two parts: (a)
[35] to aggregate global context [53, 6, 88]. Spatial pyramid pooling has also been applied in object detection [31].
In this work, we mainly explore atrous convolution [36, 26, 74, 66, 10, 90, 11] as a context module and tool for spatial pyramid pooling. Our proposed framework is general in the sense that it could be applied to any network. To be concrete, we duplicate several copies of the original last block in ResNet [32] and arrange them in cascade, and also revisit the ASPP module [11] which contains several atrous convolutions in parallel. Note that our cascaded mod- ules are applied directly on the feature maps instead of belief maps. For the proposed modules, we experimentally ï¬nd it important to train with batch normalization [38]. To further capture global context, we propose to augment ASPP with image-level features, similar to [58, 95].
Atrous convolution: Models based on atrous convolu- tion have been actively explored for semantic segmentation. For example, [85] experiments with the effect of modify- ing atrous rates for capturing long-range information, [84] adopts hybrid atrous rates within the last two blocks of ResNet, while [18] further proposes to learn the deformable convolution which samples the input features with learned offset, generalizing atrous convolution. To further improve the segmentation model accuracy, [83] exploits image cap- tions, [40] utilizes video motion, and [44] incorporates depth information. Besides, atrous convolution has been applied to object detection by [66, 17, 37].
# 3. Methods
In this section, we review how atrous convolution is ap- plied to extract dense features for semantic segmentation. We then discuss the proposed modules with atrous convolu- tion modules employed in cascade or in parallel.
# 3.1. Atrous Convolution for Dense Feature Extrac- tion
Deep Convolutional Neural Networks (DCNNs) [50] de- ployed in fully convolutional fashion [74, 60] have shown to be effective for the task of semantic segmentation. However, the repeated combination of max-pooling and striding at consecutive layers of these networks signiï¬cantly reduces the spatial resolution of the resulting feature maps, typically by a factor of 32 across each direction in recent DCNNs [47, 78, 32]. Deconvolutional layers (or transposed convolu- tion) [92, 60, 64, 3, 71, 68] have been employed to recover the spatial resolution. Instead, we advocate the use of âatrous convolutionâ, originally developed for the efï¬cient computa- tion of the undecimated wavelet transform in the âalgorithme `a trousâ scheme of [36] and used before in the DCNN context by [26, 74, 66].
Consider two-dimensional signals, for each location i on the output y and a ï¬lter w, atrous convolution is applied over the input feature map x:
$$y[i] = \sum_{k} x[i + r \cdot k] \, w[k] \qquad (1)$$
where the atrous rate r corresponds to the stride with which we sample the input signal, which is equivalent to convolving the input x with upsampled ï¬lters produced by inserting r â 1 zeros between two consecutive ï¬lter values along each spatial dimension (hence the name atrous convolution where the French word trous means holes in English). Standard convolution is a special case for rate r = 1, and atrous convolution allows us to adaptively modify ï¬lterâs ï¬eld-of- view by changing the rate value. See Fig. 1 for illustration.
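As a concrete illustration of Eq. (1), the following NumPy sketch applies an atrous convolution to a 1-D signal; the function name and the example filter are ours, not part of the paper's implementation.

```python
import numpy as np

def atrous_conv1d(x, w, rate):
    """Atrous (dilated) convolution of Eq. (1) for a 1-D signal x and filter w.

    Sampling the input with stride `rate` is equivalent to convolving x with a
    filter that has (rate - 1) zeros inserted between consecutive weights of w;
    rate = 1 recovers standard convolution.
    """
    k = len(w)
    span = rate * (k - 1)                      # extent of the dilated filter
    y = np.zeros(len(x) - span)
    for i in range(len(y)):
        y[i] = sum(x[i + rate * j] * w[j] for j in range(k))
    return y

x = np.arange(12, dtype=float)
w = np.array([1.0, 0.0, -1.0])
print(atrous_conv1d(x, w, rate=1))             # standard convolution
print(atrous_conv1d(x, w, rate=4))             # same filter, enlarged field-of-view
```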
Atrous convolution also allows us to explicitly control how densely to compute feature responses in fully convolu- tional networks. Here, we denote by output stride the ratio of input image spatial resolution to ï¬nal output resolution. For the DCNNs [47, 78, 32] deployed for the task of image classiï¬cation, the ï¬nal feature responses (before fully con- nected layers or global pooling) is 32 times smaller than the input image dimension, and thus output stride = 32. If one would like to double the spatial density of computed fea- ture responses in the DCNNs (i.e., output stride = 16), the stride of last pooling or convolutional layer that decreases resolution is set to 1 to avoid signal decimation. Then, all subsequent convolutional layers are replaced with atrous convolutional layers having rate r = 2. This allows us to extract denser feature responses without requiring learning any extra parameters. Please refer to [11] for more details.
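The paper's implementation is built on TensorFlow; purely as an illustration of the output stride mechanics, the sketch below uses torchvision's ResNet-50, whose replace_stride_with_dilation option swaps the striding of the later stages for atrous convolution without adding parameters.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

# [layer2, layer3, layer4]: True means "replace this stage's stride with atrous
# convolution". Dilating only layer4 keeps output stride = 16; dilating layer3
# and layer4 gives output stride = 8.
net = resnet50(replace_stride_with_dilation=[False, False, True])
backbone = nn.Sequential(net.conv1, net.bn1, net.relu, net.maxpool,
                         net.layer1, net.layer2, net.layer3, net.layer4)

with torch.no_grad():
    feat = backbone(torch.randn(1, 3, 513, 513))
print(feat.shape)   # torch.Size([1, 2048, 33, 33]), i.e. output stride = 16
```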
# 3.2. Going Deeper with Atrous Convolution
We ï¬rst explore designing modules with atrous convolu- tion laid out in cascade. To be concrete, we duplicate several copies of the last ResNet block, denoted as block4 in Fig. 3, and arrange them in cascade. There are three 3 à 3 convolu- tions in those blocks, and the last convolution contains stride 2 except the one in last block, similar to original ResNet. The motivation behind this model is that the introduced strid- ing makes it easy to capture long range information in the deeper blocks. For example, the whole image feature could be summarized in the last small resolution feature map, as illustrated in Fig. 3 (a). However, we discover that the con- secutive striding is harmful for semantic segmentation (see Tab. 1 in Sec. 4) since detail information is decimated, and thus we apply atrous convolution with rates determined by the desired output stride value, as shown in Fig. 3 (b) where output stride = 16.
In this proposed model, we experiment with cascaded ResNet blocks up to block7 (i.e., extra block5, block6, block7 as replicas of block4), which has output stride = 256 if no atrous convolution is applied.
Figure 3. Cascaded modules without and with atrous convolution. (a) Going deeper without atrous convolution: the output stride grows from 4 after conv1/pool1 up to 256 at block7. (b) Going deeper with atrous convolution: atrous convolution with rate > 1 (rates 2, 4, 8, 16 in block4 to block7) is applied after block3, keeping output stride = 16.
# 3.2.1 Multi-grid Method
Motivated by multi-grid methods which employ a hierar- chy of grids of different sizes [4, 81, 5, 67] and following [84, 18], we adopt different atrous rates within block4 to block7 in the proposed model. In particular, we deï¬ne as Multi Grid = (r1, r2, r3) the unit rates for the three convo- lutional layers within block4 to block7. The ï¬nal atrous rate for the convolutional layer is equal to the multiplication of the unit rate and the corresponding rate. For example, when output stride = 16 and Multi Grid = (1, 2, 4), the three convolutions will have rates = 2 · (1, 2, 4) = (2, 4, 8) in the block4, respectively.
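A small helper (our own naming, not code from the paper) spells out the rate arithmetic: the unit rates in Multi_Grid are multiplied by each block's base rate, which doubles from block4 to block7 when output stride = 16 (cf. Fig. 3b).

```python
def block_rates(multi_grid, base_rate):
    """Atrous rates of the three 3x3 convolutions inside one cascaded block."""
    return tuple(base_rate * unit for unit in multi_grid)

# Base rates for block4..block7 at output stride = 16 are 2, 4, 8, 16.
for block, base in zip(("block4", "block5", "block6", "block7"), (2, 4, 8, 16)):
    print(block, block_rates((1, 2, 4), base))
# block4: (1, 2, 4) * 2 -> (2, 4, 8), matching the example in the text.
```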
# 3.3. Atrous Spatial Pyramid Pooling
Figure 4. Normalized counts of valid weights (legend: 1, 4, or 9 valid weights; horizontal axis: atrous rate) with a 3 × 3 filter on a 65 × 65 feature map as the atrous rate varies. When the atrous rate is small, all the 9 filter weights are applied to most of the valid region on the feature map, while as the atrous rate gets larger, the 3 × 3 filter degenerates to a 1 × 1 filter since only the center weight is effective.
We revisit the Atrous Spatial Pyramid Pooling proposed in [11], where four parallel atrous convolutions with different atrous rates are applied on top of the feature map. ASPP is inspired by the success of spatial pyramid pooling [28, 49, 31] which showed that it is effective to resample features at different scales for accurately and efï¬ciently classifying regions of an arbitrary scale. Different from [11], we include batch normalization within ASPP.
ASPP with different atrous rates effectively captures multi-scale information. However, we discover that as the sampling rate becomes larger, the number of valid ï¬lter weights (i.e., the weights that are applied to the valid fea- ture region, instead of padded zeros) becomes smaller. This effect is illustrated in Fig. 4 when applying a 3 à 3 ï¬lter to a 65 à 65 feature map with different atrous rates. In the extreme case where the rate value is close to the feature map size, the 3 à 3 ï¬lter, instead of capturing the whole image context, degenerates to a simple 1 à 1 ï¬lter since only the center ï¬lter weight is effective.
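This degeneration can be checked numerically. The brute-force sketch below counts, for every output position of a zero-padded 65 × 65 feature map, how many taps of a 3 × 3 atrous filter land on the unpadded map; it is our own reading of the experiment behind Fig. 4, not code from the paper.

```python
import numpy as np

def valid_tap_fractions(size=65, rate=1):
    """Fraction of output positions at which 1, 4, or 9 of the 3x3 atrous
    filter taps fall inside the (unpadded) size x size feature map."""
    offsets = (-rate, 0, rate)
    counts = np.zeros((size, size), dtype=int)
    for i in range(size):
        for j in range(size):
            counts[i, j] = sum(0 <= i + dy < size and 0 <= j + dx < size
                               for dy in offsets for dx in offsets)
    return {k: float((counts == k).mean()) for k in (9, 4, 1)}

for r in (1, 6, 18, 63):
    print(r, valid_tap_fractions(rate=r))
# As the rate approaches the feature map size, almost every position keeps only
# the center weight, i.e. the 3x3 filter effectively becomes a 1x1 filter.
```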
To overcome this problem and incorporate global context information into the model, we adopt image-level features, similar to [58, 95]. Specifically, we apply global average pooling on the last feature map of the model, feed the resulting image-level features to a 1 × 1 convolution with 256 filters (and batch normalization [38]), and then bilinearly upsample the feature to the desired spatial dimension. In the end, our improved ASPP consists of (a) one 1 × 1 convolution and three 3 × 3 convolutions with rates = (6, 12, 18) when output stride = 16 (all with 256 filters and batch normalization), and (b) the image-level features, as shown in Fig. 5. Note that the rates are doubled when output stride = 8. The resulting features from all the branches are then concatenated and passed through another 1 × 1 convolution (also with 256 filters and batch normalization) before the final 1 × 1 convolution which generates the final logits.

Figure 5. Parallel modules with atrous convolution (ASPP), augmented with image-level features: (a) atrous spatial pyramid pooling, with one 1 × 1 convolution and three 3 × 3 convolutions (rates 6, 12, 18) applied on top of block4, and (b) image pooling.
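The five-branch head described above can be sketched as a PyTorch module; the code below is our illustration, not the authors' TensorFlow implementation, and it assumes a ResNet block4 output of 2048 channels and the 21 PASCAL VOC classes.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_bn_relu(in_ch, out_ch, kernel, rate=1):
    pad = 0 if kernel == 1 else rate           # 'same' padding for 3x3 atrous conv
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel, padding=pad, dilation=rate, bias=False),
        nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))

class ASPP(nn.Module):
    """1x1 conv, three 3x3 atrous convs, and image-level pooling, then a 1x1 projection."""
    def __init__(self, in_ch=2048, out_ch=256, rates=(6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList(
            [conv_bn_relu(in_ch, out_ch, 1)] +
            [conv_bn_relu(in_ch, out_ch, 3, r) for r in rates])
        self.image_pool = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), conv_bn_relu(in_ch, out_ch, 1))
        self.project = conv_bn_relu(out_ch * (len(rates) + 2), out_ch, 1)

    def forward(self, x):
        h, w = x.shape[2:]
        feats = [branch(x) for branch in self.branches]
        pooled = F.interpolate(self.image_pool(x), size=(h, w),
                               mode='bilinear', align_corners=False)
        return self.project(torch.cat(feats + [pooled], dim=1))

aspp = ASPP()
logits = nn.Conv2d(256, 21, 1)(aspp(torch.randn(2, 2048, 33, 33)))
print(logits.shape)  # torch.Size([2, 21, 33, 33])
```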
# 4. Experimental Evaluation
We adapt the ImageNet-pretrained [72] ResNet [32] to semantic segmentation by applying atrous convolution to extract dense features. Recall that output stride is defined as the ratio of input image spatial resolution to final output resolution. For example, when output stride = 8, the last two blocks (block3 and block4 in our notation) in the original ResNet contain atrous convolution with rate = 2 and rate = 4 respectively. Our implementation is built on TensorFlow [1].
We evaluate the proposed models on the PASCAL VOC 2012 semantic segmentation benchmark [20] which con- tains 20 foreground object classes and one background class. The original dataset contains 1, 464 (train), 1, 449 (val), and 1, 456 (test) pixel-level labeled images for training, valida- tion, and testing, respectively. The dataset is augmented by the extra annotations provided by [29], resulting in 10, 582 (trainaug) training images. The performance is measured in terms of pixel intersection-over-union (IOU) averaged across the 21 classes.
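For reference, the mean IOU metric can be computed from a global confusion matrix over pixels; the sketch below is a minimal NumPy version, assuming the usual PASCAL VOC convention that pixels labeled 255 are ignored.

```python
import numpy as np

def mean_iou(pred, gt, num_classes=21, ignore_label=255):
    """Mean intersection-over-union across classes, from integer label arrays."""
    mask = gt != ignore_label
    cm = np.bincount(num_classes * gt[mask].astype(int) + pred[mask].astype(int),
                     minlength=num_classes ** 2).reshape(num_classes, num_classes)
    inter = np.diag(cm)
    union = cm.sum(axis=0) + cm.sum(axis=1) - inter
    iou = inter / np.where(union > 0, union, np.nan)   # skip absent classes
    return float(np.nanmean(iou))

gt = np.random.randint(0, 21, size=(2, 513, 513))
pred = np.random.randint(0, 21, size=(2, 513, 513))
print(mean_iou(pred, gt))
```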
| output stride | 8 | 16 | 32 | 64 | 128 | 256 |
|---|---|---|---|---|---|---|
| mIOU | 75.18 | 73.88 | 70.06 | 59.99 | 42.34 | 20.29 |
Table 1. Going deeper with atrous convolution when employing ResNet-50 with block7 and different output stride. Adopting output stride = 8 leads to better performance at the cost of more memory usage.
convolution allows us to control output stride value at dif- ferent training stages without requiring learning extra model parameters. Also note that training with output stride = 16 is several times faster than output stride = 8 since the inter- mediate feature maps are spatially four times smaller, but at a sacriï¬ce of accuracy since output stride = 16 provides coarser feature maps.
# 4.1. Training Protocol
In this subsection, we discuss details of our training pro- tocol.
Learning rate policy: Similar to [58, 11], we employ a "poly" learning rate policy where the initial learning rate is multiplied by (1 - iter/max_iter)^power with power = 0.9.
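A minimal sketch of this schedule follows; the base learning rate of 0.007 and the 30K iterations quoted in the batch normalization paragraph below are used only as example values, and the function name is ours.

```python
def poly_lr(base_lr, iteration, max_iter, power=0.9):
    """'poly' learning rate: base_lr * (1 - iter / max_iter) ** power."""
    return base_lr * (1.0 - iteration / max_iter) ** power

for it in (0, 10_000, 20_000, 29_999):
    print(it, round(poly_lr(0.007, it, 30_000), 6))
```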
Crop size: Following the original training protocol [10, 11], patches are cropped from the image during training. For atrous convolution with large rates to be effective, large crop size is required; otherwise, the ï¬lter weights with large atrous rate are mostly applied to the padded zero region. We thus employ crop size to be 513 during both training and test on PASCAL VOC 2012 dataset.
Batch normalization: Our added modules on top of ResNet all include batch normalization parameters [38], which we found important to be trained as well. Since large batch size is required to train batch normalization parame- ters, we employ output stride = 16 and compute the batch normalization statistics with a batch size of 16. The batch normalization parameters are trained with decay = 0.9997. After training on the trainaug set with 30K iterations and ini- tial learning rate = 0.007, we then freeze batch normalization parameters, employ output stride = 8, and train on the ofï¬- cial PASCAL VOC 2012 trainval set for another 30K itera- tions and smaller base learning rate = 0.001. Note that atrous
Upsampling logits: In our previous works [10, 11], the target groundtruths are downsampled by 8 during training when output stride = 8. We ï¬nd it important to keep the groundtruths intact and instead upsample the ï¬nal logits, since downsampling the groundtruths removes the ï¬ne anno- tations resulting in no back-propagation of details.
Data augmentation: We apply data augmentation by randomly scaling the input images (from 0.5 to 2.0) and randomly left-right ï¬ipping during training.
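A tensor-level sketch of these two augmentations is given below (random cropping to the 513 × 513 training crop described above is omitted); it uses PyTorch ops for illustration and is not taken from the paper's code.

```python
import random
import torch
import torch.nn.functional as F

def random_scale_and_flip(image, label, min_scale=0.5, max_scale=2.0):
    """image: (1, 3, H, W) float tensor; label: (1, 1, H, W) integer tensor."""
    scale = random.uniform(min_scale, max_scale)
    image = F.interpolate(image, scale_factor=scale,
                          mode='bilinear', align_corners=False)
    label = F.interpolate(label.float(), scale_factor=scale,
                          mode='nearest').long()          # keep labels discrete
    if random.random() < 0.5:                             # left-right flip
        image, label = torch.flip(image, dims=[3]), torch.flip(label, dims=[3])
    return image, label

img, lab = torch.rand(1, 3, 400, 500), torch.randint(0, 21, (1, 1, 400, 500))
print(*[t.shape for t in random_scale_and_flip(img, lab)])
```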
# 4.2. Going Deeper with Atrous Convolution
We ï¬rst experiment with building more blocks with atrous convolution in cascade.
ResNet-50: In Tab. 1, we experiment with the effect of output stride when employing ResNet-50 with block7 (i.e., extra block5, block6, and block7). As shown in the table, in the case of output stride = 256 (i.e., no atrous convolution at all), the performance is much worse than the others due to the severe signal decimation. When output stride gets larger and apply atrous convolution correspondingly, the performance improves from 20.29% to 75.18%, showing that atrous convolution is essential when building more blocks cascadedly for semantic segmentation.
ResNet-50 vs. ResNet-101: We replace ResNet-50 with deeper network ResNet-101 and change the number of cas- caded blocks. As shown in Tab. 2, the performance improves
| Network | block4 | block5 | block6 | block7 |
|---|---|---|---|---|
| ResNet-50 | 64.81 | 72.14 | 74.29 | 73.88 |
| ResNet-101 | 68.39 | 73.21 | 75.34 | 75.76 |
Table 2. Going deeper with atrous convolution when employ- ing ResNet-50 and ResNet-101 with different number of cas- caded blocks at output stride = 16. Network structures âblock4â, âblock5â, âblock6â, and âblock7â add extra 0, 1, 2, 3 cascaded modules respectively. The performance is generally improved by adopting more cascaded blocks.
| Multi-Grid | block4 | block5 | block6 | block7 |
|---|---|---|---|---|
| (1, 1, 1) | 68.39 | 73.21 | 75.34 | 75.76 |
| (1, 2, 1) | 70.23 | 75.67 | 76.09 | 76.66 |
| (1, 2, 3) | 73.14 | 75.78 | 75.96 | 76.11 |
| (1, 2, 4) | 73.45 | 75.74 | 75.85 | 76.02 |
| (2, 2, 2) | 71.45 | 74.30 | 74.70 | 74.62 |
Table 3. Employing multi-grid method for ResNet-101 with dif- ferent number of cascaded blocks at output stride = 16. The best model performance is shown in bold.
as more blocks are added, but the margin of improvement becomes smaller. Noticeably, employing block7 to ResNet- 50 decreases slightly the performance while it still improves the performance for ResNet-101.
Multi-grid: We apply the multi-grid method to ResNet- 101 with several cascadedly added blocks in Tab. 3. The unit rates, Multi Grid = (r1, r2, r3), are applied to block4 and all the other added blocks. As shown in the table, we observe that (a) applying multi-grid method is generally better than the vanilla version where (r1, r2, r3) = (1, 1, 1), (b) simply doubling the unit rates (i.e., (r1, r2, r3) = (2, 2, 2)) is not effective, and (c) going deeper with multi-grid improves the performance. Our best model is the case where block7 and (r1, r2, r3) = (1, 2, 1) are employed.
Inference strategy on val set: The proposed model is trained with output stride = 16, and then during inference we apply output stride = 8 to get more detailed feature map. As shown in Tab. 4, interestingly, when evaluating our best cascaded model with output stride = 8, the per- formance improves over evaluating with output stride = 16 by 1.39%. The performance is further improved by per- forming inference on multi-scale inputs (with scales = {0.5, 0.75, 1.0, 1.25, 1.5, 1.75}) and also left-right ï¬ipped images. In particular, we compute as the ï¬nal result the average probabilities from each scale and ï¬ipped images.
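This inference strategy can be written as a small wrapper around any model that maps an image batch to per-pixel class logits; the sketch below is our hedged illustration of the probability averaging over scales and left-right flips.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def ms_flip_inference(model, image, num_classes=21,
                      scales=(0.5, 0.75, 1.0, 1.25, 1.5, 1.75)):
    """Average softmax probabilities over scaled and left-right flipped inputs."""
    n, _, h, w = image.shape
    probs = torch.zeros(n, num_classes, h, w, device=image.device)
    for s in scales:
        x = F.interpolate(image, scale_factor=s, mode='bilinear',
                          align_corners=False)
        for flip in (False, True):
            inp = torch.flip(x, dims=[3]) if flip else x
            logits = model(inp)
            if flip:
                logits = torch.flip(logits, dims=[3])
            logits = F.interpolate(logits, size=(h, w), mode='bilinear',
                                   align_corners=False)
            probs += F.softmax(logits, dim=1)
    return probs.argmax(dim=1)          # final per-pixel prediction

dummy = torch.nn.Conv2d(3, 21, 1)       # stand-in for the segmentation network
print(ms_flip_inference(dummy, torch.rand(1, 3, 97, 97)).shape)
```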
# 4.3. Atrous Spatial Pyramid Pooling
We then experiment with the Atrous Spatial Pyramid Pooling (ASPP) module with the main differences from [11] being that batch normalization parameters [38] are ï¬ne-tuned and image-level features are included.
| Method | OS=16 | OS=8 | MS | Flip | mIOU |
|---|---|---|---|---|---|
| block7 + MG(1, 2, 1) | ✓ | | | | 76.66 |
| block7 + MG(1, 2, 1) | | ✓ | | | 78.05 |
| block7 + MG(1, 2, 1) | | ✓ | ✓ | | 78.93 |
| block7 + MG(1, 2, 1) | | ✓ | ✓ | ✓ | 79.35 |
Table 4. Inference strategy on the val set. MG: Multi-grid. OS: output stride. MS: Multi-scale inputs during test. Flip: Adding left-right ï¬ipped inputs.
| Multi-Grid | ASPP | Image Pooling | mIOU |
|---|---|---|---|
| (1, 1, 1) | (6, 12, 18) | | 75.36 |
| (1, 2, 1) | (6, 12, 18) | | 75.93 |
| (1, 2, 4) | (6, 12, 18) | | 76.58 |
| (1, 2, 4) | (6, 12, 18, 24) | | 76.46 |
| (1, 2, 4) | (6, 12, 18) | ✓ | 77.21 |
Table 5. Atrous Spatial Pyramid Pooling with multi-grid method and image-level features at output stride = 16.
| Method | OS=16 | OS=8 | MS | Flip | COCO | mIOU |
|---|---|---|---|---|---|---|
| MG(1, 2, 4) + ASPP(6, 12, 18) + Image Pooling | ✓ | | | | | 77.21 |
| MG(1, 2, 4) + ASPP(6, 12, 18) + Image Pooling | | ✓ | | | | 78.51 |
| MG(1, 2, 4) + ASPP(6, 12, 18) + Image Pooling | | ✓ | ✓ | | | 79.45 |
| MG(1, 2, 4) + ASPP(6, 12, 18) + Image Pooling | | ✓ | ✓ | ✓ | | 79.77 |
| MG(1, 2, 4) + ASPP(6, 12, 18) + Image Pooling | | ✓ | ✓ | ✓ | ✓ | 82.70 |
Table 6. Inference strategy on the val set: MG: Multi-grid. ASPP: Atrous spatial pyramid pooling. OS: output stride. MS: Multi- scale inputs during test. Flip: Adding left-right ï¬ipped inputs. COCO: Model pretrained on MS-COCO.
ASPP: In Tab. 5, we experiment with the effect of in- corporating multi-grid in block4 and image-level features to the improved ASPP module. We ï¬rst ï¬x ASP P = (6, 12, 18) (i.e., employ rates = (6, 12, 18) for the three parallel 3 à 3 convolution branches), and vary the multi- grid value. Employing Multi Grid = (1, 2, 1) is better than Multi Grid = (1, 1, 1), while further improvement is attained by adopting Multi Grid = (1, 2, 4) in the con- text of ASP P = (6, 12, 18) (cf ., the âblock4â column in Tab. 3). If we additionally employ another parallel branch with rate = 24 for longer range context, the performance drops slightly by 0.12%. On the other hand, augmenting the ASPP module with image-level feature is effective, reaching the ï¬nal performance of 77.21%.
Inference strategy on val set: Similarly, we apply output stride = 8 during inference once the model is trained. As shown in Tab. 6, employing output stride = 8 brings 1.3% improvement over using output stride = 16, adopting multi-scale inputs and adding left-right ï¬ipped images fur- ther improve the performance by 0.94% and 0.32%, respec- tively. The best model with ASPP attains the performance of 79.77%, better than the best model with cascaded atrous convolution modules (79.35%), and thus is selected as our ï¬nal model for test set evaluation.
Comparison with DeepLabv2: Both our best cascaded
model (in Tab. 4) and ASPP model (in Tab. 6) (in both cases without DenseCRF post-processing or MS-COCO pre-training) already outperform DeepLabv2 (77.69% with DenseCRF and pretrained on MS-COCO in Tab. 4 of [11]) on the PASCAL VOC 2012 val set. The improvement mainly comes from including and ï¬ne-tuning batch normalization parameters [38] in the proposed models and having a better way to encode multi-scale context.
Appendix: We show more experimental results, such as the effect of hyper parameters and Cityscapes [14] results, in the appendix.
Qualitative results: We provide qualitative visual results of our best ASPP model in Fig. 6. As shown in the ï¬gure, our model is able to segment objects very well without any DenseCRF post-processing.
Failure mode: As shown in the bottom row of Fig. 6, our model has difï¬culty in segmenting (a) sofa vs. chair, (b) dining table and chair, and (c) rare view of objects.
Pretrained on COCO: For comparison with other state- of-art models, we further pretrain our best ASPP model on MS-COCO dataset [57]. From the MS-COCO train- val minus minival set, we only select the images that have annotation regions larger than 1000 pixels and contain the classes deï¬ned in PASCAL VOC 2012, resulting in about 60K images for training. Besides, the MS-COCO classes not deï¬ned in PASCAL VOC 2012 are all treated as back- ground class. After pretraining on MS-COCO dataset, our proposed model attains performance of 82.7% on val set when using output stride = 8, multi-scale inputs and adding left-right ï¬ipped images during inference. We adopt smaller initial learning rate = 0.0001 and same training protocol as in Sec. 4.1 when ï¬ne-tuning on PASCAL VOC 2012 dataset. Test set result and an effective bootstrapping method: We notice that PASCAL VOC 2012 dataset provides higher quality of annotations than the augmented dataset [29], es- pecially for the bicycle class. We thus further ï¬ne-tune our model on the ofï¬cial PASCAL VOC 2012 trainval set be- fore evaluating on the test set. Speciï¬cally, our model is trained with output stride = 8 (so that annotation details are kept) and the batch normalization parameters are frozen (see Sec. 4.1 for details). Besides, instead of performing pixel hard example mining as [85, 70], we resort to bootstrapping on hard images. In particular, we duplicate the images that contain hard classes (namely bicycle, chair, table, potted- plant, and sofa) in the training set. As shown in Fig. 7, the simple bootstrapping method is effective for segmenting the bicycle class. In the end, our âDeepLabv3â achieves the per- formance of 85.7% on the test set without any DenseCRF post-processing, as shown in Tab. 7.
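A sketch of this bootstrapping step is given below; the class names follow the PASCAL VOC convention (the paper's "table" is VOC's "diningtable"), the image ids are illustrative, and the duplication factor is our assumption since the paper does not state it.

```python
HARD_CLASSES = {"bicycle", "chair", "diningtable", "pottedplant", "sofa"}

def bootstrap_hard_images(image_ids, classes_per_image, copies=2):
    """Duplicate training images that contain any hard class (copies is assumed)."""
    boosted = []
    for image_id in image_ids:
        times = copies if classes_per_image[image_id] & HARD_CLASSES else 1
        boosted.extend([image_id] * times)
    return boosted

ids = ["2007_000033", "2007_000042"]
classes = {"2007_000033": {"aeroplane"}, "2007_000042": {"chair", "person"}}
print(bootstrap_hard_images(ids, classes))
```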
Model pretrained on JFT-300M: Motivated by the recent work of [79], we further employ the ResNet-101 model which has been pretrained on both ImageNet and the JFT-300M dataset [34, 13, 79], resulting in a performance of 86.9% on the PASCAL VOC 2012 test set.
| Method | mIOU |
|---|---|
| Adelaide VeryDeep FCN VOC [85] | 79.1 |
| LRR 4x ResNet-CRF [25] | 79.3 |
| DeepLabv2-CRF [11] | 79.7 |
| CentraleSupelec Deep G-CRF [8] | 80.2 |
| HikSeg COCO [80] | 81.4 |
| SegModel [75] | 81.8 |
| Deep Layer Cascade (LC) [52] | 82.7 |
| TuSimple [84] | 83.1 |
| Large Kernel Matters [68] | 83.6 |
| Multipath-RefineNet [54] | 84.2 |
| ResNet-38 MS COCO [86] | 84.9 |
| PSPNet [95] | 85.4 |
| IDW-CNN [83] | 86.3 |
| CASIA IVA SDN [23] | 86.6 |
| DIS [61] | 86.8 |
| DeepLabv3 | 85.7 |
| DeepLabv3-JFT | 86.9 |

Table 7. Performance on PASCAL VOC 2012 test set.
# 5. Conclusion
Our proposed model âDeepLabv3â employs atrous con- volution with upsampled ï¬lters to extract dense feature maps and to capture long range context. Speciï¬cally, to encode multi-scale information, our proposed cascaded module grad- ually doubles the atrous rates while our proposed atrous spa- tial pyramid pooling module augmented with image-level features probes the features with ï¬lters at multiple sampling rates and effective ï¬eld-of-views. Our experimental results show that the proposed model signiï¬cantly improves over previous DeepLab versions and achieves comparable perfor- mance with other state-of-art models on the PASCAL VOC 2012 semantic image segmentation benchmark.
Acknowledgments We would like to acknowledge valu- able discussions with Zbigniew Wojna, the help from Chen Sun and Andrew Howard, and the support from Google Mobile Vision team.
# A. Effect of hyper-parameters
In this section, we follow the same training protocol as in the main paper and experiment with the effect of some hyper-parameters.
New training protocol: As mentioned in the main paper, we change the training protocol in [10, 11] with three main differences: (1) larger crop size, (2) upsampling logits during training, and (3) ï¬ne-tuning batch normalization. Here, we quantitatively measure the effect of the changes. As shown
Figure 6. Visualization results on the val set when employing our best ASPP model. The last row shows a failure mode.
Figure 7. Bootstrapping on hard images improves segmentation accuracy for rare and finely annotated classes such as bicycle. Panels: (a) image, (b) ground truth, (c) without bootstrapping, (d) with bootstrapping.
in Tab. 8, DeepLabv3 attains the performance of 77.21% on the PASCAL VOC 2012 val set [20] when adopting the new training protocol setting as in the main paper. When training DeepLabv3 without ï¬ne-tuning the batch normal- ization, the performance drops to 75.95%. If we do not upsample the logits during training (and instead downsam- ple the groundtruths), the performance decreases to 76.01%. Furthermore, if we employ smaller value of crop size (i.e., 321 as in [10, 11]), the performance signiï¬cantly decreases to 67.22%, demonstrating that boundary effect resulted from small crop size hurts the performance of DeepLabv3 which employs large atrous rates in the Atrous Spatial Pyramid Pooling (ASPP) module.
Varying batch size: Since it is important to train DeepLabv3 with ï¬ne-tuning the batch normalization, we further experiment with the effect of different batch sizes. As shown in Tab. 9, employing small batch size is inefï¬cient to train the model, while using larger batch size leads to better performance.
Output stride: The value of output stride determines the output feature map resolution and in turn affects the largest batch size we could use during training. In Tab. 10, we quantitatively measure the effect of employing different output stride values during both training and evaluation on the PASCAL VOC 2012 val set. We ï¬rst ï¬x the evaluation output stride = 16, vary the training output stride and ï¬t the largest possible batch size for all the settings (we are able to ï¬t batch size 6, 16, and 24 for training output stride equal to 8, 16, and 32, respectively). As shown in the top rows of Tab. 10, employing training output stride = 8 only attains the performance of 74.45% because we could not ï¬t large batch size in this setting which degrades the performance while ï¬ne-tuning the batch normalization parameters. When employing training output stride = 32, we could ï¬t large batch size but we lose feature map details. On the other hand, employing training output stride = 16 strikes the best trade- off and leads to the best performance. In the bottom rows of Tab. 10, we increase the evaluation output stride = 8. All settings improve the performance except the one where training output stride = 32. We hypothesize that we lose too much feature map details during training, and thus the model could not recover the details even when employing
| Crop Size | UL | BN | mIOU |
|---|---|---|---|
| 513 | ✓ | ✓ | 77.21 |
| 513 | ✓ | | 75.95 |
| 513 | | ✓ | 76.01 |
| 321 | ✓ | ✓ | 67.22 |
Table 8. Effect of hyper-parameters during training on PASCAL VOC 2012 val set at output stride=16. UL: Upsampling Logits. BN: Fine-tuning batch normalization.
| batch size | mIOU |
|---|---|
| 4 | 64.43 |
| 8 | 75.76 |
| 12 | 76.49 |
| 16 | 77.21 |
Table 9. Effect of batch size on PASCAL VOC 2012 val set. We em- ploy output stride=16 during both training and evaluation. Large batch size is required while training the model with ï¬ne-tuning the batch normalization parameters.
| train output stride | eval output stride | mIOU |
|---|---|---|
| 8 | 16 | 74.45 |
| 16 | 16 | 77.21 |
| 32 | 16 | 75.90 |
| 8 | 8 | 75.62 |
| 16 | 8 | 78.51 |
| 32 | 8 | 75.75 |
Table 10. Effect of output stride on PASCAL VOC 2012 val set. Employing output stride=16 during training leads to better perfor- mance for both eval output stride = 8 and 16.
output stride = 8 during evaluation.
# B. Asynchronous training
In this section, we experiment DeepLabv3 with Tensor- Flow asynchronous training [1]. We measure the effect of training the model with multiple replicas on PASCAL VOC 2012 semantic segmentation dataset. Our baseline employs simply one replica and requires training time 3.65 days with a K80 GPU. As shown in Tab. 11, we found that the perfor- mance of using multiple replicas does not drop compared to the baseline. However, training time with 32 replicas is signiï¬cantly reduced to 2.74 hours.
# C. DeepLabv3 on Cityscapes dataset
Cityscapes [14] is a large-scale dataset containing high quality pixel-level annotations of 5000 images (2975, 500, and 1525 for the training, validation, and test sets respec- tively) and about 20000 coarsely annotated images. Follow- ing the evaluation protocol [14], 19 semantic labels are used for evaluation without considering the void label.
| num replicas | mIOU | relative training time |
|---|---|---|
| 1 | 77.21 | 1.00x |
| 2 | 77.15 | 0.50x |
| 4 | 76.79 | 0.25x |
| 8 | 77.02 | 0.13x |
| 16 | 77.18 | 0.06x |
| 32 | 76.69 | 0.03x |
Table 11. Evaluation performance on PASCAL VOC 2012 val set when adopting asynchronous training.
| OS=16 | OS=8 | MS | Flip | mIOU |
|---|---|---|---|---|
| ✓ | | | | 77.23 |
| | ✓ | | | 77.82 |
| | ✓ | ✓ | | 79.06 |
| | ✓ | ✓ | ✓ | 79.30 |
Table 12. DeepLabv3 on the Cityscapes val set when trained with only train ï¬ne set. OS: output stride. MS: Multi-scale inputs during inference. Flip: Adding left-right ï¬ipped inputs.
We ï¬rst evaluate the proposed DeepLabv3 model on the validation set when training with only 2975 images (i.e., train ï¬ne set). We adopt the same training protocol as before except that we employ 90K training iterations, crop size equal to 769, and running inference on the whole image, instead of on the overlapped regions as in [11]. As shown in Tab. 12, DeepLabv3 attains the performance of 77.23% when evaluating at output stride = 16. Evaluating the model at output stride = 8 improves the performance to 77.82%. When we employ multi-scale inputs (we could ï¬t scales = {0.75, 1, 1.25} on a K40 GPU) and add left-right ï¬ipped inputs, the model achieves 79.30%.
In order to compete with other state-of-art models, we further train DeepLabv3 on the trainval coarse set (i.e., the 3475 ï¬nely annotated images and the extra 20000 coarsely annotated images). We adopt more scales and ï¬ner output stride during inference. In particular, we perform in- ference with scales = {0.75, 1, 1.25, 1.5, 1.75, 2} and eval- uation output stride = 4 with CPUs, which contributes extra 0.8% and 0.1% respectively on the validation set compared to using only three scales and output stride = 8. In the end, as shown in Tab. 13, our proposed DeepLabv3 achieves the performance of 81.3% on the test set. Some results on val set are visualized in Fig. 8.
# References
[1] M. Abadi, A. Agarwal, et al. Tensorï¬ow: Large-scale machine learning on heterogeneous distributed systems. arXiv:1603.04467, 2016.
[2] A. Adams, J. Baek, and M. A. Davis. Fast high-dimensional ï¬ltering using the permutohedral lattice. In Eurographics, 2010.
| Method | Coarse | mIOU |
|---|---|---|
| DeepLabv2-CRF [11] | | 70.4 |
| Deep Layer Cascade [52] | | 71 |
| ML-CRNN [21] | | 71.2 |
| Adelaide_context [55] | | 71.6 |
| FRRN [70] | | 71.8 |
| LRR-4x [25] | ✓ | 71.8 |
| RefineNet [54] | | 73.6 |
| FoveaNet [51] | | 74.1 |
| Ladder DenseNet [46] | | 74.3 |
| PEARL [42] | | 75.4 |
| Global-Local-Refinement [93] | | 77.3 |
| SAC_multiple [94] | | 78.1 |
| SegModel [75] | ✓ | 79.2 |
| TuSimple_Coarse [84] | ✓ | 80.1 |
| Netwarp [24] | ✓ | 80.5 |
| ResNet-38 [86] | ✓ | 80.6 |
| PSPNet [95] | ✓ | 81.2 |
| DeepLabv3 | ✓ | 81.3 |
Table 13. Performance on Cityscapes test set. Coarse: Use train extra set (coarse annotations) as well. Only a few top models with known references are listed in this table.
[3] V. Badrinarayanan, A. Kendall, and R. Cipolla. Segnet: A deep convolutional encoder-decoder architecture for image segmentation. arXiv:1511.00561, 2015.
[4] A. Brandt. Multi-level adaptive solutions to boundary-value problems. Mathematics of computation, 31(138):333â390, 1977.
[5] W. L. Briggs, V. E. Henson, and S. F. McCormick. A multigrid tutorial. SIAM, 2000.
[6] W. Byeon, T. M. Breuel, F. Raue, and M. Liwicki. Scene labeling with lstm recurrent neural networks. In CVPR, 2015. [7] H. Caesar, J. Uijlings, and V. Ferrari. COCO-Stuff: Thing and stuff classes in context. arXiv:1612.03716, 2016.
[8] S. Chandra and I. Kokkinos. Fast, exact and multi-scale in- ference for semantic image segmentation with deep Gaussian CRFs. arXiv:1603.08358, 2016.
[9] L.-C. Chen, J. T. Barron, G. Papandreou, K. Murphy, and A. L. Yuille. Semantic image segmentation with task-speciï¬c edge detection using cnns and a discriminatively trained domain transform. In CVPR, 2016.
[10] L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille. Semantic image segmentation with deep convolutional nets and fully connected crfs. In ICLR, 2015.
[11] L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille. Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. arXiv:1606.00915, 2016.
[12] L.-C. Chen, Y. Yang, J. Wang, W. Xu, and A. L. Yuille. At- tention to scale: Scale-aware semantic image segmentation. In CVPR, 2016.
[13] F. Chollet. Xception: Deep learning with depthwise separable convolutions. arXiv:1610.02357, 2016.
Figure 8. Visualization results on Cityscapes val set when training with only the train fine set.
[14] M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele. The cityscapes dataset for semantic urban scene understanding. In CVPR, 2016.
[15] J. Dai, K. He, and J. Sun. Convolutional feature masking for joint object and stuff segmentation. arXiv:1412.1283, 2014. [16] J. Dai, K. He, and J. Sun. Boxsup: Exploiting bounding boxes to supervise convolutional networks for semantic segmenta- tion. In ICCV, 2015.
[17] J. Dai, Y. Li, K. He, and J. Sun. R-fcn: Object detection via region-based fully convolutional networks. arXiv:1605.06409, 2016.
[18] J. Dai, H. Qi, Y. Xiong, Y. Li, G. Zhang, H. Hu, and Y. Wei. arXiv:1703.06211, Deformable convolutional networks. 2017.
[19] D. Eigen and R. Fergus. Predicting depth, surface normals and semantic labels with a common multi-scale convolutional architecture. arXiv:1411.4734, 2014.
[20] M. Everingham, S. M. A. Eslami, L. V. Gool, C. K. I. Williams, J. Winn, and A. Zisserma. The pascal visual object classes challenge a retrospective. IJCV, 2014.
[21] H. Fan, X. Mei, D. Prokhorov, and H. Ling. Multi-level contextual rnns with attention model for scene labeling. arXiv:1607.02537, 2016.
[22] C. Farabet, C. Couprie, L. Najman, and Y. LeCun. Learning hierarchical features for scene labeling. PAMI, 2013. [23] J. Fu, J. Liu, Y. Wang, and H. Lu. Stacked deconvolutional network for semantic segmentation. arXiv:1708.04943, 2017. [24] R. Gadde, V. Jampani, and P. V. Gehler. Semantic video cnns
through representation warping. In ICCV, 2017.
[25] G. Ghiasi and C. C. Fowlkes. Laplacian reconstruction and reï¬nement for semantic segmentation. arXiv:1605.02264, 2016.
[26] A. Giusti, D. Ciresan, J. Masci, L. Gambardella, and J. Schmidhuber. Fast image scanning with deep max-pooling convolutional neural networks. In ICIP, 2013.
[27] S. Gould, R. Fulton, and D. Koller. Decomposing a scene into geometric and semantically consistent regions. In ICCV. IEEE, 2009.
[28] K. Grauman and T. Darrell. The pyramid match kernel: Dis- criminative classiï¬cation with sets of image features. In ICCV, 2005.
[29] B. Hariharan, P. Arbel´aez, L. Bourdev, S. Maji, and J. Malik. Semantic contours from inverse detectors. In ICCV, 2011.
[30] B. Hariharan, P. Arbel´aez, R. Girshick, and J. Malik. Hyper- columns for object segmentation and ï¬ne-grained localization. In CVPR, 2015.
[31] K. He, X. Zhang, S. Ren, and J. Sun. Spatial pyramid pooling in deep convolutional networks for visual recognition. In ECCV, 2014.
[32] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. arXiv:1512.03385, 2015.
[33] X. He, R. S. Zemel, and M. Carreira-Perpindn. Multiscale conditional random ï¬elds for image labeling. In CVPR, 2004. [34] G. Hinton, O. Vinyals, and J. Dean. Distilling the knowledge
in a neural network. In NIPS, 2014.
[35] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural computation, 9(8):1735â1780, 1997.
[36] M. Holschneider, R. Kronland-Martinet, J. Morlet, and P. Tchamitchian. A real-time algorithm for signal analysis with the help of the wavelet transform. In Wavelets: Time- Frequency Methods and Phase Space, pages 289â297. 1989. [37] J. Huang, V. Rathod, C. Sun, M. Zhu, A. Korattikara, A. Fathi, I. Fischer, Z. Wojna, Y. Song, S. Guadarrama, and K. Murphy. Speed/accuracy trade-offs for modern convolutional object detectors. In CVPR, 2017.
[38] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv:1502.03167, 2015.
[39] M. A. Islam, M. Rochan, N. D. Bruce, and Y. Wang. Gated feedback reï¬nement network for dense image labeling. In CVPR, 2017.
[40] S. D. Jain, B. Xiong, and K. Grauman. Fusionseg: Learn- ing to combine motion and appearance for fully automatic segmention of generic objects in videos. In CVPR, 2017. [41] V. Jampani, M. Kiefel, and P. V. Gehler. Learning sparse high dimensional ï¬lters: Image ï¬ltering, dense crfs and bilateral neural networks. In CVPR, 2016.
[42] X. Jin, X. Li, H. Xiao, X. Shen, Z. Lin, J. Yang, Y. Chen, J. Dong, L. Liu, Z. Jie, J. Feng, and S. Yan. Video scene parsing with predictive feature learning. In ICCV, 2017. [43] P. Kohli, P. H. Torr, et al. Robust higher order potentials for enforcing label consistency. IJCV, 82(3):302â324, 2009. [44] S. Kong and C. Fowlkes. Recurrent scene parsing with per- spective understanding in the loop. arXiv:1705.07238, 2017. [45] P. Kr¨ahenb¨uhl and V. Koltun. Efï¬cient inference in fully connected crfs with gaussian edge potentials. In NIPS, 2011. [46] I. KreËso, S. ËSegvi´c, and J. Krapac. Ladder-style densenets for semantic segmentation of large natural images. In ICCV CVRSUAD workshop, 2017.
[47] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, 2012.
[48] L. Ladicky, C. Russell, P. Kohli, and P. H. Torr. Associative hierarchical crfs for object class image segmentation. In ICCV, 2009.
[49] S. Lazebnik, C. Schmid, and J. Ponce. Beyond bags of fea- tures: Spatial pyramid matching for recognizing natural scene categories. In CVPR, 2006.
[50] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Backpropagation applied to handwritten zip code recognition. Neural computa- tion, 1(4):541â551, 1989.
[51] X. Li, Z. Jie, W. Wang, C. Liu, J. Yang, X. Shen, Z. Lin, Q. Chen, S. Yan, and J. Feng. Foveanet: Perspective-aware urban scene parsing. arXiv:1708.02421, 2017.
[52] X. Li, Z. Liu, P. Luo, C. C. Loy, and X. Tang. Not all pixels are equal: Difï¬culty-aware semantic segmentation via deep layer cascade. arXiv:1704.01344, 2017.
[53] X. Liang, X. Shen, D. Xiang, J. Feng, L. Lin, and S. Yan. Semantic object parsing with local-global long short-term memory. arXiv:1511.04510, 2015.
[54] G. Lin, A. Milan, C. Shen, and I. Reid. Reï¬nenet: Multi- path reï¬nement networks with identity mappings for high- resolution semantic segmentation. arXiv:1611.06612, 2016. [55] G. Lin, C. Shen, I. Reid, et al. Efï¬cient piecewise train- ing of deep structured models for semantic segmentation. arXiv:1504.01013, 2015.
[56] T.-Y. Lin, P. Doll´ar, R. Girshick, K. He, B. Hariharan, and S. Belongie. Feature pyramid networks for object detection. arXiv:1612.03144, 2016.
[57] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ra- manan, P. Doll´ar, and C. L. Zitnick. Microsoft COCO: Com- mon objects in context. In ECCV, 2014.
[58] W. Liu, A. Rabinovich, and A. C. Berg. Parsenet: Looking wider to see better. arXiv:1506.04579, 2015.
[59] Z. Liu, X. Li, P. Luo, C. C. Loy, and X. Tang. Semantic image segmentation via deep parsing network. In ICCV, 2015. [60] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In CVPR, 2015. [61] P. Luo, G. Wang, L. Lin, and X. Wang. Deep dual learning
for semantic image segmentation. In ICCV, 2017.
[62] M. Mostajabi, P. Yadollahpour, and G. Shakhnarovich. Feed- forward semantic segmentation with zoom-out features. In CVPR, 2015.
[63] R. Mottaghi, X. Chen, X. Liu, N.-G. Cho, S.-W. Lee, S. Fidler, R. Urtasun, and A. Yuille. The role of context for object detection and semantic segmentation in the wild. In CVPR, 2014.
[64] H. Noh, S. Hong, and B. Han. Learning deconvolution net- work for semantic segmentation. In ICCV, 2015.
[65] G. Papandreou, L.-C. Chen, K. Murphy, and A. L. Yuille. Weakly- and semi-supervised learning of a dcnn for semantic image segmentation. In ICCV, 2015.
[66] G. Papandreou, I. Kokkinos, and P.-A. Savalle. Modeling local and global deformations in deep learning: Epitomic convolution, multiple instance learning, and sliding window detection. In CVPR, 2015.
[67] G. Papandreou and P. Maragos. Multigrid geometric active contour models. TIP, 16(1):229â240, 2007.
[68] C. Peng, X. Zhang, G. Yu, G. Luo, and J. Sun. Large kernel mattersâimprove semantic segmentation by global convolu- tional network. arXiv:1703.02719, 2017.
[69] P. Pinheiro and R. Collobert. Recurrent convolutional neural networks for scene labeling. In ICML, 2014.
[70] T. Pohlen, A. Hermans, M. Mathias, and B. Leibe. Full- resolution residual networks for semantic segmentation in street scenes. arXiv:1611.08323, 2016.
[71] O. Ronneberger, P. Fischer, and T. Brox. U-net: Convolutional networks for biomedical image segmentation. In MICCAI, 2015.
[72] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. IJCV, 2015.
[73] A. G. Schwing and R. Urtasun. Fully connected deep struc- tured networks. arXiv:1503.02351, 2015.
[74] P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus, and Y. LeCun. Overfeat: Integrated recognition, localization and
detection using convolutional networks. arXiv:1312.6229, 2013.
[75] F. Shen, R. Gan, S. Yan, and G. Zeng. Semantic segmentation via structured patch prediction, context crf and guidance crf. In CVPR, 2017.
[76] J. Shotton, J. Winn, C. Rother, and A. Criminisi. Textonboost for image understanding: Multi-class object recognition and segmentation by jointly modeling texture, layout, and context. IJCV, 2009.
[77] A. Shrivastava, R. Sukthankar, J. Malik, and A. Gupta. Be- yond skip connections: Top-down modulation for object de- tection. arXiv:1612.06851, 2016.
[78] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015. [79] C. Sun, A. Shrivastava, S. Singh, and A. Gupta. Revisiting unreasonable effectiveness of data in deep learning era. In ICCV, 2017.
[80] H. Sun, D. Xie, and S. Pu. Mixed context networks for semantic segmentation. arXiv:1610.05854, 2016.
[81] D. Terzopoulos. Image analysis using multigrid relaxation methods. TPAMI, (2):129â139, 1986.
[82] R. Vemulapalli, O. Tuzel, M.-Y. Liu, and R. Chellappa. Gaus- sian conditional random ï¬eld network for semantic segmenta- tion. In CVPR, 2016.
[83] G. Wang, P. Luo, L. Lin, and X. Wang. Learning object inter- actions and descriptions for semantic image segmentation. In CVPR, 2017.
[84] P. Wang, P. Chen, Y. Yuan, D. Liu, Z. Huang, X. Hou, and G. Cottrell. Understanding convolution for semantic segmen- tation. arXiv:1702.08502, 2017.
[85] Z. Wu, C. Shen, and A. van den Hengel. Bridging category-level and instance-level semantic image segmentation. arXiv:1605.06885, 2016.
[86] Z. Wu, C. Shen, and A. van den Hengel. Wider or deeper: Revisiting the resnet model for visual recognition. arXiv:1611.10080, 2016.
[87] F. Xia, P. Wang, L.-C. Chen, and A. L. Yuille. Zoom better to see clearer: Huamn part segmentation with auto zoom net. arXiv:1511.06881, 2015.
[88] Z. Yan, H. Zhang, Y. Jia, T. Breuel, and Y. Yu. Combining the best of convolutional layers and recurrent layers: A hybrid network for semantic segmentation. arXiv:1603.04871, 2016. [89] J. Yao, S. Fidler, and R. Urtasun. Describing the scene as a whole: Joint object detection, scene classiï¬cation and seman- tic segmentation. In CVPR, 2012.
[90] F. Yu and V. Koltun. Multi-scale context aggregation by dilated convolutions. In ICLR, 2016.
[91] S. Zagoruyko and N. Komodakis. Wide residual networks. arXiv:1605.07146, 2016.
[92] M. D. Zeiler, G. W. Taylor, and R. Fergus. Adaptive deconvo- lutional networks for mid and high level feature learning. In ICCV, 2011.
[93] R. Zhang, S. Tang, M. Lin, J. Li, and S. Yan. Global-residual and local-boundary reï¬nement networks for rectifying scene parsing predictions. IJCAI, 2017.
[94] R. Zhang, S. Tang, Y. Zhang, J. Li, and S. Yan. Scale-adaptive convolutions for scene parsing. In ICCV, 2017.
[95] H. Zhao, J. Shi, X. Qi, X. Wang, and J. Jia. Pyramid scene parsing network. arXiv:1612.01105, 2016.
[96] S. Zheng, S. Jayasumana, B. Romera-Paredes, V. Vineet, Z. Su, D. Du, C. Huang, and P. Torr. Conditional random ï¬elds as recurrent neural networks. In ICCV, 2015.
[97] B. Zhou, H. Zhao, X. Puig, S. Fidler, A. Barriuso, and A. Tor- ralba. Scene parsing through ade20k dataset. In CVPR, 2017. | {
"id": "1605.07146"
} |
1706.05125 | Deal or No Deal? End-to-End Learning for Negotiation Dialogues | Much of human dialogue occurs in semi-cooperative settings, where agents with
different goals attempt to agree on common decisions. Negotiations require
complex communication and reasoning skills, but success is easy to measure,
making this an interesting task for AI. We gather a large dataset of
human-human negotiations on a multi-issue bargaining task, where agents who
cannot observe each other's reward functions must reach an agreement (or a
deal) via natural language dialogue. For the first time, we show it is possible
to train end-to-end models for negotiation, which must learn both linguistic
and reasoning skills with no annotated dialogue states. We also introduce
dialogue rollouts, in which the model plans ahead by simulating possible
complete continuations of the conversation, and find that this technique
dramatically improves performance. Our code and dataset are publicly available
(https://github.com/facebookresearch/end-to-end-negotiator). | http://arxiv.org/pdf/1706.05125 | Mike Lewis, Denis Yarats, Yann N. Dauphin, Devi Parikh, Dhruv Batra | cs.AI, cs.CL | null | null | cs.AI | 20170616 | 20170616 |
# Deal or No Deal? End-to-End Learning for Negotiation Dialogues
# Mike Lewis1, Denis Yarats1, Yann N. Dauphin1, Devi Parikh2,1 and Dhruv Batra2,1 1Facebook AI Research 2Georgia Institute of Technology
{mikelewis,denisy,ynd}@fb.com {parikh,dbatra}@gatech.edu
# Abstract
Much of human dialogue occurs in semi-cooperative settings, where agents with different goals attempt to agree on common decisions. Negotiations require complex communication and reasoning skills, but success is easy to measure, making this an interesting task for AI. We gather a large dataset of human-human negotiations on a multi-issue bargaining task, where agents who cannot observe each other's reward functions must reach an agreement (or a deal) via natural language dialogue. For the first time, we show it is possible to train end-to-end models for negotiation, which must learn both linguistic and reasoning skills with no annotated dialogue states. We also introduce dialogue rollouts, in which the model plans ahead by simulating possible complete continuations of the conversation, and find that this technique dramatically improves performance. Our code and dataset are publicly available.1
# 1 Introduction

Intelligent agents often need to cooperate with others who have different goals, and typically use natural language to agree on decisions. Negotiation is simultaneously a linguistic and a reasoning problem, in which an intent must be formulated and then verbally realised. Such dialogues contain both cooperative and adversarial elements, and require agents to understand, plan, and generate utterances to achieve their goals (Traum et al., 2008; Asher et al., 2012).

We collect the first large dataset of natural language negotiations between two people, and show that end-to-end neural models can be trained to negotiate by maximizing the likelihood of human actions. This approach is scalable and domain-independent, but does not model the strategic skills required for negotiating well. We further show that models can be improved by training and decoding to maximize reward instead of likelihood: by training with self-play reinforcement learning, and using rollouts to estimate the expected reward of utterances during decoding.

To study semi-cooperative dialogue, we gather a dataset of 5808 dialogues between humans on a negotiation task. Users were shown a set of items with a value for each, and asked to agree how to divide the items with another user who has a different, unseen, value function (Figure 1).

We first train recurrent neural networks to imitate human actions. We find that models trained to maximise the likelihood of human utterances can generate fluent language, but make comparatively poor negotiators, which are overly willing to compromise. We therefore explore two methods for improving the model's strategic reasoning skills, both of which attempt to optimise for the agent's goals, rather than simply imitating humans:

Firstly, instead of training to optimise likelihood, we show that our agents can be considerably improved using self play, in which pre-trained models practice negotiating with each other in order to optimise performance. To avoid the models diverging from human language, we interleave reinforcement learning updates with supervised updates. For the first time, we show that end-to-end dialogue agents trained using reinforcement learning outperform their supervised counterparts in negotiations with humans.

Secondly, we introduce a new form of planning for dialogue called dialogue rollouts, in which an agent simulates complete dialogues during decoding to estimate the reward of utterances. We show that decoding to maximise the reward function (rather than likelihood) significantly improves performance against both humans and machines.

Analysing the performance of our agents, we find evidence of sophisticated negotiation strategies. For example, we find instances of the model feigning interest in a valueless issue, so that it can later "compromise" by conceding it. Deceit is a complex skill that requires hypothesising the other agent's beliefs, and is learnt relatively late in child development (Talwar and Lee, 2002). Our agents have learnt to deceive without any explicit human design, simply by trying to achieve their goals.

The rest of the paper proceeds as follows: §2 describes the collection of a large dataset of human-human negotiation dialogues. §3 describes a baseline supervised model, which we then show can be improved by goal-based training (§4) and decoding (§5). §6 measures the performance of our models and humans on this task, and §7 gives a detailed analysis and suggests future directions.

# 1https://github.com/facebookresearch/end-to-end-negotiator

Figure 1: A dialogue in our Mechanical Turk interface, which we used to collect a negotiation dataset.
# 2 Data Collection
# 2.1 Overview
To enable end-to-end training of negotiation agents, we first develop a novel negotiation task and curate a dataset of human-human dialogues for this task. This task and dataset follow our proposed general framework for studying semi-cooperative dialogue. Each agent is shown an input specifying a space of possible actions and a reward function which will score the outcome of the negotiation. Agents then sequentially take turns of either sending natural language messages, or selecting that a final decision has been reached. When one agent selects that an agreement has been made, both agents independently output what they think the agreed decision was. If conflicting decisions are made, both agents are given zero reward.

# 2.2 Task

Our task is an instance of multi-issue bargaining (Fershtman, 1990), and is based on DeVault et al. (2015). Two agents are both shown the same collection of items, and instructed to divide them so that each item is assigned to one agent.

Each agent is given a different randomly generated value function, which gives a non-negative value for each item. The value functions are constrained so that: (1) the total value for a user of all items is 10; (2) each item has non-zero value to at least one user; and (3) some items have non-zero value to both users. These constraints enforce that it is not possible for both agents to receive a maximum score, and that no item is worthless to both agents, so the negotiation will be competitive. After 10 turns, we allow agents the option to complete the negotiation with no agreement, which is worth 0 points to both users. We use 3 item types (books, hats, balls), and between 5 and 7 total items in the pool. Figure 1 shows our interface.
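As a concrete illustration of these constraints, the sketch below samples a random scenario by rejection sampling. The function name and sampling strategy are assumptions; the paper does not specify how scenarios were generated.

```python
import random

def sample_scenario(n_types=3, total_value=10, max_tries=10000):
    """Rejection-sample item counts and two value functions satisfying the
    constraints above; the exact sampler used by the authors is not specified."""
    for _ in range(max_tries):
        counts = [1, 1, 1]                         # at least one of each item type
        for _ in range(random.randint(5, 7) - 3):  # between 5 and 7 items in total
            counts[random.randrange(n_types)] += 1

        def sample_values():
            while True:                            # non-negative values totalling 10
                v = [random.randint(0, total_value) for _ in range(n_types)]
                if sum(c * x for c, x in zip(counts, v)) == total_value:
                    return v

        v1, v2 = sample_values(), sample_values()
        no_worthless_item = all(a > 0 or b > 0 for a, b in zip(v1, v2))
        some_shared_value = any(a > 0 and b > 0 for a, b in zip(v1, v2))
        if no_worthless_item and some_shared_value:
            return counts, v1, v2
    raise RuntimeError("failed to sample a valid scenario")
```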
# 2.3 Data Collection
We collected a set of human-human dialogues using Amazon Mechanical Turk. Workers were paid $0.15 per dialogue, with a $0.05 bonus for maximal scores. We only used workers based in the United States with a 95% approval rating and at least 5000 previous HITs. Our data collection interface was adapted from that of Das et al. (2016). We collected a total of 5808 dialogues, based on 2236 unique scenarios (where a scenario is the
Crowd Sourced Dialogue
  Agent 1 Input: 3xbook value=1, 2xhat value=3, 1xball value=1
  Agent 2 Input: 3xbook value=2, 2xhat value=1, 1xball value=2
  Dialogue:
    Agent 1: I want the books and the hats, you get the ball
    Agent 2: Give me a book too and we have a deal
    Agent 1: Ok, deal
    Agent 2: <choose>
  Agent 1 Output: 2xbook 2xhat
  Agent 2 Output: 1xbook 1xball

Perspective: Agent 1
  Input: 3xbook value=1, 2xhat value=3, 1xball value=1
  Output: 2xbook 2xhat
  Dialogue:
    write: I want the books and the hats, you get the ball
    read: Give me a book too and we have a deal
    write: Ok, deal
    read: <choose>

Perspective: Agent 2
  Input: 3xbook value=2, 2xhat value=1, 1xball value=2
  Output: 1xbook 1xball
  Dialogue:
    read: I want the books and the hats, you get the ball
    write: Give me a book too and we have a deal
    read: Ok, deal
    write: <choose>
Figure 2: Converting a crowd-sourced dialogue (left) into two training examples (right), from the perspective of each user. The perspectives differ on their input goals, output choice, and in special tokens marking whether a statement was read or written. We train conditional language models to predict the dialogue given the input, and additional models to predict the output given the dialogue.
available items and values for the two users). We held out a test set of 252 scenarios (526 dialogues). Holding out test scenarios means that models must generalise to new situations.
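The sketch below illustrates how one collected dialogue could be converted into two training examples, as depicted in Figure 2 and described in Section 3.1. Field names such as `goals` and `outputs` are illustrative, not the released dataset's schema.

```python
def build_examples(goals, dialogue, outputs):
    """Convert one crowd-sourced dialogue into two training examples, one per
    agent. `goals[a]` is (counts, values) for agent a, `dialogue` a list of
    (speaker, utterance) pairs, and `outputs[a]` the items agent a received."""
    examples = []
    for agent in (0, 1):
        counts, values = goals[agent]
        # input goal g: six integers interleaving the count and value of each type
        g = [x for pair in zip(counts, values) for x in pair]
        tokens = []
        for speaker, utterance in dialogue:
            tokens.append("write:" if speaker == agent else "read:")
            tokens.extend(utterance.split())
        tokens.append("<choose>")
        examples.append({"input": g, "dialogue": tokens, "output": outputs[agent]})
    return examples
```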
# 3 Likelihood Model

We propose a simple but effective baseline model for the conversational agent, in which a sequence-to-sequence model is trained to produce the complete dialogue, conditioned on an agent's input.

# 3.1 Data Representation

Each dialogue is converted into two training examples, showing the complete conversation from the perspective of each agent. The examples differ on their input goals, output choice, and whether utterances were read or written.

Training examples contain an input goal g, specifying the available items and their values, a dialogue x, and an output decision o specifying which items each agent will receive. Specifically, we represent g as a list of six integers corresponding to the count and value of each of the three item types. Dialogue x is a list of tokens x0..T containing the turns of each agent interleaved with symbols marking whether a turn was written by the agent or their partner, terminating in a special token indicating one agent has marked that an agreement has been made. Output o is six integers describing how many of each of the three item types are assigned to each agent. See Figure 2.

# 3.2 Supervised Learning

We train a sequence-to-sequence network to generate an agent's perspective of the dialogue conditioned on the agent's input goals (Figure 3a).

The model uses 4 recurrent neural networks, implemented as GRUs (Cho et al., 2014): GRU_w, GRU_g, GRU_→o, and GRU_←o.
The agent's input goals g are encoded using GRU_g. We refer to the final hidden state as h_g. The model then predicts each token x_t from left to right, conditioned on the previous tokens and h_g. At each time step t, GRU_w takes as input the previous hidden state h_{t-1}, previous token x_{t-1} (embedded with a matrix E), and input encoding h_g. Conditioning on the input at each time step helps the model learn dependencies between language and goals.

h_t = GRU_w(h_{t-1}, [Ex_{t-1}, h_g])   (1)

The token at each time step is predicted with a softmax, which uses weight tying with the embedding matrix E (Mao et al., 2015):

p_θ(x_t | x_{0..t-1}, g) ∝ exp(E^T h_t)   (2)
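A minimal PyTorch sketch of this goal-conditioned language model (Eqs. 1-2) is shown below. Hidden sizes follow Section 6.1, but the class and variable names, and the projection used to tie the output layer to E, are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class GoalConditionedLM(nn.Module):
    """Sketch of Eqs. (1)-(2): GRU_g encodes the goal, GRU_w predicts dialogue
    tokens while being fed the goal encoding h_g at every step, and the output
    layer is tied to the token embedding E via a small projection."""
    def __init__(self, vocab_size, emb_dim=256, goal_dim=64, hid_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)                 # E
        self.goal_embed = nn.Embedding(vocab_size, goal_dim)           # 64-d input tokens
        self.goal_enc = nn.GRU(goal_dim, goal_dim, batch_first=True)   # GRU_g
        self.word_rnn = nn.GRUCell(emb_dim + goal_dim, hid_dim)        # GRU_w
        self.to_emb = nn.Linear(hid_dim, emb_dim)     # projects h_t into E's space

    def forward(self, goal_tokens, dialogue_tokens):
        _, h_g = self.goal_enc(self.goal_embed(goal_tokens))
        h_g = h_g.squeeze(0)                                           # (B, goal_dim)
        h = torch.zeros(dialogue_tokens.size(0), self.word_rnn.hidden_size)
        logits = []
        for t in range(dialogue_tokens.size(1) - 1):
            x_prev = self.embed(dialogue_tokens[:, t])
            h = self.word_rnn(torch.cat([x_prev, h_g], dim=-1), h)     # Eq. (1)
            logits.append(self.to_emb(h) @ self.embed.weight.t())      # Eq. (2)
        return torch.stack(logits, dim=1)   # (B, T-1, V): scores for x_1 .. x_{T-1}
```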
[Figure 3 shows two panels, (a) Supervised Training and (b) Decoding, and Reinforcement Learning, each with an input encoder, an example dialogue (write: Take one hat / read: I need two / write: deal), and an output decoder.]
Figure 3: Our model: tokens are predicted conditioned on previous words and the input, then the output is predicted using attention over the complete dialogue. In supervised training (3a), we train the model to predict the tokens of both agents. During decoding and reinforcement learning (3b) some tokens are sampled from the model, but some are generated by the other agent and are only encoded by the model.
Note that the model predicts both agents' words, enabling its use as a forward model in Section 5.

At the end of the dialogue, the agent outputs a set of tokens o representing the decision. We generate each output conditionally independently, using a separate classifier for each. The classifiers share a bidirectional GRU_o and attention mechanism (Bahdanau et al., 2014) over the dialogue, and additionally condition on the input goals.
h_→o_t = GRU_→o(h_→o_{t-1}, [Ex_t, h_t])   (3)

h_←o_t = GRU_←o(h_←o_{t+1}, [Ex_t, h_t])   (4)

h^o_t = [h_→o_t, h_←o_t]   (5)

h^a_t = W[tanh(W' h^o_t)]   (6)

α_t = exp(w · h^a_t) / Σ_{t'} exp(w · h^a_{t'})   (7)

h^s = tanh(W^s [h_g, Σ_t α_t h_t])   (8)
The output tokens are predicted using a softmax:

p_θ(o_i | x_{0..t}, g) ∝ exp(W_{o_i} h^s)   (9)

# 3.3 Decoding

During decoding, the model must generate an output token x_t conditioned on dialogue history x_{0..t-1} and input goals g, by sampling from p_θ:

x_t ∼ p_θ(x_t | x_{0..t-1}, g)   (11)

If the model generates a special end-of-turn token, it then encodes a series of tokens output by the other agent, until its next turn (Figure 3b).

The dialogue ends when either agent outputs a special end-of-dialogue token. The model then outputs a set of choices o. We choose each item independently, but enforce consistency by checking the solution is in a feasible set O:

o* = argmax_{o ∈ O} ∏_i p_θ(o_i | x_{0..T}, g)   (12)

In our task, a solution is feasible if each item is assigned to exactly one agent. The space of solutions is small enough to be tractably enumerated.
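The following sketch enumerates feasible allocations for Eq. (12), simplified so that only this agent's share of each item type is enumerated and the partner implicitly receives the remainder; `item_log_probs` is an assumed per-slot log-probability table rather than the model's exact output format.

```python
from itertools import product

def choose_output(counts, item_log_probs):
    """Pick the most likely feasible allocation (Eq. 12). `item_log_probs[i][k]`
    is an assumed log-probability that this agent keeps k items of type i."""
    best, best_score = None, float("-inf")
    for split in product(*[range(c + 1) for c in counts]):   # all feasible splits
        score = sum(item_log_probs[i][k] for i, k in enumerate(split))
        if score > best_score:
            best, best_score = split, score
    return best   # number of each item type kept by this agent
```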
The model is trained to minimize the negative log likelihood of the token sequence x_{0..T} conditioned on the input goals g, and of the outputs o conditioned on x and g. The two terms are weighted with a hyperparameter α:

L(θ) = - Σ_{x,g} Σ_t log p_θ(x_t | x_{0..t-1}, g)  -  α Σ_{x,g,o} Σ_j log p_θ(o_j | x_{0..T}, g)   (10)

where the first term is the token prediction loss and the second the output choice prediction loss.

# 4 Goal-based Training

Supervised learning aims to imitate the actions of human users, but does not explicitly attempt to maximise an agent's goals. Instead, we explore pre-training with supervised learning, and then fine-tuning against the evaluation metric using reinforcement learning. Similar two-stage learning strategies have been used previously (e.g. Li et al. (2016); Das et al. (2017)).
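A sketch of the supervised pre-training objective of Eq. (10) above, assuming per-token logits and one classifier per output slot; tensor shapes and the padding convention are illustrative assumptions.

```python
import torch.nn.functional as F

def supervised_loss(token_logits, token_targets, output_logits, output_targets,
                    alpha=0.5):
    """Sketch of Eq. (10): token negative log likelihood plus an alpha-weighted
    output-choice term. `token_logits` is (B, T, V); `output_logits` is a list
    of per-slot (B, K) tensors, one per output integer."""
    token_loss = F.cross_entropy(
        token_logits.reshape(-1, token_logits.size(-1)),
        token_targets.reshape(-1),
        ignore_index=-100,                       # ignore padded positions
    )
    output_loss = sum(F.cross_entropy(logits, target)
                      for logits, target in zip(output_logits, output_targets))
    return token_loss + alpha * output_loss
```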
Unlike the Neural Conversational Model (Vinyals and Le, 2015), our approach shares all parameters for reading and generating tokens.
During reinforcement learning, an agent A attempts to improve its parameters from conversations with another agent B. While the other agent B could be a human, in our experiments we used our fixed supervised model that was trained to imitate humans. The second model is fixed as we found that updating the parameters of both agents led to divergence from human language. In effect,
Figure 4: Decoding through rollouts: The model ï¬rst generates a small set of candidate responses. For each candidate it simulates the future conversation by sampling, and estimates the expected future reward by averaging the scores. The system outputs the candidate with the highest expected reward.
agent A learns to improve by simulating conversa- tions with the help of a surrogate forward model. Agent A reads its goals g and then generates tokens x0..n by sampling from pθ. When x gener- ates an end-of-turn marker, it then reads in tokens xn+1..m generated by agent B. These turns alter- nate until one agent emits a token ending the di- alogue. Both agents then output a decision o and collect a reward from the environment (which will be 0 if they output different decisions). We denote the subset of tokens generated by A as X A (e.g. tokens with incoming arrows in Figure 3b).
After a complete dialogue has been generated, we update agent Aâs parameters based on the out- come of the negotiation. Let rA be the score agent A achieved in the completed dialogue, T be the length of the dialogue, γ be a discount factor that rewards actions at the end of the dialogue more strongly, and µ be a running average of completed dialogue rewards so far2. We deï¬ne the future re- ward R for an action xt â X A as follows:
Algorithm 1 Dialogue Rollouts algorithm.
 1: procedure ROLLOUT(x_{0..i}, g)
 2:   u* ← ∅
 3:   for c ∈ {1..C} do                        ▷ C candidate moves
 4:     j ← i
 5:     do                                      ▷ Rollout to end of turn
 6:       j ← j + 1
 7:       x_j ∼ p_θ(x_j | x_{0..j-1}, g)
 8:     while x_j ∉ {read:, choose:}
 9:     u ← x_{i+1}..x_j                        ▷ u is candidate move
10:     for s ∈ {1..S} do                       ▷ S samples per move
11:       k ← j                                 ▷ Start rollout from end of u
12:       while x_k ≠ choose: do                ▷ Rollout to end of dialogue
13:         k ← k + 1
14:         x_k ∼ p_θ(x_k | x_{0..k-1}, g)
15:       o ← argmax_{o' ∈ O} p(o' | x_{0..k}, g)    ▷ Calculate rollout output and reward
16:       R(u) ← R(u) + r(o) p(o' | x_{0..k}, g)
17:     if R(u) > R(u*) then
18:       u* ← u
19:   return u*                                 ▷ Return best move
R(x_t) = Σ_{x_t ∈ X^A} γ^{T-t} (r^A(o) - µ)   (13)

We then optimise the expected reward of each action x_t ∈ X^A:
# 5 Goal-based Decoding
L^{RL}_θ = E_{x_t ∼ p_θ(x_t | x_{0..t-1}, g)} [R(x_t)]   (14)
The gradient of L^{RL}_θ is calculated as in REINFORCE (Williams, 1992):

∇_θ L^{RL}_θ = Σ_{x_t ∈ X^A} E_{x_t} [R(x_t) ∇_θ log p_θ(x_t | x_{0..t-1}, g)]   (15)
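A minimal sketch of the resulting policy-gradient update (Eqs. 13-15) for a single self-play dialogue; the inputs are assumed to be the log-probabilities of the learning agent's own tokens, and the baseline a running mean of past rewards.

```python
def reinforce_loss(log_probs, reward, baseline, gamma=0.95):
    """Sketch of Eqs. (13)-(15) for one dialogue: `log_probs` are log p(x_t) for
    the tokens the learning agent produced, in order; `reward` its final score."""
    T = len(log_probs)
    loss = 0.0
    for t, lp in enumerate(log_probs):
        R_t = (gamma ** (T - t)) * (reward - baseline)  # discounted, mean-centred
        loss = loss - R_t * lp                          # gradient matches REINFORCE
    return loss  # minimise with any optimiser, e.g. loss.backward() in PyTorch
```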
Likelihood-based decoding (§3.3) may not be optimal. For instance, an agent may be choosing between accepting an offer, or making a counter offer. The former will often have a higher likelihood under our model, as there are fewer ways to agree than to make another offer, but the latter may lead to a better outcome. Goal-based decoding also allows more complex dialogue strategies. For example, a deceptive utterance is likely to have a low model score (as users were generally honest in the supervised data), but may achieve high reward.

2As all rewards are non-negative, we instead re-scale them by subtracting the mean reward found during self play. Shifting in this way can reduce the variance of our estimator.

We instead explore decoding by maximising expected reward. We achieve this by using p_θ as a forward model for the complete dialogue, and then deterministically computing the reward. Rewards for an utterance are averaged over samples to calculate expected future reward (Figure 4).

We use a two stage process: First, we generate c candidate utterances U = u_{0..c}, representing possible complete turns that the agent could make, which are generated by sampling from p_θ until the end-of-turn token is reached. Let x_{0..n-1} be the current dialogue history. We then calculate the expected reward R(u) of candidate utterance u = x_{n..n+k} by repeatedly sampling x_{n+k+1..T} from p_θ, then choosing the best output o using Equation 12, and finally deterministically computing the reward r(o). The reward is scaled by the probability of the output given the dialogue, because if the agents select different outputs then they both receive 0 reward.

R(x_{n..n+k}) = E_{x_{n+k+1..T}, o ∼ p_θ} [r(o) p_θ(o | x_{0..T})]   (16)
We then return the utterance maximizing R:

u* = argmax_{u ∈ U} R(u)   (17)
We use 5 rollouts for each of 10 candidate turns.
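The sketch below mirrors this two-stage procedure. The helpers `sample_turn`, `sample_to_end`, `best_output` and `score` are assumed interfaces standing in for the forward model and the reward computation; they are not part of the released code.

```python
def rollout_decode(model, history, goal, n_candidates=10, n_rollouts=5):
    """Sketch of rollout decoding (Eqs. 16-17): sample candidate turns, simulate
    completions of the dialogue, and return the turn with highest expected reward."""
    best_turn, best_reward = None, float("-inf")
    for _ in range(n_candidates):
        turn = model.sample_turn(history, goal)              # candidate utterance u
        expected = 0.0
        for _ in range(n_rollouts):
            future = model.sample_to_end(history + turn, goal)
            output, prob = model.best_output(history + turn + future, goal)
            expected += model.score(output, goal) * prob     # reward scaled by p(o|x)
        expected /= n_rollouts
        if expected > best_reward:
            best_turn, best_reward = turn, expected
    return best_turn
```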
# 6 Experiments
# 6.1 Training Details
We implement our models using PyTorch. All hyper-parameters were chosen on a development dataset. The input tokens are embedded into a 64-dimensional space, while the dialogue tokens are embedded with 256-dimensional embeddings (with no pre-training). The input GRU_g has a hidden layer of size 64 and the dialogue GRU_w is of size 128. The output GRU_→o and GRU_←o both have a hidden state of size 256, and the size of h^s is 256 as well. During supervised training, we optimise using stochastic gradient descent with a minibatch size of 16, an initial learning rate of 1.0, Nesterov momentum with µ=0.1 (Nesterov, 1983), and clipping gradients whose L2 norm exceeds 0.5. We train the model for 30 epochs and pick the snapshot of the model with the best validation perplexity. We then annealed the learning rate by a factor of 5 each epoch. We weight the terms in the loss function (Equation 10) using α=0.5. We do not train against output decisions where humans selected different agreements. Tokens occurring fewer than 20 times are replaced with an "unknown" token.

During reinforcement learning, we use a learning rate of 0.1, clip gradients above 1.0, and use a discount factor of γ=0.95. After every 4 reinforcement learning updates, we make a supervised update with mini-batch size 16 and learning rate 0.5, and we clip gradients at 1.0. We used 4086 simulated conversations.

When sampling words from p_θ, we reduce the variance by doubling the values of logits (i.e. using a temperature of 0.5).
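Dividing logits by a temperature of 0.5 is equivalent to doubling them; a one-line sketch:

```python
import torch

def sample_with_temperature(logits, temperature=0.5):
    """Sharpen the next-token distribution and sample: dividing the logits by
    0.5 is the same as doubling them."""
    probs = torch.softmax(logits / temperature, dim=-1)
    return torch.multinomial(probs, num_samples=1)
```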
# 6.2 Comparison Systems
We compare the performance of the following: LIKELIHOOD uses supervised training and decoding (§3), RL is fine-tuned with goal-based self-play (§4), ROLLOUTS uses supervised training combined with goal-based decoding using rollouts (§5), and RL+ROLLOUTS uses rollouts with a base model trained with reinforcement learning.
# 6.3 Intrinsic Evaluation
For development, we measured the perplexity of user generated utterances, conditioned on the input and previous dialogue.

Results are shown in Table 3, and show that the simple LIKELIHOOD model produces the most human-like responses, and the alternative training and decoding strategies cause a divergence from human language. Note, however, that this divergence may not necessarily correspond to lower quality language; it may also indicate different strategic decisions about what to say. Results in §6.4 show all models could converse with humans.
# 6.4 End-to-End Evaluation
We measure end-to-end performance in dialogues both with the likelihood-based agent and with humans on Mechanical Turk, on held out scenarios. Humans were told that they were interacting with other humans, as they had been during the collection of our dataset (and few appeared to realize they were in conversation with machines).
We measure the following statistics:

Score: The average score for each agent (which could be a human or model), out of 10.
Agreement: The percentage of dialogues where both agents agreed on the same decision.
Pareto Optimality: The percentage of Pareto optimal solutions for agreed deals (a solution is Pareto optimal if neither agent's score can be improved without lowering the other's score). Lower scores indicate inefficient negotiations.
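For reference, a brute-force sketch of the Pareto-optimality check on a single agreed deal; the function signature is an assumption and simply enumerates all alternative splits.

```python
from itertools import product

def is_pareto_optimal(counts, values_a, values_b, split_a):
    """Check Pareto optimality of one agreed deal. `split_a[i]` is how many
    items of type i agent A receives; agent B gets the remainder."""
    def scores(split):
        a = sum(k * v for k, v in zip(split, values_a))
        b = sum((c - k) * v for c, k, v in zip(counts, split, values_b))
        return a, b

    a0, b0 = scores(split_a)
    for alt in product(*[range(c + 1) for c in counts]):
        a, b = scores(alt)
        if a >= a0 and b >= b0 and (a > a0 or b > b0):
            return False   # some other split dominates the agreed one
    return True
```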
vs. LIKELIHOOD
Model         Score (all)   Score (agreed)   % Agreed   % Pareto Optimal
LIKELIHOOD    5.4 vs. 5.5   6.2 vs. 6.2      87.9       49.6
RL            7.1 vs. 4.2   7.9 vs. 4.7      89.9       58.6
ROLLOUTS      7.3 vs. 5.1   7.9 vs. 5.5      92.9       63.7
RL+ROLLOUTS   8.3 vs. 4.2   8.8 vs. 4.5      94.4       74.8

vs. Human
Model         Score (all)   Score (agreed)   % Agreed   % Pareto Optimal
LIKELIHOOD    4.7 vs. 5.8   6.2 vs. 7.6      76.5       66.2
RL            4.3 vs. 5.0   6.4 vs. 7.5      67.3       69.1
ROLLOUTS      5.2 vs. 5.4   7.1 vs. 7.4      72.1       78.3
RL+ROLLOUTS   4.6 vs. 4.2   8.0 vs. 7.1      57.2       82.4
Table 1: End task evaluation on heldout scenarios, against the LIKELIHOOD model and humans from Mechanical Turk. The maximum score is 10. Score (all) gives 0 points when agents failed to agree.
Metric                        Dataset
Number of Dialogues           5808
Average Turns per Dialogue    6.6
Average Words per Turn        7.6
% Agreed                      80.1
Average Score (/10)           6.0
% Pareto Optimal              76.9
# 7 Analysis
Table 1 shows large gains from goal-based methods. In this section, we explore the strengths and weaknesses of our models.
Table 2: Statistics on our dataset of crowd-sourced dialogues between humans.
Goal-based models negotiate harder. The RL+ROLLOUTS model has much longer dialogues with humans than LIKELIHOOD (7.2 turns vs. 5.3 on average), indicating that the model is accepting deals less quickly, and negotiating harder.
Model         Valid PPL   Test PPL   Test Avg. Rank
LIKELIHOOD    5.47        5.62       521.8
RL            5.86        6.03       517.6
ROLLOUTS      -           -          844.1
RL+ROLLOUTS   -           -          859.8
Table 3: Intrinsic evaluation showing the average perplexity of tokens and rank of complete turns (out of 2083 unique human messages from the test set). Lower is more human-like for both.
Firstly, we see that the RL and ROLLOUTS models achieve significantly better results when negotiating with the LIKELIHOOD model, particularly the RL+ROLLOUTS model. The percentage of Pareto optimal solutions also increases, showing a better exploration of the solution space. Compared to human-human negotiations (Table 2), the best models achieve a higher agreement rate, better scores, and similar Pareto efficiency. This result confirms that attempting to maximise reward can outperform simply imitating humans.

A negative consequence of this more aggressive negotiation strategy is that humans were more likely to walk away with no deal, which is reflected in the lower agreement rates. Even though failing to agree was worth 0 points, people often preferred this course over capitulating to an uncompromising opponent, a factor not well captured by the simulated partner in reinforcement learning training or rollouts (as reflected by the larger gains from goal-based models in dialogues with the LIKELIHOOD model). In particular, the goal-based models are prone to simply rephrasing the same demand each turn, which is a more effective strategy against the LIKELIHOOD model than humans. Future work should address this issue.
Figure 5 shows an example of our goal-based model stubbornly negotiating until it achieves a good outcome.
Models learn to be deceptive. Deception can be an effective negotiation tactic. We found numerous cases of our models initially feigning interest in a valueless item, only to later "compromise" by conceding it. Figure 7 shows an example.

Similar trends hold in dialogues with humans, with goal-based reasoning outperforming imitation learning. The ROLLOUTS model achieves comparable scores to its human partners, and the RL+ROLLOUTS model actually achieves higher scores. However, we also find significantly more cases of the goal-based models failing to agree a deal with humans, largely a consequence of their more aggressive negotiation tactics (see §7).

Models produce meaningful novel sentences. One interesting question is whether our models are capable of generating novel sentences in the new circumstances they find themselves in, or if they simply repeat messages from the training data verbatim. We find that 76% of messages produced by the LIKELIHOOD model in self-play were found in the training data. We manually examined the novel
Input
  Human:        1xbook value=3   1xhat value=1   3xball value=2
  RL+ROLLOUTS:  1xbook value=6   1xhat value=4   3xball value=0
Dialogue
  RL+ROLLOUTS:  You can have the balls if i can have the hat and book
  Human:        Nope, give me the books and you can have everything else
  RL+ROLLOUTS:  Ok. I will take the book and one hat
  Human:        I get all the balls?
  RL+ROLLOUTS:  Yes. I can do that.
  Human:        Ok, deal
Output                              Reward
  RL+ROLLOUTS:  1xbook 1xhat        10/10
  Human:        3xball              6/10
Figure 5: Example of a model stubbornly holding out for maximum points in a dialogue.
[Figure 6 shows a dialogue between a Human and RL+ROLLOUTS; only fragments survive extraction, including the turns "Ok, you can have one book and one ball" and "Deal", followed by the agreed output.]
Figure 6: Example of model compromising.
utterances produced by our model, and found that the overwhelming majority were fluent English sentences in isolation, showing that the model has learnt a good language model for the domain (in addition to results that show it uses language effectively to achieve its goals). These results suggest that although neural models are prone to the safer option of repeating sentences from training data, they are capable of generalising when necessary. Future work should choose domains that force a higher degree of diversity in utterances.

Maintaining multi-sentence coherence is challenging. One common linguistic error we see RL+ROLLOUTS make is to start a message by indicating agreement (e.g. I agree or Deal), but then going on to propose a counter offer, a behaviour that human partners found frustrating. One explanation is that the model has learnt that in the supervised data, messages beginning with I agree are often at the end of the dialogue, and partners rarely reply with further negotiation, so the models using rollouts and reinforcement learning believe this tactic will help their offer to be accepted.
# 8 Related Work
Most work on goal orientated dialogue systems has assumed that state representations are anno-
Input
  RL+ROLLOUTS:  4xbook value=0   2xhat value=5   1xball value=0
  Human:        4xbook value=1   2xhat value=2   1xball value=2
Dialogue
  RL+ROLLOUTS:  You can have the ball if I can have everything else.
  Human:        No can't do that the ball is not going to give me anything
  RL+ROLLOUTS:  I need the hats and you can have the rest
  Human:        Deal
  RL+ROLLOUTS:  Great! Thank you!
Output                              Reward
  RL+ROLLOUTS:  2xhat               10/10
  Human:        4xbook 1xball       6/10
Figure 7: Dialogue in which the model's initial interest in the valueless books allows it to compromise while achieving a maximum score.
tated in the training data (Williams and Young, 2007; Henderson et al., 2014; Wen et al., 2016). The use of state annotations allows a cleaner separation of the reasoning and natural language aspects of dialogues, but our end-to-end approach makes data collection cheaper and allows tasks where it is unclear how to annotate state. Bordes and Weston (2016) explore end-to-end goal orientated dialogue with a supervised model; we show improvements over supervised learning with goal-based training and decoding. Recently, He et al. (2017) use task-specific rules to combine the task input and dialogue history into a more structured state representation than ours.

Reinforcement learning (RL) has been applied in many dialogue settings. RL has been widely used to improve dialogue managers, which manage transitions between dialogue states (Singh et al., 2002; Pietquin et al., 2011; Rieser and Lemon, 2011; Gašić et al., 2013; Fatemi et al., 2016). In contrast, our end-to-end approach has no explicit dialogue manager. Li et al. (2016) improve metrics such as diversity for non-goal-orientated dialogue using RL, which would make an interesting extension to our work. Das et al. (2017) use reinforcement learning to improve cooperative bot-bot dialogues. RL has also been used to allow agents to invent new languages (Das et al., 2017; Mordatch and Abbeel, 2017). To our knowledge, our model is the first to use RL to improve the performance of an end-to-end goal orientated dialogue system in dialogues with humans.

Work on learning end-to-end dialogues has concentrated on "chat" settings, without explicit goals (Ritter et al., 2011; Vinyals and Le, 2015; Li et al., 2015). These dialogues contain a much greater diversity of vocabulary than our domain, but do not have the challenging adversarial elements. Such models are notoriously hard to evaluate (Liu et al., 2016), because of the huge diversity of reasonable responses, whereas our task has a clear objective. Our end-to-end approach would also be much more straightforward to integrate into a general-purpose dialogue agent than one that relied on annotated dialogue states (Dodge et al., 2016).

There is a substantial literature on multi-agent bargaining in game theory, e.g. Nash Jr (1950). There has also been computational work on modelling negotiations (Baarslag et al., 2013); our work differs in that agents communicate in unrestricted natural language, rather than pre-specified symbolic actions, and in our focus on improving performance relative to humans rather than other automated systems. Our task is based on that of DeVault et al. (2015), who study natural language negotiations for pedagogical purposes; their version includes speech rather than textual dialogue, and embodied agents, which would make interesting extensions to our work. The only automated natural language negotiation systems we are aware of have first mapped language to domain-specific logical forms, and then focused on choosing the next dialogue act (Rosenfeld et al., 2014; Cuayáhuitl et al., 2015; Keizer et al., 2017). Our end-to-end approach is the first to learn comprehension, reasoning and generation skills in a domain-independent, data driven way.

Our use of a combination of supervised and reinforcement learning for training, and stochastic rollouts for decoding, builds on strategies used in game playing agents such as AlphaGo (Silver et al., 2016). Our work is a step towards real-world applications for these techniques. Our use of rollouts could be extended by choosing the other agent's responses based on sampling, using Monte Carlo Tree Search (MCTS) (Kocsis and Szepesvári, 2006). However, our setting has a higher branching factor than in domains where MCTS has been successfully applied, such as Go (Silver et al., 2016); future work should explore scaling tree search to dialogue modelling.
# 9 Conclusion
We have introduced end-to-end learning of natural language negotiations as a task for AI, arguing that it challenges both linguistic and reasoning skills while having robust evaluation metrics. We gathered a large dataset of human-human negotiations, which contain a variety of interesting tactics. We have shown that it is possible to train dialogue agents end-to-end, but that their ability can be much improved by training and decoding to maximise their goals, rather than likelihood. There remains much potential for future work, particularly in exploring other reasoning strategies, and in improving the diversity of utterances without diverging from human language. We will also explore other negotiation tasks, to investigate whether models can learn to share negotiation strategies across domains.
# Acknowledgments
We would like to thank Luke Zettlemoyer and the anonymous EMNLP reviewers for their insightful comments, and the Mechanical Turk workers who helped us collect data.
# References
Nicholas Asher, Alex Lascarides, Oliver Lemon, Markus Guhe, Verena Rieser, Philippe Muller, Ster- gos Afantenos, Farah Benamara, Laure Vieu, Pascal Denis, et al. 2012. Modelling Strategic Conversa- tion: The STAC project. Proceedings of SemDial page 27.
Tim Baarslag, Katsuhide Fujita, Enrico H Gerding, Koen Hindriks, Takayuki Ito, Nicholas R Jennings, Catholijn Jonker, Sarit Kraus, Raz Lin, Valentin Robu, et al. 2013. Evaluating Practical Negotiating Agents: Results and Analysis of the 2011 Interna- tional Competition. Artiï¬cial Intelligence 198:73â 103.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2014. Neural Machine Translation by Jointly arXiv preprint Learning to Align and Translate. arXiv:1409.0473 .
Antoine Bordes and Jason Weston. 2016. Learning End-to-End Goal-oriented Dialog. arXiv preprint arXiv:1605.07683 .
Kyunghyun Cho, Bart Van Merri¨enboer, Dzmitry Bah- danau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder ap- proaches. arXiv preprint arXiv:1409.1259 .
Heriberto Cuay´ahuitl, Simon Keizer, and Oliver Strategic Dialogue Management Lemon. 2015. via Deep Reinforcement Learning. arXiv preprint arXiv:1511.08099 .
Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, Jos´e MF Moura, Devi Parikh, arXiv and Dhruv Batra. 2016. Visual Dialog. preprint arXiv:1611.08669 .
Abhishek Das, Satwik Kottur, Jos´e MF Moura, Stefan Lee, and Dhruv Batra. 2017. Learning Coopera- tive Visual Dialog Agents with Deep Reinforcement Learning. arXiv preprint arXiv:1703.06585 .
David DeVault, Johnathan Mell, and Jonathan Gratch. 2015. Toward Natural Turn-taking in a Virtual Hu- In AAAI Spring Sympo- man Negotiation Agent. sium on Turn-taking and Coordination in Human- Machine Interaction. AAAI Press, Stanford, CA.
Jesse Dodge, Andreea Gane, Xiang Zhang, Antoine Bordes, Sumit Chopra, Alexander H. Miller, Arthur Szlam, and Jason Weston. 2016. Evaluating Pre- requisite Qualities for Learning End-to-End Dialog Systems. ICLR abs/1511.06931.
Mehdi Fatemi, Layla El Asri, Hannes Schulz, Jing He, and Kaheer Suleman. 2016. Policy Networks with Two-stage Training for Dialogue Systems. arXiv preprint arXiv:1606.03152 .
Chaim Fershtman. 1990. The Importance of the Agenda in Bargaining. Games and Economic Behavior 2(3):224-238.
Milica GaËsic, Catherine Breslin, Matthew Henderson, Dongho Kim, Martin Szummer, Blaise Thomson, Pirros Tsiakoulis, and Steve Young. 2013. POMDP- based Dialogue Manager Adaptation to Extended Domains. In Proceedings of SIGDIAL.
H. He, A. Balakrishnan, M. Eric, and P. Liang. 2017. Learning symmetric collaborative dialogue agents with dynamic knowledge graph embeddings. In As- sociation for Computational Linguistics (ACL).
Matthew Henderson, Blaise Thomson, and Jason Williams. 2014. The Second Dialog State Tracking Challenge. In 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue. volume 263.
Simon Keizer, Markus Guhe, Heriberto Cuay´ahuitl, Ioannis Efstathiou, Klaus-Peter Engelbrecht, Mihai Dobre, Alexandra Lascarides, and Oliver Lemon. 2017. Evaluating Persuasion Strategies and Deep Reinforcement Learning methods for Negotiation In Proceedings of the European Dialogue agents. Chapter of the Association for Computational Lin- guistics (EACL 2017).
Levente Kocsis and Csaba Szepesv´ari. 2006. Bandit based Monte-Carlo Planning. In European confer- ence on machine learning. Springer, pages 282â293.
Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2015. A Diversity-promoting Ob- jective Function for Neural Conversation Models. arXiv preprint arXiv:1510.03055 .
Jiwei Li, Will Monroe, Alan Ritter, Michel Galley, Jianfeng Gao, and Dan Jurafsky. 2016. Deep Rein- forcement Learning for Dialogue Generation. arXiv preprint arXiv:1606.01541 .
Chia-Wei Liu, Ryan Lowe, Iulian V. Serban, Michael Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How NOT To Evaluate Your Dialogue Sys- tem: An Empirical Study of Unsupervised Evalua- tion Metrics for Dialogue Response Generation. In Proceedings of the Conference on Empirical Meth- ods in Natural Language Processing.
Junhua Mao, Xu Wei, Yi Yang, Jiang Wang, Zhiheng Huang, and Alan L. Yuille. 2015. Learning Like a Child: Fast Novel Visual Concept Learning From Sentence Descriptions of Images. In The IEEE In- ternational Conference on Computer Vision (ICCV).
Igor Mordatch and Pieter Abbeel. 2017. Emergence of Grounded Compositional Language in Multi-Agent Populations. arXiv preprint arXiv:1703.04908 .
John F. Nash Jr. 1950. The Bargaining Problem. Econometrica: Journal of the Econometric Society pages 155-162.
Yurii Nesterov. 1983. A Method of Solving a Convex Programming Problem with Convergence Rate O (1/k2). In Soviet Mathematics Doklady. volume 27, pages 372â376.
Olivier Pietquin, Matthieu Geist, Senthilkumar Chan- dramohan, and Herv´e Frezza-Buet. 2011. Sample- efï¬cient Batch Reinforcement Learning for Dia- ACM Trans. logue Management Optimization. Speech Lang. Process. 7(3):7:1â7:21.
Verena Rieser and Oliver Lemon. 2011. Reinforcement Learning for Adaptive Dialogue Systems: A Data- driven Methodology for Dialogue Management and Natural Language Generation. Springer Science & Business Media.
Alan Ritter, Colin Cherry, and William B Dolan. 2011. Data-driven Response Generation in Social Me- dia. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Associa- tion for Computational Linguistics, pages 583â593.
Avi Rosenfeld, Inon Zuckerman, Erel Segal-Halevi, Osnat Drein, and Sarit Kraus. 2014. NegoChat: A In Proceedings of Chat-based Negotiation Agent. the 2014 International Conference on Autonomous Agents and Multi-agent Systems. International Foun- dation for Autonomous Agents and Multiagent Sys- tems, Richland, SC, AAMAS â14, pages 525â532.
David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Ju- lian Schrittwieser, Ioannis Antonoglou, Veda Pan- neershelvam, Marc Lanctot, et al. 2016. Mastering the Game of Go with Deep Neural Networks and Tree Search. Nature 529(7587):484â489.
Satinder Singh, Diane Litman, Michael Kearns, and Marilyn Walker. 2002. Optimizing Dialogue Man- agement with Reinforcement Learning: Experi- ments with the NJFun System. Journal of Artiï¬cial Intelligence Research 16:105â133.
Victoria Talwar and Kang Lee. 2002. Development of lying to conceal a transgression: Childrenâs con- trol of expressive behaviour during verbal decep- tion. International Journal of Behavioral Develop- ment 26(5):436â444.
David Traum, Stacy C. Marsella, Jonathan Gratch, Jina Lee, and Arno Hartholt. 2008. Multi-party, Multi- issue, Multi-strategy Negotiation for Multi-modal In Proceedings of the 8th Inter- Virtual Agents. national Conference on Intelligent Virtual Agents. Springer-Verlag, Berlin, Heidelberg, IVA â08, pages 117â130.
Oriol Vinyals and Quoc Le. 2015. A Neural Conversa- tional Model. arXiv preprint arXiv:1506.05869 .
Tsung-Hsien Wen, David Vandyke, Nikola Mrksic, Milica Gasic, Lina M Rojas-Barahona, Pei-Hao Su, Stefan Ultes, and Steve Young. 2016. A Network- based End-to-End Trainable Task-oriented Dialogue System. arXiv preprint arXiv:1604.04562 .
Jason D Williams and Steve Young. 2007. Partially Observable Markov Decision Processes for Spoken Dialog Systems. Computer Speech & Language 21(2):393â422.
Ronald J Williams. 1992. Simple Statistical Gradient- following Algorithms for Connectionist Reinforce- ment Learning. Machine learning 8(3-4):229â256. | {
"id": "1606.03152"
} |
1706.05098 | An Overview of Multi-Task Learning in Deep Neural Networks | Multi-task learning (MTL) has led to successes in many applications of
machine learning, from natural language processing and speech recognition to
computer vision and drug discovery. This article aims to give a general
overview of MTL, particularly in deep neural networks. It introduces the two
most common methods for MTL in Deep Learning, gives an overview of the
literature, and discusses recent advances. In particular, it seeks to help ML
practitioners apply MTL by shedding light on how MTL works and providing
guidelines for choosing appropriate auxiliary tasks. | http://arxiv.org/pdf/1706.05098 | Sebastian Ruder | cs.LG, cs.AI, stat.ML | 14 pages, 8 figures | null | cs.LG | 20170615 | 20170615 |
# An Overview of Multi-Task Learning in Deep Neural Networks*
Sebastian Ruder Insight Centre for Data Analytics, NUI Galway Aylien Ltd., Dublin ruder.sebastian@gmail.com
# Abstract
Multi-task learning (MTL) has led to successes in many applications of machine learning, from natural language processing and speech recognition to computer vision and drug discovery. This article aims to give a general overview of MTL, particularly in deep neural networks. It introduces the two most common methods for MTL in Deep Learning, gives an overview of the literature, and discusses recent advances. In particular, it seeks to help ML practitioners apply MTL by shedding light on how MTL works and providing guidelines for choosing appropriate auxiliary tasks.
# Introduction
In Machine Learning (ML), we typically care about optimizing for a particular metric, whether this is a score on a certain benchmark or a business KPI. In order to do this, we generally train a single model or an ensemble of models to perform our desired task. We then fine-tune and tweak these models until their performance no longer increases. While we can generally achieve acceptable performance this way, by being laser-focused on our single task, we ignore information that might help us do even better on the metric we care about. Specifically, this information comes from the training signals of related tasks. By sharing representations between related tasks, we can enable our model to generalize better on our original task. This approach is called Multi-Task Learning (MTL).

Multi-task learning has been used successfully across all applications of machine learning, from natural language processing [Collobert and Weston, 2008] and speech recognition [Deng et al., 2013] to computer vision [Girshick, 2015] and drug discovery [Ramsundar et al., 2015]. MTL comes in many guises: joint learning, learning to learn, and learning with auxiliary tasks are only some names that have been used to refer to it. Generally, as soon as you find yourself optimizing more than one loss function, you are effectively doing multi-task learning (in contrast to single-task learning). In those scenarios, it helps to think about what you are trying to do explicitly in terms of MTL and to draw insights from it.

Even if you are only optimizing one loss as is the typical case, chances are there is an auxiliary task that will help you improve upon your main task. [Caruana, 1998] summarizes the goal of MTL succinctly: "MTL improves generalization by leveraging the domain-specific information contained in the training signals of related tasks".

Over the course of this article, I will try to give a general overview of the current state of multi-task learning, in particular when it comes to MTL with deep neural networks. I will first motivate MTL from different perspectives in Section 2. I will then introduce the two most frequently employed methods for MTL in Deep Learning in Section 3. Subsequently, in Section 4, I will describe
*This paper originally appeared as a blog post at http://sebastianruder.com/multi-task/index.html on 29 May 2017.
mechanisms that together illustrate why MTL works in practice. Before looking at more advanced neural network-based MTL methods, I will provide some context in Section 5 by discussing the literature in MTL. I will then introduce some more powerful recently proposed methods for MTL in deep neural networks in Section 6. Finally, I will talk about commonly used types of auxiliary tasks and discuss what makes a good auxiliary task for MTL in Section 7.
# 2 Motivation
We can motivate multi-task learning in different ways: Biologically, we can see multi-task learning as being inspired by human learning. For learning new tasks, we often apply the knowledge we have acquired by learning related tasks. For instance, a baby ï¬rst learns to recognize faces and can then apply this knowledge to recognize other objects.
From a pedagogical perspective, we often learn tasks ï¬rst that provide us with the necessary skills to master more complex techniques. This is true for learning the proper way of falling in martial arts, e.g. Judo as much as learning to program. Taking an example out of pop culture, we can also consider The Karate Kid (1984)2. In the movie, sensei Mr Miyagi teaches the karate kid seemingly unrelated tasks such as sanding the ï¬oor and waxing a car. In hindsight, these, however, turn out to equip him with invaluable skills that are relevant for learning karate.
Finally, we can motivate multi-task learning from a machine learning point of view: We can view multi-task learning as a form of inductive transfer. Inductive transfer can help improve a model by introducing an inductive bias, which causes a model to prefer some hypotheses over others. For instance, a common form of inductive bias is ℓ1 regularization, which leads to a preference for sparse solutions. In the case of MTL, the inductive bias is provided by the auxiliary tasks, which cause the model to prefer hypotheses that explain more than one task. As we will see shortly, this generally leads to solutions that generalize better.
# 3 Two MTL methods for Deep Learning
So far, we have focused on theoretical motivations for MTL. To make the ideas of MTL more concrete, we will now look at the two most commonly used ways to perform multi-task learning in deep neural networks. In the context of Deep Learning, multi-task learning is typically done with either hard or soft parameter sharing of hidden layers.
Figure 1: Hard parameter sharing for multi-task learning in deep neural networks
# 2Thanks to Margaret Mitchell and Adrian Benton for the inspiration
# 3.1 Hard parameter sharing
Hard parameter sharing is the most commonly used approach to MTL in neural networks and goes back to [Caruana, 1993]. It is generally applied by sharing the hidden layers between all tasks, while keeping several task-speciï¬c output layers as can be seen in Figure 1.
Hard parameter sharing greatly reduces the risk of overfitting. In fact, [Baxter, 1997] showed that the risk of overfitting the shared parameters is an order N, where N is the number of tasks, smaller than overfitting the task-specific parameters, i.e. the output layers. This makes sense intuitively: The more tasks we are learning simultaneously, the more our model has to find a representation that captures all of the tasks and the less is our chance of overfitting on our original task.
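To make the architecture in Figure 1 concrete, the following is a minimal PyTorch sketch (an illustration, not code from any cited paper): a shared trunk of hidden layers feeds one output head per task, and the per-task losses are simply summed. The layer sizes, number of tasks, and loss choices are assumptions made for the example.

```python
import torch
import torch.nn as nn

class HardSharingMTL(nn.Module):
    """Shared hidden layers with one output head per task."""
    def __init__(self, in_dim=32, hidden_dim=64, task_out_dims=(10, 5, 2)):
        super().__init__()
        # Shared layers: every task backpropagates through these parameters.
        self.shared = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        # Task-specific output layers.
        self.heads = nn.ModuleList(
            [nn.Linear(hidden_dim, d) for d in task_out_dims]
        )

    def forward(self, x):
        h = self.shared(x)
        return [head(h) for head in self.heads]

model = HardSharingMTL()
x = torch.randn(8, 32)                       # a dummy batch
targets = [torch.randint(0, d, (8,)) for d in (10, 5, 2)]
outputs = model(x)
# The total loss is the (optionally weighted) sum of the per-task losses.
loss = sum(nn.functional.cross_entropy(o, t) for o, t in zip(outputs, targets))
loss.backward()
```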
# 3.2 Soft parameter sharing
In soft parameter sharing on the other hand, each task has its own model with its own parameters. The distance between the parameters of the models is then regularized in order to encourage the parameters to be similar, as can be seen in Figure 2. [Duong et al., 2015] for instance use the ℓ2 distance for regularization, while [Yang and Hospedales, 2017b] use the trace norm.
Figure 2: Soft parameter sharing for multi-task learning in deep neural networks
The constraints used for soft parameter sharing in deep neural networks have been greatly inspired by regularization techniques for MTL that have been developed for other models, which we will soon discuss.
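For comparison, here is a minimal sketch of soft parameter sharing (again an illustration with made-up sizes and penalty weight): each task keeps its own network, and an ℓ2 penalty on the distance between corresponding parameters, the kind of regularizer used by [Duong et al., 2015], pulls them towards each other.

```python
import torch
import torch.nn as nn

def make_task_model(in_dim=32, hidden=64, out_dim=10):
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                         nn.Linear(hidden, out_dim))

model_a = make_task_model()
model_b = make_task_model()

def soft_sharing_penalty(m1, m2):
    """Sum of squared L2 distances between corresponding parameters."""
    return sum(((p1 - p2) ** 2).sum()
               for p1, p2 in zip(m1.parameters(), m2.parameters()))

x = torch.randn(8, 32)
y_a = torch.randint(0, 10, (8,))
y_b = torch.randint(0, 10, (8,))

lam = 1e-2  # strength of the sharing constraint (a hyperparameter)
loss = (nn.functional.cross_entropy(model_a(x), y_a)
        + nn.functional.cross_entropy(model_b(x), y_b)
        + lam * soft_sharing_penalty(model_a, model_b))
loss.backward()
```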
# 4 Why does MTL work?
Even though an inductive bias obtained through multi-task learning seems intuitively plausible, in order to understand MTL better, we need to look at the mechanisms that underlie it. Most of these have ï¬rst been proposed by [Caruana, 1998]. For all examples, we will assume that we have two related tasks A and B, which rely on a common hidden layer representation F .
# Implicit data augmentation
MTL effectively increases the sample size that we are using for training our model. As all tasks are at least somewhat noisy, when training a model on some task A, our aim is to learn a good representation for task A that ideally ignores the data-dependent noise and generalizes well. As different tasks have different noise patterns, a model that learns two tasks simultaneously is able to learn a more general representation. Learning just task A bears the risk of overï¬tting to task A, while learning A and B jointly enables the model to obtain a better representation F through averaging the noise patterns.
# 4.2 Attention focusing
If a task is very noisy or data is limited and high-dimensional, it can be difï¬cult for a model to differentiate between relevant and irrelevant features. MTL can help the model focus its attention on those features that actually matter as other tasks will provide additional evidence for the relevance or irrelevance of those features.
# 4.3 Eavesdropping
Some features G are easy to learn for some task B, while being difï¬cult to learn for another task A. This might either be because A interacts with the features in a more complex way or because other features are impeding the modelâs ability to learn G. Through MTL, we can allow the model to eavesdrop, i.e. learn G through task B. The easiest way to do this is through hints [Abu-Mostafa, 1990], i.e. directly training the model to predict the most important features.
# 4.4 Representation bias
MTL biases the model to prefer representations that other tasks also prefer. This will also help the model to generalize to new tasks in the future as a hypothesis space that performs well for a sufï¬ciently large number of training tasks will also perform well for learning novel tasks as long as they are from the same environment [Baxter, 2000].
# 4.5 Regularization
Finally, MTL acts as a regularizer by introducing an inductive bias. As such, it reduces the risk of overï¬tting as well as the Rademacher complexity of the model, i.e. its ability to ï¬t random noise.
# 5 MTL in non-neural models
In order to better understand MTL in deep neural networks, we will now look to the existing literature on MTL for linear models, kernel methods, and Bayesian algorithms. In particular, we will discuss two main ideas that have been pervasive throughout the history of multi-task learning: enforcing sparsity across tasks through norm regularization; and modelling the relationships between tasks.
Note that many approaches to MTL in the literature deal with a homogenous setting: They assume that all tasks are associated with a single output, e.g. the multi-class MNIST dataset is typically cast as 10 binary classiï¬cation tasks. More recent approaches deal with a more realistic, heterogeneous setting where each task corresponds to a unique set of outputs.
# 5.1 Block-sparse regularization
Notation In order to better connect the following approaches, let us ï¬rst introduce some notation. We have T tasks. For each task t, we have a model mt with parameters at of dimensionality d. We can write the parameters as a column vector at:
a_t = [a_{1,t}, . . . , a_{d,t}]^T
We now stack these column vectors a_1, . . . , a_T column by column to form a matrix A ∈ R^{d×T}. The i-th row of A then contains the parameters a_{i,·} corresponding to the i-th feature of the model for every task, while the j-th column of A contains the parameters a_{·,j} corresponding to the j-th model.
Many existing methods make some sparsity assumption with regard to the parameters of our models. A common assumption is that all models share a small set of features. In terms of our task parameter matrix A, this means that all but a few rows are 0, which corresponds to only a few features being used across all tasks. In order to enforce this, such methods generalize the ℓ1 norm to the MTL setting. Recall that the ℓ1 norm is a constraint on the sum of the parameters, which forces all but a few parameters to be exactly 0. It is also known as lasso (least absolute shrinkage and selection operator).
While in the single-task setting, the ℓ1 norm is computed based on the parameter vector a_t of the respective task t, for MTL we compute it over our task parameter matrix A. In order to do this, we first compute an ℓq norm across each row a_i containing the parameters corresponding to the i-th feature across all tasks, which yields a vector b = [‖a_1‖_q · · · ‖a_d‖_q] ∈ R^d. We then compute the ℓ1 norm of this vector, which forces all but a few entries of b, i.e. rows in A, to be 0.
As we can see, depending on what constraint we would like to place on each row, we can use a different ℓq. In general, we refer to these mixed-norm constraints as ℓ1/ℓq norms. They are also known as block-sparse regularization, as they lead to entire rows of A being set to 0. [Zhang and Huang, 2008] use ℓ1/ℓ∞ regularization, while [Argyriou and Pontil, 2007] use a mixed ℓ1/ℓ2 norm. The latter is also known as the group lasso and was first proposed by [Yuan and Lin, 2006].
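As a small illustration of this penalty (a sketch with placeholder values, not code from the cited papers), the mixed ℓ1/ℓq norm of a task parameter matrix A can be computed as follows; q = 2 gives the group lasso and q = ∞ the ℓ1/ℓ∞ variant.

```python
import numpy as np

d, T = 6, 4                      # number of features and tasks
A = np.random.randn(d, T)        # task parameter matrix, one column per task

def mixed_norm(A, q=2):
    """l1/lq block-sparse penalty: lq norm per row, then l1 norm of the result."""
    row_norms = np.linalg.norm(A, ord=q, axis=1)   # b_i = ||a_i||_q for each feature i
    return row_norms.sum()                         # ||b||_1

print(mixed_norm(A, q=2))        # group lasso (l1/l2)
print(mixed_norm(A, q=np.inf))   # l1/l-infinity regularization
```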
[Argyriou and Pontil, 2007] also show that the problem of optimizing the non-convex group lasso can be made convex by penalizing the trace norm of A, which forces A to be low-rank and thereby constrains the column parameter vectors a·,1, . . . , a·,t to live in a low-dimensional subspace. [Lounici et al., 2009] furthermore establish upper bounds for using the group lasso in multi-task learning.
As much as this block-sparse regularization is intuitively plausible, it is very dependent on the extent to which the features are shared across tasks. [Negahban and Wainwright, 2008] show that if features do not overlap by much, ℓ1/ℓq regularization might actually be worse than element-wise ℓ1 regularization.
For this reason, [Jalali et al., 2010] improve upon block-sparse models by proposing a method that combines block-sparse and element-wise sparse regularization. They decompose the task parameter matrix A into two matrices B and S where A = B + S. B is then enforced to be block-sparse using ℓ1/ℓ∞ regularization, while S is made element-wise sparse using the lasso. Recently, [Liu et al., 2016] propose a distributed version of group-sparse regularization.
# 5.2 Learning task relationships
While the group-sparsity constraint forces our model to only consider a few features, these features are largely used across all tasks. All of the previous approaches thus assume that the tasks used in multi-task learning are closely related. However, each task might not be closely related to all of the available tasks. In those cases, sharing information with an unrelated task might actually hurt performance, a phenomenon known as negative transfer.
Rather than sparsity, we would thus like to leverage prior knowledge indicating that some tasks are related while others are not. In this scenario, a constraint that enforces a clustering of tasks might be more appropriate. [Evgeniou et al., 2005] suggest to impose a clustering constraint by penalizing both the norms of our task column vectors a·,1, . . . , a·,t as well as their variance with the following constraint:
Ω = ‖ā‖² + λ/T Σ_{t=1}^T ‖a_{·,t} − ā‖²
where ā = (Σ_{t=1}^T a_{·,t})/T is the mean parameter vector. This penalty enforces a clustering of the task parameter vectors a_{·,1}, . . . , a_{·,T} towards their mean, which is controlled by λ. They apply this constraint to kernel methods, but it is equally applicable to linear models.
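The penalty itself is easy to compute; the following sketch uses placeholder values and treats λ as a hyperparameter to be tuned.

```python
import numpy as np

d, T = 6, 4
A = np.random.randn(d, T)            # column t is the parameter vector of task t
lam = 0.5                            # clustering strength (hyperparameter)

a_bar = A.mean(axis=1)               # mean parameter vector over tasks
omega = (np.linalg.norm(a_bar) ** 2
         + lam / T * sum(np.linalg.norm(A[:, t] - a_bar) ** 2 for t in range(T)))
print(omega)
```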
A similar constraint for SVMs was also proposed by [Evgeniou and Pontil, 2004]. Their constraint is inspired by Bayesian methods and seeks to make all models close to some mean model. In SVMs, the loss thus trades off having a large margin for each SVM with being close to the mean model.
[Jacob et al., 2009] make the assumptions underlying cluster regularization more explicit by formaliz- ing a cluster constraint on A under the assumption that the number of clusters C is known in advance. They then decompose the penalty into three separate norms:
• A global penalty which measures how large our column parameter vectors are on average: Ω_mean(A) = ‖ā‖².
• A measure of between-cluster variance that measures how close to each other the clusters are: Ω_between(A) = Σ_{c=1}^C T_c ‖ā_c − ā‖², where T_c is the number of tasks in the c-th cluster and ā_c is the mean vector of the task parameter vectors in the c-th cluster.
• A measure of within-cluster variance that gauges how compact each cluster is: Ω_within(A) = Σ_{c=1}^C Σ_{t∈J(c)} ‖a_{·,t} − ā_c‖², where J(c) is the set of tasks in the c-th cluster.
The ï¬nal constraint then is the weighted sum of the three norms:
â¦(A) = λ1â¦mean(A) + λ2â¦between(A) + λ3â¦within(A)
As this constraint assumes clusters are known in advance, they introduce a convex relaxation of the above penalty that allows to learn the clusters at the same time.
In another scenario, tasks might not occur in clusters but might have an inherent structure. [Kim and Xing, 2010] extend the group lasso to deal with tasks that occur in a tree structure, while [Chen et al., 2010] apply it to tasks with graph structures.
While the previous approaches to modelling the relationship between tasks employ norm regulariza- tion, other approaches do so without regularization: [Thrun and OâSullivan, 1996] were the ï¬rst ones who presented a task clustering algorithm using k-nearest neighbour, while [Ando and Tong, 2005] learn a common structure from multiple related tasks with an application to semi-supervised learning.
Much other work on learning task relationships for multi-task learning uses Bayesian methods: [Heskes, 2000] propose a Bayesian neural network for multi-task learning by placing a prior on the model parameters to encourage similar parameters across tasks. [Lawrence and Platt, 2004] extend Gaussian processes (GP) to MTL by inferring parameters for a shared covariance matrix. As this is computationally very expensive, they adopt a sparse approximation scheme that greedily selects the most informative examples. [Yu et al., 2005] also use GP for MTL by assuming that all models are sampled from a common prior.
[Bakker and Heskes, 2003] place a Gaussian as a prior distribution on each task-speciï¬c layer. In order to encourage similarity between different tasks, they propose to make the mean task-dependent and introduce a clustering of the tasks using a mixture distribution. Importantly, they require task characteristics that deï¬ne the clusters and the number of mixtures to be speciï¬ed in advance.
Building on this, [Xue et al., 2007] draw the distribution from a Dirichlet process and enable the model to learn the similarity between tasks as well as the number of clusters. They then share the same model among all tasks in the same cluster. [Daumé III, 2009] propose a hierarchical Bayesian model, which learns a latent task hierarchy, while [Zhang and Yeung, 2010] use a GP-based regularization for MTL and extend a previous GP-based approach to be more computationally feasible in larger settings.
Other approaches focus on the online multi-task learning setting: [Cavallanti et al., 2010] adapt some existing methods such as the approach by [Evgeniou et al., 2005] to the online setting. They also propose a MTL extension of the regularized Perceptron, which encodes task relatedness in a matrix. They use different forms of regularization to bias this task relatedness matrix, e.g. the closeness of the task characteristic vectors or the dimension of the spanned subspace. Importantly, similar to some earlier approaches, they require the task characteristics that make up this matrix to be provided in advance. [Saha et al., 2011] then extend the previous approach by learning the task relationship matrix.
[Kang et al., 2011] assume that tasks form disjoint groups and that the tasks within each group lie in a low-dimensional subspace. Within each group, tasks share the same feature representation whose parameters are learned jointly together with the group assignment matrix using an alternating minimization scheme. However, a total disjointness between groups might not be the ideal way, as the tasks might still share some features that are helpful for prediction.
[Kumar and Daumé III, 2012] in turn allow two tasks from different groups to overlap by assuming that there exist a small number of latent basis tasks. They then model the parameter vector a_t of every actual task t as a linear combination of these: a_t = L s_t, where L ∈ R^{k×d} is a matrix containing the parameter vectors of k latent tasks, while s_t ∈ R^k is a vector containing the coefficients of the linear combination. In addition, they constrain the linear combination to be sparse in the latent tasks; the overlap in the sparsity patterns between two tasks then controls the amount of sharing between these. Finally, [Crammer and Mansour, 2012] learn a small pool of shared hypotheses and then map each task to a single hypothesis.
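To illustrate the latent-task factorization a_t = L s_t described above, here is a toy sketch; the sizes are placeholders and the hard thresholding merely stands in for the actual sparsity constraint on the coefficients.

```python
import numpy as np

d, T, k = 6, 4, 2                   # feature dim, number of tasks, latent tasks
L = np.random.randn(k, d)           # rows are the k latent task parameter vectors
S = np.random.randn(k, T)           # column t holds the combination coefficients s_t
S[np.abs(S) < 0.8] = 0.0            # crude stand-in for a sparsity constraint on S

A = L.T @ S                         # column t is the actual task parameter vector a_t
sparsity_penalty = np.abs(S).sum()  # an l1 penalty that would be added to the loss
print(A.shape, sparsity_penalty)
```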
# 6 Recent work on MTL for Deep Learning
While many recent Deep Learning approaches have used multi-task learning â either explicitly or implicitly â as part of their model (prominent examples will be featured in the next section), they all employ the two approaches we introduced earlier, hard and soft parameter sharing. In contrast, only a few papers have looked at developing better mechanisms for MTL in deep neural networks.
# 6.1 Deep Relationship Networks
In MTL for computer vision, approaches often share the convolutional layers, while learning task- speciï¬c fully-connected layers. [Long and Wang, 2015] improve upon these models by proposing Deep Relationship Networks. In addition to the structure of shared and task-speciï¬c layers, which can be seen in Figure 3, they place matrix priors on the fully connected layers, which allow the model to learn the relationship between tasks, similar to some of the Bayesian models we have looked at before. This approach, however, still relies on a pre-deï¬ned structure for sharing, which may be adequate for well-studied computer vision problems, but prove error-prone for novel tasks.
Figure 3: A Deep Relationship Network with shared convolutional and task-speciï¬c fully connected layers with matrix priors [Long and Wang, 2015]
# 6.2 Fully-Adaptive Feature Sharing
Starting at the other extreme, [Lu et al., 2016] propose a bottom-up approach that starts with a thin network and dynamically widens it greedily during training using a criterion that promotes grouping of similar tasks. The widening procedure, which dynamically creates branches can be seen in Figure 4. However, the greedy method might not be able to discover a model that is globally optimal, while assigning each branch to exactly one task does not allow the model to learn more complex interactions between tasks.
Figure 4: The widening procedure for fully-adaptive feature sharing [Lu et al., 2016]
# 6.3 Cross-stitch Networks
[Misra et al., 2016] start out with two separate model architectures just as in soft parameter sharing. They then use what they refer to as cross-stitch units to allow the model to determine in what way the task-speciï¬c networks leverage the knowledge of the other task by learning a linear combination of the output of the previous layers. Their architecture can be seen in Figure 5, in which they only place cross-stitch units after pooling and fully-connected layers.
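A minimal PyTorch sketch of a cross-stitch unit for two tasks follows (an illustration, not the authors' code): a learned 2×2 matrix mixes the two task activations, and the near-identity initialization below is an assumption.

```python
import torch
import torch.nn as nn

class CrossStitchUnit(nn.Module):
    """Learned linear combination of two task activations (after Misra et al., 2016)."""
    def __init__(self):
        super().__init__()
        # Start close to independent networks: mostly keep your own activation.
        self.alpha = nn.Parameter(torch.tensor([[0.9, 0.1],
                                                [0.1, 0.9]]))

    def forward(self, x_a, x_b):
        # x_a, x_b: activations of task A and task B with identical shapes.
        out_a = self.alpha[0, 0] * x_a + self.alpha[0, 1] * x_b
        out_b = self.alpha[1, 0] * x_a + self.alpha[1, 1] * x_b
        return out_a, out_b

unit = CrossStitchUnit()
x_a, x_b = torch.randn(8, 64), torch.randn(8, 64)
out_a, out_b = unit(x_a, x_b)
```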
# 6.4 Low supervision
In contrast, in natural language processing (NLP), recent work focused on ï¬nding better task hier- archies for multi-task learning: [Søgaard and Goldberg, 2016] show that low-level tasks, i.e. NLP tasks typically used for preprocessing such as part-of-speech tagging and named entity recognition, should be supervised at lower layers when used as auxiliary task.
Figure 5: Cross-stitch networks for two tasks [Misra et al., 2016]
# 6.5 A Joint Many-Task Model
Building on this ï¬nding, [Hashimoto et al., 2016] pre-deï¬ne a hierarchical architecture consisting of several NLP tasks, which can be seen in Figure 6, as a joint model for multi-task learning.
Figure 6: A Joint Many-Task Model [Hashimoto et al., 2016]
# 6.6 Weighting losses with uncertainty
Instead of learning the structure of sharing, [Kendall et al., 2017] take an orthogonal approach by considering the uncertainty of each task. They then adjust each taskâs relative weight in the cost function by deriving a multi-task loss function based on maximizing the Gaussian likelihood with task-dependant uncertainty. Their architecture for per-pixel depth regression, semantic and instance segmentation can be seen in Figure 7.
Figure 7: Uncertainty-based loss function weighting for multi-task learning [Kendall et al., 2017]
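A rough sketch of the idea in simplified form (not the authors' exact loss): each task gets a learned log-variance, the task losses are scaled by the corresponding precision, and a log-variance term keeps the uncertainties from growing without bound. The task losses below are placeholders.

```python
import torch
import torch.nn as nn

class UncertaintyWeighting(nn.Module):
    """Weights task losses by learned uncertainty (simplified, after Kendall et al., 2017)."""
    def __init__(self, num_tasks):
        super().__init__()
        # log(sigma^2) per task; learning the log keeps the variance positive.
        self.log_vars = nn.Parameter(torch.zeros(num_tasks))

    def forward(self, task_losses):
        total = 0.0
        for i, loss in enumerate(task_losses):
            precision = torch.exp(-self.log_vars[i])
            # Scale each task loss by its precision and penalize large variances.
            total = total + precision * loss + 0.5 * self.log_vars[i]
        return total

weighting = UncertaintyWeighting(num_tasks=3)
dummy_losses = [torch.tensor(1.2, requires_grad=True),
                torch.tensor(0.3, requires_grad=True),
                torch.tensor(2.5, requires_grad=True)]
total_loss = weighting(dummy_losses)
total_loss.backward()
```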
# 6.7 Tensor factorisation for MTL
More recent work seeks to generalize existing approaches to MTL to Deep Learning: [Yang and Hospedales, 2017a] generalize some of the previously discussed matrix factorisation approaches using tensor factorisation to split the model parameters into shared and task-speciï¬c parameters for every layer.
# 6.8 Sluice Networks
Finally, we propose Sluice Networks [Ruder et al., 2017], a model that generalizes Deep Learning- based MTL approaches such as hard parameter sharing and cross-stitch networks, block-sparse regularization approaches, as well as recent NLP approaches that create a task hierarchy. The model, which can be seen in Figure 8, allows to learn what layers and subspaces should be shared, as well as at what layers the network has learned the best representations of the input sequences.
Figure 8: A sluice network for two tasks [Ruder et al., 2017]
# 6.9 What should I share in my model?
Having surveyed these recent approaches, let us now brieï¬y summarize and draw a conclusion on what to share in our deep MTL models. Most approaches in the history of MTL have focused on the scenario where tasks are drawn from the same distribution [Baxter, 1997]. While this scenario is beneï¬cial for sharing, it does not always hold. In order to develop robust models for MTL, we thus have to be able to deal with unrelated or only loosely related tasks.
While early work in MTL for Deep Learning has pre-speciï¬ed which layers to share for each task pairing, this strategy does not scale and heavily biases MTL architectures. Hard parameter sharing, a technique that was originally proposed by [Caruana, 1993], is still the norm 20 years later. While useful in many scenarios, hard parameter sharing quickly breaks down if tasks are not closely related or require reasoning on different levels. Recent approaches have thus looked towards learning what to share and generally outperform hard parameter sharing. In addition, giving our models the capacity to learn a task hierarchy is helpful, particularly in cases that require different granularities.
As mentioned initially, we are doing MTL as soon as we are optimizing more than one loss function. Rather than constraining our model to compress the knowledge of all tasks into the same parameter space, it is thus helpful to draw on the advances in MTL that we have discussed and enable our model to learn how the tasks should interact with each other.
# 7 Auxiliary tasks
MTL is a natural ï¬t in situations where we are interested in obtaining predictions for multiple tasks at once. Such scenarios are common for instance in ï¬nance or economics forecasting, where we might want to predict the value of many possibly related indicators, or in bioinformatics where we might want to predict symptoms for multiple diseases simultaneously. In scenarios such as drug discovery, where tens or hundreds of active compounds should be predicted, MTL accuracy increases continuously with the number of tasks [Ramsundar et al., 2015].
In most situations, however, we only care about performance on one task. In this section, we will thus look at how we can ï¬nd a suitable auxiliary task in order to still reap the beneï¬ts of multi-task learning.
# 7.1 Related task
Using a related task as an auxiliary task for MTL is the classical choice. To get an idea what a related task can be, we will present some prominent examples. [Caruana, 1998] uses tasks that predict different characteristics of the road as auxiliary tasks for predicting the steering direction in a self-driving car; [Zhang et al., 2014] use head pose estimation and facial attribute inference as auxiliary tasks for facial landmark detection; [Liu et al., 2015] jointly learn query classiï¬cation and web search; [Girshick, 2015] jointly predicts the class and the coordinates of an object in an image; ï¬nally, [Arık et al., 2017] jointly predict the phoneme duration and frequency proï¬le for text-to-speech.
# 7.2 Adversarial
Often, labeled data for a related task is unavailable. In some circumstances, however, we have access to a task that is opposite of what we want to achieve. This data can be leveraged using an adversarial loss, which does not seek to minimize but maximize the training error using a gradient reversal layer. This setup has found recent success in domain adaptation [Ganin and Lempitsky, 2015]. The adversarial task in this case is predicting the domain of the input; by reversing the gradient of the adversarial task, the adversarial task loss is maximized, which is beneï¬cial for the main task as it forces the model to learn representations that cannot distinguish between domains.
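A minimal PyTorch sketch of a gradient reversal layer (an illustration; the scaling constant and shapes are assumptions): the forward pass is the identity, while the backward pass negates the gradient, so a shared feature extractor ends up maximizing the domain classifier's loss.

```python
import torch

class GradientReversal(torch.autograd.Function):
    """Identity in the forward pass; negates (and scales) gradients in the backward pass."""
    @staticmethod
    def forward(ctx, x, lam=1.0):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse the gradient flowing back into the feature extractor.
        return -ctx.lam * grad_output, None

features = torch.randn(8, 64, requires_grad=True)
reversed_features = GradientReversal.apply(features, 1.0)
# A domain classifier would be applied to `reversed_features`; its loss then
# pushes the feature extractor towards domain-invariant representations.
loss = reversed_features.sum()
loss.backward()
print(features.grad[0, :3])   # gradients have the opposite sign of the usual ones
```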
# 7.3 Hints
As mentioned before, MTL can be used to learn features that might not be easy to learn just using the original task. An effective way to achieve this is to use hints, i.e. predicting the features as an auxiliary task. Recent examples of this strategy in the context of natural language processing are [Yu and Jiang, 2016] who predict whether an input sentence contains a positive or negative sentiment word as auxiliary tasks for sentiment analysis and [Cheng et al., 2015] who predict whether a name is present in a sentence as auxiliary task for name error detection.
# 7.4 Focusing attention
Similarly, the auxiliary task can be used to focus attention on parts of the image that a network might normally ignore. For instance, for learning to steer [Caruana, 1998] a single-task model might typically ignore lane markings as these make up only a small part of the image and are not always present. Predicting lane markings as auxiliary task, however, forces the model to learn to represent them; this knowledge can then also be used for the main task. Analogously, for facial recognition, one might learn to predict the location of facial landmarks as auxiliary tasks, since these are often distinctive.
# 7.5 Quantization smoothing
For many tasks, the training objective is quantized, i.e. while a continuous scale might be more plausible, labels are available as a discrete set. This is the case in many scenarios that require human assessment for data gathering, such as predicting the risk of a disease (e.g. low/medium/high) or sentiment analysis (positive/neutral/negative). Using less quantized auxiliary tasks might help in these cases, as they might be learned more easily due to their objective being smoother.
# 7.6 Predicting inputs
In some scenarios, it is impractical to use some features as inputs as they are unhelpful for predicting the desired objective. However, they might still be able to guide the learning of the task. In those cases, the features can be used as outputs rather than inputs. [Caruana and de Sa, 1997] present several problems where this is applicable.
# 7.7 Using the future to predict the present
In many situations, some features only become available after the predictions are supposed to be made. For instance, for self-driving cars, more accurate measurements of obstacles and lane markings can be made once the car is passing them. [Caruana, 1998] also gives the example of pneumonia prediction, after which the results of additional medical trials will be available. For these examples, the additional data cannot be used as features as it will not be available as input at runtime. However, it can be used as an auxiliary task to impart additional knowledge to the model during training.
# 7.8 Representation learning
The goal of an auxiliary task in MTL is to enable the model to learn representations that are shared or helpful for the main task. All auxiliary tasks discussed so far do this implicitly: They are closely related to the main task, so that learning them likely allows the model to learn beneï¬cial representations. A more explicit modelling is possible, for instance by employing a task that is known to enable a model to learn transferable representations. The language modelling objective as employed by [Cheng et al., 2015] and [Rei, 2017] fulï¬ls this role. In a similar vein, an autoencoder objective can also be used as an auxiliary task.
# 7.9 What auxiliary tasks are helpful?
In this section, we have discussed different auxiliary tasks that can be used to leverage MTL even if we only care about one task. We still do not know, though, what auxiliary task will be useful in practice. Finding an auxiliary task is largely based on the assumption that the auxiliary task should be related to the main task in some way and that it should be helpful for predicting the main task.
However, we still do not have a good notion of when two tasks should be considered similar or related. [Caruana, 1998] deï¬nes two tasks to be similar if they use the same features to make a decision. [Baxter, 2000] argues only theoretically that related tasks share a common optimal hypothesis class, i.e. have the same inductive bias. [Ben-David and Schuller, 2003] propose that two tasks are F-related if the data for both tasks can be generated from a ï¬xed probability distribution using a set of transformations F. While this allows to reason over tasks where different sensors collect data for the same classiï¬cation problem, e.g. object recognition with data from cameras with different angles and lighting conditions, it is not applicable to tasks that do not deal with the same problem. [Xue et al., 2007] ï¬nally argue that two tasks are similar if their classiï¬cation boundaries, i.e. parameter vectors are close.
In spite of these early theoretical advances in understanding task relatedness, we have not made much recent progress towards this goal. Task similarity is not binary, but resides on a spectrum. Allowing our models to learn what to share with each task might allow us to temporarily circumvent the lack of theory and make better use even of only loosely related tasks. However, we also need to develop a more principled notion of task similarity with regard to MTL in order to know which tasks we should prefer.
Recent work [Alonso and Plank, 2017] has found auxiliary tasks with compact and uniform label distributions to be preferable for sequence tagging problems in NLP, which we have conï¬rmed in experiments [Ruder et al., 2017]. In addition, gains have been found to be more likely for main tasks that quickly plateau with non-plateauing auxiliary tasks [Bingel and Søgaard, 2017]. These experiments, however, have so far been limited in scope and recent ï¬ndings only provide the ï¬rst clues towards a deeper understanding of multi-task learning in neural networks.
# 8 Conclusion
In this overview, I have reviewed both the history of literature in multi-task learning as well as more recent work on MTL for Deep Learning. While MTL is being more frequently used, the 20-year old hard parameter sharing paradigm is still pervasive for neural-network based MTL. Recent advances on learning what to share, however, are promising. At the same time, our understanding of tasks â their similarity, relationship, hierarchy, and beneï¬t for MTL â is still limited and we need to study them more thoroughly to gain a better understanding of the generalization capabilities of MTL with regard to deep neural networks.
# References
[Abu-Mostafa, 1990] Abu-Mostafa, Y. S. (1990). Learning from hints in neural networks. Journal of Complexity, 6(2):192â198.
[Alonso and Plank, 2017] Alonso, H. M. and Plank, B. (2017). When is multitask learning effective? Multitask learning for semantic sequence prediction under varying data conditions. In EACL. [Ando and Tong, 2005] Ando, R. K. and Tong, Z. (2005). A Framework for Learning Predictive Structures from Multiple Tasks and Unlabeled Data. Journal of Machine Learning Research, 6:1817â1853.
[Argyriou and Pontil, 2007] Argyriou, A. and Pontil, M. (2007). Multi-Task Feature Learning. In Advances in Neural Information Processing Systems.
[Arık et al., 2017] Arık, S. Ã., Chrzanowski, M., Coates, A., Diamos, G., Gibiansky, A., Kang, Y., Li, X., Miller, J., Raiman, J., Sengupta, S., and Shoeybi, M. (2017). Deep Voice: Real-time Neural Text-to-Speech. In ICML 2017.
[Bakker and Heskes, 2003] Bakker, B. and Heskes, T. (2003). Task Clustering and Gating for BayesianMultitask Learning. Journal of Machine Learning Research, 1(1):83â99.
[Baxter, 1997] Baxter, J. (1997). A Bayesian/information theoretic model of learning to learn via multiple task sampling. Machine Learning, 28:7â39.
[Baxter, 2000] Baxter, J. (2000). A Model of Inductive Bias Learning. Journal of Artiï¬cial Intelli- gence Research, 12:149â198.
[Ben-David and Schuller, 2003] Ben-David, S. and Schuller, R. (2003). Exploiting task relatedness for multiple task learning. Learning Theory and Kernel Machines, pages 567â580.
[Bingel and Søgaard, 2017] Bingel, J. and Søgaard, A. (2017). Identifying beneï¬cial task relations for multi-task learning in deep neural networks. In EACL.
[Caruana, 1993] Caruana, R. (1993). Multitask learning: A knowledge-based source of inductive bias. In Proceedings of the Tenth International Conference on Machine Learning.
[Caruana, 1998] Caruana, R. (1998). Multitask Learning. Autonomous Agents and Multi-Agent Systems, 27(1):95â133.
[Caruana and de Sa, 1997] Caruana, R. and de Sa, V. R. (1997). Promoting poor features to supervi- sors: Some inputs work better as outputs. Advances in Neural Information Processing Systems 9: Proceedings of The 1996 Conference, 9:389.
[Cavallanti et al., 2010] Cavallanti, G., Cesa-Bianchi, N., and Gentile, C. (2010). Linear Algorithms for Online Multitask Classiï¬cation. Journal of Machine Learning Research, 11:2901â2934. [Chen et al., 2010] Chen, X., Kim, S., Lin, Q., Carbonell, J. G., and Xing, E. P. (2010). Graph- Structured Multi-task Regression and an Efï¬cient Optimization Method for General Fused Lasso. pages 1â21.
[Cheng et al., 2015] Cheng, H., Fang, H., and Ostendorf, M. (2015). Open-Domain Name Error Detection using a Multi-Task RNN. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 737â746.
[Collobert and Weston, 2008] Collobert, R. and Weston, J. (2008). A uniï¬ed architecture for natural language processing. Proceedings of the 25th international conference on Machine learning - ICML â08, 20(1):160â167.
[Crammer and Mansour, 2012] Crammer, K. and Mansour, Y. (2012). Learning Multiple Tasks Using Shared Hypotheses. Neural Information Processing Systems (NIPS), pages 1484â1492. [Daumé III, 2009] Daumé III, H. (2009). Bayesian multitask learning with latent hierarchies. pages
135â142.
[Deng et al., 2013] Deng, L., Hinton, G. E., and Kingsbury, B. (2013). New types of deep neural network learning for speech recognition and related applications: An overview. 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, pages 8599â8603.
[Duong et al., 2015] Duong, L., Cohn, T., Bird, S., and Cook, P. (2015). Low Resource Dependency Parsing: Cross-lingual Parameter Sharing in a Neural Network Parser. Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Short Papers), pages 845â850.
[Evgeniou et al., 2005] Evgeniou, T., Micchelli, C. A., and Pontil, M. (2005). Learning multiple tasks with kernel methods. Journal of Machine Learning Research, 6:615â637.
[Evgeniou and Pontil, 2004] Evgeniou, T. and Pontil, M. (2004). Regularized multi-task learning. International Conference on Knowledge Discovery and Data Mining, page 109.
[Ganin and Lempitsky, 2015] Ganin, Y. and Lempitsky, V. (2015). Unsupervised Domain Adaptation by Backpropagation. In Proceedings of the 32nd International Conference on Machine Learning., volume 37.
[Girshick, 2015] Girshick, R. (2015). Fast R-CNN. Conference on Computer Vision, pages 1440â1448. In Proceedings of the IEEE International
[Hashimoto et al., 2016] Hashimoto, K., Xiong, C., Tsuruoka, Y., and Socher, R. (2016). A Joint Many-Task Model: Growing a Neural Network for Multiple NLP Tasks.
[Heskes, 2000] Heskes, T. (2000). Empirical Bayes for Learning to Learn. Proceedings of the Seventeenth International Conference on Machine Learning, pages 367â364.
[Jacob et al., 2009] Jacob, L., Vert, J.-p., Bach, F. R., and Vert, J.-p. (2009). Clustered Multi-Task Learning: A Convex Formulation. Advances in Neural Information Processing Systems 21, pages 745â752.
[Jalali et al., 2010] Jalali, A., Ravikumar, P., Sanghavi, S., and Ruan, C. (2010). A Dirty Model for Multi-task Learning. Advances in Neural Information Processing Systems.
[Kang et al., 2011] Kang, Z., Grauman, K., and Sha, F. (2011). Learning with whom to share in multi-task feature learning. Proceedings of the 28th International Conference on Machine Learning, (4):4â5.
[Kendall et al., 2017] Kendall, A., Gal, Y., and Cipolla, R. (2017). Multi-Task Learning Using Uncertainty to Weigh Losses for Scene Geometry and Semantics.
[Kim and Xing, 2010] Kim, S. and Xing, E. P. (2010). Tree-Guided Group Lasso for Multi-Task Regression with Structured Sparsity. 27th International Conference on Machine Learning, pages 1â14.
[Kumar and Daumé III, 2012] Kumar, A. and Daumé III, H. (2012). Learning Task Grouping and Overlap in Multi-task Learning. Proceedings of the 29th International Conference on Machine Learning, pages 1383â1390.
[Lawrence and Platt, 2004] Lawrence, N. D. and Platt, J. C. (2004). Learning to learn with the informative vector machine. Twenty-ï¬rst international conference on Machine learning - ICML â04, page 65.
[Liu et al., 2016] Liu, S., Pan, S. J., and Ho, Q. (2016). Distributed Multi-task Relationship Learning. In Proceedings of the 19th International Conference on Artiï¬cial Intelligence and Statistics (AISTATS), pages 751â760.
[Liu et al., 2015] Liu, X., Gao, J., He, X., Deng, L., Duh, K., and Wang, Y.-Y. (2015). Representation Learning Using Multi-Task Deep Neural Networks for Semantic Classiï¬cation and Information Retrieval. NAACL-2015, pages 912â921.
[Long and Wang, 2015] Long, M. and Wang, J. (2015). Learning Multiple Tasks with Deep Rela- tionship Networks. arXiv preprint arXiv:1506.02117.
[Lounici et al., 2009] Lounici, K., Pontil, M., Tsybakov, A. B., and van de Geer, S. (2009). Taking Advantage of Sparsity in Multi-Task Learning. Stat, (1).
[Lu et al., 2016] Lu, Y., Kumar, A., Zhai, S., Cheng, Y., Javidi, T., and Feris, R. (2016). Fully- adaptive Feature Sharing in Multi-Task Networks with Applications in Person Attribute Classiï¬ca- tion.
[Misra et al., 2016] Misra, I., Shrivastava, A., Gupta, A., and Hebert, M. (2016). Cross-stitch Networks for Multi-task Learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
[Negahban and Wainwright, 2008] Negahban, S. and Wainwright, M. J. (2008). Joint support recovery under high-dimensional scaling: Benefits and perils of ℓ1,∞-regularization. Advances in Neural Information Processing Systems, pages 1161–1168.
[Ramsundar et al., 2015] Ramsundar, B., Kearnes, S., Riley, P., Webster, D., Konerding, D., and Pande, V. (2015). Massively Multitask Networks for Drug Discovery.
[Rei, 2017] Rei, M. (2017). Semi-supervised Multitask Learning for Sequence Labeling. In Pro- ceedings of ACL 2017.
[Ruder et al., 2017] Ruder, S., Bingel, J., Augenstein, I., and Søgaard, A. (2017). Sluice networks: Learning what to share between loosely related tasks.
[Saha et al., 2011] Saha, A., Rai, P., Daumé, H., and Venkatasubramanian, S. (2011). Online learning of multiple tasks and their relationships. Journal of Machine Learning Research, 15:643â651. [Søgaard and Goldberg, 2016] Søgaard, A. and Goldberg, Y. (2016). Deep multi-task learning with low level tasks supervised at lower layers. Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 231â235.
[Thrun and OâSullivan, 1996] Thrun, S. and OâSullivan, J. (1996). Discovering Structure in Multiple Learning Tasks: The TC Algorithm. Proceedings of the Thirteenth International Conference on Machine Learning, 28(1):5â5.
[Xue et al., 2007] Xue, Y., Liao, X., Carin, L., and Krishnapuram, B. (2007). Multi-Task Learning for Classiï¬cation with Dirichlet Process Priors. Journal of Machine Learning Research, 8:35â63. [Yang and Hospedales, 2017a] Yang, Y. and Hospedales, T. (2017a). Deep Multi-task Representation
Learning: A Tensor Factorisation Approach. In Proceedings of ICLR 2017.
[Yang and Hospedales, 2017b] Yang, Y. and Hospedales, T. M. (2017b). Trace Norm Regularised Deep Multi-Task Learning. In Workshop track - ICLR 2017.
[Yu and Jiang, 2016] Yu, J. and Jiang, J. (2016). Learning Sentence Embeddings with Auxiliary Tasks for Cross-Domain Sentiment Classiï¬cation. Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP2016), pages 236â246.
[Yu et al., 2005] Yu, K., Tresp, V., and Schwaighofer, A. (2005). Learning Gaussian processes from multiple tasks. Proceedings of the International Conference on Machine Learning (ICML), 22:1012â1019.
[Yuan and Lin, 2006] Yuan, M. and Lin, Y. (2006). Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 68(1):49â67.
[Zhang and Huang, 2008] Zhang, C. H. and Huang, J. (2008). The sparsity and bias of the lasso selection in high-dimensional linear regression. Annals of Statistics, 36(4):1567â1594.
[Zhang and Yeung, 2010] Zhang, Y. and Yeung, D.-y. (2010). A Convex Formulation for Learning Task Relationships in Multi-Task Learning. Uai, pages 733â442.
[Zhang et al., 2014] Zhang, Z., Luo, P., Loy, C. C., and Tang, X. (2014). Facial Landmark Detection by Deep Multi-task Learning. In European Conference on Computer Vision, pages 94â108.
| {
"id": "1506.02117"
} |
1706.04599 | On Calibration of Modern Neural Networks | Confidence calibration -- the problem of predicting probability estimates
representative of the true correctness likelihood -- is important for
classification models in many applications. We discover that modern neural
networks, unlike those from a decade ago, are poorly calibrated. Through
extensive experiments, we observe that depth, width, weight decay, and Batch
Normalization are important factors influencing calibration. We evaluate the
performance of various post-processing calibration methods on state-of-the-art
architectures with image and document classification datasets. Our analysis and
experiments not only offer insights into neural network learning, but also
provide a simple and straightforward recipe for practical settings: on most
datasets, temperature scaling -- a single-parameter variant of Platt Scaling --
is surprisingly effective at calibrating predictions. | http://arxiv.org/pdf/1706.04599 | Chuan Guo, Geoff Pleiss, Yu Sun, Kilian Q. Weinberger | cs.LG | ICML 2017 | null | cs.LG | 20170614 | 20170803 |
# On Calibration of Modern Neural Networks
# Chuan Guo * 1 Geoff Pleiss * 1 Yu Sun * 1 Kilian Q. Weinberger 1
# Abstract
Conï¬dence calibration â the problem of predict- ing probability estimates representative of the true correctness likelihood â is important for classiï¬cation models in many applications. We discover that modern neural networks, unlike those from a decade ago, are poorly calibrated. Through extensive experiments, we observe that depth, width, weight decay, and Batch Normal- ization are important factors inï¬uencing calibra- tion. We evaluate the performance of various post-processing calibration methods on state-of- the-art architectures with image and document classiï¬cation datasets. Our analysis and exper- iments not only offer insights into neural net- work learning, but also provide a simple and straightforward recipe for practical settings: on most datasets, temperature scaling â a single- parameter variant of Platt Scaling â is surpris- ingly effective at calibrating predictions.
# 1. Introduction
Figure 1. Conï¬dence histograms (top) and reliability diagrams (bottom) for a 5-layer LeNet (left) and a 110-layer ResNet (right) on CIFAR-100. Refer to the text below for detailed illustration.
Recent advances in deep learning have dramatically im- proved neural network accuracy (Simonyan & Zisserman, 2015; Srivastava et al., 2015; He et al., 2016; Huang et al., 2016; 2017). As a result, neural networks are now entrusted with making complex decisions in applications, such as ob- ject detection (Girshick, 2015), speech recognition (Han- nun et al., 2014), and medical diagnosis (Caruana et al., 2015). In these settings, neural networks are an essential component of larger decision making pipelines.
If the detection network is not able to conï¬dently predict the presence or absence of immediate obstructions, the car should rely more on the output of other sensors for braking. Alternatively, in automated health care, control should be passed on to human doctors when the conï¬dence of a dis- ease diagnosis network is low (Jiang et al., 2012). Specif- ically, a network should provide a calibrated conï¬dence measure in addition to its prediction. In other words, the probability associated with the predicted class label should reï¬ect its ground truth correctness likelihood.
In real-world decision making systems, classiï¬cation net- works must not only be accurate, but also should indicate when they are likely to be incorrect. As an example, con- sider a self-driving car that uses a neural network to detect pedestrians and other obstructions (Bojarski et al., 2016).
1Cornell University. Correspondence to: Chuan Guo <cg563@cornell.edu>, Geoff Pleiss <geoff@cs.cornell.edu>, Yu Sun <ys646@cornell.edu>.
Proceedings of the 34 th International Conference on Machine Learning, Sydney, Australia, PMLR 70, 2017. Copyright 2017 by the author(s).
Calibrated conï¬dence estimates are also important for model interpretability. Humans have a natural cognitive in- tuition for probabilities (Cosmides & Tooby, 1996). Good conï¬dence estimates provide a valuable extra bit of infor- mation to establish trustworthiness with the user â espe- cially for neural networks, whose classiï¬cation decisions are often difï¬cult to interpret. Further, good probability estimates can be used to incorporate neural networks into other probabilistic models. For example, one can improve performance by combining network outputs with a lan-
guage model in speech recognition (Hannun et al., 2014; Xiong et al., 2016), or with camera information for object detection (Kendall & Cipolla, 2016).
In 2005, Niculescu-Mizil & Caruana (2005) showed that neural networks typically produce well-calibrated proba- bilities on binary classiï¬cation tasks. While neural net- works today are undoubtedly more accurate than they were a decade ago, we discover with great surprise that mod- ern neural networks are no longer well-calibrated. This is visualized in Figure 1, which compares a 5-layer LeNet (left) (LeCun et al., 1998) with a 110-layer ResNet (right) (He et al., 2016) on the CIFAR-100 dataset. The top row shows the distribution of prediction conï¬dence (i.e. prob- abilities associated with the predicted label) as histograms. The average conï¬dence of LeNet closely matches its accu- racy, while the average conï¬dence of the ResNet is substan- tially higher than its accuracy. This is further illustrated in the bottom row reliability diagrams (DeGroot & Fienberg, 1983; Niculescu-Mizil & Caruana, 2005), which show ac- curacy as a function of conï¬dence. We see that LeNet is well-calibrated, as conï¬dence closely approximates the ex- pected accuracy (i.e. the bars align roughly along the diag- onal). On the other hand, the ResNetâs accuracy is better, but does not match its conï¬dence.
Our goal is not only to understand why neural networks have become miscalibrated, but also to identify what meth- ods can alleviate this problem. In this paper, we demon- strate on several computer vision and NLP tasks that neu- ral networks produce conï¬dences that do not represent true probabilities. Additionally, we offer insight and intuition into network training and architectural trends that may cause miscalibration. Finally, we compare various post- processing calibration methods on state-of-the-art neural networks, and introduce several extensions of our own. Surprisingly, we ï¬nd that a single-parameter variant of Platt scaling (Platt et al., 1999) â which we refer to as temper- ature scaling â is often the most effective method at ob- taining calibrated probabilities. Because this method is straightforward to implement with existing deep learning frameworks, it can be easily adopted in practical settings.
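As a sketch of how such a post-hoc step can look (a minimal illustration rather than the authors' implementation; the validation logits and labels here are random placeholders), a single temperature is fit on held-out data by minimizing NLL and then divides the logits at prediction time.

```python
import torch
import torch.nn.functional as F

# Pretend these are validation-set logits and labels from a trained classifier.
val_logits = torch.randn(500, 10)
val_labels = torch.randint(0, 10, (500,))

log_temperature = torch.zeros(1, requires_grad=True)   # optimize log T so T stays positive
optimizer = torch.optim.LBFGS([log_temperature], lr=0.1, max_iter=50)

def nll_closure():
    optimizer.zero_grad()
    loss = F.cross_entropy(val_logits / log_temperature.exp(), val_labels)
    loss.backward()
    return loss

optimizer.step(nll_closure)
T = log_temperature.exp().item()
print("fitted temperature:", T)

# At prediction time, calibrated probabilities are softmax(logits / T).
calibrated_probs = F.softmax(val_logits / T, dim=1)
```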
# 2. Deï¬nitions
The problem we address in this paper is supervised multi- class classiï¬cation with neural networks. The input X â X and label Y â Y = {1, . . . , K} are random variables that follow a ground truth joint distribution Ï(X, Y ) = Ï(Y |X)Ï(X). Let h be a neural network with h(X) = ( ËY , ËP ), where ËY is a class prediction and ËP is its associ- ated conï¬dence, i.e. probability of correctness. We would like the conï¬dence estimate ËP to be calibrated, which in- tuitively means that ËP represents a true probability. For example, given 100 predictions, each with conï¬dence of
0.8, we expect that 80 should be correctly classified. More formally, we define perfect calibration as

P(Ŷ = Y | P̂ = p) = p,  ∀p ∈ [0, 1]   (1)
where the probability is over the joint distribution. In all practical settings, achieving perfect calibration is impos- sible. Additionally, the probability in (1) cannot be com- puted using ï¬nitely many samples since ËP is a continuous random variable. This motivates the need for empirical ap- proximations that capture the essence of (1).
Reliability Diagrams (e.g. Figure 1 bottom) are a visual representation of model calibration (DeGroot & Fienberg, 1983; Niculescu-Mizil & Caruana, 2005). These diagrams plot expected sample accuracy as a function of conï¬dence. If the model is perfectly calibrated â i.e. if (1) holds â then the diagram should plot the identity function. Any devia- tion from a perfect diagonal represents miscalibration.
To estimate the expected accuracy from finite samples, we group predictions into M interval bins (each of size 1/M) and calculate the accuracy of each bin. Let B_m be the set of indices of samples whose prediction confidence falls into the interval I_m = ((m−1)/M, m/M]. The accuracy of B_m is

acc(B_m) = 1/|B_m| Σ_{i∈B_m} 1(ŷ_i = y_i),
where ŷ_i and y_i are the predicted and true class labels for sample i. Basic probability tells us that acc(B_m) is an unbiased and consistent estimator of P(Ŷ = Y | P̂ ∈ I_m). We define the average confidence within bin B_m as

conf(B_m) = 1/|B_m| Σ_{i∈B_m} p̂_i,

where p̂_i is the confidence for sample i. acc(B_m) and conf(B_m) approximate the left-hand and right-hand sides of (1) respectively for bin B_m. Therefore, a perfectly calibrated model will have acc(B_m) = conf(B_m) for all m ∈ {1, . . . , M}. Note that reliability diagrams do not display the proportion of samples in a given bin, and thus cannot be used to estimate how many samples are calibrated.
Expected Calibration Error (ECE). While reliability diagrams are useful visual tools, it is more convenient to have a scalar summary statistic of calibration. Since statis- tics comparing two distributions cannot be comprehensive, previous works have proposed variants, each with a unique emphasis. One notion of miscalibration is the difference in expectation between conï¬dence and accuracy, i.e.
E_P̂ [ |P(Ŷ = Y | P̂ = p) − p| ]   (2)

Expected Calibration Error (Naeini et al., 2015), or ECE, approximates (2) by partitioning predictions into M equally-spaced bins (similar to the reliability diagrams) and
Figure 2. The effect of network depth (far left), width (middle left), Batch Normalization (middle right), and weight decay (far right) on miscalibration, as measured by ECE (lower is better).
taking a weighted average of the bins' accuracy/confidence difference. More precisely,

ECE = Σ_{m=1}^M (|B_m|/n) |acc(B_m) − conf(B_m)|,   (3)

where n is the number of samples. The difference between acc and conf for a given bin represents the calibration gap (red bars in reliability diagrams, e.g. Figure 1). We use ECE as the primary empirical metric to measure calibration. See Section S1 for more analysis of this metric.
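A small sketch of estimating ECE from predicted confidences and correctness indicators (an illustration; the bin count and the dummy data are assumptions):

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=15):
    """ECE: weighted average of |accuracy - confidence| over equal-width bins."""
    confidences = np.asarray(confidences)
    correct = np.asarray(correct, dtype=float)
    bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
    n = len(confidences)
    ece = 0.0
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            acc = correct[in_bin].mean()          # acc(B_m)
            avg_conf = confidences[in_bin].mean() # conf(B_m)
            ece += (in_bin.sum() / n) * abs(acc - avg_conf)
    return ece

# Dummy example: overconfident predictions give a non-zero ECE.
conf = np.random.uniform(0.5, 1.0, size=1000)
correct = np.random.rand(1000) < conf * 0.8
print(expected_calibration_error(conf, correct))
```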
Maximum Calibration Error (MCE). In high-risk ap- plications where reliable conï¬dence measures are abso- lutely necessary, we may wish to minimize the worst-case deviation between conï¬dence and accuracy:
max_{p∈[0,1]} |P(Ŷ = Y | P̂ = p) − p|.   (4)
The Maximum Calibration Error (Naeini et al., 2015), or MCE, estimates this deviation. Similarly to ECE, this approximation involves binning:
MCE = max_{m∈{1,...,M}} |acc(B_m) − conf(B_m)|.   (5)
We can visualize MCE and ECE on reliability diagrams. MCE is the largest calibration gap (red bars) across all bins, whereas ECE is a weighted average of all gaps. For per- fectly calibrated classiï¬ers, MCE and ECE both equal 0.
Negative log likelihood is a standard measure of a probabilistic model's quality (Friedman et al., 2001). It is also referred to as the cross entropy loss in the context of deep learning (Bengio et al., 2015). Given a probabilistic model π̂(Y|X) and n samples, NLL is defined as:

L = − Σ_{i=1}^n log π̂(y_i|x_i)   (6)

It is a standard result (Friedman et al., 2001) that, in expectation, NLL is minimized if and only if π̂(Y|X) recovers the ground truth conditional distribution π(Y|X).

# 3. Observing Miscalibration

The architecture and training procedures of neural networks have rapidly evolved in recent years. In this section we identify some recent changes that are responsible for the miscalibration phenomenon observed in Figure 1. Though we cannot claim causality, we find that increased model capacity and lack of regularization are closely related to model miscalibration.
Model capacity. The model capacity of neural networks has increased at a dramatic pace over the past few years. It is now common to see networks with hundreds, if not thousands of layers (He et al., 2016; Huang et al., 2016) and hundreds of convolutional ï¬lters per layer (Zagoruyko & Komodakis, 2016). Recent work shows that very deep or wide models are able to generalize better than smaller ones, while exhibiting the capacity to easily ï¬t the training set (Zhang et al., 2017).
Although increasing depth and width may reduce classi- ï¬cation error, we observe that these increases negatively affect model calibration. Figure 2 displays error and ECE as a function of depth and width on a ResNet trained on CIFAR-100. The far left ï¬gure varies depth for a network with 64 convolutional ï¬lters per layer, while the middle left ï¬gure ï¬xes the depth at 14 layers and varies the number of convolutional ï¬lters per layer. Though even the small- est models in the graph exhibit some degree of miscalibra- tion, the ECE metric grows substantially with model ca- pacity. During training, after the model is able to correctly classify (almost) all training samples, NLL can be further minimized by increasing the conï¬dence of predictions. In- creased model capacity will lower training NLL, and thus the model will be more (over)conï¬dent on average.
$$\mathcal{L} = -\sum_{i=1}^{n} \log\big(\hat{\pi}(y_i \mid x_i)\big) \qquad (6)$$
It is a standard result (Friedman et al., 2001) that, in expectation, NLL is minimized if and only if $\hat{\pi}(Y \mid X)$ recovers the ground truth conditional distribution $\pi(Y \mid X)$.
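For reference, (6) can be computed directly from the predicted class probabilities; the small sketch below averages rather than sums over samples and clips probabilities for numerical safety, both of which are implementation choices, not part of the definition.

```python
import numpy as np

def negative_log_likelihood(probs, labels, eps=1e-12):
    """Mean NLL of an (n, K) matrix of predicted probabilities for integer labels."""
    probs = np.clip(probs, eps, 1.0)
    return -np.mean(np.log(probs[np.arange(len(labels)), labels]))
```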
Batch Normalization (Ioffe & Szegedy, 2015) improves the optimization of neural networks by minimizing distri- bution shifts in activations within the neural networkâs hid-
Figure 3. Test error and NLL of a 110-layer ResNet with stochas- tic depth on CIFAR-100 during training. NLL is scaled by a con- stant to ï¬t in the ï¬gure. Learning rate drops by 10x at epochs 250 and 375. The shaded area marks between epochs at which the best validation loss and best validation error are produced.
den layers. Recent research suggests that these normal- ization techniques have enabled the development of very deep architectures, such as ResNets (He et al., 2016) and DenseNets (Huang et al., 2017). It has been shown that Batch Normalization improves training time, reduces the need for additional regularization, and can in some cases improve the accuracy of networks.
While it is difï¬cult to pinpoint exactly how Batch Normal- ization affects the ï¬nal predictions of a model, we do ob- serve that models trained with Batch Normalization tend to be more miscalibrated. In the middle right plot of Figure 2, we see that a 6-layer ConvNet obtains worse calibration when Batch Normalization is applied, even though classi- ï¬cation accuracy improves slightly. We ï¬nd that this result holds regardless of the hyperparameters used on the Batch Normalization model (i.e. low or high learning rate, etc.).
Weight decay, which used to be the predominant regu- larization mechanism for neural networks, is decreasingly utilized when training modern neural networks. Learning theory suggests that regularization is necessary to prevent overï¬tting, especially as model capacity increases (Vapnik, 1998). However, due to the apparent regularization effects of Batch Normalization, recent research seems to suggest that models with less L2 regularization tend to generalize better (Ioffe & Szegedy, 2015). As a result, it is now com- mon to train models with little weight decay, if any at all. The top performing ImageNet models of 2015 all use an or- der of magnitude less weight decay than models of previous years (He et al., 2016; Simonyan & Zisserman, 2015).
We ï¬nd that training with less weight decay has a negative impact on calibration. The far right plot in Figure 2 dis-
plays training error and ECE for a 110-layer ResNet with varying amounts of weight decay. The only other forms of regularization are data augmentation and Batch Normal- ization. We observe that calibration and accuracy are not optimized by the same parameter setting. While the model exhibits both over-regularization and under-regularization with respect to classiï¬cation error, it does not appear that calibration is negatively impacted by having too much weight decay. Model calibration continues to improve when more regularization is added, well after the point of achieving optimal accuracy. The slight uptick at the end of the graph may be an artifact of using a weight decay factor that impedes optimization.
NLL can be used to indirectly measure model calibration. In practice, we observe a disconnect between NLL and accuracy, which may explain the miscalibration in Figure 2. This disconnect occurs because neural networks can overfit to NLL without overfitting to the 0/1 loss. We observe this trend in the training curves of some miscalibrated models. Figure 3 shows test error and NLL (rescaled to match error) on CIFAR-100 as training progresses. Both error and NLL immediately drop at epoch 250, when the learning rate is dropped; however, NLL overfits during the remainder of training. Surprisingly, overfitting to NLL is beneficial to classification accuracy. On CIFAR-100, test error drops from 29% to 27% in the region where NLL overfits. This phenomenon renders a concrete explanation of miscalibration: the network learns better classification accuracy at the expense of well-modeled probabilities.
We can connect this ï¬nding to recent work examining the generalization of large neural networks. Zhang et al. (2017) observe that deep neural networks seemingly violate the common understanding of learning theory that large mod- els with little regularization will not generalize well. The observed disconnect between NLL and 0/1 loss suggests that these high capacity models are not necessarily immune from overï¬tting, but rather, overï¬tting manifests in proba- bilistic error rather than classiï¬cation error.
# 4. Calibration Methods
In this section, we ï¬rst review existing calibration meth- ods, and introduce new variants of our own. All methods are post-processing steps that produce (calibrated) proba- bilities. Each method requires a hold-out validation set, which in practice can be the same set used for hyperparam- eter tuning. We assume that the training, validation, and test sets are drawn from the same distribution.
# 4.1. Calibrating Binary Models
We ï¬rst introduce calibration in the binary setting, i.e. Y = {0, 1}. For simplicity, throughout this subsection,
we assume the model outputs only the confidence for the positive class.1 Given a sample $x_i$, we have access to $\hat{p}_i$, the network's predicted probability of $y_i = 1$, as well as $z_i \in \mathbb{R}$, the network's non-probabilistic output, or logit. The predicted probability $\hat{p}_i$ is derived from $z_i$ using a sigmoid function $\sigma$; i.e. $\hat{p}_i = \sigma(z_i)$. Our goal is to produce a calibrated probability $\hat{q}_i$ based on $y_i$, $\hat{p}_i$, and $z_i$.
Histogram binning (Zadrozny & Elkan, 2001) is a simple non-parametric calibration method. In a nutshell, all uncalibrated predictions $\hat{p}_i$ are divided into mutually exclusive bins $B_1, \ldots, B_M$. Each bin is assigned a calibrated score $\theta_m$; i.e. if $\hat{p}_i$ is assigned to bin $B_m$, then $\hat{q}_i = \theta_m$. At test time, if prediction $\hat{p}_{te}$ falls into bin $B_m$, then the calibrated prediction $\hat{q}_{te}$ is $\theta_m$. More precisely, for a suitably chosen $M$ (usually small), we first define bin boundaries $0 = a_1 \le a_2 \le \ldots \le a_{M+1} = 1$, where the bin $B_m$ is defined by the interval $(a_m, a_{m+1}]$. Typically the bin boundaries are either chosen to be equal length intervals or to equalize the number of samples in each bin. The predictions $\theta_1, \ldots, \theta_M$ are chosen to minimize the bin-wise squared loss:
$$\min_{\theta_1,\ldots,\theta_M} \; \sum_{m=1}^{M} \sum_{i=1}^{n} \mathbf{1}(a_m \le \hat{p}_i < a_{m+1}) \, (\theta_m - y_i)^2, \qquad (7)$$
where $\mathbf{1}$ is the indicator function. Given fixed bin boundaries, the solution to (7) results in $\theta_m$ equal to the fraction of positive-class samples in bin $B_m$.
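A minimal sketch of histogram binning is shown below, assuming equal-width bins; fitting sets each $\theta_m$ to the empirical positive rate of validation samples in bin $B_m$ (the solution to (7)), and the bin midpoint is used as an arbitrary fallback for empty bins.

```python
import numpy as np

def fit_histogram_binning(p_val, y_val, n_bins=10):
    """theta_m = fraction of positive validation samples whose confidence falls in bin m."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    bin_idx = np.digitize(p_val, edges[1:-1])          # bin index in 0..n_bins-1
    theta = np.empty(n_bins)
    for m in range(n_bins):
        mask = bin_idx == m
        theta[m] = y_val[mask].mean() if mask.any() else (edges[m] + edges[m + 1]) / 2
    return edges, theta

def apply_histogram_binning(p_test, edges, theta):
    """Map each test confidence to the calibrated score of its bin."""
    return theta[np.digitize(p_test, edges[1:-1])]
```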
Isotonic regression (Zadrozny & Elkan, 2002), arguably the most common non-parametric calibration method, learns a piecewise constant function $f$ to transform uncalibrated outputs; i.e. $\hat{q}_i = f(\hat{p}_i)$. Specifically, isotonic regression produces $f$ to minimize the square loss $\sum_{i=1}^{n} (f(\hat{p}_i) - y_i)^2$. Because $f$ is constrained to be piecewise constant, we can write the optimization problem as:
$$\begin{aligned} \min_{\substack{M;\; \theta_1,\ldots,\theta_M;\\ a_1,\ldots,a_{M+1}}} \quad & \sum_{m=1}^{M} \sum_{i=1}^{n} \mathbf{1}(a_m \le \hat{p}_i < a_{m+1}) \, (\theta_m - y_i)^2 \\ \text{subject to} \quad & 0 = a_1 \le a_2 \le \ldots \le a_{M+1} = 1, \\ & \theta_1 \le \theta_2 \le \ldots \le \theta_M. \end{aligned}$$
where M is the number of intervals; a1, . . . , aM +1 are the interval boundaries; and θ1, . . . , θM are the function val- ues. Under this parameterization, isotonic regression is a strict generalization of histogram binning in which the bin boundaries and bin predictions are jointly optimized.
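In practice isotonic regression is readily available, for example in scikit-learn; the toy example below (with made-up validation confidences and labels) is only meant to show the fit/predict pattern, and note that scikit-learn's predictor interpolates between the fitted constant pieces rather than returning a strictly piecewise-constant map.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

# Toy validation data: uncalibrated confidences and binary labels.
p_val = np.array([0.55, 0.60, 0.70, 0.80, 0.90, 0.95])
y_val = np.array([0, 1, 0, 1, 1, 1])

iso = IsotonicRegression(y_min=0.0, y_max=1.0, out_of_bounds="clip")
iso.fit(p_val, y_val)                          # learn a monotone map p -> q
q_test = iso.predict(np.array([0.65, 0.92]))   # calibrated test confidences
```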
Bayesian Binning into Quantiles (BBQ) (Naeini et al., 2015) is an extension of histogram binning using Bayesian
1 This is in contrast with the setting in Section 2, in which the model produces both a class prediction and conï¬dence.
model averaging. Essentially, BBQ marginalizes out all possible binning schemes to produce $\hat{q}_i$. More formally, a binning scheme $s$ is a pair $(M, \mathcal{I})$ where $M$ is the number of bins, and $\mathcal{I}$ is a corresponding partitioning of $[0, 1]$ into disjoint intervals ($0 = a_1 < a_2 < \ldots < a_{M+1} = 1$). The parameters of a binning scheme are $\theta_1, \ldots, \theta_M$. Under this framework, histogram binning and isotonic regression both produce a single binning scheme, whereas BBQ considers a space $\mathcal{S}$ of all possible binning schemes for the validation dataset $D$. BBQ performs Bayesian averaging of the probabilities produced by each scheme:2
$$\begin{aligned} \mathbb{P}(\hat{q}_{te} \mid \hat{p}_{te}, D) &= \sum_{s \in \mathcal{S}} \mathbb{P}(\hat{q}_{te}, S = s \mid \hat{p}_{te}, D) \\ &= \sum_{s \in \mathcal{S}} \mathbb{P}(\hat{q}_{te} \mid \hat{p}_{te}, S = s, D) \, \mathbb{P}(S = s \mid D), \end{aligned}$$
where $\mathbb{P}(\hat{q}_{te} \mid \hat{p}_{te}, S = s, D)$ is the calibrated probability using binning scheme $s$. Using a uniform prior, the weight $\mathbb{P}(S = s \mid D)$ can be derived using Bayes' rule:

$$\mathbb{P}(S = s \mid D) = \frac{\mathbb{P}(D \mid S = s)}{\sum_{s' \in \mathcal{S}} \mathbb{P}(D \mid S = s')}.$$

The parameters $\theta_1, \ldots, \theta_M$ can be viewed as parameters of $M$ independent binomial distributions. Hence, by placing a Beta prior on $\theta_1, \ldots, \theta_M$, we can obtain a closed form expression for the marginal likelihood $\mathbb{P}(D \mid S = s)$. This allows us to compute $\mathbb{P}(\hat{q}_{te} \mid \hat{p}_{te}, D)$ for any test input.
Platt scaling (Platt et al., 1999) is a parametric approach to calibration, unlike the other approaches. The non-probabilistic predictions of a classifier are used as features for a logistic regression model, which is trained on the validation set to return probabilities. More specifically, in the context of neural networks (Niculescu-Mizil & Caruana, 2005), Platt scaling learns scalar parameters $a, b \in \mathbb{R}$ and outputs $\hat{q}_i = \sigma(a z_i + b)$ as the calibrated probability. Parameters $a$ and $b$ can be optimized using the NLL loss over the validation set. It is important to note that the neural network's parameters are fixed during this stage.
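Since Platt scaling has only two parameters, it can be fit with any generic optimizer; the sketch below minimizes the binary NLL over validation logits with SciPy, with the initialization and optimizer choice being incidental assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def fit_platt(z_val, y_val):
    """Return (a, b) such that sigma(a*z + b) minimizes NLL on the validation set."""
    def nll(params):
        a, b = params
        q = 1.0 / (1.0 + np.exp(-(a * z_val + b)))
        q = np.clip(q, 1e-12, 1.0 - 1e-12)
        return -np.mean(y_val * np.log(q) + (1 - y_val) * np.log(1 - q))
    return minimize(nll, x0=np.array([1.0, 0.0]), method="Nelder-Mead").x
```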
# 4.2. Extension to Multiclass Models
For classification problems involving $K > 2$ classes, we return to the original problem formulation. The network outputs a class prediction $\hat{y}_i$ and confidence score $\hat{p}_i$ for each input $x_i$. In this case, the network logits $z_i$ are vectors, where $\hat{y}_i = \mathrm{argmax}_k\, z_i^{(k)}$, and $\hat{p}_i$ is typically derived using the softmax function $\sigma_{SM}$:

$$\sigma_{SM}(z_i)^{(k)} = \frac{\exp(z_i^{(k)})}{\sum_{j=1}^{K} \exp(z_i^{(j)})}, \qquad \hat{p}_i = \max_k\, \sigma_{SM}(z_i)^{(k)}.$$

The goal is to produce a calibrated confidence $\hat{q}_i$ and (possibly new) class prediction $\hat{y}_i'$ based on $y_i$, $\hat{y}_i$, $\hat{p}_i$, and $z_i$.
2 Because the validation dataset is ï¬nite, S is as well.
Dataset Model Uncalibrated Hist. Binning Isotonic BBQ Temp. Scaling Vector Scaling Matrix Scaling Birds Cars CIFAR-10 CIFAR-10 CIFAR-10 CIFAR-10 CIFAR-10 CIFAR-100 CIFAR-100 CIFAR-100 CIFAR-100 CIFAR-100 ImageNet ImageNet SVHN ResNet 50 ResNet 50 ResNet 110 ResNet 110 (SD) Wide ResNet 32 DenseNet 40 LeNet 5 ResNet 110 ResNet 110 (SD) Wide ResNet 32 DenseNet 40 LeNet 5 DenseNet 161 ResNet 152 ResNet 152 (SD) 9.19% 4.3% 4.6% 4.12% 4.52% 3.28% 3.02% 16.53% 12.67% 15.0% 10.37% 4.85% 6.28% 5.48% 0.44% 4.34% 1.74% 0.58% 0.67% 0.72% 0.44% 1.56% 2.66% 2.46% 3.01% 2.68% 6.48% 4.52% 4.36% 0.14% 5.22% 4.12% 4.29% 1.84% 0.81% 0.54% 1.11% 0.9% 1.08% 0.74% 0.61% 0.81% 1.85% 1.59% 4.99% 5.46% 4.16% 3.58% 5.85% 5.77% 4.51% 3.59% 2.35% 3.77% 5.18% 3.51% 4.77% 3.56% 0.28% 0.22% 1.85% 2.35% 0.83% 0.6% 0.54% 0.33% 0.93% 1.26% 0.96% 2.32% 1.18% 2.02% 1.99% 1.86% 0.17% 3.0% 2.37% 0.88% 0.64% 0.6% 0.41% 1.15% 1.32% 0.9% 2.57% 1.09% 2.09% 2.24% 2.23% 0.27% 21.13% 10.5% 1.0% 0.72% 0.72% 0.41% 1.16% 25.49% 20.09% 24.44% 21.87% 13.24% - - 0.17% 20 News Reuters SST Binary SST Fine Grained DAN 3 DAN 3 TreeLSTM TreeLSTM 8.02% 0.85% 6.63% 6.71% 3.6% 1.75% 1.93% 2.09% 5.52% 4.98% 1.15% 0.97% 1.65% 2.27% 1.65% 2.61% 4.11% 0.91% 1.84% 2.56% 4.61% 0.66% 1.84% 2.98% 9.1% 1.58% 1.84% 2.39%
Table 1. ECE (%) (with M = 15 bins) on standard vision and NLP datasets before calibration and with various calibration methods. The number following a modelâs name denotes the network depth.
Extension of binning methods. One common way of extending binary calibration methods to the multiclass setting is by treating the problem as $K$ one-versus-all problems (Zadrozny & Elkan, 2002). For $k = 1, \ldots, K$, we form a binary calibration problem where the label is $\mathbf{1}(y_i = k)$ and the predicted probability is $\sigma_{SM}(z_i)^{(k)}$. This gives us $K$ calibration models, each for a particular class. At test time, we obtain an unnormalized probability vector $[\hat{q}_i^{(1)}, \ldots, \hat{q}_i^{(K)}]$, where $\hat{q}_i^{(k)}$ is the calibrated probability for class $k$. The new class prediction $\hat{y}_i'$ is the argmax of the vector, and the new confidence $\hat{q}_i'$ is the max of the vector normalized by $\sum_{k=1}^{K} \hat{q}_i^{(k)}$. This extension can be applied to histogram binning, isotonic regression, and BBQ.
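The one-versus-all extension can be written generically around any binary calibrator; in the sketch below, fit_binary and predict_binary are placeholder callables standing in for histogram binning, isotonic regression, or BBQ.

```python
import numpy as np

def calibrate_one_vs_all(softmax_val, y_val, softmax_test, fit_binary, predict_binary):
    """Calibrate each class separately with a binary method, then renormalize."""
    n_classes = softmax_val.shape[1]
    q = np.zeros_like(softmax_test)
    for k in range(n_classes):
        model_k = fit_binary(softmax_val[:, k], (y_val == k).astype(float))
        q[:, k] = predict_binary(model_k, softmax_test[:, k])
    y_hat = q.argmax(axis=1)                                  # new class prediction
    conf = q.max(axis=1) / np.maximum(q.sum(axis=1), 1e-12)   # max normalized by the sum
    return y_hat, conf
```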
Matrix and vector scaling are two multi-class extensions of Platt scaling. Let $z_i$ be the logits vector produced before the softmax layer for input $x_i$. Matrix scaling applies a linear transformation $\mathbf{W} z_i + \mathbf{b}$ to the logits:

$$\hat{q}_i = \max_k\, \sigma_{SM}(\mathbf{W} z_i + \mathbf{b})^{(k)}, \qquad \hat{y}_i' = \underset{k}{\mathrm{argmax}}\, (\mathbf{W} z_i + \mathbf{b})^{(k)}. \qquad (8)$$

The parameters $\mathbf{W}$ and $\mathbf{b}$ are optimized with respect to NLL on the validation set. As the number of parameters for matrix scaling grows quadratically with the number of classes $K$, we define vector scaling as a variant where $\mathbf{W}$ is restricted to be a diagonal matrix.

Temperature scaling, the simplest extension of Platt scaling, uses a single scalar parameter $T > 0$ for all classes. Given the logit vector $z_i$, the new confidence prediction is

$$\hat{q}_i = \max_k\, \sigma_{SM}(z_i / T)^{(k)}. \qquad (9)$$

$T$ is called the temperature, and it "softens" the softmax (i.e. raises the output entropy) with $T > 1$. As $T \to \infty$, the probability $\hat{q}_i$ approaches $1/K$, which represents maximum uncertainty. With $T = 1$, we recover the original probability $\hat{p}_i$. As $T \to 0$, the probability collapses to a point mass (i.e. $\hat{q}_i = 1$). $T$ is optimized with respect to NLL on the validation set. Because the parameter $T$ does not change the maximum of the softmax function, the class prediction $\hat{y}_i'$ remains unchanged. In other words, temperature scaling does not affect the model's accuracy.

Temperature scaling is commonly used in settings such as knowledge distillation (Hinton et al., 2015) and statistical mechanics (Jaynes, 1957). We are not aware of any prior use in the context of calibrating probabilistic models.3 The model is equivalent to maximizing the entropy of the output probability distribution subject to certain constraints on the logits (see Section S2).
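Because $T$ is a single bounded scalar, fitting temperature scaling reduces to a one-dimensional optimization of the validation NLL; the following NumPy/SciPy sketch is one possible implementation, with the search interval and optimizer being assumptions rather than prescriptions. Vector and matrix scaling replace the division by $T$ with an elementwise or full affine map of the logits, optimized in the same way.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=1, keepdims=True)        # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fit_temperature(z_val, y_val):
    """Pick T > 0 minimizing validation NLL; the argmax (and hence accuracy) is unaffected."""
    def nll(T):
        p = softmax(z_val, T)
        return -np.mean(np.log(p[np.arange(len(y_val)), y_val] + 1e-12))
    return minimize_scalar(nll, bounds=(1e-2, 100.0), method="bounded").x
```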
# 4.3. Other Related Works
Calibration and conï¬dence scores have been studied in var- ious contexts in recent years. Kuleshov & Ermon (2016) study the problem of calibration in the online setting, where the inputs can come from a potentially adversarial source. Kuleshov & Liang (2015) investigate how to produce cal- ibrated probabilities when the output space is a structured object. Lakshminarayanan et al. (2016) use ensembles of networks to obtain uncertainty estimates. Pereyra et al. (2017) penalize overconï¬dent predictions as a form of reg- ularization. Hendrycks & Gimpel (2017) use conï¬dence
3To highlight the connection with prior works we define temperature scaling in terms of $\frac{1}{T}$ instead of a multiplicative scalar.
scores to determine if samples are out-of-distribution.
Bayesian neural networks (Denker & Lecun, 1990; MacKay, 1992) return a probability distribution over out- puts as an alternative way to represent model uncertainty. Gal & Ghahramani (2016) draw a connection between Dropout (Srivastava et al., 2014) and model uncertainty, claiming that sampling models with dropped nodes is a way to estimate the probability distribution over all pos- sible models for a given sample. Kendall & Gal (2017) combine this approach with a model that outputs a predic- tive mean and variance for each data point. This notion of uncertainty is not restricted to classiï¬cation problems. Ad- ditionally, neural networks can be used in conjunction with Bayesian models that output complete distributions. For example, deep kernel learning (Wilson et al., 2016a;b; Al- Shedivat et al., 2016) combines deep neural networks with Gaussian processes on classiï¬cation and regression prob- lems. In contrast, our framework, which does not augment the neural network model, returns a conï¬dence score rather than returning a distribution of possible outputs.
# 5. Results
We apply the calibration methods in Section 4 to image classiï¬cation and document classiï¬cation neural networks. For image classiï¬cation we use 6 datasets:
1. Caltech-UCSD Birds (Welinder et al., 2010): 200 bird species. 5994/2897/2897 images for train/validation/test sets.
2. Stanford Cars (Krause et al., 2013): 196 classes of cars by make, model, and year. 8041/4020/4020 im- ages for train/validation/test.
3. ImageNet 2012 (Deng et al., 2009): Natural scene im- ages from 1000 classes. 1.3 million/25,000/25,000 images for train/validation/test.
4. CIFAR-10/CIFAR-100 (Krizhevsky & Hinton, 2009): Color images from 10/100 classes. 45,000/5,000/10,000 images for train/validation/test.

5. Street View House Numbers (SVHN) (Netzer et al., 2011): 32 × 32 colored images of cropped-out house numbers from Google Street View. 598,388/6,000/26,032 images for train/validation/test.
We train state-of-the-art convolutional networks: ResNets (He et al., 2016), ResNets with stochastic depth (SD) (Huang et al., 2016), Wide ResNets (Zagoruyko & Ko- modakis, 2016), and DenseNets (Huang et al., 2017). We use the data preprocessing, training procedures, and hyper- parameters as described in each paper. For Birds and Cars, we ï¬ne-tune networks pretrained on ImageNet.
For document classiï¬cation we experiment with 4 datasets:
1. 20 News: News articles, partitioned into 20 cate-
gories by content. 9034/2259/7528 documents for train/validation/test.
2. Reuters: News articles, partitioned into 8 cate- 4388/1097/2189 documents for gories by topic. train/validation/test.
3. Stanford Sentiment Treebank (SST) (Socher et al., 2013): Movie reviews, represented as sentence parse trees that are annotated by sentiment. Each sample in- cludes a coarse binary label and a ï¬ne grained 5-class label. As described in (Tai et al., 2015), the train- ing/validation/test sets contain 6920/872/1821 docu- ments for binary, and 544/1101/2210 for ï¬ne-grained.
On 20 News and Reuters, we train Deep Averaging Networks (DANs) (Iyyer et al., 2015) with 3 feed-forward layers and Batch Normalization. On SST, we train TreeLSTMs (Long Short Term Memory) (Tai et al., 2015). For both models we use the default hyperparameters suggested by the authors.
Calibration Results. Table 1 displays model calibration, as measured by ECE (with M = 15 bins), before and af- ter applying the various methods (see Section S3 for MCE, NLL, and error tables). It is worth noting that most datasets and models experience some degree of miscalibration, with ECE typically between 4 to 10%. This is not architecture speciï¬c: we observe miscalibration on convolutional net- works (with and without skip connections), recurrent net- works, and deep averaging networks. The two notable ex- ceptions are SVHN and Reuters, both of which experience ECE values below 1%. Both of these datasets have very low error (1.98% and 2.97%, respectively); and therefore the ratio of ECE to error is comparable to other datasets.
Our most important discovery is the surprising effective- ness of temperature scaling despite its remarkable simplic- ity. Temperature scaling outperforms all other methods on the vision tasks, and performs comparably to other methods on the NLP datasets. What is perhaps even more surpris- ing is that temperature scaling outperforms the vector and matrix Platt scaling variants, which are strictly more gen- eral methods. In fact, vector scaling recovers essentially the same solution as temperature scaling â the learned vec- tor has nearly constant entries, and therefore is no different than a scalar transformation. In other words, network mis- calibration is intrinsically low dimensional.
The only dataset that temperature scaling does not calibrate is the Reuters dataset. In this instance, only one of the above methods is able to improve calibration. Because this dataset is well-calibrated to begin with (ECE ⤠1%), there is not much room for improvement with any method, and post-processing may not even be necessary to begin with. It is also possible that our measurements are affected by dataset split or by the particular binning scheme.
[Figure 4 panels (left to right): Uncalibrated, Temperature Scaling, Histogram Binning, and Isotonic Regression reliability diagrams for ResNet-110 (SD) on CIFAR-100; x-axis: confidence, y-axis: accuracy.]
Figure 4. Reliability diagrams for CIFAR-100 before (far left) and after calibration (middle left, middle right, far right).
Matrix scaling performs poorly on datasets with hundreds of classes (i.e. Birds, Cars, and CIFAR-100), and fails to converge on the 1000-class ImageNet dataset. This is expected, since the number of parameters scales quadrat- ically with the number of classes. Any calibration model with tens of thousands (or more) parameters will overï¬t to a small validation set, even when applying regularization.
Binning methods improve calibration on most datasets, but do not outperform temperature scaling. Additionally, bin- ning methods tend to change class predictions which hurts accuracy (see Section S3). Histogram binning, the simplest binning method, typically outperforms isotonic regression and BBQ, despite the fact that both methods are strictly more general. This further supports our ï¬nding that cali- bration is best corrected by simple models.
Reliability diagrams. Figure 4 contains reliability dia- grams for 110-layer ResNets on CIFAR-100 before and af- ter calibration. From the far left diagram, we see that the uncalibrated ResNet tends to be overconï¬dent in its pre- dictions. We then can observe the effects of temperature scaling (middle left), histogram binning (middle right), and isotonic regression (far right) on calibration. All three dis- played methods produce much better conï¬dence estimates. Of the three methods, temperature scaling most closely re- covers the desired diagonal function. Each of the bins are well calibrated, which is remarkable given that all the prob- abilities were modiï¬ed by only a single parameter. We in- clude reliability diagrams for other datasets in Section S4.
Computation time. All methods scale linearly with the number of validation set samples. Temperature scaling is by far the fastest method, as it amounts to a one- dimensional convex optimization problem. Using a conju- gate gradient solver, the optimal temperature can be found in 10 iterations, or a fraction of a second on most modern hardware. In fact, even a naive line-search for the optimal temperature is faster than any of the other methods. The
computational complexity of vector and matrix scaling are linear and quadratic respectively in the number of classes, reï¬ecting the number of parameters in each method. For CIFAR-100 (K = 100), ï¬nding a near-optimal vector scal- ing solution with conjugate gradient descent requires at least 2 orders of magnitude more time. Histogram binning and isotonic regression take an order of magnitude longer than temperature scaling, and BBQ takes roughly 3 orders of magnitude more time.
Ease of implementation. BBQ is arguably the most dif- ï¬cult to implement, as it requires implementing a model averaging scheme. While all other methods are relatively easy to implement, temperature scaling may arguably be the most straightforward to incorporate into a neural net- work pipeline. In Torch7 (Collobert et al., 2011), for ex- ample, we implement temperature scaling by inserting a nn.MulConstant between the logits and the softmax, whose parameter is 1/T . We set T = 1 during training, and subsequently ï¬nd its optimal value on the validation set.4
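The same idea is equally easy to express in other frameworks; for instance, a PyTorch-style wrapper (shown below as an illustrative sketch, not the implementation referenced in the footnote) keeps the trained network frozen and exposes the temperature as the only learnable parameter to be fit on the validation set.

```python
import torch
import torch.nn as nn

class TemperatureScaledModel(nn.Module):
    """Wraps a trained classifier and divides its logits by a learned temperature."""
    def __init__(self, base_model):
        super().__init__()
        self.base_model = base_model
        self.temperature = nn.Parameter(torch.ones(1))   # T, initialized to 1

    def forward(self, x):
        return self.base_model(x) / self.temperature      # scaled logits; softmax applied as usual
```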
# 6. Conclusion
Modern neural networks exhibit a strange phenomenon: probabilistic error and miscalibration worsen even as clas- siï¬cation error is reduced. We have demonstrated that recent advances in neural network architecture and train- ing â model capacity, normalization, and regularization â have strong effects on network calibration. It remains future work to understand why these trends affect cali- bration while improving accuracy. Nevertheless, simple techniques can effectively remedy the miscalibration phe- nomenon in neural networks. Temperature scaling is the simplest, fastest, and most straightforward of the methods, and surprisingly is often the most effective.
4 For an example implementation, see http://github. com/gpleiss/temperature_scaling.
# Acknowledgments
The authors are supported in part by the III-1618134, III- 1526012, and IIS-1149882 grants from the National Sci- ence Foundation, as well as the Bill and Melinda Gates Foundation and the Ofï¬ce of Naval Research.
# References
Al-Shedivat, Maruan, Wilson, Andrew Gordon, Saatchi, Yunus, Hu, Zhiting, and Xing, Eric P. Learning scal- able deep kernels with recurrent structure. arXiv preprint arXiv:1610.08936, 2016.
Bengio, Yoshua, Goodfellow, Ian J, and Courville, Aaron. Deep learning. Nature, 521:436â444, 2015.
Bojarski, Mariusz, Del Testa, Davide, Dworakowski, Daniel, Firner, Bernhard, Flepp, Beat, Goyal, Prasoon, Jackel, Lawrence D, Monfort, Mathew, Muller, Urs, Zhang, Jiakai, et al. End to end learning for self-driving cars. arXiv preprint arXiv:1604.07316, 2016.
Caruana, Rich, Lou, Yin, Gehrke, Johannes, Koch, Paul, Sturm, Marc, and Elhadad, Noemie. Intelligible models for healthcare: Predicting pneumonia risk and hospital 30-day readmission. In KDD, 2015.
Collobert, Ronan, Kavukcuoglu, Koray, and Farabet, Cl´ement. Torch7: A matlab-like environment for ma- chine learning. In BigLearn Workshop, NIPS, 2011.
Cosmides, Leda and Tooby, John. Are humans good intu- itive statisticians after all? rethinking some conclusions from the literature on judgment under uncertainty. cog- nition, 58(1):1â73, 1996.
DeGroot, Morris H and Fienberg, Stephen E. The compar- ison and evaluation of forecasters. The statistician, pp. 12â22, 1983.
Deng, Jia, Dong, Wei, Socher, Richard, Li, Li-Jia, Li, Kai, and Fei-Fei, Li. Imagenet: A large-scale hierarchical image database. In CVPR, pp. 248-255, 2009.
Denker, John S and Lecun, Yann. Transforming neural-net output levels to probability distributions. In NIPS, pp. 853-859, 1990.
Friedman, Jerome, Hastie, Trevor, and Tibshirani, Robert. The elements of statistical learning, volume 1. Springer series in statistics Springer, Berlin, 2001.
Gal, Yarin and Ghahramani, Zoubin. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In ICML, 2016.
Girshick, Ross. Fast r-cnn. In ICCV, pp. 1440â1448, 2015.
Hannun, Awni, Case, Carl, Casper, Jared, Catanzaro, Bryan, Diamos, Greg, Elsen, Erich, Prenger, Ryan, Satheesh, Sanjeev, Sengupta, Shubho, Coates, Adam, et al. Deep speech: Scaling up end-to-end speech recog- nition. arXiv preprint arXiv:1412.5567, 2014.
He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Deep residual learning for image recognition. In CVPR, pp. 770â778, 2016.
Hendrycks, Dan and Gimpel, Kevin. A baseline for de- tecting misclassiï¬ed and out-of-distribution examples in neural networks. In ICLR, 2017.
Hinton, Geoffrey, Vinyals, Oriol, and Dean, Jeff. Distilling the knowledge in a neural network. 2015.
Huang, Gao, Sun, Yu, Liu, Zhuang, Sedra, Daniel, and Weinberger, Kilian. Deep networks with stochastic depth. In ECCV, 2016.
Huang, Gao, Liu, Zhuang, Weinberger, Kilian Q, and van der Maaten, Laurens. Densely connected convolu- tional networks. In CVPR, 2017.
Ioffe, Sergey and Szegedy, Christian. Batch normalization: Accelerating deep network training by reducing internal covariate shift. 2015.
Iyyer, Mohit, Manjunatha, Varun, Boyd-Graber, Jordan, and Daum´e III, Hal. Deep unordered composition rivals syntactic methods for text classiï¬cation. In ACL, 2015.
Jaynes, Edwin T. Information theory and statistical me- chanics. Physical review, 106(4):620, 1957.
Jiang, Xiaoqian, Osl, Melanie, Kim, Jihoon, and Ohno- Machado, Lucila. Calibrating predictive model estimates to support personalized medicine. Journal of the Amer- ican Medical Informatics Association, 19(2):263â274, 2012.
Kendall, Alex and Cipolla, Roberto. Modelling uncertainty in deep learning for camera relocalization. 2016.
Kendall, Alex and Gal, Yarin. What uncertainties do we need in bayesian deep learning for computer vision? arXiv preprint arXiv:1703.04977, 2017.
Krause, Jonathan, Stark, Michael, Deng, Jia, and Fei-Fei, Li. 3d object representations for ï¬ne-grained catego- rization. In IEEE Workshop on 3D Representation and Recognition (3dRR), Sydney, Australia, 2013.
Krizhevsky, Alex and Hinton, Geoffrey. Learning multiple layers of features from tiny images, 2009.
Kuleshov, Volodymyr and Ermon, Stefano. Reliable con- ï¬dence estimation via online learning. arXiv preprint arXiv:1607.03594, 2016.
Kuleshov, Volodymyr and Liang, Percy. Calibrated struc- tured prediction. In NIPS, pp. 3474â3482, 2015.
Srivastava, Rupesh Kumar, Greff, Klaus, and Schmidhuber, Jürgen. Highway networks. arXiv preprint arXiv:1505.00387, 2015.
Lakshminarayanan, Balaji, Pritzel, Alexander, and Blun- dell, Charles. Simple and scalable predictive uncer- tainty estimation using deep ensembles. arXiv preprint arXiv:1612.01474, 2016.
Tai, Kai Sheng, Socher, Richard, and Manning, Christo- Improved semantic representations from tree- pher D. structured long short-term memory networks. 2015.
LeCun, Yann, Bottou, L´eon, Bengio, Yoshua, and Haffner, Patrick. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278â 2324, 1998.
MacKay, David JC. A practical bayesian framework for backpropagation networks. Neural computation, 4(3): 448â472, 1992.
Naeini, Mahdi Pakdaman, Cooper, Gregory F, and Hauskrecht, Milos. Obtaining well calibrated probabili- ties using bayesian binning. In AAAI, pp. 2901, 2015.
Netzer, Yuval, Wang, Tao, Coates, Adam, Bissacco, Alessandro, Wu, Bo, and Ng, Andrew Y. Reading dig- its in natural images with unsupervised feature learning. In Deep Learning and Unsupervised Feature Learning Workshop, NIPS, 2011.
Niculescu-Mizil, Alexandru and Caruana, Rich. Predicting good probabilities with supervised learning. In ICML, pp. 625-632, 2005.
Pereyra, Gabriel, Tucker, George, Chorowski, Jan, Kaiser, Åukasz, and Hinton, Geoffrey. Regularizing neural networks by penalizing conï¬dent output distributions. arXiv preprint arXiv:1701.06548, 2017.
Vapnik, Vladimir N. Statistical Learning Theory. Wiley- Interscience, 1998.
Welinder, P., Branson, S., Mita, T., Wah, C., Schroff, F., Belongie, S., and Perona, P. Caltech-UCSD Birds 200. Technical Report CNS-TR-2010-001, California Insti- tute of Technology, 2010.
Wilson, Andrew G, Hu, Zhiting, Salakhutdinov, Ruslan R, and Xing, Eric P. Stochastic variational deep kernel learning. In NIPS, pp. 2586â2594, 2016a.
Wilson, Andrew Gordon, Hu, Zhiting, Salakhutdinov, Rus- lan, and Xing, Eric P. Deep kernel learning. In AISTATS, pp. 370â378, 2016b.
Xiong, Wayne, Droppo, Jasha, Huang, Xuedong, Seide, Frank, Seltzer, Mike, Stolcke, Andreas, Yu, Dong, and Zweig, Geoffrey. Achieving human parity in conversational speech recognition. arXiv preprint arXiv:1610.05256, 2016.
Zadrozny, Bianca and Elkan, Charles. Obtaining cal- ibrated probability estimates from decision trees and naive bayesian classiï¬ers. In ICML, pp. 609â616, 2001.
Zadrozny, Bianca and Elkan, Charles. Transforming classi- ï¬er scores into accurate multiclass probability estimates. In KDD, pp. 694â699, 2002.
Platt, John et al. Probabilistic outputs for support vec- tor machines and comparisons to regularized likelihood methods. Advances in large margin classiï¬ers, 10(3): 61â74, 1999.
Simonyan, Karen and Zisserman, Andrew. Very deep con- volutional networks for large-scale image recognition. In ICLR, 2015.
Zagoruyko, Sergey and Komodakis, Nikos. Wide residual networks. In BMVC, 2016.
Zhang, Chiyuan, Bengio, Samy, Hardt, Moritz, Recht, Ben- jamin, and Vinyals, Oriol. Understanding deep learning requires rethinking generalization. In ICLR, 2017.
Socher, Richard, Perelygin, Alex, Wu, Jean, Chuang, Ja- son, Manning, Christopher D., Ng, Andrew, and Potts, Christopher. Recursive deep models for semantic com- positionality over a sentiment treebank. In EMNLP, pp. 1631â1642, 2013.
Srivastava, Nitish, Hinton, Geoffrey, Krizhevsky, Alex, Sutskever, Ilya, and Salakhutdinov, Ruslan. Dropout: A simple way to prevent neural networks from overï¬tting. Journal of Machine Learning Research, 15:1929â1958, 2014.
# Supplementary Materials for: On Calibration of Modern Neural Networks
# S1. Further Information on Calibration Metrics
We can connect the ECE metric with our exact miscalibration definition, which is restated here:

$$\mathbb{E}_{\hat{P}}\Big[\,\big|\,\mathbb{P}(\hat{Y}=Y \mid \hat{P}=p) - p\,\big|\,\Big].$$
Let $F_{\hat{P}}(p)$ be the cumulative distribution function of $\hat{P}$ so that $F_{\hat{P}}(b) - F_{\hat{P}}(a) = \mathbb{P}(\hat{P} \in [a, b])$. Using the Riemann-Stieltjes integral we have
$$\begin{aligned} \mathbb{E}_{\hat{P}}\Big[\big|\mathbb{P}(\hat{Y}=Y \mid \hat{P}=p) - p\big|\Big] &= \int_0^1 \big|\mathbb{P}(\hat{Y}=Y \mid \hat{P}=p) - p\big| \, dF_{\hat{P}}(p) \\ &\approx \sum_{m=1}^{M} \big|\mathbb{P}(\hat{Y}=Y \mid \hat{P}=p_m) - p_m\big| \, \mathbb{P}(\hat{P} \in I_m), \end{aligned}$$
where $I_m$ represents the interval of bin $B_m$. $\big|\mathbb{P}(\hat{Y}=Y \mid \hat{P}=p_m) - p_m\big|$ is closely approximated by $|\mathrm{acc}(B_m) - \mathrm{conf}(B_m)|$ for $n$ large. Hence ECE using bins converges to the $M$-term Riemann-Stieltjes sum of $\mathbb{E}_{\hat{P}}\big[\,|\mathbb{P}(\hat{Y}=Y \mid \hat{P}=p) - p|\,\big]$.

# S2. Further Information on Temperature Scaling

Here we derive the temperature scaling model using the entropy maximization principle with an appropriate balanced equation.

Claim 1. Given $n$ samples' logit vectors $z_1, \ldots, z_n$ and class labels $y_1, \ldots, y_n$, temperature scaling is the unique solution $q$ to the following entropy maximization problem:

$$\begin{aligned} \max_q \quad & -\sum_{i=1}^{n} \sum_{k=1}^{K} q(z_i)^{(k)} \log q(z_i)^{(k)} \\ \text{subject to} \quad & q(z_i)^{(k)} \ge 0 \quad \forall\, i, k, \\ & \sum_{k=1}^{K} q(z_i)^{(k)} = 1 \quad \forall\, i, \\ & \sum_{i=1}^{n} z_i^{(y_i)} = \sum_{i=1}^{n} \sum_{k=1}^{K} z_i^{(k)} q(z_i)^{(k)}. \end{aligned}$$

The first two constraints ensure that $q$ is a probability distribution, while the last constraint limits the scope of distributions. Intuitively, the constraint specifies that the average true class logit is equal to the average weighted logit.

Proof. We solve this constrained optimization problem using the Lagrangian. We first ignore the constraint $q(z_i)^{(k)} \ge 0$ and later show that the solution satisfies this condition. Let $\lambda, \beta_1, \ldots, \beta_n \in \mathbb{R}$ be the Lagrangian multipliers and define

$$\mathcal{L} = \lambda \sum_{i=1}^{n} \Big( \sum_{k=1}^{K} z_i^{(k)} q(z_i)^{(k)} - z_i^{(y_i)} \Big) - \sum_{i=1}^{n} \sum_{k=1}^{K} q(z_i)^{(k)} \log q(z_i)^{(k)} + \sum_{i=1}^{n} \beta_i \Big( \sum_{k=1}^{K} q(z_i)^{(k)} - 1 \Big).$$

Taking the derivative with respect to $q(z_i)^{(k)}$ gives

$$\frac{\partial}{\partial q(z_i)^{(k)}} \mathcal{L} = -nK - \log q(z_i)^{(k)} + \lambda z_i^{(k)} + \beta_i.$$

Setting the gradient of the Lagrangian $\mathcal{L}$ to 0 and rearranging gives

$$q(z_i)^{(k)} = e^{\lambda z_i^{(k)} + \beta_i - nK}.$$

Since $\sum_{k=1}^{K} q(z_i)^{(k)} = 1$ for all $i$, we must have

$$q(z_i)^{(k)} = \frac{e^{\lambda z_i^{(k)}}}{\sum_{j=1}^{K} e^{\lambda z_i^{(j)}}},$$

which recovers the temperature scaling model by setting $T = \frac{1}{\lambda}$.

Figure S1 visualizes Claim 1. We see that, as training continues, the model begins to overfit with respect to NLL (red line). This results in a low-entropy softmax distribution over classes (blue line), which explains the model's overconfidence. Temperature scaling not only lowers the NLL but also raises the entropy of the distribution (green line).
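As a quick numerical sanity check of the conclusion of Claim 1, normalizing $\exp(\lambda z)$ over classes gives exactly the temperature-scaled softmax with $T = 1/\lambda$; the short script below verifies this on random logits (all values are arbitrary).

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.normal(size=(4, 5))                      # a few random logit vectors
lam = 0.7                                        # Lagrange multiplier lambda

q_lagrangian = np.exp(lam * z)
q_lagrangian /= q_lagrangian.sum(axis=1, keepdims=True)

T = 1.0 / lam
q_temperature = np.exp(z / T)
q_temperature /= q_temperature.sum(axis=1, keepdims=True)

print(np.allclose(q_lagrangian, q_temperature))  # True
```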
# S3. Additional Tables
Tables S1, S2, and S3 display the MCE, test error, and NLL for all the experimental settings outlined in Section 5.
[Figure S1: Entropy vs. NLL on CIFAR-100; curves show entropy and NLL before and after calibration and the optimal T selected, plotted over training epochs.]
Figure S1. Entropy and NLL for CIFAR-100 before and after calibration. The optimal T selected by temperature scaling rises throughout optimization, as the pre-calibration entropy decreases steadily. The post-calibration entropy and NLL on the validation set coincide (which can be derived from the gradient optimality condition of T ).
Dataset Model Uncalibrated Hist. Binning Isotonic BBQ Temp. Scaling Vector Scaling Matrix Scaling Birds Cars CIFAR-10 CIFAR-10 CIFAR-10 CIFAR-10 CIFAR-10 CIFAR-100 CIFAR-100 CIFAR-100 CIFAR-100 CIFAR-100 ImageNet ImageNet SVHN ResNet 50 ResNet 50 ResNet 110 ResNet 110 (SD) Wide ResNet 32 DenseNet 40 LeNet 5 ResNet 110 ResNet 110 (SD) Wide ResNet 32 DenseNet 40 LeNet 5 DenseNet 161 ResNet 152 ResNet 152 (SD) 30.06% 41.55% 33.78% 34.52% 27.97% 22.44% 8.02% 35.5% 26.42% 33.11% 21.52% 10.25% 14.07% 12.2% 19.36% 25.35% 5.16% 26.87% 17.0% 12.19% 7.77% 16.49% 7.03% 9.12% 6.22% 9.36% 18.61% 13.14% 14.57% 11.16% 16.59% 11.72% 15.23% 9.31% 7.8% 72.64% 16.45% 19.26% 6.19% 9.22% 19.54% 14.57% 18.34% 82.35% 10.36% 10.9% 10.95% 9.12% 14.87% 11.88% 10.59% 8.67% 3.64% 9.96% 11.57% 10.96% 8.74% 8.85% 18.67% 9.09% 9.08% 20.23% 8.56% 15.45% 9.11% 4.58% 5.14% 4.74% 8.85% 5.33% 19.4% 5.22% 12.29% 12.29% 18.05% 9.81% 8.59% 27.39% 15.55% 4.43% 3.17% 19.39% 2.5% 8.85% 6.31% 8.82% 8.65% 9.61% 9.61% 30.78% 38.67% 29.65% 22.89% 10.74% 9.65% 4.36% 16.89% 45.62% 35.6% 44.73% 38.64% 18.77% - - 18.76% 20 News Reuters SST Binary SST Fine Grained DAN 3 DAN 3 TreeLSTM TreeLSTM 17.03% 14.01% 21.66% 27.85% 10.47% 16.78% 3.22% 28.35% 6.28% 9.13% 44.95% 36.18% 13.91% 36.43% 8.67% 19.0% 8.21% 25.46% 6.03% 44.75% 8.24% 18.88% 6.03% 11.47% 17.43% 19.39% 6.03% 11.78%
Table S1. MCE (%) (with M = 15 bins) on standard vision and NLP datasets before calibration and with various calibration methods. The number following a modelâs name denotes the network depth. MCE seems very sensitive to the binning scheme and is less suited for small test sets.
# S4. Additional Reliability Diagrams
We include reliability diagrams for additional datasets: CIFAR-10 (Figure S2) and SST (Figure S3 and Figure S4). Note that, as mentioned in Section 2, the reliability dia-
grams do not represent the proportion of predictions that belong to a given bin.
Dataset Model Uncalibrated Hist. Binning Isotonic BBQ Temp. Scaling Vector Scaling Matrix Scaling Birds Cars CIFAR-10 CIFAR-10 CIFAR-10 CIFAR-10 CIFAR-10 CIFAR-100 CIFAR-100 CIFAR-100 CIFAR-100 CIFAR-100 ImageNet ImageNet SVHN ResNet 50 ResNet 50 ResNet 110 ResNet 110 (SD) Wide ResNet 32 DenseNet 40 LeNet 5 ResNet 110 ResNet 110 (SD) Wide ResNet 32 DenseNet 40 LeNet 5 DenseNet 161 ResNet 152 ResNet 152 (SD) 22.54% 14.28% 6.21% 5.64% 6.96% 5.91% 15.57% 27.83% 24.91% 28.0% 26.45% 44.92% 22.57% 22.31% 1.98% 55.02% 16.24% 6.45% 5.59% 7.3% 6.12% 15.63% 34.78% 33.78% 34.29% 34.78% 54.06% 48.32% 48.1% 2.06% 23.37% 37.76% 14.9% 19.25% 6.25% 6.36% 5.55% 5.62% 7.35% 7.01% 5.96% 6.0% 15.69% 15.64% 28.41% 28.56% 25.42% 25.17% 28.61% 29.08% 26.73% 26.4% 45.77% 46.82% 23.2% 47.58% 22.94% 47.6% 2.04% 2.04% 22.54% 14.28% 6.21% 5.64% 6.96% 5.91% 15.57% 27.83% 24.91% 28.0% 26.45% 44.92% 22.57% 22.31% 1.98% 22.99% 14.15% 6.37% 5.62% 7.1% 5.96% 15.53% 27.82% 24.99% 28.45% 26.25% 45.53% 22.54% 22.56% 2.0% 29.51% 17.98% 6.42% 5.69% 7.27% 6.0% 15.81% 38.77% 35.09% 37.4% 36.14% 52.44% - - 2.08% 20 News Reuters SST Binary SST Fine Grained DAN 3 DAN 3 TreeLSTM TreeLSTM 20.06% 2.97% 11.81% 49.5% 25.12% 7.81% 12.08% 49.91% 20.29% 20.81% 3.52% 3.93% 11.75% 11.26% 48.55% 49.86% 20.06% 2.97% 11.81% 49.5% 19.89% 2.83% 11.81% 49.77% 22.0% 3.52% 11.81% 48.51%
Table S2. Test error (%) on standard vision and NLP datasets before calibration and with various calibration methods. The number following a modelâs name denotes the network depth. Error with temperature scaling is exactly the same as uncalibrated.
Dataset Model Uncalibrated Hist. Binning Isotonic BBQ Temp. Scaling Vector Scaling Matrix Scaling Birds Cars CIFAR-10 CIFAR-10 CIFAR-10 CIFAR-10 CIFAR-10 CIFAR-100 CIFAR-100 CIFAR-100 CIFAR-100 CIFAR-100 ImageNet ImageNet SVHN ResNet 50 ResNet 50 ResNet 110 ResNet 110 (SD) Wide ResNet 32 DenseNet 40 LeNet 5 ResNet 110 ResNet 110 (SD) Wide ResNet 32 DenseNet 40 LeNet 5 DenseNet 161 ResNet 152 ResNet 152 (SD) 0.9786 0.5488 0.3285 0.2959 0.3293 0.2228 0.4688 1.4978 1.1157 1.3434 1.0134 1.6639 0.9338 0.8961 0.0842 1.6226 0.7977 0.2532 0.2027 0.2778 0.212 0.529 1.4379 1.1985 1.4499 1.2156 2.2574 1.4716 1.4507 0.1137 1.4128 0.8793 0.2237 0.1867 0.2428 0.1969 0.4757 1.207 1.0317 1.2086 1.0615 1.8173 1.1912 1.1859 0.095 1.2539 0.6986 0.263 0.2159 0.2774 0.2087 0.4984 1.5466 1.1982 1.459 1.1572 1.9893 1.4272 1.3987 0.1062 0.8792 0.5311 0.2102 0.1718 0.2283 0.1750 0.459 1.0442 0.8613 1.0565 0.9026 1.6560 0.8885 0.8657 0.0821 0.9021 0.5299 0.2088 0.1709 0.2275 0.1757 0.4568 1.0485 0.8655 1.0648 0.9011 1.6648 0.8879 0.8742 0.0844 2.334 1.0206 0.2048 0.1766 0.2229 0.176 0.4607 2.5637 1.8182 2.5507 1.9639 2.1405 - - 0.0924 20 News Reuters SST Binary SST Fine Grained DAN 3 DAN 3 TreeLSTM TreeLSTM 0.7949 0.102 0.3367 1.1475 1.0499 0.2403 0.2842 1.1717 0.8968 0.1475 0.2908 1.1661 0.9519 0.1167 0.2778 1.149 0.7387 0.0994 0.2739 1.1168 0.7296 0.0990 0.2739 1.1085 0.9089 0.1491 0.2739 1.1112
Table S3. NLL (%) on standard vision and NLP datasets before calibration and with various calibration methods. The number following a modelâs name denotes the network depth. To summarize, NLL roughly follows the trends of ECE.
[Figure S2 panels (left to right): Uncalibrated, Temperature Scaling, Histogram Binning, and Isotonic Regression reliability diagrams for ResNet-110 (SD) on CIFAR-10; x-axis: confidence, y-axis: accuracy.]
Figure S2. Reliability diagrams for CIFAR-10 before (far left) and after calibration (middle left, middle right, far right).
[Figure S3 panels (left to right): Uncalibrated, Temperature Scaling, Histogram Binning, and Isotonic Regression reliability diagrams for the Tree LSTM on SST Fine Grained; x-axis: confidence, y-axis: accuracy.]
Figure S3. Reliability diagrams for SST Binary and SST Fine Grained before (far left) and after calibration (middle left, middle right, far right).
[Figure S4 panels (left to right): Uncalibrated, Temperature Scaling, Histogram Binning, and Isotonic Regression reliability diagrams for the Tree LSTM on SST Binary; x-axis: confidence, y-axis: accuracy.]
Figure S4. Reliability diagrams for SST Binary and SST Fine Grained before (far left) and after calibration (middle left, middle right, far right). | {
"id": "1610.08936"
} |
1706.03872 | Six Challenges for Neural Machine Translation | We explore six challenges for neural machine translation: domain mismatch,
amount of training data, rare words, long sentences, word alignment, and beam
search. We show both deficiencies and improvements over the quality of
phrase-based statistical machine translation. | http://arxiv.org/pdf/1706.03872 | Philipp Koehn, Rebecca Knowles | cs.CL | 12 pages; First Workshop on Neural Machine Translation, 2017 | null | cs.CL | 20170612 | 20170612 |
# Six Challenges for Neural Machine Translation
# Philipp Koehn Computer Science Department Johns Hopkins University phi@jhu.edu
Rebecca Knowles Computer Science Department Johns Hopkins University rknowles@jhu.edu
# Abstract
We explore six challenges for neural machine translation: domain mismatch, amount of training data, rare words, long sentences, word alignment, and beam search. We show both deï¬ciencies and improvements over the quality of phrase- based statistical machine translation.
# 1 Introduction
Neural machine translation has emerged as the most promising machine translation approach in recent years, showing superior performance on public benchmarks (Bojar et al., 2016) and rapid adoption in deployments by, e.g., Google (Wu et al., 2016), Systran (Crego et al., 2016), and WIPO (Junczys-Dowmunt et al., 2016). But there have also been reports of poor performance, such as the systems built under low-resource conditions in the DARPA LORELEI program.1

In this paper, we examine a number of challenges to neural machine translation (NMT) and give empirical results on how well the technology currently holds up, compared to traditional statistical machine translation (SMT).

We find that:

1. NMT systems have lower quality out of domain, to the point that they completely sacrifice adequacy for the sake of fluency.

2. NMT systems have a steeper learning curve with respect to the amount of training data, resulting in worse quality in low-resource settings, but better performance in high-resource settings.

3. NMT systems that operate at the sub-word level (e.g. with byte-pair encoding) perform better than SMT systems on extremely low-frequency words, but still show weakness in translating low-frequency words belonging to highly-inflected categories (e.g. verbs).

4. NMT systems have lower translation quality on very long sentences, but do comparably better up to a sentence length of about 60 words.

5. The attention model for NMT does not always fulfill the role of a word alignment model, but may in fact dramatically diverge.

6. Beam search decoding only improves translation quality for narrow beams and deteriorates when exposed to a larger search space.

We note a 7th challenge that we do not examine empirically: NMT systems are much less interpretable. The answer to the question of why the training data leads these systems to decide on specific word choices during decoding is buried in large matrices of real-numbered values. There is a clear need to develop better analytics for NMT.

Previous work has taken a closer look at the comparable performance of NMT and SMT systems. Bentivogli et al. (2016) considered different linguistic categories for English-German and Toral and Sánchez-Cartagena (2017) compared different broad aspects such as fluency and reordering for nine language directions.

1https://www.nist.gov/itl/iad/mig/lorehlt16-evaluations
# 2 Experimental Setup
We use common toolkits for neural machine trans- lation (Nematus) and traditional phrase-based sta- tistical machine translation (Moses) with common data sets, drawn from WMT and OPUS.
# 2.1 Neural Machine Translation
While a variety of neural machine transla- tion approaches were initially proposed â such as the use of convolutional neural networks (Kalchbrenner and Blunsom, 2013) â practically all recent work has been focused on the attention- based encoder-decoder model (Bahdanau et al., 2015).
We use the toolkit Nematus2 (Sennrich et al., 2017) which has been shown to give state-of-the- art results (Sennrich et al., 2016a) at the WMT 2016 evaluation campaign (Bojar et al., 2016).
Unless noted otherwise, we use default settings, such as beam search and single model decoding. The training data is processed with byte-pair en- coding (Sennrich et al., 2016b) into subwords to ï¬t a 50,000 word vocabulary limit.
# 2.2 Statistical Machine Translation
Our machine translation systems are trained us- ing Moses3 (Koehn et al., 2007). We build phrase- based systems using standard features that are commonly used in recent system submissions to WMT (Williams et al., 2016; Ding et al., 2016a). While we use the shorthand SMT for these phrase-based systems, we note that there are other statistical machine translation approaches such as hierarchical phrase-based models (Chiang, 2007) and syntax-based models (Galley et al., 2004, 2006) that have been shown to give superior per- formance for language pairs such as Chineseâ English and GermanâEnglish.
# 2.3 Data Conditions
We carry out our experiments on EnglishâSpanish and GermanâEnglish. For these language pairs, large training data sets are available. We use datasets from the shared translation task organized alongside the Conference on Machine Translation (WMT)4. For the domain experiments, we use the OPUS corpus5 (Tiedemann, 2012).
Except for the domain experiments, we use the WMT test sets composed of news stories, which are characterized by a broad range of topic, for- mal language, relatively long sentences (about 30 words on average), and high standards for gram- mar, orthography, and style.
2https://github.com/rsennrich/nematus/ 3http://www.statmt.org/moses/ 4http://www.statmt.org/wmt17/ 5http://opus.lingfil.uu.se/
| Corpus | Words | Sentences | W/S |
|---|---|---|---|
| Law (Acquis) | 18,128,173 | 715,372 | 25.3 |
| Medical (EMEA) | 14,301,472 | 1,104,752 | 12.9 |
| IT | 3,041,677 | 337,817 | 9.0 |
| Koran (Tanzil) | 9,848,539 | 480,421 | 20.5 |
| Subtitles | 114,371,754 | 13,873,398 | 8.2 |
Table 1: Corpora used to train domain-speciï¬c systems, IT corpora are GNOME, KDE, PHP, Ubuntu, and OpenOfï¬ce.
# 3 Challenges
# 3.1 Domain Mismatch
A known challenge in translation is that in dif- ferent domains,6 words have different transla- tions and meaning is expressed in different styles. Hence, a crucial step in developing machine trans- lation systems targeted at a speciï¬c use case is domain adaptation. We expect that methods for domain adaptation will be developed for NMT. A currently popular approach is to train a general do- main system, followed by training on in-domain data for a few epochs (Luong and Manning, 2015; Freitag and Al-Onaizan, 2016).
Often, large amounts of training data are only available out of domain, but we still seek to have robust performance. To test how well NMT and SMT hold up, we trained ï¬ve different sys- tems using different corpora obtained from OPUS (Tiedemann, 2012). An additional system was trained on all the training data. Statistics about corpus sizes are shown in Table 1. Note that these domains are quite distant from each other, much more so than, say, Europarl, TED Talks, News Commentary, and Global Voices.
We trained both SMT and NMT systems for all domains. All systems were trained for German- English, with tuning and test sets sub-sampled from the data (these were not used in training). A common byte-pair encoding is used for all training runs.
See Figure 1 for results. While the in-domain NMT and SMT systems are similar (NMT is better for IT and Subtitles, SMT is better for Law, Med- ical, and Koran), the out-of-domain performance for the NMT systems is worse in almost all cases, sometimes dramatically so. For instance the Med-
6We use the customary deï¬nition of domain in machine translation: a domain is deï¬ned by a corpus from a speciï¬c source, and may differ from other domains in topic, genre, style, level of formality, etc.
System â Law Medical IT Koran Subtitles All Data 30.5 32.8 45.1 42.2 35.3 44.7 17.9 17.9 26.4 20.8 2.2 1.3 6.9 18.2 12.1 34.4 31.1 Law 3.5 2.8 6.0 2.0 0.6 8.5 43.5 39.4 10.2 3.9 Medical 2.0 1.4 5.8 6.5 5.3 1.9 42.1 39.8 IT 1.8 1.6 3.7 3.9 4.7 0.0 2.1 0.4 0.0 2.3 Koran 15.9 18.8 1.8 1.0 5.5 9.3 17.8 25.9 9.2 13.6 7.0 9.0 8.4 Subtitles 9.9 22.1
Figure 1: Quality of systems (BLEU), when trained on one domain (rows) and tested on another domain (columns). Comparably, NMT systems (left bars) show more degraded performance out of domain.
ical system leads to a BLEU score of 3.9 (NMT) vs. 10.2 (SMT) on the Law test set.
Figure 2 displays an example. When translating the sentence Schaue um dich herum. (reference: Look around you.) from the Subtitles corpus, we see mostly non-sensical and completely unre- lated output from the NMT system. For instance, the translation from the IT system is Switches to paused.
Note that the output of the NMT system is often quite ï¬uent (e.g., Take heed of your own souls.) but completely unrelated to the input, while the SMT output betrays its difï¬culties with coping with the out-of-domain input by leaving some words untranslated (e.g., Schaue by dich around.). This is of particular concern when MT is used for information gisting â the user will be mislead by hallucinated content in the NMT output.
Source: Schaue um dich herum.
Ref.: Look around you.
All: NMT: Look around you. SMT: Look around you.
Law: NMT: Sughum gravecorn. SMT: In order to implement dich Schaue .
Medical: NMT: EMEA / MB / 049 / 01-EN-Final Work progamme for 2002 SMT: Schaue by dich around .
IT: NMT: Switches to paused. SMT: To Schaue by itself .
Koran: NMT: Take heed of your own souls. SMT: And you see.
Subtitles: NMT: Look around you. SMT: Look around you .

Figure 2: Examples for the translation of a sentence from the Subtitles corpus, when translated with systems trained on different corpora. Performance out-of-domain is dramatically worse for NMT.

# 3.2 Amount of Training Data
# BLEU Scores with Varying Amounts of Training Data
30 20 21.8 16.4 23.4 18.1 24.9 19.6 26.2 26.9 21.2 22.2 18.2 27.9 28.6 29.2 29.6 27.4 29.2 25.7 26.1 26.9 23.5 24.7 22.4 30.3 31.1 30.1 30.4 27.8 28.6 14.7 11.9 10 7.2 1.6 Phrase-Based with Big LM Phrase-Based Neural 0 106 107 108 Corpus Size (English Words)
Figure 3: BLEU scores for English-Spanish sys- tems trained on 0.4 million to 385.7 million words of parallel data. Quality for NMT starts much lower, outperforms SMT at about 15 mil- lion words, and even beats a SMT system with a big 2 billion word in-domain language model un- der high-resource conditions.
How do the data needs of SMT and NMT com- pare? NMT promises both to generalize better (ex- ploiting word similary in embeddings) and condi- tion on larger context (entire input and all prior output words).
We built English-Spanish systems on WMT data,7 about 385.7 million English words paired with Spanish. To obtain a learning curve, we used 1/1024, 1/512, ..., 1/2, and all of the data. For SMT, the language model was trained on the Spanish part of each subset, respectively. In addition to a NMT and SMT system trained on each subset, we also used all additionally provided monolingual data for a big language model in contrastive SMT systems.
Results are shown in Figure 3. NMT exhibits a much steeper learning curve, starting with abysmal results (BLEU score of 1.6 vs. 16.4 for 1/1024 of the data), outperforming SMT 25.7 vs. 24.7 with 1/16 of the data (24.1 million words), and even beating the SMT system with a big language model with the full data set (31.1 for NMT, 28.4 for SMT, 30.4 for SMT+BigLM).
7Spanish was last represented in 2013, we used data from http://statmt.org/wmt13/translation-task.html
Src: A Republican strategy to counter the re-election of Obama
1/1024: Un órgano de coordinación para el anuncio de libre determinación
1/512: Lista de una estrategia para luchar contra la elección de hojas de Ohio
1/256: Explosión realiza una estrategia divisiva de luchar contra las elecciones de autor
1/128: Una estrategia republicana para la eliminación de la reelección de Obama
1/64: Estrategia siria para contrarrestar la reelección del Obama .
1/32 +: Una estrategia republicana para contrarrestar la reelección de Obama
Figure 4: Translations of the ï¬rst sentence of the test set using NMT system trained on varying amounts of training data. Under low resource con- ditions, NMT produces ï¬uent output unrelated to the input.
The contrast between the NMT and SMT learn- ing curves is quite striking. While NMT is able to exploit increasing amounts of training data more effectively, it is unable to get off the ground with training corpus sizes of a few million words or less.
With 1/1024 of the training data, the output is completely unrelated to the input, some key words are properly translated with 1/256 of the data (estrategia for strategy, elección or elecciones for election), and starting with 1
# 3.3 Rare Words
Conventional wisdom states that neural machine translation models perform particularly poorly on rare words (Luong et al., 2015; Sennrich et al., 2016b; Arthur et al., 2016), due in part to the smaller vocabularies used by NMT systems. We examine this claim by comparing performance on rare word translation between NMT and SMT systems of similar quality for German-English and find that NMT systems actually outperform SMT systems on translation of very infrequent words. However, both NMT and SMT systems do continue to have difficulty translating some infrequent words, particularly those belonging to highly-inflected categories.
For the neural machine translation model, we use a publicly available model8 with the train- ing settings of Edinburghâs WMT submission (Sennrich et al., 2016a). This was trained using
# 8https://github.com/rsennrich/wmt16-scripts/
Figure 5: Precision of translation and deletion rates by source word type. SMT (light blue) and NMT (dark green). The horizontal axis represents the corpus frequency of the source types, with the axis labels showing the upper end of the bin. Bin width is proportional to the number of word types in that frequency range. The upper part of the graph shows the precision averaged across all word types in the bin. The lower part shows the proportion of source tokens in the bin that were deleted.
Nematus9 (Sennrich et al., 2017), with byte-pair encodings (Sennrich et al., 2016b) to allow for open-vocabulary NMT.
The SMT system that we used was trained using Moses (Koehn et al., 2007), and the training data and parameters match those described in Johns Hopkins University's submission to the WMT shared task (Ding et al., 2016b).
Both models have case-sensitive BLEU scores of 34.5 on the WMT 2016 news test set (for the NMT model, this reflects the BLEU score resulting from translation with a beam size of 1). We use a single corpus for computing our lexical frequency counts (a concatenation of Common Crawl, Europarl, and News Commentary).
We follow the approach described by Koehn and Haddow (2012) for examining the effect of source word frequency on translation accuracy.10
The overall average precision is quite similar between the NMT and SMT systems, with the SMT system scoring 70.1% overall and the NMT system scoring 70.3%. This reflects the similar overall quality of the MT systems. Figure 5 gives a detailed breakdown. The values above the horizontal axis represent precisions, while the lower portion represents what proportion of the words were deleted. The first item of note is that the NMT system has an overall higher proportion of deleted words. Of the 64379 words examined, the NMT system is estimated to have deleted 3769 of them, while the SMT system deleted 2274. Both the NMT and SMT systems delete very frequent and very infrequent words at higher proportions than words that fall into the middle range. Across frequencies, the NMT systems delete a higher proportion of words than the SMT system does. (The related issue of translation length is discussed in more detail in Section 3.4.)
9https://github.com/rsennrich/nematus/ 10First, we automatically align the source sentence and the machine translation output. We use fast-align (Dyer et al., 2013) to align the full training corpus (source and reference) along with the test source and MT output. We use the suggested standard options for alignment and then symmetrize the alignment with grow-diag-final-and.
The next interesting observation is what happens with unknown words (words which were never observed in the training corpus). The SMT system translates these correctly 53.2% of the time, while the NMT system translates them correctly 60.1% of the time.
Each source word is either unaligned ("dropped") or aligned to one or more target language words. For each target word to which the source word is aligned, we check if that target word appears in the reference translation. If the target word appears the same number of times in the MT output as in the reference, we award that alignment a score of one. If the target word appears more times in the MT output than in the reference, we award fractional credit. If the target word does not appear in the reference, we award zero credit. We then average these scores over the full set of target words aligned to the given source word to compute the precision for that source word. Source words can then be binned by frequency and average translation precisions can be computed.
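To make this procedure concrete, the following is a minimal sketch (not the authors' code) of the per-source-word precision computation described in this footnote; the function name, the `alignments` mapping from source positions to aligned MT output tokens, and the token lists are assumed data structures.

```python
from collections import Counter, defaultdict

def source_word_precision(source_tokens, mt_tokens, ref_tokens, alignments):
    """alignments maps a source position to the list of MT output tokens it is aligned to."""
    mt_counts, ref_counts = Counter(mt_tokens), Counter(ref_tokens)
    scores = defaultdict(list)   # source word -> per-occurrence precisions (None = dropped)
    for pos, src in enumerate(source_tokens):
        aligned = alignments.get(pos, [])
        if not aligned:
            scores[src].append(None)                # unaligned source word counts as deleted
            continue
        credit = []
        for tgt in aligned:
            if ref_counts[tgt] == 0:
                credit.append(0.0)                  # aligned target word absent from the reference
            else:                                   # fractional credit if the MT output overproduces it
                credit.append(min(ref_counts[tgt] / mt_counts[tgt], 1.0))
        scores[src].append(sum(credit) / len(credit))
    return scores   # scores can then be pooled into corpus-frequency bins and averaged
```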
Label          Unobserved   Observed Once
Adjective           4             10
Named Entity       40             42
Noun               35             35
Number             12              4
Verb                3              6
Other               6              3
Table 2: Breakdown of the first 100 tokens that were unobserved in training or observed once in training, by hand-annotated category.
This is reflected in Figure 5, where the SMT system shows a steep curve up from the unobserved words, while the NMT system does not see a great jump.
Both SMT and NMT systems actually have their worst performance on words that were observed a single time in the training corpus, dropping to 48.6% and 52.2%, respectively; even worse than for unobserved words. Table 2 shows a breakdown of the categories of words that were unobserved in the training corpus or observed only once. The most common categories across both are named entity (including entity and location names) and nouns. The named entities can often be passed through unchanged (for example, the surname "Elabdellaoui" is broken into "E@@ lab@@ d@@ ell@@ a@@ oui" by the byte-pair encoding and is correctly passed through unchanged by both the NMT and SMT systems). Many of the nouns are compound nouns; when these are correctly translated, it may be attributed to compound-splitting (SMT) or byte-pair encoding (NMT). The factored SMT system also has access to the stemmed form of words, which can also play a similar role to byte-pair encoding in enabling translation of unobserved inflected forms (e.g. adjectives, verbs). Unsurprisingly, there are many numbers that were unobserved in the training data; these tend to be translated correctly (with occasional errors due to formatting of commas and periods, resolvable by post-processing).
The categories which involve more extensive inflection (adjectives and verbs) are arguably the most interesting. Adjectives and verbs have worse accuracy rates and higher deletion rates than nouns across most word frequencies. We show examples in Figure 6 of situations where the NMT system succeeds and fails, and contrast it with the failures of the SMT system. In Example 1, the NMT system successfully translates the unobserved adjective choreographiertes (choreographed), while
Source:    (1) ... choreographiertes Gesamtkunstwerk ...   (2) ... die Polizei ihn einkesselte.
Byte-pair: (1) chore@@ ograph@@ iertes                      (2) ein@@ kes@@ sel@@ te
NMT:       (1) ... choreographed overall artwork ...        (2) ... police stabbed him.
SMT:       (1) ... choreographiertes total work of art ...  (2) ... police einkesselte him.
Reference: (1) ... choreographed complete work of art ...   (2) ... police closed in on him.
Figure 6: Examples of words that were unobserved in the training corpus, their byte-pair encodings, and their translations.
the SMT system does not. In Example 2, the SMT system simply passes the German verb einkesselte (closed in on) unchanged into the output, while the NMT system fails silently, selecting the fluent-sounding but semantically inappropriate "stabbed" instead.
While there remains room for improvement, NMT systems (at least those using byte-pair encoding) perform better on very low-frequency words than SMT systems do. Byte-pair encoding is sometimes sufficient (much like stemming or compound-splitting) to allow the successful translation of rare words even though it does not necessarily split words at morphological boundaries. As with the fluent-sounding but semantically inappropriate examples from domain-mismatch, NMT may sometimes fail similarly when it encounters unknown words even in-domain.
# 3.4 Long Sentences
A well-known flaw of early encoder-decoder NMT models was the inability to properly translate long sentences (Cho et al., 2014; Pouget-Abadie et al., 2014). The introduction of the attention model remedied this problem somewhat. But how well?
We used the large English-Spanish system from the learning curve experiments (Section 3.2), and used it to translate a collection of news test sets from the WMT shared tasks. We broke up these sets into buckets based on source sentence length (1-9 subword tokens, 10-19 subword tokens, etc.) and computed corpus-level BLEU scores for each. Figure 7 shows the results. While overall NMT is better than SMT, the SMT system outperforms NMT on sentences of length 60 and higher. Qual- ity for the two systems is relatively close, except for the very long sentences (80 and more tokens). The quality of the NMT system is dramatically
BLEU Scores with Varying Sentence Length
Figure 7: Quality of translations based on sen- tence length. SMT outperforms NMT for sen- tences longer than 60 subword tokens. For very long sentences (80+) quality is much worse due to too short output.
lower for these since it produces too short translations (length ratio 0.859, as opposed to 1.024).
# 3.5 Word Alignment
The key contribution of the attention model in neural machine translation (Bahdanau et al., 2015) was the imposition of an alignment of the output words to the input words. This takes the shape of a probability distribution over the input words which is used to weigh them in a bag-of-words representation of the input sentence.
Arguably, this attention model does not functionally play the role of a word alignment between the source and the target, at least not in the same way as its analog in statistical machine translation. While in both cases, alignment is a latent variable that is used to obtain probability distributions over words or phrases, arguably the attention model has a broader role. For instance, when translating a verb, attention may also be paid to its subject and object since these may disambiguate it. To further complicate matters, the word representations are products of bidirectional gated recurrent neural networks that have the effect that each word representation is informed by the entire sentence context.
But there is a clear need for an alignment mechanism between source and target words. For instance, prior work used the alignments provided by the attention model to interpolate word translation decisions with traditional probabilistic
Figure 8: Word alignment for English–German: comparing the attention model states (green boxes with probability in percent if over 10) with alignments obtained from fast-align (blue outlines).
dictionaries (Arthur et al., 2016), for the introduction of coverage and fertility models (Tu et al., 2016), etc.
But is the attention model in fact the proper means? To examine this, we compare the soft alignment matrix (the sequence of attention vectors) with word alignments obtained by traditional word alignment methods. We use incremental fast-align (Dyer et al., 2013) to align the input and output of the neural machine translation system.
See Figure 8 for an illustration. We compare the word attention states (green boxes) with the word alignments obtained with fast align (blue outlines). For most words, these match up pretty well. Both attention states and fast-align align- ment points are a bit fuzzy around the function words have-been/sind.
However, the attention model may settle on alignments that do not correspond with our intuition or alignment points obtained with fast-align. See Figure 9 for the reverse language direction, German–English. All the alignment points appear to be off by one position. We are not aware of any intuitive explanation for this divergent behavior; the translation quality is high for both systems.
We measure how well the soft alignment (attention model) of the NMT system matches the alignments of fast-align with two metrics:
⢠a match score that checks for each output if the aligned input word according to fast-
Figure 9: Mismatch between attention states and desired word alignments (German–English).
align is indeed the input word that received the highest attention probability, and
⢠a probability mass score that sums up the probability mass given to each alignment point obtained from fast-align.
In these scores, we have to handle byte pair encoding and many-to-many alignments.11
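As an illustration only, the sketch below computes the two scores for a single sentence under simplifying assumptions: `attention` is the output-by-input matrix of attention weights, `fast_align_links` is a set of (output position, input position) alignment points, and the byte-pair-encoding bookkeeping of footnote 11 is omitted.

```python
import numpy as np

def alignment_scores(attention, fast_align_links):
    aligned_inputs = {}                       # output position -> aligned input positions
    for o, i in fast_align_links:
        aligned_inputs.setdefault(o, set()).add(i)
    matches, prob_mass, scored = 0, 0.0, 0
    for o in range(attention.shape[0]):
        if o not in aligned_inputs:           # output words without an alignment point are ignored
            continue
        scored += 1
        links = aligned_inputs[o]
        top = set(np.argsort(-attention[o])[:len(links)])
        if links <= top:                      # all aligned inputs are among the top attended inputs
            matches += 1
        prob_mass += attention[o, list(links)].sum()
    return matches / scored, prob_mass / scored

# Example: a 2x3 attention matrix and two alignment points.
att = np.array([[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]])
print(alignment_scores(att, {(0, 0), (1, 1)}))   # -> (1.0, 0.75)
```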
In our experiment, we use the neural machine translation models provided by Edinburgh12 (Sennrich et al., 2016a). We run fast-align on the same parallel data sets to obtain alignment models and used them to align the input and output of the NMT system. Table 3 shows alignment scores for the systems. The results suggest that, while drastic, the divergence for German–English is an outlier. We note, however, that we have seen such a large divergence also under different data conditions.
11(1) NMT operates on subwords, but fast-align is run on full words. (2) If an input word is split into subwords by byte pair encoding, then we add their attention scores. (3) If an output word is split into subwords, then we take the average of their attention vectors. (4) The match scores and probability mass scores are computed as average over output word-level scores. (5) If an output word has no fast-align alignment point, it is ignored in this computation. (6) If an output word is fast-aligned to multiple input words, then (6a) for the match score: count it as correct if the n aligned words among the top n highest scoring words according to attention and (6b) for the probability mass score: add up their attention scores.
# 12https://github.com/rsennrich/wmt16-scripts
Language Pair       Match    Prob.
German–English      14.9%    16.0%
English–German      77.2%    63.2%
Czech–English       78.0%    63.3%
English–Czech       76.1%    59.7%
Russian–English     72.5%    65.0%
English–Russian     73.4%    64.1%
Table 3: Scores indicating overlap between attention probabilities and alignments obtained with fast-align.
Note that the attention model may produce bet- ter word alignments by guided alignment training (Chen et al., 2016; Liu et al., 2016) where super- vised word alignments (such as the ones produced by fast-align) are provided to model training.
# 3.6 Beam Search
The task of decoding is to find the full sentence translation with the highest probability. In statistical machine translation, this problem has been addressed with heuristic search techniques that explore a subset of the space of possible translations. A common feature of these search techniques is a beam size parameter that limits the number of partial translations maintained per input word.
There is typically a straightforward relationship between this beam size parameter and the model score of resulting translations and also their qual- ity score (e.g., BLEU). While there are dimin- ishing returns for increasing the beam parameter, typically improvements in these scores can be ex- pected with larger beams.
Decoding in neural translation models can be set up in similar fashion. When predicting the next output word, we may not only commit to the highest scoring word prediction but also maintain the next best scoring words in a list of partial translations. We record with each partial translation the word translation probabilities (obtained from the softmax), extend each partial translation with subsequent word predictions and accumulate these scores. Since the number of partial translations explodes exponentially with each new output word, we prune them down to a beam of highest scoring partial translations.
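The following is a schematic beam search in this spirit, not the decoder used in the experiments; `next_token_logprobs` stands in for the model's softmax over the vocabulary and is an assumed interface.

```python
def beam_search(next_token_logprobs, beam_size, max_len, eos_id):
    # next_token_logprobs(prefix) -> iterable of (token_id, log_prob); a stand-in for the model softmax.
    beams = [([], 0.0)]                                   # (partial translation, accumulated log prob)
    finished = []
    for _ in range(max_len):
        candidates = []
        for prefix, score in beams:
            for token, logp in next_token_logprobs(prefix):
                candidates.append((prefix + [token], score + logp))
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = []
        for prefix, score in candidates[:beam_size]:      # prune to the beam of best partial translations
            (finished if prefix[-1] == eos_id else beams).append((prefix, score))
        if not beams:
            break
    return max(finished + beams, key=lambda c: c[1])
```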
As in traditional statistical machine translation decoding, increasing the beam size allows us to explore a larger set of the space of possible translations and hence find translations with better model scores.
[Figure 10 plots: BLEU as a function of beam size (1 to 1,000), with unnormalized and normalized scores, for Czech–English, English–Czech, German–English, English–German, Romanian–English, English–Romanian, Russian–English, and English–Russian.]
Figure 10: Translation quality with varying beam sizes. For large beams, quality decreases, especially when not normalizing scores by sentence length.
However, as Figure 10 illustrates, increasing the beam size does not consistently improve translation quality. In fact, in almost all cases, worse translations are found beyond an optimal beam size setting (we are using again Edinburgh's WMT 2016 systems). The optimal beam size varies from 4 (e.g., Czech–English) to around 30 (English–Romanian).
Normalizing sentence level model scores by length of the output alleviates the problem somewhat and also leads to better optimal quality in most cases (5 of the 8 language pairs investigated). Optimal beam sizes are in the range of 30–50 in almost all cases, but quality still drops with larger beams. The main cause of deteriorating quality are shorter translations under wider beams.
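A toy illustration of this length normalization, with made-up token ids and log probabilities:

```python
def normalized_score(logprob_sum, length):
    return logprob_sum / max(length, 1)

short = ([101, 2], -2.0)                 # 2 tokens, accumulated log probability -2.0
long_ = ([101, 57, 9, 33, 2], -4.0)      # 5 tokens, accumulated log probability -4.0

best_raw = max([short, long_], key=lambda h: h[1])                                  # picks the short hypothesis
best_norm = max([short, long_], key=lambda h: normalized_score(h[1], len(h[0])))    # picks the longer one
```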
# 4 Conclusions
We showed that, despite its recent successes, neu- ral machine translation still has to overcome vari- ous challenges, most notably performance out-of- domain and under low resource conditions. We hope that this paper motivates research to address these challenges.
What a lot of the problems have in common is that the neural translation models do not show robust behavior when confronted with conditions that differ significantly from training conditions, be it due to limited exposure to training data, unusual input in case of out-of-domain test sentences, or unlikely initial word choices in beam search. The solution to these problems may hence lie in a more general approach of training that steps outside optimizing single word predictions given perfectly matching prior sequences.

# Acknowledgment

This work was partially supported by an Amazon Research Award (to the first author) and a National Science Foundation Graduate Research Fellowship under Grant No. DGE-1232825 (to the second author).

# References

Philip Arthur, Graham Neubig, and Satoshi Nakamura. 2016. Incorporating discrete translation lexicons into neural machine translation. In Proceedings of EMNLP 2016, Austin, Texas, pages 1557–1567.

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In ICLR.

Luisa Bentivogli, Arianna Bisazza, Mauro Cettolo, and Marcello Federico. 2016. Neural versus phrase-based machine translation quality: a case study. In Proceedings of EMNLP 2016, Austin, Texas, pages 257–267.

Ondřej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Matthias Huck, Antonio Jimeno Yepes, Philipp Koehn, Varvara Logacheva, Christof Monz, Matteo Negri, Aurelie Neveol, Mariana Neves, Martin Popel, Matt Post, Raphael Rubino, Carolina Scarton, Lucia Specia, Marco Turchi, Karin Verspoor, and Marcos Zampieri. 2016. Findings of the 2016 conference on machine translation. In Proceedings of the First Conference on Machine Translation (WMT), Berlin, Germany, pages 131–198.

Wenhu Chen, Evgeny Matusov, Shahram Khadivi, and Jan-Thorsten Peter. 2016. Guided alignment training for topic-aware neural machine translation. CoRR abs/1607.01628.

David Chiang. 2007. Hierarchical phrase-based translation. Computational Linguistics 33(2).

Kyunghyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder–decoder approaches. In Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation, Doha, Qatar, pages 103–111.

Josep Maria Crego, Jungi Kim, Guillaume Klein, Anabel Rebollo, Kathy Yang, Jean Senellart, Egor Akhanov, Patrice Brunelle, Aurelien Coquard, Yongchao Deng, Satoshi Enoue, Chiyo Geiss, Joshua Johanson, Ardas Khalsa, Raoum Khiari, Byeongil Ko, Catherine Kobus, Jean Lorieux, Leidiana Martins, Dang-Chuan Nguyen, Alexandra Priori, Thomas Riccardi, Natalia Segal, Christophe Servan, Cyril Tiquet, Bo Wang, Jin Yang, Dakun Zhang, and Peter Zoldan. 2016. Systran's pure neural machine translation systems. CoRR abs/1610.05540.

Shuoyang Ding, Kevin Duh, Huda Khayrallah, Philipp Koehn, and Matt Post. 2016a. The JHU machine translation systems for WMT 2016. In Proceedings of the First Conference on Machine Translation (WMT), Berlin, Germany, pages 272–280.

Shuoyang Ding, Kevin Duh, Huda Khayrallah, Philipp Koehn, and Matt Post. 2016b. The JHU machine translation systems for WMT 2016. In Proceedings of the First Conference on Machine Translation (WMT).

Chris Dyer, Victor Chahuneau, and Noah A. Smith. 2013. A simple, fast, and effective reparameterization of IBM Model 2. In Proceedings of NAACL-HLT 2013, Atlanta, Georgia, pages 644–648.

Markus Freitag and Yaser Al-Onaizan. 2016. Fast domain adaptation for neural machine translation. arXiv preprint arXiv:1612.06897.

Michel Galley, Mark Hopkins, Kevin Knight, and Daniel Marcu. 2004. What's in a translation rule? In Proceedings of HLT-NAACL 2004.

Michel Galley, Jonathan Graehl, Kevin Knight, Daniel Marcu, Steve DeNeefe, Wei Wang, and Ignacio Thayer. 2006. Scalable inference and training of context-rich syntactic translation models. In Proceedings of COLING-ACL 2006, Sydney, Australia, pages 961–968.

Ann Irvine and Chris Callison-Burch. 2013. Combining bilingual and comparable corpora for low resource machine translation. In Proceedings of the Eighth Workshop on Statistical Machine Translation, Sofia, Bulgaria, pages 262–270.

Marcin Junczys-Dowmunt, Tomasz Dwojak, and Hieu Hoang. 2016. Is neural machine translation ready for deployment? A case study on 30 translation directions. In Proceedings of the International Workshop on Spoken Language Translation (IWSLT).

Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent continuous translation models. In Proceedings of EMNLP 2013, Seattle, Washington, USA, pages 1700–1709.

Philipp Koehn and Barry Haddow. 2012. Interpolated backoff for factored translation models. In Proceedings of the Tenth Conference of the Association for Machine Translation in the Americas (AMTA).

Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Christopher J. Dyer, Ondřej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the ACL 2007 Demo and Poster Sessions, Prague, Czech Republic, pages 177–180.

Lemao Liu, Masao Utiyama, Andrew Finch, and Eiichiro Sumita. 2016. Neural machine translation with supervised attention. In Proceedings of COLING 2016, Osaka, Japan, pages 3093–3102.

Minh-Thang Luong and Christopher D. Manning. 2015. Stanford neural machine translation systems for spoken language domains. In Proceedings of the International Workshop on Spoken Language Translation (IWSLT).

Thang Luong, Ilya Sutskever, Quoc Le, Oriol Vinyals, and Wojciech Zaremba. 2015. Addressing the rare word problem in neural machine translation. In Proceedings of ACL-IJCNLP 2015, Beijing, China, pages 11–19.

Jean Pouget-Abadie, Dzmitry Bahdanau, Bart van Merrienboer, Kyunghyun Cho, and Yoshua Bengio. 2014. Overcoming the curse of sentence length for neural machine translation using automatic segmentation. CoRR abs/1409.1257.

Rico Sennrich, Orhan Firat, Kyunghyun Cho, Alexandra Birch, Barry Haddow, Julian Hitschler, Marcin Junczys-Dowmunt, Samuel Läubli, Antonio Valerio Miceli Barone, Jozef Mokry, and Maria Nadejde. 2017. Nematus: a toolkit for neural machine translation. In Proceedings of the Software Demonstrations of the 15th Conference of the European Chapter of the Association for Computational Linguistics, Valencia, Spain, pages 65–68.

Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Edinburgh neural machine translation systems for WMT 16. In Proceedings of the First Conference on Machine Translation (WMT), Berlin, Germany, pages 371–376.

Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Neural machine translation of rare words with subword units. In Proceedings of ACL 2016, Berlin, Germany, pages 1715–1725.

Jörg Tiedemann. 2012. Parallel data, tools and interfaces in OPUS. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC 2012), Istanbul, Turkey.

Antonio Toral and Víctor M. Sánchez-Cartagena. 2017. A multifaceted evaluation of neural versus phrase-based machine translation for 9 language directions. In Proceedings of EACL 2017, Valencia, Spain, pages 1063–1073.

Zhaopeng Tu, Zhengdong Lu, Yang Liu, Xiaohua Liu, and Hang Li. 2016. Modeling coverage for neural machine translation. In Proceedings of ACL 2016, Berlin, Germany, pages 76–85.

Marco Turchi, Tijl De Bie, and Nello Cristianini. 2008. Learning performance of a machine translation system: a statistical and computational analysis. In Proceedings of the Third Workshop on Statistical Machine Translation, Columbus, Ohio, pages 35–43.

Philip Williams, Rico Sennrich, Maria Nadejde, Matthias Huck, Barry Haddow, and Ondřej Bojar. 2016. Edinburgh's statistical machine translation systems for WMT16. In Proceedings of the First Conference on Machine Translation (WMT), Berlin, Germany, pages 399–410.

Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. CoRR abs/1609.08144.
"id": "1706.03872"
} |
1706.03762 | Attention Is All You Need | The dominant sequence transduction models are based on complex recurrent or
convolutional neural networks in an encoder-decoder configuration. The best
performing models also connect the encoder and decoder through an attention
mechanism. We propose a new simple network architecture, the Transformer, based
solely on attention mechanisms, dispensing with recurrence and convolutions
entirely. Experiments on two machine translation tasks show these models to be
superior in quality while being more parallelizable and requiring significantly
less time to train. Our model achieves 28.4 BLEU on the WMT 2014
English-to-German translation task, improving over the existing best results,
including ensembles by over 2 BLEU. On the WMT 2014 English-to-French
translation task, our model establishes a new single-model state-of-the-art
BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction
of the training costs of the best models from the literature. We show that the
Transformer generalizes well to other tasks by applying it successfully to
English constituency parsing both with large and limited training data. | http://arxiv.org/pdf/1706.03762 | Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin | cs.CL, cs.LG | 15 pages, 5 figures | null | cs.CL | 20170612 | 20230802 |
arXiv:1706.03762v7 [cs.CL] 2 Aug 2023
Provided proper attribution is provided, Google hereby grants permission to reproduce the tables and figures in this paper solely for use in journalistic or scholarly works.
# Attention Is All You Need
# Ashish Vaswaniâ Google Brain avaswani@google.com
Noam Shazeerâ Google Brain noam@google.com
Niki Parmarâ Google Research nikip@google.com
Jakob Uszkoreitâ Google Research usz@google.com
# Llion Jonesâ Google Research llion@google.com
Aidan N. Gomezâ â University of Toronto aidan@cs.toronto.edu
Åukasz Kaiserâ Google Brain lukaszkaiser@google.com
# Illia Polosukhinâ â¡ illia.polosukhin@gmail.com
# Abstract
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks that include an encoder and a decoder. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English- to-German translation task, improving over the existing best results, including ensembles, by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
âEqual contribution. Listing order is random. Jakob proposed replacing RNNs with self-attention and started the effort to evaluate this idea. Ashish, with Illia, designed and implemented the first Transformer models and has been crucially involved in every aspect of this work. Noam proposed scaled dot-product attention, multi-head attention and the parameter-free position representation and became the other person involved in nearly every detail. Niki designed, implemented, tuned and evaluated countless model variants in our original codebase and tensor2tensor. Llion also experimented with novel model variants, was responsible for our initial codebase, and efficient inference and visualizations. Lukasz and Aidan spent countless long days designing various parts of and implementing tensor2tensor, replacing our earlier codebase, greatly improving results and massively accelerating our research.
â Work performed while at Google Brain. â¡Work performed while at Google Research.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
# 1 Introduction
Recurrent neural networks, long short-term memory [13] and gated recurrent [7] neural networks in particular, have been firmly established as state of the art approaches in sequence modeling and transduction problems such as language modeling and machine translation [35, 2, 5]. Numerous efforts have since continued to push the boundaries of recurrent language models and encoder-decoder architectures [38, 24, 15].
Recurrent models typically factor computation along the symbol positions of the input and output sequences. Aligning the positions to steps in computation time, they generate a sequence of hidden states ht, as a function of the previous hidden state htâ1 and the input for position t. This inherently sequential nature precludes parallelization within training examples, which becomes critical at longer sequence lengths, as memory constraints limit batching across examples. Recent work has achieved significant improvements in computational efficiency through factorization tricks [21] and conditional computation [32], while also improving model performance in case of the latter. The fundamental constraint of sequential computation, however, remains.
Attention mechanisms have become an integral part of compelling sequence modeling and transduc- tion models in various tasks, allowing modeling of dependencies without regard to their distance in the input or output sequences [2, 19]. In all but a few cases [27], however, such attention mechanisms are used in conjunction with a recurrent network.
In this work we propose the Transformer, a model architecture eschewing recurrence and instead relying entirely on an attention mechanism to draw global dependencies between input and output. The Transformer allows for significantly more parallelization and can reach a new state of the art in translation quality after being trained for as little as twelve hours on eight P100 GPUs.
# 2 Background
The goal of reducing sequential computation also forms the foundation of the Extended Neural GPU [16], ByteNet [18] and ConvS2S [9], all of which use convolutional neural networks as basic building block, computing hidden representations in parallel for all input and output positions. In these models, the number of operations required to relate signals from two arbitrary input or output positions grows in the distance between positions, linearly for ConvS2S and logarithmically for ByteNet. This makes it more difficult to learn dependencies between distant positions [12]. In the Transformer this is reduced to a constant number of operations, albeit at the cost of reduced effective resolution due to averaging attention-weighted positions, an effect we counteract with Multi-Head Attention as described in section 3.2.
Self-attention, sometimes called intra-attention is an attention mechanism relating different positions of a single sequence in order to compute a representation of the sequence. Self-attention has been used successfully in a variety of tasks including reading comprehension, abstractive summarization, textual entailment and learning task-independent sentence representations [4, 27, 28, 22].
End-to-end memory networks are based on a recurrent attention mechanism instead of sequence- aligned recurrence and have been shown to perform well on simple-language question answering and language modeling tasks [34].
To the best of our knowledge, however, the Transformer is the first transduction model relying entirely on self-attention to compute representations of its input and output without using sequence- aligned RNNs or convolution. In the following sections, we will describe the Transformer, motivate self-attention and discuss its advantages over models such as [17, 18] and [9].
# 3 Model Architecture
Most competitive neural sequence transduction models have an encoder-decoder structure [5, 2, 35]. Here, the encoder maps an input sequence of symbol representations (x1, ..., xn) to a sequence of continuous representations z = (z1, ..., zn). Given z, the decoder then generates an output sequence (y1, ..., ym) of symbols one element at a time. At each step the model is auto-regressive [10], consuming the previously generated symbols as additional input when generating the next.
[Figure 1 diagram: the Transformer architecture, with a stacked encoder (multi-head self-attention and feed-forward sub-layers, each followed by add & norm), a stacked decoder (masked multi-head self-attention, encoder-decoder multi-head attention, and feed-forward sub-layers), input and output embeddings with positional encodings, outputs shifted right, and a final linear plus softmax producing output probabilities.]
Figure 1: The Transformer - model architecture.
The Transformer follows this overall architecture using stacked self-attention and point-wise, fully connected layers for both the encoder and decoder, shown in the left and right halves of Figure 1, respectively.
# 3.1 Encoder and Decoder Stacks
Encoder: The encoder is composed of a stack of N = 6 identical layers. Each layer has two sub-layers. The first is a multi-head self-attention mechanism, and the second is a simple, position- wise fully connected feed-forward network. We employ a residual connection [11] around each of the two sub-layers, followed by layer normalization [1]. That is, the output of each sub-layer is LayerNorm(x + Sublayer(x)), where Sublayer(x) is the function implemented by the sub-layer itself. To facilitate these residual connections, all sub-layers in the model, as well as the embedding layers, produce outputs of dimension dmodel = 512.
Decoder: The decoder is also composed of a stack of N = 6 identical layers. In addition to the two sub-layers in each encoder layer, the decoder inserts a third sub-layer, which performs multi-head attention over the output of the encoder stack. Similar to the encoder, we employ residual connections around each of the sub-layers, followed by layer normalization. We also modify the self-attention sub-layer in the decoder stack to prevent positions from attending to subsequent positions. This masking, combined with fact that the output embeddings are offset by one position, ensures that the predictions for position i can depend only on the known outputs at positions less than i.
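For concreteness, a framework-agnostic sketch of the post-norm residual wrapper LayerNorm(x + Sublayer(x)) might look as follows; the learned gain and bias of layer normalization and the dropout details are omitted, and the function names are ours.

```python
import numpy as np

def layer_norm(x, eps=1e-6):
    # Normalize over the model dimension (d_model = 512); learned gain and bias are omitted here.
    mean = x.mean(axis=-1, keepdims=True)
    std = x.std(axis=-1, keepdims=True)
    return (x - mean) / (std + eps)

def residual_sublayer(x, sublayer):
    # LayerNorm(x + Sublayer(x)); dropout on the sub-layer output is left out for brevity.
    return layer_norm(x + sublayer(x))
```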
# 3.2 Attention
An attention function can be described as mapping a query and a set of key-value pairs to an output, where the query, keys, values, and output are all vectors. The output is computed as a weighted sum
Figure 2: (left) Scaled Dot-Product Attention. (right) Multi-Head Attention consists of several attention layers running in parallel.
of the values, where the weight assigned to each value is computed by a compatibility function of the query with the corresponding key.
# 3.2.1 Scaled Dot-Product Attention
We call our particular attention "Scaled Dot-Product Attention" (Figure 2). The input consists of queries and keys of dimension dk, and values of dimension dv. We compute the dot products of the query with all keys, divide each by √dk, and apply a softmax function to obtain the weights on the values.
In practice, we compute the attention function on a set of queries simultaneously, packed together into a matrix Q. The keys and values are also packed together into matrices K and V . We compute the matrix of outputs as:
Attention(Q, K, V) = softmax(QK^T / √dk) V      (1)
The two most commonly used attention functions are additive attention [2], and dot-product (multiplicative) attention. Dot-product attention is identical to our algorithm, except for the scaling factor of 1/√dk. Additive attention computes the compatibility function using a feed-forward network with a single hidden layer. While the two are similar in theoretical complexity, dot-product attention is much faster and more space-efficient in practice, since it can be implemented using highly optimized matrix multiplication code.
While for small values of dk the two mechanisms perform similarly, additive attention outperforms dot product attention without scaling for larger values of dk [3]. We suspect that for large values of dk, the dot products grow large in magnitude, pushing the softmax function into regions where it has extremely small gradients.4 To counteract this effect, we scale the dot products by 1/√dk.
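A minimal numpy sketch of Equation 1, written here for illustration; the optional `mask` argument anticipates the decoder masking described in Section 3.2.3, and masked positions are approximated with a large negative constant rather than −∞.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V, mask=None):
    # Q: (..., n_q, d_k), K: (..., n_k, d_k), V: (..., n_k, d_v)
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-2, -1) / np.sqrt(d_k)
    if mask is not None:                      # mask == True marks positions that may be attended to
        scores = np.where(mask, scores, -1e9)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))   # numerically stable softmax
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights
```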
# 3.2.2 Multi-Head Attention
Instead of performing a single attention function with dmodel-dimensional keys, values and queries, we found it beneficial to linearly project the queries, keys and values h times with different, learned linear projections to dk, dk and dv dimensions, respectively. On each of these projected versions of queries, keys and values we then perform the attention function in parallel, yielding dv-dimensional
4To illustrate why the dot products get large, assume that the components of q and k are independent random variables with mean 0 and variance 1. Then their dot product, q · k = Σ_{i=1}^{dk} qiki, has mean 0 and variance dk.
output values. These are concatenated and once again projected, resulting in the final values, as depicted in Figure 2.
Multi-head attention allows the model to jointly attend to information from different representation subspaces at different positions. With a single attention head, averaging inhibits this.
MultiHead(Q, K, V) = Concat(head1, ..., headh) W^O
    where headi = Attention(Q W_i^Q, K W_i^K, V W_i^V)

where the projections are parameter matrices W_i^Q ∈ R^(dmodel×dk), W_i^K ∈ R^(dmodel×dk), W_i^V ∈ R^(dmodel×dv) and W^O ∈ R^(h·dv×dmodel).
In this work we employ h = 8 parallel attention layers, or heads. For each of these we use dk = dv = dmodel/h = 64. Due to the reduced dimension of each head, the total computational cost is similar to that of single-head attention with full dimensionality.
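The following sketch, which reuses the `scaled_dot_product_attention` function from the previous snippet, illustrates the shape bookkeeping of multi-head attention; the random projection matrices stand in for learned parameters, and the explicit loop over heads is for clarity rather than efficiency.

```python
import numpy as np

def multi_head_attention(Q, K, V, W_q, W_k, W_v, W_o, h=8):
    heads = []
    for i in range(h):
        out, _ = scaled_dot_product_attention(Q @ W_q[i], K @ W_k[i], V @ W_v[i])
        heads.append(out)                     # each head works in a d_model/h dimensional subspace
    return np.concatenate(heads, axis=-1) @ W_o

rng = np.random.default_rng(0)
d_model, h, n = 512, 8, 10
W_q = rng.normal(size=(h, d_model, d_model // h)) * 0.02
W_k = rng.normal(size=(h, d_model, d_model // h)) * 0.02
W_v = rng.normal(size=(h, d_model, d_model // h)) * 0.02
W_o = rng.normal(size=(d_model, d_model)) * 0.02
x = rng.normal(size=(n, d_model))
print(multi_head_attention(x, x, x, W_q, W_k, W_v, W_o, h).shape)   # (10, 512)
```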
# 3.2.3 Applications of Attention in our Model
The Transformer uses multi-head attention in three different ways:
⢠In "encoder-decoder attention" layers, the queries come from the previous decoder layer, and the memory keys and values come from the output of the encoder. This allows every position in the decoder to attend over all positions in the input sequence. This mimics the typical encoder-decoder attention mechanisms in sequence-to-sequence models such as [38, 2, 9].
⢠The encoder contains self-attention layers. In a self-attention layer all of the keys, values and queries come from the same place, in this case, the output of the previous layer in the encoder. Each position in the encoder can attend to all positions in the previous layer of the encoder.
⢠Similarly, self-attention layers in the decoder allow each position in the decoder to attend to all positions in the decoder up to and including that position. We need to prevent leftward information flow in the decoder to preserve the auto-regressive property. We implement this inside of scaled dot-product attention by masking out (setting to ââ) all values in the input of the softmax which correspond to illegal connections. See Figure 2.
# 3.3 Position-wise Feed-Forward Networks
In addition to attention sub-layers, each of the layers in our encoder and decoder contains a fully connected feed-forward network, which is applied to each position separately and identically. This consists of two linear transformations with a ReLU activation in between.
FFN(x) = max(0, xW1 + b1)W2 + b2 (2)
While the linear transformations are the same across different positions, they use different parameters from layer to layer. Another way of describing this is as two convolutions with kernel size 1. The dimensionality of input and output is dmodel = 512, and the inner-layer has dimensionality df f = 2048.
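A one-line numpy rendering of Equation 2 (our sketch, not reference code); weight shapes follow dmodel = 512 and dff = 2048:

```python
import numpy as np

def position_wise_ffn(x, W1, b1, W2, b2):
    # x: (seq_len, d_model), W1: (d_model, d_ff), W2: (d_ff, d_model); ReLU between the two linear maps
    return np.maximum(0.0, x @ W1 + b1) @ W2 + b2
```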
# 3.4 Embeddings and Softmax
Similarly to other sequence transduction models, we use learned embeddings to convert the input tokens and output tokens to vectors of dimension dmodel. We also use the usual learned linear transformation and softmax function to convert the decoder output to predicted next-token probabilities. In our model, we share the same weight matrix between the two embedding layers and the pre-softmax linear transformation, similar to [30]. In the embedding layers, we multiply those weights by √dmodel.
Table 1: Maximum path lengths, per-layer complexity and minimum number of sequential operations for different layer types. n is the sequence length, d is the representation dimension, k is the kernel size of convolutions and r the size of the neighborhood in restricted self-attention.
Layer Type                    Complexity per Layer   Sequential Operations   Maximum Path Length
Self-Attention                O(n^2 · d)             O(1)                    O(1)
Recurrent                     O(n · d^2)             O(n)                    O(n)
Convolutional                 O(k · n · d^2)         O(1)                    O(log_k(n))
Self-Attention (restricted)   O(r · n · d)           O(1)                    O(n/r)
# 3.5 Positional Encoding
Since our model contains no recurrence and no convolution, in order for the model to make use of the order of the sequence, we must inject some information about the relative or absolute position of the tokens in the sequence. To this end, we add "positional encodings" to the input embeddings at the bottoms of the encoder and decoder stacks. The positional encodings have the same dimension dmodel as the embeddings, so that the two can be summed. There are many choices of positional encodings, learned and fixed [9].
In this work, we use sine and cosine functions of different frequencies:
PE(pos, 2i) = sin(pos / 10000^(2i/dmodel))
PE(pos, 2i+1) = cos(pos / 10000^(2i/dmodel))
where pos is the position and i is the dimension. That is, each dimension of the positional encoding corresponds to a sinusoid. The wavelengths form a geometric progression from 2π to 10000 · 2π. We chose this function because we hypothesized it would allow the model to easily learn to attend by relative positions, since for any fixed offset k, PE(pos+k) can be represented as a linear function of PE(pos).
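The encoding table can be precomputed once; a small sketch (ours, assuming an even dmodel) is:

```python
import numpy as np

def positional_encoding(max_len, d_model):
    pos = np.arange(max_len)[:, None]                       # (max_len, 1)
    i = np.arange(d_model // 2)[None, :]                    # (1, d_model / 2)
    angles = pos / np.power(10000.0, 2.0 * i / d_model)
    pe = np.zeros((max_len, d_model))
    pe[:, 0::2] = np.sin(angles)                            # even dimensions use sine
    pe[:, 1::2] = np.cos(angles)                            # odd dimensions use cosine
    return pe
```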
We also experimented with using learned positional embeddings [9] instead, and found that the two versions produced nearly identical results (see Table 3 row (E)). We chose the sinusoidal version because it may allow the model to extrapolate to sequence lengths longer than the ones encountered during training.
# 4 Why Self-Attention
In this section we compare various aspects of self-attention layers to the recurrent and convolu- tional layers commonly used for mapping one variable-length sequence of symbol representations (x1, ..., xn) to another sequence of equal length (z1, ..., zn), with xi, zi â Rd, such as a hidden layer in a typical sequence transduction encoder or decoder. Motivating our use of self-attention we consider three desiderata.
One is the total computational complexity per layer. Another is the amount of computation that can be parallelized, as measured by the minimum number of sequential operations required.
The third is the path length between long-range dependencies in the network. Learning long-range dependencies is a key challenge in many sequence transduction tasks. One key factor affecting the ability to learn such dependencies is the length of the paths forward and backward signals have to traverse in the network. The shorter these paths between any combination of positions in the input and output sequences, the easier it is to learn long-range dependencies [12]. Hence we also compare the maximum path length between any two input and output positions in networks composed of the different layer types.
As noted in Table 1, a self-attention layer connects all positions with a constant number of sequentially executed operations, whereas a recurrent layer requires O(n) sequential operations. In terms of computational complexity, self-attention layers are faster than recurrent layers when the sequence
length n is smaller than the representation dimensionality d, which is most often the case with sentence representations used by state-of-the-art models in machine translations, such as word-piece [38] and byte-pair [31] representations. To improve computational performance for tasks involving very long sequences, self-attention could be restricted to considering only a neighborhood of size r in the input sequence centered around the respective output position. This would increase the maximum path length to O(n/r). We plan to investigate this approach further in future work.
A single convolutional layer with kernel width k < n does not connect all pairs of input and output positions. Doing so requires a stack of O(n/k) convolutional layers in the case of contiguous kernels, or O(logk(n)) in the case of dilated convolutions [18], increasing the length of the longest paths between any two positions in the network. Convolutional layers are generally more expensive than recurrent layers, by a factor of k. Separable convolutions [6], however, decrease the complexity considerably, to O(k · n · d + n · d2). Even with k = n, however, the complexity of a separable convolution is equal to the combination of a self-attention layer and a point-wise feed-forward layer, the approach we take in our model.
As side benefit, self-attention could yield more interpretable models. We inspect attention distributions from our models and present and discuss examples in the appendix. Not only do individual attention heads clearly learn to perform different tasks, many appear to exhibit behavior related to the syntactic and semantic structure of the sentences.
# 5 Training
This section describes the training regime for our models.
# 5.1 Training Data and Batching
We trained on the standard WMT 2014 English-German dataset consisting of about 4.5 million sentence pairs. Sentences were encoded using byte-pair encoding [3], which has a shared source- target vocabulary of about 37000 tokens. For English-French, we used the significantly larger WMT 2014 English-French dataset consisting of 36M sentences and split tokens into a 32000 word-piece vocabulary [38]. Sentence pairs were batched together by approximate sequence length. Each training batch contained a set of sentence pairs containing approximately 25000 source tokens and 25000 target tokens.
# 5.2 Hardware and Schedule
We trained our models on one machine with 8 NVIDIA P100 GPUs. For our base models using the hyperparameters described throughout the paper, each training step took about 0.4 seconds. We trained the base models for a total of 100,000 steps or 12 hours. For our big models (described on the bottom line of Table 3), step time was 1.0 seconds. The big models were trained for 300,000 steps (3.5 days).
# 5.3 Optimizer
We used the Adam optimizer [20] with β1 = 0.9, β2 = 0.98 and ϵ = 10^−9. We varied the learning rate over the course of training, according to the formula:
lrate = dmodel^−0.5 · min(step_num^−0.5, step_num · warmup_steps^−1.5)      (3)
This corresponds to increasing the learning rate linearly for the first warmup_steps training steps, and decreasing it thereafter proportionally to the inverse square root of the step number. We used warmup_steps = 4000.
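Expressed directly as code (our transcription of Equation 3; `step` is assumed to start at 1):

```python
def lrate(step, d_model=512, warmup_steps=4000):
    return d_model ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)

# lrate(1) is tiny, the rate peaks at roughly 7e-4 at step 4000, and then decays as step**-0.5.
```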
# 5.4 Regularization
We employ three types of regularization during training:
Table 2: The Transformer achieves better BLEU scores than previous state-of-the-art models on the English-to-German and English-to-French newstest2014 tests at a fraction of the training cost.
Model                             EN-DE BLEU   EN-FR BLEU   EN-DE Cost (FLOPs)   EN-FR Cost (FLOPs)
ByteNet [18]                      23.75
Deep-Att + PosUnk [39]                         39.2                              1.0 · 10^20
GNMT + RL [38]                    24.6         39.92        2.3 · 10^19          1.4 · 10^20
ConvS2S [9]                       25.16        40.46        9.6 · 10^18          1.5 · 10^20
MoE [32]                          26.03        40.56        2.0 · 10^19          1.2 · 10^20
Deep-Att + PosUnk Ensemble [39]                40.4                              8.0 · 10^20
GNMT + RL Ensemble [38]           26.30        41.16        1.8 · 10^20          1.1 · 10^21
ConvS2S Ensemble [9]              26.36        41.29        7.7 · 10^19          1.2 · 10^21
Transformer (base model)          27.3         38.1         3.3 · 10^18          3.3 · 10^18
Transformer (big)                 28.4         41.8         2.3 · 10^19          2.3 · 10^19
Residual Dropout We apply dropout [33] to the output of each sub-layer, before it is added to the sub-layer input and normalized. In addition, we apply dropout to the sums of the embeddings and the positional encodings in both the encoder and decoder stacks. For the base model, we use a rate of Pdrop = 0.1.
Label Smoothing During training, we employed label smoothing of value ϵls = 0.1 [36]. This hurts perplexity, as the model learns to be more unsure, but improves accuracy and BLEU score.
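One common way to realize this, sketched below under the simplifying assumption that the smoothing mass ϵls is spread uniformly over the other vocabulary entries, is to replace the one-hot targets before computing cross-entropy:

```python
import numpy as np

def smoothed_targets(target_ids, vocab_size, eps=0.1):
    # Spread eps over the vocab_size - 1 incorrect entries, then place 1 - eps on the correct one.
    t = np.full((len(target_ids), vocab_size), eps / (vocab_size - 1))
    t[np.arange(len(target_ids)), target_ids] = 1.0 - eps
    return t   # each row sums to 1 and serves as the target distribution for cross-entropy
```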
# 6 Results
# 6.1 Machine Translation
On the WMT 2014 English-to-German translation task, the big transformer model (Transformer (big) in Table 2) outperforms the best previously reported models (including ensembles) by more than 2.0 BLEU, establishing a new state-of-the-art BLEU score of 28.4. The configuration of this model is listed in the bottom line of Table 3. Training took 3.5 days on 8 P100 GPUs. Even our base model surpasses all previously published models and ensembles, at a fraction of the training cost of any of the competitive models.
On the WMT 2014 English-to-French translation task, our big model achieves a BLEU score of 41.0, outperforming all of the previously published single models, at less than 1/4 the training cost of the previous state-of-the-art model. The Transformer (big) model trained for English-to-French used dropout rate Pdrop = 0.1, instead of 0.3.
For the base models, we used a single model obtained by averaging the last 5 checkpoints, which were written at 10-minute intervals. For the big models, we averaged the last 20 checkpoints. We used beam search with a beam size of 4 and length penalty α = 0.6 [38]. These hyperparameters were chosen after experimentation on the development set. We set the maximum output length during inference to input length + 50, but terminate early when possible [38].
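The length penalty is not spelled out here; one common choice consistent with [38] divides the accumulated log probability by ((5 + length)/6)^α, sketched below with the α = 0.6 used above (this exact form is an assumption about the setup).

```python
def length_penalty(length, alpha=0.6):
    return ((5.0 + length) / 6.0) ** alpha

def rescored(logprob_sum, length, alpha=0.6):
    # Higher (less negative) rescored values indicate better hypotheses in beam search.
    return logprob_sum / length_penalty(length, alpha)
```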
Table 2 summarizes our results and compares our translation quality and training costs to other model architectures from the literature. We estimate the number of floating point operations used to train a model by multiplying the training time, the number of GPUs used, and an estimate of the sustained single-precision floating-point capacity of each GPU 5.
# 6.2 Model Variations
To evaluate the importance of different components of the Transformer, we varied our base model in different ways, measuring the change in performance on English-to-German translation on the
5We used values of 2.8, 3.7, 6.0 and 9.5 TFLOPS for K80, K40, M40 and P100, respectively.
Table 3: Variations on the Transformer architecture. Unlisted values are identical to those of the base model. All metrics are on the English-to-German translation development set, newstest2013. Listed perplexities are per-wordpiece, according to our byte-pair encoding, and should not be compared to per-word perplexities.
 | N | dmodel | dff | h | dk | dv | Pdrop | εls | train steps | PPL (dev) | BLEU (dev) | params ×10^6
base | 6 | 512 | 2048 | 8 | 64 | 64 | 0.1 | 0.1 | 100K | 4.92 | 25.8 | 65
(A) | | | | 1 | 512 | 512 | | | | 5.29 | 24.9 |
(A) | | | | 4 | 128 | 128 | | | | 5.00 | 25.5 |
(A) | | | | 16 | 32 | 32 | | | | 4.91 | 25.8 |
(A) | | | | 32 | 16 | 16 | | | | 5.01 | 25.4 |
(B) | | | | | 16 | | | | | 5.16 | 25.1 | 58
(B) | | | | | 32 | | | | | 5.01 | 25.4 | 60
(C) | 2 | | | | | | | | | 6.11 | 23.7 | 36
(C) | 4 | | | | | | | | | 5.19 | 25.3 | 50
(C) | 8 | | | | | | | | | 4.88 | 25.5 | 80
(C) | | 256 | | | 32 | 32 | | | | 5.75 | 24.5 | 28
(C) | | 1024 | | | 128 | 128 | | | | 4.66 | 26.0 | 168
(C) | | | 1024 | | | | | | | 5.12 | 25.4 | 53
(C) | | | 4096 | | | | | | | 4.75 | 26.2 | 90
(D) | | | | | | | 0.0 | | | 5.77 | 24.6 |
(D) | | | | | | | 0.2 | | | 4.95 | 25.5 |
(D) | | | | | | | | 0.0 | | 4.67 | 25.3 |
(D) | | | | | | | | 0.2 | | 5.47 | 25.7 |
(E) | | positional embedding instead of sinusoids | | | | | | | | 4.92 | 25.7 |
big | 6 | 1024 | 4096 | 16 | | | 0.3 | | 300K | 4.33 | 26.4 | 213
development set, newstest2013. We used beam search as described in the previous section, but no checkpoint averaging. We present these results in Table 3.
In Table 3 rows (A), we vary the number of attention heads and the attention key and value dimensions, keeping the amount of computation constant, as described in Section 3.2.2. While single-head attention is 0.9 BLEU worse than the best setting, quality also drops off with too many heads.
In Table 3 rows (B), we observe that reducing the attention key size dk hurts model quality. This suggests that determining compatibility is not easy and that a more sophisticated compatibility function than dot product may be beneficial. We further observe in rows (C) and (D) that, as expected, bigger models are better, and dropout is very helpful in avoiding over-fitting. In row (E) we replace our sinusoidal positional encoding with learned positional embeddings [9], and observe nearly identical results to the base model.
# 6.3 English Constituency Parsing
To evaluate if the Transformer can generalize to other tasks we performed experiments on English constituency parsing. This task presents specific challenges: the output is subject to strong structural constraints and is significantly longer than the input. Furthermore, RNN sequence-to-sequence models have not been able to attain state-of-the-art results in small-data regimes [37].
We trained a 4-layer transformer with dmodel = 1024 on the Wall Street Journal (WSJ) portion of the Penn Treebank [25], about 40K training sentences. We also trained it in a semi-supervised setting, using the larger high-confidence and BerkeleyParser corpora, with approximately 17M sentences [37]. We used a vocabulary of 16K tokens for the WSJ only setting and a vocabulary of 32K tokens for the semi-supervised setting.
We performed only a small number of experiments to select the dropout, both attention and residual (section 5.4), learning rates and beam size on the Section 22 development set, all other parameters remained unchanged from the English-to-German base translation model. During inference, we
Table 4: The Transformer generalizes well to English constituency parsing (Results are on Section 23 of WSJ)
Parser | Training | WSJ 23 F1
Vinyals & Kaiser et al. (2014) [37] | WSJ only, discriminative | 88.3
Petrov et al. (2006) [29] | WSJ only, discriminative | 90.4
Zhu et al. (2013) [40] | WSJ only, discriminative | 90.4
Dyer et al. (2016) [8] | WSJ only, discriminative | 91.7
Transformer (4 layers) | WSJ only, discriminative | 91.3
Zhu et al. (2013) [40] | semi-supervised | 91.3
Huang & Harper (2009) [14] | semi-supervised | 91.3
McClosky et al. (2006) [26] | semi-supervised | 92.1
Vinyals & Kaiser et al. (2014) [37] | semi-supervised | 92.1
Transformer (4 layers) | semi-supervised | 92.7
Luong et al. (2015) [23] | multi-task | 93.0
Dyer et al. (2016) [8] | generative | 93.3
increased the maximum output length to input length + 300. We used a beam size of 21 and α = 0.3 for both WSJ only and the semi-supervised setting.
Our results in Table 4 show that despite the lack of task-specific tuning our model performs surprisingly well, yielding better results than all previously reported models with the exception of the Recurrent Neural Network Grammar [8].
In contrast to RNN sequence-to-sequence models [37], the Transformer outperforms the BerkeleyParser [29] even when training only on the WSJ training set of 40K sentences.
# 7 Conclusion
In this work, we presented the Transformer, the first sequence transduction model based entirely on attention, replacing the recurrent layers most commonly used in encoder-decoder architectures with multi-headed self-attention.
For translation tasks, the Transformer can be trained significantly faster than architectures based on recurrent or convolutional layers. On both WMT 2014 English-to-German and WMT 2014 English-to-French translation tasks, we achieve a new state of the art. In the former task our best model outperforms even all previously reported ensembles.
We are excited about the future of attention-based models and plan to apply them to other tasks. We plan to extend the Transformer to problems involving input and output modalities other than text and to investigate local, restricted attention mechanisms to efficiently handle large inputs and outputs such as images, audio and video. Making generation less sequential is another research goal of ours.
The code we used to train and evaluate our models is available at https://github.com/tensorflow/tensor2tensor.
Acknowledgements We are grateful to Nal Kalchbrenner and Stephan Gouws for their fruitful comments, corrections and inspiration.
# References
[1] Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
[2] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473, 2014.
[3] Denny Britz, Anna Goldie, Minh-Thang Luong, and Quoc V. Le. Massive exploration of neural machine translation architectures. CoRR, abs/1703.03906, 2017.
[4] Jianpeng Cheng, Li Dong, and Mirella Lapata. Long short-term memory-networks for machine reading. arXiv preprint arXiv:1601.06733, 2016.
[5] Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using rnn encoder-decoder for statistical machine translation. CoRR, abs/1406.1078, 2014.
[6] Francois Chollet. Xception: Deep learning with depthwise separable convolutions. arXiv preprint arXiv:1610.02357, 2016.
[7] Junyoung Chung, Çağlar Gülçehre, Kyunghyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. CoRR, abs/1412.3555, 2014.
[8] Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A. Smith. Recurrent neural network grammars. In Proc. of NAACL, 2016.
[9] Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N. Dauphin. Convolu- tional sequence to sequence learning. arXiv preprint arXiv:1705.03122v2, 2017.
[10] Alex Graves. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013.
[11] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.

[12] Sepp Hochreiter, Yoshua Bengio, Paolo Frasconi, and Jürgen Schmidhuber. Gradient flow in recurrent nets: the difficulty of learning long-term dependencies, 2001.

[13] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8):1735–1780, 1997.

[14] Zhongqiang Huang and Mary Harper. Self-training PCFG grammars with latent annotations across languages. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 832–841. ACL, August 2009.
[15] Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. Exploring the limits of language modeling. arXiv preprint arXiv:1602.02410, 2016.
[16] Łukasz Kaiser and Samy Bengio. Can active memory replace attention? In Advances in Neural Information Processing Systems, (NIPS), 2016.

[17] Łukasz Kaiser and Ilya Sutskever. Neural GPUs learn algorithms. In International Conference on Learning Representations (ICLR), 2016.
[18] Nal Kalchbrenner, Lasse Espeholt, Karen Simonyan, Aaron van den Oord, Alex Graves, and Ko- ray Kavukcuoglu. Neural machine translation in linear time. arXiv preprint arXiv:1610.10099v2, 2017.
[19] Yoon Kim, Carl Denton, Luong Hoang, and Alexander M. Rush. Structured attention networks. In International Conference on Learning Representations, 2017.
[20] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
[21] Oleksii Kuchaiev and Boris Ginsburg. Factorization tricks for LSTM networks. arXiv preprint arXiv:1703.10722, 2017.
[22] Zhouhan Lin, Minwei Feng, Cicero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen Zhou, and Yoshua Bengio. A structured self-attentive sentence embedding. arXiv preprint arXiv:1703.03130, 2017.
[23] Minh-Thang Luong, Quoc V. Le, Ilya Sutskever, Oriol Vinyals, and Lukasz Kaiser. Multi-task sequence to sequence learning. arXiv preprint arXiv:1511.06114, 2015.
[24] Minh-Thang Luong, Hieu Pham, and Christopher D Manning. Effective approaches to attention- based neural machine translation. arXiv preprint arXiv:1508.04025, 2015.
[25] Mitchell P Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313–330, 1993.

[26] David McClosky, Eugene Charniak, and Mark Johnson. Effective self-training for parsing. In Proceedings of the Human Language Technology Conference of the NAACL, Main Conference, pages 152–159. ACL, June 2006.
[27] Ankur Parikh, Oscar Täckström, Dipanjan Das, and Jakob Uszkoreit. A decomposable attention model. In Empirical Methods in Natural Language Processing, 2016.
[28] Romain Paulus, Caiming Xiong, and Richard Socher. A deep reinforced model for abstractive summarization. arXiv preprint arXiv:1705.04304, 2017.
[29] Slav Petrov, Leon Barrett, Romain Thibaux, and Dan Klein. Learning accurate, compact, and interpretable tree annotation. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 433–440. ACL, July 2006.
[30] Ofir Press and Lior Wolf. Using the output embedding to improve language models. arXiv preprint arXiv:1608.05859, 2016.
[31] Rico Sennrich, Barry Haddow, and Alexandra Birch. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909, 2015.
[32] Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, and Jeff Dean. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. arXiv preprint arXiv:1701.06538, 2017.
[33] Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929–1958, 2014.

[34] Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. End-to-end memory networks. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 2440–2448. Curran Associates, Inc., 2015.

[35] Ilya Sutskever, Oriol Vinyals, and Quoc VV Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104–3112, 2014.
[36] Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. CoRR, abs/1512.00567, 2015.
[37] Vinyals & Kaiser, Koo, Petrov, Sutskever, and Hinton. Grammar as a foreign language. In Advances in Neural Information Processing Systems, 2015.
[38] Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144, 2016.
[39] Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, and Wei Xu. Deep recurrent models with fast-forward connections for neural machine translation. CoRR, abs/1606.04199, 2016.
[40] Muhua Zhu, Yue Zhang, Wenliang Chen, Min Zhang, and Jingbo Zhu. Fast and accurate shift-reduce constituent parsing. In Proceedings of the 51st Annual Meeting of the ACL (Volume 1: Long Papers), pages 434–443. ACL, August 2013.
# Attention Visualizations
Figure 3: An example of the attention mechanism following long-distance dependencies in the encoder self-attention in layer 5 of 6. Many of the attention heads attend to a distant dependency of the verb âmakingâ, completing the phrase âmaking...more difficultâ. Attentions here shown only for the word âmakingâ. Different colors represent different heads. Best viewed in color.
Figure 4: Two attention heads, also in layer 5 of 6, apparently involved in anaphora resolution. Top: Full attentions for head 5. Bottom: Isolated attentions from just the word âitsâ for attention heads 5 and 6. Note that the attentions are very sharp for this word.
Figure 5: Many of the attention heads exhibit behaviour that seems related to the structure of the sentence. We give two such examples above, from two different heads from the encoder self-attention at layer 5 of 6. The heads clearly learned to perform different tasks.
| {
"id": "1601.06733"
} |
1706.02515 | Self-Normalizing Neural Networks | Deep Learning has revolutionized vision via convolutional neural networks
(CNNs) and natural language processing via recurrent neural networks (RNNs).
However, success stories of Deep Learning with standard feed-forward neural
networks (FNNs) are rare. FNNs that perform well are typically shallow and,
therefore cannot exploit many levels of abstract representations. We introduce
self-normalizing neural networks (SNNs) to enable high-level abstract
representations. While batch normalization requires explicit normalization,
neuron activations of SNNs automatically converge towards zero mean and unit
variance. The activation function of SNNs are "scaled exponential linear units"
(SELUs), which induce self-normalizing properties. Using the Banach fixed-point
theorem, we prove that activations close to zero mean and unit variance that
are propagated through many network layers will converge towards zero mean and
unit variance -- even under the presence of noise and perturbations. This
convergence property of SNNs allows to (1) train deep networks with many
layers, (2) employ strong regularization, and (3) to make learning highly
robust. Furthermore, for activations not close to unit variance, we prove an
upper and lower bound on the variance, thus, vanishing and exploding gradients
are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning
repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with
standard FNNs and other machine learning methods such as random forests and
support vector machines. SNNs significantly outperformed all competing FNN
methods at 121 UCI tasks, outperformed all competing methods at the Tox21
dataset, and set a new record at an astronomy data set. The winning SNN
architectures are often very deep. Implementations are available at:
github.com/bioinf-jku/SNNs. | http://arxiv.org/pdf/1706.02515 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter | cs.LG, stat.ML | 9 pages (+ 93 pages appendix) | Advances in Neural Information Processing Systems 30 (NIPS 2017) | cs.LG | 20170608 | 20170907
# Self-Normalizing Neural Networks
Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter
LIT AI Lab & Institute of Bioinformatics, Johannes Kepler University Linz, A-4040 Linz, Austria
{klambauer,unterthiner,mayr,hochreit}@bioinf.jku.at
# Abstract
Deep Learning has revolutionized vision via convolutional neural networks (CNNs) and natural language processing via recurrent neural networks (RNNs). However, success stories of Deep Learning with standard feed-forward neural networks (FNNs) are rare. FNNs that perform well are typically shallow and, therefore cannot exploit many levels of abstract representations. We introduce self-normalizing neural networks (SNNs) to enable high-level abstract representations. While batch normalization requires explicit normalization, neuron activations of SNNs automatically converge towards zero mean and unit variance. The activation function of SNNs are âscaled exponential linear unitsâ (SELUs), which induce self-normalizing properties. Using the Banach ï¬xed-point theorem, we prove that activations close to zero mean and unit variance that are propagated through many network layers will converge towards zero mean and unit variance â even under the presence of noise and perturbations. This convergence property of SNNs allows to (1) train deep networks with many layers, (2) employ strong regularization schemes, and (3) to make learning highly robust. Furthermore, for activations not close to unit variance, we prove an upper and lower bound on the variance, thus, vanishing and exploding gradients are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with standard FNNs, and other machine learning methods such as random forests and support vector machines. For FNNs we considered (i) ReLU networks without normalization, (ii) batch normalization, (iii) layer normalization, (iv) weight normalization, (v) highway networks, and (vi) residual networks. SNNs signiï¬cantly outperformed all competing FNN methods at 121 UCI tasks, outperformed all competing methods at the Tox21 dataset, and set a new record at an astronomy data set. The winning SNN architectures are often very deep. Implementations are available at: github.com/bioinf-jku/SNNs.
Accepted for publication at NIPS 2017; please cite as: Klambauer, G., Unterthiner, T., Mayr, A., & Hochreiter, S. (2017). Self-Normalizing Neural Networks. Advances in Neural Information Processing Systems (NIPS).
# Introduction
Deep Learning has set new records at different benchmarks and led to various commercial applications [25, 33]. Recurrent neural networks (RNNs) [18] achieved new levels at speech and natural language
processing, for example at the TIMIT benchmark [12] or at language translation [36], and are already employed in mobile devices [31]. RNNs have won handwriting recognition challenges (Chinese and Arabic handwriting) [33, 13, 6] and Kaggle challenges, such as the âGrasp-and Lift EEGâ competition. Their counterparts, convolutional neural networks (CNNs) [24] excel at vision and video tasks. CNNs are on par with human dermatologists at the visual detection of skin cancer [9]. The visual processing for self-driving cars is based on CNNs [19], as is the visual input to AlphaGo which has beaten one of the best human GO players [34]. At vision challenges, CNNs are constantly winning, for example at the large ImageNet competition [23, 16], but also almost all Kaggle vision challenges, such as the âDiabetic Retinopathyâ and the âRight Whaleâ challenges [8, 14].
However, looking at Kaggle challenges that are not related to vision or sequential tasks, gradient boosting, random forests, or support vector machines (SVMs) are winning most of the competitions. Deep Learning is notably absent, and for the few cases where FNNs won, they are shallow. For example, the HIGGS challenge, the Merck Molecular Activity challenge, and the Tox21 Data challenge were all won by FNNs with at most four hidden layers. Surprisingly, it is hard to ï¬nd success stories with FNNs that have many hidden layers, though they would allow for different levels of abstract representations of the input [3].
To robustly train very deep CNNs, batch normalization evolved into a standard to normalize neuron activations to zero mean and unit variance [20]. Layer normalization [2] also ensures zero mean and unit variance, while weight normalization [32] ensures zero mean and unit variance if in the previous layer the activations have zero mean and unit variance. However, training with normalization techniques is perturbed by stochastic gradient descent (SGD), stochastic regularization (like dropout), and the estimation of the normalization parameters. Both RNNs and CNNs can stabilize learning via weight sharing, therefore they are less prone to these perturbations. In contrast, FNNs trained with normalization techniques suffer from these perturbations and have high variance in the training error (see Figure 1). This high variance hinders learning and slows it down. Furthermore, strong regularization, such as dropout, is not possible as it would further increase the variance which in turn would lead to divergence of the learning process. We believe that this sensitivity to perturbations is the reason that FNNs are less successful than RNNs and CNNs.
Self-normalizing neural networks (SNNs) are robust to perturbations and do not have high variance in their training errors (see Figure 1). SNNs push neuron activations to zero mean and unit variance thereby leading to the same effect as batch normalization, which enables to robustly learn many layers. SNNs are based on scaled exponential linear units âSELUsâ which induce self-normalizing properties like variance stabilization which in turn avoids exploding and vanishing gradients.
# Self-normalizing Neural Networks (SNNs)
Normalization and SNNs. For a neural network with activation function f, we consider two consecutive layers that are connected by a weight matrix W. Since the input to a neural network is a random variable, the activations x in the lower layer, the network inputs z = Wx, and the activations y = f(z) in the higher layer are random variables as well. We assume that all activations x_i of the lower layer have mean µ := E(x_i) and variance ν := Var(x_i). An activation y in the higher layer has mean ˜µ := E(y) and variance ˜ν := Var(y). Here E(.) denotes the expectation and Var(.) the variance of a random variable. A single activation y = f(z) has net input z = w^T x. For n units with activation x_i, 1 ≤ i ≤ n in the lower layer, we define n times the mean of the weight vector w ∈ R^n as ω := Σ_{i=1}^n w_i and n times the second moment as τ := Σ_{i=1}^n w_i^2. We consider the mapping g that maps mean and variance of the activations from one layer to mean and variance of the activations in the next layer

g : (µ, ν) ↦ (˜µ, ˜ν) ,   (˜µ, ˜ν) = g(µ, ν) .        (1)

Normalization techniques like batch, layer, or weight normalization ensure a mapping g that keeps (µ, ν) and (˜µ, ˜ν) close to predefined values, typically (0, 1). Definition 1 (Self-normalizing neural net). A neural network is self-normalizing if it possesses a mapping g : Ω ↦ Ω for each activation y that maps mean and variance from one layer to the next
Figure 1: The left panel and the right panel show the training error (y-axis) for feed-forward neural networks (FNNs) with batch normalization (BatchNorm) and self-normalizing networks (SNN) across update steps (x-axis) on the MNIST dataset and the CIFAR10 dataset, respectively. We tested networks with 8, 16, and 32 layers and learning rate 1e-5. FNNs with batch normalization exhibit high variance due to perturbations. In contrast, SNNs do not suffer from high variance as they are more robust to perturbations and learn faster.
and has a stable and attracting fixed point depending on (ω, τ) in Ω. Furthermore, the mean and the variance remain in the domain Ω, that is g(Ω) ⊆ Ω, where Ω = {(µ, ν) | µ ∈ [µmin, µmax], ν ∈ [νmin, νmax]}. When iteratively applying the mapping g, each point within Ω converges to this fixed point.
Therefore, we consider activations of a neural network to be normalized, if both their mean and their variance across samples are within predeï¬ned intervals. If mean and variance of x are already within these intervals, then also mean and variance of y remain in these intervals, i.e., the normalization is transitive across layers. Within these intervals, the mean and variance both converge to a ï¬xed point if the mapping g is applied iteratively.
Therefore, SNNs keep normalization of activations when propagating them through layers of the network. The normalization effect is observed across layers of a network: in each layer the activations are getting closer to the fixed point. The normalization effect can also be observed for two fixed layers across learning steps: perturbations of lower layer activations or weights are damped in the higher layer by drawing the activations towards the fixed point. If for all y in the higher layer, ω and τ of the corresponding weight vector are the same, then the fixed points are also the same. In this case we have a unique fixed point for all activations y. Otherwise, in the more general case, ω and τ differ for different y but the mean activations are drawn into [µmin, µmax] and the variances are drawn into [νmin, νmax].
Constructing Self-Normalizing Neural Networks. We aim at constructing self-normalizing neu- ral networks by adjusting the properties of the function g. Only two design choices are available for the function g: (1) the activation function and (2) the initialization of the weights.
For the activation function, we propose âscaled exponential linear unitsâ (SELUs) to render a FNN as self-normalizing. The SELU activation function is given by
selu(x) = λ x             if x > 0
        = λ (αe^x − α)    if x ≤ 0        (2)
SELUs allow to construct a mapping g with properties that lead to SNNs. SNNs cannot be derived with (scaled) rectiï¬ed linear units (ReLUs), sigmoid units, tanh units, and leaky ReLUs. The activation function is required to have (1) negative and positive values for controlling the mean, (2) saturation regions (derivatives approaching zero) to dampen the variance if it is too large in the lower layer, (3) a slope larger than one to increase the variance if it is too small in the lower layer, (4) a continuous curve. The latter ensures a ï¬xed point, where variance damping is equalized by variance increasing. We met these properties of the activation function by multiplying the exponential linear unit (ELU) [7] with λ > 1 to ensure a slope larger than one for positive net inputs.
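A minimal NumPy sketch of the SELU of Eq. (2), using the fixed-point constants α01 and λ01 derived below:

```python
import numpy as np

ALPHA_01 = 1.6732632423543772   # alpha_01 from Eq. (14)
LAMBDA_01 = 1.0507009873554805  # lambda_01 from Eq. (14)

def selu(x, alpha=ALPHA_01, lam=LAMBDA_01):
    # Eq. (2): lambda * x for x > 0, lambda * (alpha * e^x - alpha) otherwise.
    return lam * np.where(x > 0, x, alpha * np.expm1(np.minimum(x, 0.0)))
```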
For the weight initialization, we propose ω = 0 and τ = 1 for all units in the higher layer. The next paragraphs will show the advantages of this initialization. Of course, during learning these assumptions on the weight vector will be violated. However, we can prove the self-normalizing property even for weight vectors that are not normalized, therefore, the self-normalizing property can be kept during learning and weight changes.
Deriving the Mean and Variance Mapping Function g. We assume that the x_i are independent from each other but share the same mean µ and variance ν. Of course, the independence assumption is not fulfilled in general. We will elaborate on the independence assumption below. The network input z in the higher layer is z = w^T x, for which we can infer the following moments E(z) = Σ_{i=1}^n w_i E(x_i) = µω and Var(z) = Var(Σ_{i=1}^n w_i x_i) = ντ, where we used the independence of the x_i. The net input z is a weighted sum of independent, but not necessarily identically distributed variables x_i, for which the central limit theorem (CLT) states that z approaches a normal distribution: z ∼ N(µω, √(ντ)) with density p_N(z; µω, √(ντ)). According to the CLT, the larger n, the closer is z to a normal distribution. For Deep Learning, broad layers with hundreds of neurons x_i are common. Therefore the assumption that z is normally distributed is met well for most currently used neural networks. The function g maps the mean and variance of activations in the lower layer to the mean ˜µ = E(y) and variance ˜ν = Var(y) of the activations y in the next layer:
g : (µ, ν) ↦ (˜µ, ˜ν):

˜µ(µ, ω, ν, τ) = ∫_{−∞}^{∞} selu(z) p_N(z; µω, √(ντ)) dz        (3)

˜ν(µ, ω, ν, τ) = ∫_{−∞}^{∞} selu(z)^2 p_N(z; µω, √(ντ)) dz − (˜µ)^2 .
These integrals can be analytically computed and lead to the following mappings of the moments:
˜µ = (λ/2) ( (µω) erf( µω / √(2ντ) ) + α e^{µω + ντ/2} erfc( (µω + ντ) / √(2ντ) ) − α erfc( µω / √(2ντ) ) + √(2/π) √(ντ) e^{−(µω)^2 / (2ντ)} + µω )        (4)

˜ν = (λ^2/2) ( ((µω)^2 + ντ) ( 2 − erfc( µω / √(2ντ) ) ) + α^2 ( −2 e^{µω + ντ/2} erfc( (µω + ντ) / √(2ντ) ) + e^{2(µω + ντ)} erfc( (µω + 2ντ) / √(2ντ) ) + erfc( µω / √(2ντ) ) ) + √(2/π) (µω) √(ντ) e^{−(µω)^2 / (2ντ)} ) − (˜µ)^2        (5)
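As a sanity check on these (reconstructed) closed forms, the mapping g can also be evaluated directly from the integral definition in Eq. (3) by numerical quadrature; the sketch below, assuming NumPy/SciPy, confirms that (µ, ν) = (0, 1) is mapped to itself for ω = 0 and τ = 1:

```python
import numpy as np
from scipy import integrate
from scipy.stats import norm

ALPHA_01, LAMBDA_01 = 1.6732632423543772, 1.0507009873554805

def selu(z):
    return LAMBDA_01 * (z if z > 0 else ALPHA_01 * np.expm1(z))

def g(mu, nu, omega=0.0, tau=1.0):
    # Mean and variance of selu(z) for z ~ N(mu*omega, nu*tau), i.e. Eq. (3).
    m, s = mu * omega, np.sqrt(nu * tau)
    ev = lambda f: integrate.quad(lambda z: f(z) * norm.pdf(z, m, s),
                                  -np.inf, np.inf)[0]
    mu_new = ev(selu)
    return mu_new, ev(lambda z: selu(z) ** 2) - mu_new ** 2

print(g(0.0, 1.0))   # approximately (0, 1): the fixed point
print(g(0.0, 1.4))   # the variance is pulled back towards 1
```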
Stable and Attracting Fixed Point (0,1) for Normalized Weights. We assume a normalized weight vector w with ω = 0 and τ = 1. Given a fixed point (µ, ν), we can solve equations Eq. (4) and Eq. (5) for α and λ. We chose the fixed point (µ, ν) = (0, 1), which is typical for activation normalization. We obtain the fixed point equations ˜µ = µ = 0 and ˜ν = ν = 1 that we solve for α and λ and obtain the solutions α01 ≈ 1.6733 and λ01 ≈ 1.0507, where the subscript 01 indicates that these are the parameters for fixed point (0, 1). The analytical expressions for α01 and λ01 are given in Eq. (14). We are interested whether the fixed point (µ, ν) = (0, 1) is stable and attracting. If the Jacobian of g has a norm smaller than 1 at the fixed point, then g is a contraction mapping and the fixed point is stable. The (2×2)-Jacobian J(µ, ν) of g : (µ, ν) ↦ (˜µ, ˜ν) evaluated at the fixed point (0, 1) with α01 and λ01 is
J(µ, ν) = ( ∂˜µ(µ, ω, ν, τ)/∂µ   ∂˜µ(µ, ω, ν, τ)/∂ν
            ∂˜ν(µ, ω, ν, τ)/∂µ   ∂˜ν(µ, ω, ν, τ)/∂ν ) ,      J(0, 1) = ( 0.0   0.088834
                                                                          0.0   0.782648 )        (6)
The spectral norm of J (0, 1) (its largest singular value) is 0.7877 < 1. That means g is a contraction mapping around the ï¬xed point (0, 1) (the mapping is depicted in Figure 2). Therefore, (0, 1) is a stable ï¬xed point of the mapping g.
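The contraction property can likewise be checked numerically: approximating the Jacobian of g at (0, 1) by central finite differences of the quadrature-based mapping gives a spectral norm of about 0.78 (a sketch under the same NumPy/SciPy assumptions as above, redefining g so the snippet is self-contained):

```python
import numpy as np
from scipy import integrate
from scipy.stats import norm

ALPHA_01, LAMBDA_01 = 1.6732632423543772, 1.0507009873554805
selu = lambda z: LAMBDA_01 * (z if z > 0 else ALPHA_01 * np.expm1(z))

def g(p, omega=0.0, tau=1.0):
    mu, nu = p
    m, s = mu * omega, np.sqrt(nu * tau)
    ev = lambda f: integrate.quad(lambda z: f(z) * norm.pdf(z, m, s),
                                  -np.inf, np.inf)[0]
    mu_new = ev(selu)
    return np.array([mu_new, ev(lambda z: selu(z) ** 2) - mu_new ** 2])

def numerical_jacobian(p, eps=1e-5):
    # Central finite differences of the mapping g around the point p.
    J = np.zeros((2, 2))
    for j in range(2):
        d = np.zeros(2); d[j] = eps
        J[:, j] = (g(p + d) - g(p - d)) / (2 * eps)
    return J

J = numerical_jacobian(np.array([0.0, 1.0]))
print(np.linalg.norm(J, 2))   # largest singular value, below 1 (about 0.78)
```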
Figure 2: For ω = 0 and τ = 1, the mapping g of mean µ (x-axis) and variance ν (y-axis) to the next layer's mean ˜µ and variance ˜ν is depicted. Arrows show in which direction (µ, ν) is mapped by g : (µ, ν) ↦ (˜µ, ˜ν). The fixed point of the mapping g is (0, 1).
Stable and Attracting Fixed Points for Unnormalized Weights. A normalized weight vector w cannot be ensured during learning. For SELU parameters α = α01 and λ = λ01, we show in the next theorem that if (ω, τ) is close to (0, 1), then g still has an attracting and stable fixed point that is close to (0, 1). Thus, in the general case there still exists a stable fixed point which, however, depends on (ω, τ). If we restrict (µ, ν, ω, τ) to certain intervals, then we can show that (µ, ν) is mapped to the respective intervals. Next we present the central theorem of this paper, from which follows that SELU networks are self-normalizing under mild conditions on the weights. Theorem 1 (Stable and Attracting Fixed Points). We assume α = α01 and λ = λ01. We restrict the range of the variables to the following intervals µ ∈ [−0.1, 0.1], ω ∈ [−0.1, 0.1], ν ∈ [0.8, 1.5], and τ ∈ [0.95, 1.1], that define the functions' domain Ω. For ω = 0 and τ = 1, the mapping Eq. (3) has the stable fixed point (µ, ν) = (0, 1), whereas for other ω and τ the mapping Eq. (3) has a stable and attracting fixed point depending on (ω, τ) in the (µ, ν)-domain: µ ∈ [−0.03106, 0.06773] and ν ∈ [0.80009, 1.48617]. All points within the (µ, ν)-domain converge when iteratively applying the mapping Eq. (3) to this fixed point.
Proof. We provide a proof sketch (see detailed proof in Appendix Section A3). With the Banach fixed point theorem we show that there exists a unique attracting and stable fixed point. To this end, we have to prove that a) g is a contraction mapping and b) that the mapping stays in the domain, that is, g(Ω) ⊆ Ω. The spectral norm of the Jacobian of g can be obtained via an explicit formula for the largest singular value for a 2 × 2 matrix. g is a contraction mapping if its spectral norm is smaller than 1. We perform a computer-assisted proof to evaluate the largest singular value on a fine grid and ensure the precision of the computer evaluation by an error propagation analysis of the implemented algorithms on the according hardware. Singular values between grid points are upper bounded by the mean value theorem. To this end, we bound the derivatives of the formula for the largest singular value with respect to ω, τ, µ, ν. Then we apply the mean value theorem to pairs of points, where one is on the grid and the other is off the grid. This shows that for all values of ω, τ, µ, ν in the domain Ω, the spectral norm of g is smaller than one. Therefore, g is a contraction mapping on the domain Ω. Finally, we show that the mapping g stays in the domain Ω by deriving bounds on ˜µ and ˜ν. Hence, the Banach fixed-point theorem holds and there exists a unique fixed point in Ω that is attained.
Consequently, feed-forward neural networks with many units in each layer and with the SELU activation function are self-normalizing (see deï¬nition 1), which readily follows from Theorem 1. To give an intuition, the main property of SELUs is that they damp the variance for negative net inputs and increase the variance for positive net inputs. The variance damping is stronger if net inputs are further away from zero while the variance increase is stronger if net inputs are close to zero. Thus, for large variance of the activations in the lower layer the damping effect is dominant and the variance decreases in the higher layer. Vice versa, for small variance the variance increase is dominant and the variance increases in the higher layer.
However, we cannot guarantee that mean and variance remain in the domain Ω. Therefore, we next treat the case where (µ, ν) are outside Ω. It is especially crucial to consider ν because this variable has much stronger influence than µ. Mapping ν across layers to a high value corresponds to an
exploding gradient, since the Jacobian of the activation of high layers with respect to activations in lower layers has large singular values. Analogously, mapping ν across layers to a low value corresponds to a vanishing gradient. Bounding the mapping of ν from above and below would avoid both exploding and vanishing gradients. Theorem 2 states that the variance of neuron activations of SNNs is bounded from above, and therefore ensures that SNNs learn robustly and do not suffer from exploding gradients. Theorem 2 (Decreasing ν). For λ = λ01, α = α01 and the domain Ω+: −1 ≤ µ ≤ 1, −0.1 ≤ ω ≤ 0.1, 3 ≤ ν ≤ 16, and 0.8 ≤ τ ≤ 1.25, we have for the mapping of the variance ˜ν(µ, ω, ν, τ, λ, α) given in Eq. (5): ˜ν(µ, ω, ν, τ, λ01, α01) < ν.
The proof can be found in the Appendix Section A3. Thus, when mapped across many layers, the variance in the interval [3, 16] is mapped to a value below 3. Consequently, all fixed points (µ, ν) of the mapping g (Eq. (3)) have ν < 3. Analogously, Theorem 3 states that the variance of neuron activations of SNNs is bounded from below, and therefore ensures that SNNs do not suffer from vanishing gradients. Theorem 3 (Increasing ν). We consider λ = λ01, α = α01 and the domain Ω−: −0.1 ≤ µ ≤ 0.1, and −0.1 ≤ ω ≤ 0.1. For the domain 0.02 ≤ ν ≤ 0.16 and 0.8 ≤ τ ≤ 1.25 as well as for the domain 0.02 ≤ ν ≤ 0.24 and 0.9 ≤ τ ≤ 1.25, the mapping of the variance ˜ν(µ, ω, ν, τ, λ, α) given in Eq. (5) increases: ˜ν(µ, ω, ν, τ, λ01, α01) > ν.
The proof can be found in the Appendix Section A3. All fixed points (µ, ν) of the mapping g (Eq. (3)) ensure for 0.8 ≤ τ that ˜ν > 0.16 and for 0.9 ≤ τ that ˜ν > 0.24. Consequently, the variance mapping Eq. (5) ensures a lower bound on the variance ν. Therefore SELU networks control the variance of the activations and push it into an interval, whereafter the mean and variance move toward the fixed point. Thus, SELU networks are steadily normalizing the variance and subsequently normalizing the mean, too. In all experiments, we observed that self-normalizing neural networks push the mean and variance of activations into the domain Ω.
Initialization. Since SNNs have a fixed point at zero mean and unit variance for normalized weights ω = Σ_{i=1}^n w_i = 0 and τ = Σ_{i=1}^n w_i^2 = 1 (see above), we initialize SNNs such that these constraints are fulfilled in expectation. We draw the weights from a Gaussian distribution with E(w_i) = 0 and variance Var(w_i) = 1/n. Uniform and truncated Gaussian distributions with these moments led to networks with similar behavior. The "MSRA initialization" is similar since it uses zero mean and variance 2/n to initialize the weights [17]. The additional factor 2 counters the effect of rectified linear units.
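A minimal sketch of this initialization for a dense layer (the function name is illustrative):

```python
import numpy as np

def snn_init(n_in, n_out, seed=0):
    # Draw weights with zero mean and variance 1/n_in, so that for each unit
    # omega = sum_i w_i is 0 and tau = sum_i w_i^2 is 1 in expectation.
    rng = np.random.default_rng(seed)
    return rng.normal(0.0, np.sqrt(1.0 / n_in), size=(n_in, n_out))

W = snn_init(784, 256)
print(W.sum(axis=0).mean(), (W ** 2).sum(axis=0).mean())  # near 0 and near 1
```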
New Dropout Technique. Standard dropout randomly sets an activation x to zero with probability 1 − q for 0 < q < 1. In order to preserve the mean, the activations are scaled by 1/q during training. If x has mean E(x) = µ and variance Var(x) = ν, and the dropout variable d follows a binomial distribution B(1, q), then the mean E(1/q · x d) = µ is kept. Dropout fits well to rectified linear units, since zero is in the low variance region and corresponds to the default value. For scaled exponential linear units, the default and low variance value is lim_{x→−∞} selu(x) = −λα = α′. Therefore, we propose "alpha dropout", that randomly sets inputs to α′. The new mean and new variance is E(x d + α′(1 − d)) = qµ + (1 − q)α′, and Var(x d + α′(1 − d)) = q((1 − q)(α′ − µ)^2 + ν). We aim at keeping mean and variance to their original values after "alpha dropout", in order to ensure the self-normalizing property even for "alpha dropout". The affine transformation a(x d + α′(1 − d)) + b allows to determine parameters a and b such that mean and variance are kept to their values: E(a(x d + α′(1 − d)) + b) = µ and Var(a(x d + α′(1 − d)) + b) = ν. In contrast to dropout, a and b will depend on µ and ν, however our SNNs converge to activations with zero mean and unit variance. With µ = 0 and ν = 1, we obtain a = (q + α′^2 q(1 − q))^{−1/2} and b = −(q + α′^2 q(1 − q))^{−1/2} ((1 − q)α′). The parameters a and b only depend on the dropout rate 1 − q and the most negative activation α′. Empirically, we found that dropout rates 1 − q = 0.05 or 0.10 lead to models with good performance. "Alpha-dropout" fits well to scaled exponential linear units by randomly setting activations to the negative saturation value.
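A minimal training-time sketch of this scheme in NumPy (function name and RNG handling are illustrative), applying the affine correction derived above for µ = 0, ν = 1:

```python
import numpy as np

ALPHA_01, LAMBDA_01 = 1.6732632423543772, 1.0507009873554805
ALPHA_PRIME = -LAMBDA_01 * ALPHA_01   # saturation value, about -1.7581

def alpha_dropout(x, rate=0.05, rng=np.random.default_rng()):
    # Dropped units are set to alpha'; then an affine correction restores
    # zero mean and unit variance under the (mu, nu) = (0, 1) assumption.
    q = 1.0 - rate
    keep = rng.random(x.shape) < q
    y = np.where(keep, x, ALPHA_PRIME)
    a = (q + ALPHA_PRIME ** 2 * q * (1.0 - q)) ** -0.5
    b = -a * (1.0 - q) * ALPHA_PRIME
    return a * y + b
```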
Applicability of the central limit theorem and independence assumption. In the derivation of the mapping (Eq. (3)), we used the central limit theorem (CLT) to approximate the network inputs z = Σ_{i=1}^n w_i x_i with a normal distribution. We justified normality because network inputs represent a weighted sum of the inputs x_i, where for Deep Learning n is typically large. The Berry-Esseen theorem states that the convergence rate to normality is n^{−1/2} [22]. In the classical version of the CLT, the random variables have to be independent and identically distributed, which typically does not hold for neural networks. However, the Lyapunov CLT does not require the variable to be identically distributed anymore. Furthermore, even under weak dependence, sums of random variables converge in distribution to a Gaussian distribution [5].
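A small illustration of this argument (assuming NumPy/SciPy): the network input of a wide layer is close to Gaussian even when the unit activations themselves are not:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 512                                               # width of the lower layer
w = rng.normal(0.0, np.sqrt(1.0 / n), n)              # weights as initialized above
x = rng.uniform(-np.sqrt(3), np.sqrt(3), (50000, n))  # non-Gaussian, mean 0, var 1
z = x @ w                                             # network inputs across samples
print(stats.skew(z), stats.kurtosis(z))               # both near 0: roughly Gaussian
```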
# Experiments
We compare SNNs to other deep networks at different benchmarks. Hyperparameters such as number of layers (blocks), neurons per layer, learning rate, and dropout rate, are adjusted by grid-search for each dataset on a separate validation set (see Section A4). We compare the following FNN methods:
• "MSRAinit": FNNs without normalization and with ReLU activations and "Microsoft weight initialization" [17].
• "BatchNorm": FNNs with batch normalization [20].
• "LayerNorm": FNNs with layer normalization [2].
• "WeightNorm": FNNs with weight normalization [32].
• "Highway": Highway networks [35].
• "ResNet": Residual networks [16] adapted to FNNs using residual blocks with 2 or 3 layers with rectangular or diavolo shape.
• "SNNs": Self normalizing networks with SELUs with α = α01 and λ = λ01 and the proposed dropout technique and initialization strategy.
121 UCI Machine Learning Repository datasets. The benchmark comprises 121 classiï¬cation datasets from the UCI Machine Learning repository [10] from diverse application areas, such as physics, geology, or biology. The size of the datasets ranges between 10 and 130, 000 data points and the number of features from 4 to 250. In abovementioned work [10], there were methodological mistakes [37] which we avoided here. Each compared FNN method was optimized with respect to its architecture and hyperparameters on a validation set that was then removed from the subsequent analysis. The selected hyperparameters served to evaluate the methods in terms of accuracy on the pre-deï¬ned test sets (details on the hyperparameter selection are given in Section A4). The accuracies are reported in the Table A11. We ranked the methods by their accuracy for each prediction task and compared their average ranks. SNNs signiï¬cantly outperform all competing networks in pairwise comparisons (paired Wilcoxon test across datasets) as reported in Table 1 (left panel).
We further included 17 machine learning methods representing diverse method groups [10] in the comparison and the grouped the data sets into âsmallâ and âlargeâ data sets (for details see Section A4). On 75 small datasets with less than 1000 data points, random forests and SVMs outperform SNNs and other FNNs. On 46 larger datasets with at least 1000 data points, SNNs show the highest performance followed by SVMs and random forests (see right panel of Table 1, for complete results see Tables A12 and A12). Overall, SNNs have outperformed state of the art machine learning methods on UCI datasets with more than 1,000 data points.
Typically, hyperparameter selection chose SNN architectures that were much deeper than the selected architectures of other FNNs, with an average depth of 10.8 layers, compared to average depths of 6.0 for BatchNorm, 3.8 WeightNorm, 7.0 LayerNorm, 5.9 Highway, and 7.1 for MSRAinit networks. For ResNet, the average number of blocks was 6.35. SNNs with many more than 4 layers often provide the best predictive accuracies across all neural networks.
Drug discovery: The Tox21 challenge dataset. The Tox21 challenge dataset comprises about 12,000 chemical compounds whose twelve toxic effects have to be predicted based on their chemical
Table 1: Left: Comparison of seven FNNs on 121 UCI tasks. We consider the average rank difference to rank 4, which is the average rank of seven methods with random predictions. The ï¬rst column gives the method, the second the average rank difference, and the last the p-value of a paired Wilcoxon test whether the difference to the best performing method is signiï¬cant. SNNs signiï¬cantly outperform all other methods. Right: Comparison of 24 machine learning methods (ML) on the UCI datasets with more than 1000 data points. The ï¬rst column gives the method, the second the average rank difference to rank 12.5, and the last the p-value of a paired Wilcoxon test whether the difference to the best performing method is signiï¬cant. Methods that were signiï¬cantly worse than the best method are marked with â*â. The full tables can be found in Table A11, Table A12 and Table A13. SNNs outperform all competing methods.
FNN method comparison:
Method | avg. rank diff. | p-value
SNN | -0.756 |
MSRAinit | -0.240* | 2.7e-02
LayerNorm | -0.198* | 1.5e-02
Highway | 0.021* | 1.9e-03
ResNet | 0.273* | 5.4e-04
WeightNorm | 0.397* | 7.8e-07
BatchNorm | 0.504* | 3.5e-06

ML method comparison:
Method | avg. rank diff. | p-value
SNN | -6.7 |
SVM | -6.4 | 5.8e-01
RandomForest | -5.9 | 2.1e-01
MSRAinit | -5.4* | 4.5e-03
LayerNorm | -5.3 | 7.1e-02
Highway | -4.6* | 1.7e-03
... | ... | ...
structure. We used the validation sets of the challenge winners for hyperparameter selection (see Section A4) and the challenge test set for performance comparison. We repeated the whole evaluation procedure 5 times to obtain error bars. The results in terms of average AUC are given in Table 2. In 2015, the challenge organized by the US NIH was won by an ensemble of shallow ReLU FNNs which achieved an AUC of 0.846 [28]. Besides FNNs, this ensemble also contained random forests and SVMs. Single SNNs came close with an AUC of 0.845±0.003. The best performing SNNs have 8 layers, compared to the runner-ups ReLU networks with layer normalization with 2 and 3 layers. Also batchnorm and weightnorm networks, typically perform best with shallow networks of 2 to 4 layers (Table 2). The deeper the networks, the larger the difference in performance between SNNs and other methods (see columns 5â8 of Table 2). The best performing method is an SNN with 8 layers.
Table 2: Comparison of FNNs at the Tox21 challenge dataset in terms of AUC. The rows represent different methods and the columns different network depth and for ResNets the number of residual blocks (ânaâ: 32 blocks were omitted due to computational constraints). The deeper the networks, the more prominent is the advantage of SNNs. The best networks are SNNs with 8 layers.
method | 2 | 3 | 4 | 6 | 8 | 16 | 32 (#layers / #blocks)
SNN | 83.7 ± 0.3 | 84.4 ± 0.5 | 84.2 ± 0.4 | 83.9 ± 0.5 | 84.5 ± 0.2 | 83.5 ± 0.5 | 82.5 ± 0.7
Batchnorm | 80.0 ± 0.5 | 79.8 ± 1.6 | 77.2 ± 1.1 | 77.0 ± 1.7 | 75.0 ± 0.9 | 73.7 ± 2.0 | 76.0 ± 1.1
WeightNorm | 83.7 ± 0.8 | 82.9 ± 0.8 | 82.2 ± 0.9 | 82.5 ± 0.6 | 81.9 ± 1.2 | 78.1 ± 1.3 | 56.6 ± 2.6
LayerNorm | 84.3 ± 0.3 | 84.3 ± 0.5 | 84.0 ± 0.2 | 82.5 ± 0.8 | 80.9 ± 1.8 | 78.7 ± 2.3 | 78.8 ± 0.8
Highway | 83.3 ± 0.9 | 83.0 ± 0.5 | 82.6 ± 0.9 | 82.4 ± 0.8 | 80.3 ± 1.4 | 80.3 ± 2.4 | 79.6 ± 0.8
MSRAinit | 82.7 ± 0.4 | 81.6 ± 0.9 | 81.1 ± 1.7 | 80.6 ± 0.6 | 80.9 ± 1.1 | 80.2 ± 1.1 | 80.4 ± 1.9
ResNet | 82.2 ± 1.1 | 80.0 ± 2.0 | 80.5 ± 1.2 | 81.2 ± 0.7 | 81.8 ± 0.6 | 81.2 ± 0.6 | na
Astronomy: Prediction of pulsars in the HTRU2 dataset. For a decade, machine learning methods have been used to identify pulsars in radio wave signals [27]. Recently, the High Time Resolution Universe Survey (HTRU2) dataset has been released with 1,639 real pulsars and 16,259 spurious signals. Currently, the highest AUC value of a 10-fold cross-validation is 0.976 which has been achieved by Naive Bayes classifiers followed by decision tree C4.5 with 0.949 and SVMs with 0.929. We used eight features constructed by the PulsarFeatureLab as used previously [27]. We assessed the performance of FNNs using 10-fold nested cross-validation, where the hyperparameters were selected in the inner loop on a validation set (for details on the hyperparameter selection see
Section A4). Table 3 reports the results in terms of AUC. SNNs outperform all other methods and have pushed the state-of-the-art to an AUC of 0.98.
Table 3: Comparison of FNNs and reference methods at HTRU2 in terms of AUC. The ï¬rst, fourth and seventh column give the method, the second, ï¬fth and eight column the AUC averaged over 10 cross-validation folds, and the third and sixth column the p-value of a paired Wilcoxon test of the AUCs against the best performing method across the 10 folds. FNNs achieve better results than Naive Bayes (NB), C4.5, and SVM. SNNs exhibit the best performance and set a new record.
FNN methods:
method | AUC | p-value
SNN | 0.9803 ± 0.010 |
MSRAinit | 0.9791 ± 0.010 | 3.5e-01
WeightNorm | 0.9786* ± 0.010 | 2.4e-02
Highway | 0.9766* ± 0.009 | 9.8e-03
LayerNorm | 0.9762* ± 0.011 | 1.4e-02
BatchNorm | 0.9760 ± 0.013 | 6.5e-02
ResNet | 0.9753* ± 0.010 | 6.8e-03

Reference methods:
method | AUC
NB | 0.976
C4.5 | 0.946
SVM | 0.929
# Conclusion
We have introduced self-normalizing neural networks for which we have proved that neuron ac- tivations are pushed towards zero mean and unit variance when propagated through the network. Additionally, for activations not close to unit variance, we have proved an upper and lower bound on the variance mapping. Consequently, SNNs do not face vanishing and exploding gradient prob- lems. Therefore, SNNs work well for architectures with many layers, allowed us to introduce a novel regularization scheme, and learn very robustly. On 121 UCI benchmark datasets, SNNs have outperformed other FNNs with and without normalization techniques, such as batch, layer, and weight normalization, or specialized architectures, such as Highway or Residual networks. SNNs also yielded the best results on drug discovery and astronomy tasks. The best performing SNN architectures are typically very deep in contrast to other FNNs.
# Acknowledgments
This work was supported by IWT research grant IWT150865 (Exaptation), H2020 project grant 671555 (ExCAPE), grant IWT135122 (ChemBioBridge), Zalando SE with Research Agreement 01/2016, Audi.JKU Deep Learning Center, Audi Electronic Venture GmbH, and the NVIDIA Corporation.
# References
The references are provided in Section A7.
# Appendix
# Contents
# A1 Background
# A2 Theorems
A2.1 Theorem 1: Stable and Attracting Fixed Points Close to (0,1)
A2.2 Theorem 2: Decreasing Variance from Above
A2.3 Theorem 3: Increasing Variance from Below
# A3 Proofs of the Theorems
A3.4.1 Lemmata for proofing Theorem 1 (part 1): Jacobian norm smaller than one
A3.4.2 Lemmata for proofing Theorem 1 (part 2): Mapping within domain
A3.4.3 Lemmata for proofing Theorem 2: The variance is contracting
A3.4.4 Lemmata for proofing Theorem 3: The variance is expanding
A3.4.5 Computer-assisted proof details for main Lemma 12 in Section A3.4.1
A3.4.6 Intermediate Lemmata and Proofs
# A4 Additional information on experiments
# A5 Other fixed points
# A6 Bounds determined by numerical methods
# A7 References
# List of figures
# List of tables
# Brief index
This appendix is organized as follows: the first section sets the background, definitions, and formulations. The main theorems are presented in the next section. The following section is devoted to the proofs of these theorems. The next section reports additional results and details on the performed computational experiments, such as hyperparameter selection. The last section shows that our theoretical bounds can be confirmed by numerical methods as a sanity check.

The proof of theorem 1 is based on Banach's fixed point theorem for which we require (1) a contraction mapping, which is proved in Subsection A3.4.1, and (2) that the mapping stays within its domain, which is proved in Subsection A3.4.2. For part (1), the proof relies on the main Lemma 12, which is a computer-assisted proof, and can be found in Subsection A3.4.1. The validity of the computer-assisted proof is shown in Subsection A3.4.5 by error analysis and the precision of the functions' implementation. The last Subsection A3.4.6 compiles various lemmata with intermediate results that support the proofs of the main lemmata and theorems.
# A1 Background
We consider a neural network with activation function f and two consecutive layers that are connected by weight matrix W. Since samples that serve as input to the neural network are chosen according to a distribution, the activations x in the lower layer, the network inputs z = Wx, and activations y = f(z) in the higher layer are all random variables. We assume that all units x_i in the lower layer have mean activation µ := E(x_i) and variance of the activation ν := Var(x_i) and a unit y in the higher layer has mean activation ˜µ := E(y) and variance ˜ν := Var(y). Here E(.) denotes the expectation and Var(.) the variance of a random variable. For activation of unit y, we have net input z = w^T x and the scaled exponential linear unit (SELU) activation y = selu(z), with
selu(x) = λ x             if x > 0
        = λ (αe^x − α)    if x ≤ 0        (7)
For n units x_i, 1 ≤ i ≤ n in the lower layer and the weight vector w ∈ R^n, we define n times the mean by ω := Σ_{i=1}^n w_i and n times the second moment by τ := Σ_{i=1}^n w_i^2. We define a mapping g from mean µ and variance ν of one layer to the mean ˜µ and variance ˜ν in the next layer:
g : (µ, ν) ↦ (˜µ, ˜ν).        (8)
For neural networks with scaled exponential linear units, the mean of the activations in the next layer is computed according to
˜µ = ∫_{−∞}^{0} λα (exp(z) − 1) p_Gauss(z; µω, √(ντ)) dz + ∫_{0}^{∞} λz p_Gauss(z; µω, √(ντ)) dz,        (9)
and the second moment of the activations in the next layer is computed according to
˜ξ = ∫_{−∞}^{0} λ^2 α^2 (exp(z) − 1)^2 p_Gauss(z; µω, √(ντ)) dz + ∫_{0}^{∞} λ^2 z^2 p_Gauss(z; µω, √(ντ)) dz.        (10)
Therefore, the expressions ˜µ and ˜ν have the following form:
˜µ(µ, ω, ν, τ, λ, α) = (λ/2) ( (µω) erf( µω / √(2ντ) ) + α e^{µω + ντ/2} erfc( (µω + ντ) / √(2ντ) ) − α erfc( µω / √(2ντ) ) + √(2/π) √(ντ) e^{−(µω)^2 / (2ντ)} + µω )        (11)

˜ν(µ, ω, ν, τ, λ, α) = ˜ξ(µ, ω, ν, τ, λ, α) − (˜µ(µ, ω, ν, τ, λ, α))^2        (12)

˜ξ(µ, ω, ν, τ, λ, α) = (λ^2/2) ( ((µω)^2 + ντ) ( 2 − erfc( µω / √(2ντ) ) ) + α^2 ( −2 e^{µω + ντ/2} erfc( (µω + ντ) / √(2ντ) ) + e^{2(µω + ντ)} erfc( (µω + 2ντ) / √(2ντ) ) + erfc( µω / √(2ντ) ) ) + √(2/π) (µω) √(ντ) e^{−(µω)^2 / (2ντ)} )        (13)
We solve equations Eq. (4) and Eq. (5) for fixed points ˜µ = µ and ˜ν = ν. For a normalized weight vector with ω = 0 and τ = 1 and the fixed point (µ, ν) = (0, 1), we can solve equations Eq. (4) and Eq. (5) for α and λ. We denote the solutions to fixed point (µ, ν) = (0, 1) by α01 and λ01.
α01 = −√(2/π) / ( erfc(1/√2) exp(1/2) − 1 ) ≈ 1.67326        (14)

λ01 = ( 1 − erfc(1/√2) √e ) √(2π) ( 2 erfc(√2) e^2 + π erfc(1/√2)^2 e − 2(2 + π) erfc(1/√2) √e + π + 2 )^{−1/2} ≈ 1.0507 .
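Evaluating these (reconstructed) closed-form expressions numerically reproduces the stated constants; a minimal sketch using only the Python standard library:

```python
import math

erfc, e, pi = math.erfc, math.e, math.pi

# alpha_01 from Eq. (14)
alpha_01 = -math.sqrt(2.0 / pi) / (erfc(1.0 / math.sqrt(2.0)) * math.exp(0.5) - 1.0)

# lambda_01 from Eq. (14)
num = (1.0 - erfc(1.0 / math.sqrt(2.0)) * math.sqrt(e)) * math.sqrt(2.0 * pi)
den = (2.0 * erfc(math.sqrt(2.0)) * e ** 2
       + pi * erfc(1.0 / math.sqrt(2.0)) ** 2 * e
       - 2.0 * (2.0 + pi) * erfc(1.0 / math.sqrt(2.0)) * math.sqrt(e)
       + pi + 2.0)
lambda_01 = num / math.sqrt(den)

print(alpha_01, lambda_01)   # approximately 1.67326 and 1.0507
```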
The parameters α01 and λ01 ensure
˜µ(0, 0, 1, 1, λ01, α01) = 0        ˜ν(0, 0, 1, 1, λ01, α01) = 1
Since we focus on the fixed point (µ, ν) = (0, 1), we assume throughout the analysis that α = α01 and λ = λ01. We consider the functions ˜µ(µ, ω, ν, τ, λ01, α01), ˜ν(µ, ω, ν, τ, λ01, α01), and ˜ξ(µ, ω, ν, τ, λ01, α01) on the domain Ω = {(µ, ω, ν, τ) | µ ∈ [µmin, µmax] = [−0.1, 0.1], ω ∈ [ωmin, ωmax] = [−0.1, 0.1], ν ∈ [νmin, νmax] = [0.8, 1.5], τ ∈ [τmin, τmax] = [0.95, 1.1]}.
Figure 2 visualizes the mapping g for ω = 0 and τ = 1 and α01 and λ01 at a few pre-selected points. It can be seen that (0, 1) is an attracting fixed point of the mapping g.
# A2 Theorems
# A2.1 Theorem 1: Stable and Attracting Fixed Points Close to (0,1)
Theorem 1 shows that the mapping g defined by Eq. (4) and Eq. (5) exhibits a stable and attracting fixed point close to zero mean and unit variance. Theorem 1 establishes the self-normalizing property of self-normalizing neural networks (SNNs). The stable and attracting fixed point leads to robust learning through many layers. Theorem 1 (Stable and Attracting Fixed Points). We assume α = α01 and λ = λ01. We restrict the range of the variables to the domain µ ∈ [−0.1, 0.1], ω ∈ [−0.1, 0.1], ν ∈ [0.8, 1.5], and τ ∈ [0.95, 1.1]. For ω = 0 and τ = 1, the mapping Eq. (4) and Eq. (5) has the stable fixed point (µ, ν) = (0, 1). For other ω and τ the mapping Eq. (4) and Eq. (5) has a stable and attracting fixed point depending on (ω, τ) in the (µ, ν)-domain: µ ∈ [−0.03106, 0.06773] and ν ∈ [0.80009, 1.48617]. All points within the (µ, ν)-domain converge when iteratively applying the mapping Eq. (4) and Eq. (5) to this fixed point.
# A2.2 Theorem 2: Decreasing Variance from Above
The next Theorem 2 states that the variance of unit activations does not explode through consecutive layers of self-normalizing networks. Even more, a large variance of unit activations decreases when propagated through the network. In particular this ensures that exploding gradients will never be observed. In contrast to the domain in the previous subsection, in which ν ∈ [0.8, 1.5], we now consider a domain in which the variance of the inputs is higher ν ∈ [3, 16] and even the range of the mean is increased µ ∈ [−1, 1]. We denote this new domain with the symbol Ω+ to indicate that the variance lies above the variance of the original domain Ω. In Ω+, we can show that the variance ˜ν in the next layer is always smaller than the original variance ν. Concretely, this theorem states that: Theorem 2 (Decreasing ν). For λ = λ01, α = α01, and the domain Ω+: −1 ≤ µ ≤ 1, −0.1 ≤ ω ≤ 0.1, 3 ≤ ν ≤ 16, and 0.8 ≤ τ ≤ 1.25 we have for the mapping of the variance ˜ν(µ, ω, ν, τ, λ, α) given in Eq. (5):
Ëν(µ, Ï, ν, Ï, λ01, α01) < ν . (15)
The variance decreases in [3, 16] and all ï¬xed points (µ, ν) of mapping Eq. (5) and Eq. (4) have ν < 3.
# A2.3 Theorem 3: Increasing Variance from Below
The next Theorem 3 states that the variance of unit activations does not vanish through consecutive layers of self-normalizing networks. Even more, a small variance of unit activations increases when
12
propagated through the network. In particular this ensures that vanishing gradients will never be observed. In contrast to the first domain, in which v ⬠(0.8, 1.5], we now consider two domains Q and Q3 in which the variance of the inputs is lower 0.05 < v < 0.16 and 0.05 < v < 0.24, and even the parameter 7 is different 0.9 < tT < 1.25 to the original 2. We denote this new domain with the symbol (2; to indicate that the variance lies below the variance of the original domain Q. In Q7 and (3 , we can show that the variance v in the next layer is always larger then the original variance v, which means that the variance does not vanish through consecutive layers of self-normalizing networks. Concretely, this theorem states that: Theorem 3 (Increasing v). We consider ) = oi, @ = agi and the two domains QY = {(u,w,v,T) | â01 < w < 0.1,-0.1 < w < 0.1,0.05 < v < 0.16,0.8 < rT < 1.25} and OF = {(j,w,v,7) | â0.1 <p <01,-0.1 <w <0.1,0.05 <v < 0.24,0.9 <7 < 1.25}.
The mapping of the variance Ëν(µ, Ï, ν, Ï, λ, α) given in Eq. (5) increases
Ëν(µ, Ï, ν, Ï, λ01, α01) > ν (16)
in both Qy and Qy. All fixed points (j1,v) of mapping Eq. (5) and Eq. (4) ensure for 0.8 < rT that vb > 0.16 and for 0.9 < T that V > 0.24. Consequently, the variance mapping Eq. (5) and Eq. ensures a lower bound on the variance v.
# A3 Proofs of the Theorems
# A3.1 Proof of Theorem 1
We have to show that the mapping g deï¬ned by Eq. (4) and Eq. (5) has a stable and attracting ï¬xed point close to (0, 1). To proof this statement and Theorem 1, we apply the Banach ï¬xed point theorem which requires (1) that g is a contraction mapping and (2) that g does not map outside the functionâs domain, concretely:
Theorem 4 (Banach Fixed Point Theorem). Let (X, d) be a non-empty complete metric space with a contraction mapping f : X â X. Then f has a unique ï¬xed-point xf â X with f (xf ) = xf . Every xf . sequence xn = f (xnâ1) with starting element x0 â X converges to the ï¬xed point: xn âââââ nââ
Contraction mappings are functions that map two points such that their distance is decreasing:
Definition 2 (Contraction mapping). A function f : X â X ona metric space X with distance d is a contraction mapping, if there is a0 < 5 < 1, such that for all points wu and v in X: d(f(u), f(v)) < dd(u, v).
To show that g is a contraction mapping in Q with distance ||.||2, we use the Mean Value Theorem for uvEed
IIg(t4) â g()ll2 < M ||u â vII2, (17)
in which M is an upper bound on the spectral norm the Jacobian H of g. The spectral norm is given by the largest singular value of the Jacobian of g. If the largest singular value of the Jacobian is smaller than 1, the mapping g of the mean and variance to the mean and variance in the next layer is contracting. We show that the largest singular value is smaller than 1 by evaluating the function for the singular value S(µ, Ï, ν, Ï, λ, α) on a grid. Then we use the Mean Value Theorem to bound the deviation of the function S between grid points. To this end, we have to bound the gradient of S with respect to (µ, Ï, ν, Ï ). If all function values plus gradient times the deltas (differences between grid points and evaluated points) is still smaller than 1, then we have proofed that the function is below 1 (Lemma 12). To show that the mapping does not map outside the functionâs domain, we derive bounds on the expressions for the mean and the variance (Lemma 13). Section A3.4.1 and Section A3.4.2 are concerned with the contraction mapping and the image of the function domain of g, respectively.
With the results that the largest singular value of the Jacobian is smaller than one (Lemma 12) and that the mapping stays in the domain ⦠(Lemma 13), we can prove Theorem 1. We ï¬rst recall Theorem 1:
13
Theorem (Stable and Attracting Fixed Points). We assume α = α01 and λ = λ01. We restrict the range of the variables to the domain µ â [â0.1, 0.1], Ï â [â0.1, 0.1], ν â [0.8, 1.5], and Ï â [0.95, 1.1]. For Ï = 0 and Ï = 1, the mapping Eq. (4) and Eq. (5) has the stable ï¬xed point (µ, ν) = (0, 1). For other Ï and Ï the mapping Eq. (4) and Eq. (5) has a stable and attracting ï¬xed point depending on (Ï, Ï ) in the (µ, ν)-domain: µ â [â0.03106, 0.06773] and ν â [0.80009, 1.48617]. All points within the (µ, ν)-domain converge when iteratively applying the mapping Eq. (4) and Eq. (5) to this ï¬xed point.
Proof. According to Lemma 12 the mapping g (Eq. (4) and Eq. (5)) is a contraction mapping in the given domain, that is, it has a Lipschitz constant smaller than one. We showed that (µ, ν) = (0, 1) is a ï¬xed point of the mapping for (Ï, Ï ) = (0, 1).
The domain is compact (bounded and closed), therefore it is a complete metric space. We further have to make sure the mapping g does not map outside its domain â¦. According to Lemma 13, the mapping maps into the domain µ â [â0.03106, 0.06773] and ν â [0.80009, 1.48617].
Now we can apply the Banach ï¬xed point theorem given in Theorem 4 from which the statement of the theorem follows.
# A3.2 Proof of Theorem 2
First we recall Theorem[2} Theorem (Decreasing v). For \ = \o1, @ = a1 and the domainOt++: -1<w<1,-0.1<w< 0.1, 3 <v < 16, and 0.8 < T < 1.25 we have for the mapping of the variance 0(u,w,V,T, , a) given in Eq.
Ëν(µ, Ï, ν, Ï, λ01, α01) < ν . (18)
The variance decreases in [3, 16] and all ï¬xed points (µ, ν) of mapping Eq. (5) and Eq. (4) have ν < 3.
Proof. We start to consider an even larger domain â-1 < wp < 1,-0.1 <w <0.1,15<V < 16, and 0.8 < 7 < 1.25. We prove facts for this domain and later restrict to3 <v<16,i.e. Q++, We consider the function g of the difference between the second moment ⬠in the next layer and the variance v in the lower layer:
g(M,W,U,T, Ao1, 01) = E(p,w, V,T,Ao1,401) â VY. (19) If we can show that g(j1,w,v,7, 01,01) < 0 for all (u,w,v,7) ⬠Q**, then we would obtain our desired result 7 < ⬠< v. The derivative with respect to v is according to Theorem|16]
â âν g(µ, Ï, ν, Ï, λ01, α01) = â âν Ëξ(µ, Ï, ν, Ï, λ01, α01) â 1 < 0 . (20)
Therefore g is strictly monotonically decreasing in ν. Since Ëξ is a function in Î½Ï (these variables only appear as this product), we have for x = νÏ
â âν Ëξ = â âx Ëξ âx âν = â âx Ëξ Ï (21)
and
â âÏ Ëξ = â âx Ëξ âx âÏ = â âx Ëξ ν . (22)
Therefore we have according to Theorem 16:
â âÏ Ëξ(µ, Ï, ν, Ï, λ01, α01) = ν Ï â âν Ëξ(µ, Ï, ν, Ï, λ01, α01) > 0 . (23)
Therefore
â âÏ g(µ, Ï, ν, Ï, λ01, α01) = â âÏ Ëξ(µ, Ï, ν, Ï, λ01, α01) > 0 . (24)
14
Consequently, g is strictly monotonically increasing in Ï . Now we consider the derivative with respect to µ and Ï. We start with â âµ
oz on E(y1,w,V,7, \, 0) (25) p wet + ur Mw ( a? (âe4#â*+*) erfe (â*) + (eet et (Cares . + 2Qvr [uw a7 ePHet2V7 orfc (â= + juw ( 2 â erfc +2 rem |). V2VuT V2VvT
(25)
We consider the sub-function
2 2 ive (« (oe) erfc (â*) _ (a) erfc (â*)) : (26) VUT VUT
We set x = Î½Ï and y = ÂµÏ and obtain
x 2 x aty \? Qn + y2ve- a 2(o( We ) erfc (<4) â ( %) erfc (4) : (27)
The derivative to this sub-function with respect to y is
Ea y a+y)? a+ a? (= ae (2a + y) erfe (234) eS (x ) erfe (#4) ) _ 08) x (Gx 2 (ety)? wty 302 JE e- ae (Qa-+y) erfe(2 2ety) _e oe (ety) erfe( 34) > 0.
# x
The inequality follows from Lemma 24, which states that zez2 erfc(z) is monotonically increasing in z. Therefore the sub-function is increasing in y. The derivative to this sub-function with respect to x is
1 2 { @ztu)? 2 2 Qe+y a oe dae â fi DJne Vina (« ( x y ) er c Vaya (x+y)? â8° ye rspate( £22) â valet 1 2 (29)
The sub-function is increasing in x, since the derivative is larger than zero: Ea 2 6 . a Ea 2 ra? (e* om (42? â y?) erfe (24) âe (x â y(a + y)erfe
Ea 2 6 . a Ea 2 ra? (e* om (42? â y?) erfe (24) âe (x â y(a + y)erfe (Ee -)) â V2x°/? (a? â 1) 2 /rx? ?
2 /rx? 2/nx? > (2aây)(2e-+y)2(V2Vz) (w=y)(w+y)2(v2 Vz) 3/2 (2 - -1 via Vi(QQetyt Qnty tae) Va(atyt/(et+y)?+=) ~ V20* (a ) 2\/rx? Vio? (2aây)(Qa+y)2 (22ây)(2e+y)2 (w@=y)(@+y)2 â V2x3/2 (oe? _ 1) (sete+/(Ga) (234)â (Gere) - me w+y act (ape) 48 ape J") (w=y)(e+y)2 _¢ (0? _ 1) Vi (20+y+/Qety)? +42) - Vi(etut J @tuP +) > V2 /rx3/2
(30)
15
â
a2 D (2aây)(2a+y)2 (wy) (@+y)2 ) x(a? _ 1) Vi (2e-+u+ (2a+y)?+2(2a+y) +1) Vi (atu Vatu)? +0.878-2(0+y) +0.878") D V2 /r03/2 Vi (Qetyt/Qrtytl)?) Vi (atyty/(o+y+0.878)?) a2 D V2Vre8/? (2aây)(Qat+y)2 (xy) (a@+y)2 2 Vr@@r+y) +1) - es) x(a? 1) a 3 a2 ( (2aây)(2a-+y)2 (w@=y)(@+y)2 ) _¢ (0? _ 1) V2/rx3/2 (2(a+y)+0. STA oe y)Qa+y)2 Cop eepGbery ey?) 7 ae 2(2e+y)+ aT (2(a + y) + 0.878) 2723/2 x (a? â 1) (2(2% + y) + 1)(2(x + y) + 0.878)) 2 (2a + y) + 1)(2(a + y) + 0.878) V2,./r03/? 8a3 4+ 120?y + 4.145692? + dary? â 6.76009xy â 1.580232 + 0.683154y? (2(2a + y) + 1)(2(a + y) + 0.878) /2./r23/? 8x3 â 0.1 - 12x? + 4.145692? + 4 - (0.0)?x â 6.76009 - 0.1a â 1.580232 + 0.683154 - (0.0)? (2(2a + y) + 1)(2(a + y) + 0.878) /2/ra3/2 8a? + 2.94569 â 2.25624 (2(2x y) + 1) (Q(a + y) + 0.878) V2/a Vz 8(« â 0.377966) (a + 0.746178) (2(2x y) +1)(2(@ + y) 0.878) V2Vava
We explain this chain of inequalities:
⢠First inequality: We applied Lemma 22 two times.
â
â 2
⢠Equalities factor out x and reformulate.
⢠Second inequality part 1: we applied
0 < 2y =â (2x + y)2 + 4x + 1 < (2x + y)2 + 2(2x + y) + 1 = (2x + y + 1)2 .
Second inequality part 2: we show that for a = 0 (V 960-1697 - 13) following holds: 82 _ (a? + 2a(x+y)) > 0. We have 2-8* â (a? + 2a(x+y)) = 8 â 2a > O and by Sz - (a? + 2a(x 4 y)) â2a < 0. Therefore the minimum is at border for minimal x and maximal y:
â 2 8:12 _ (2 ( /960+1697 |, a2+or+(2 (mt 1697 4 T 10 us 10 7 (32)
Thus
> a? +2a(x+y). (33) fora = qh (/ eee â 13) > 0.878. 8a T
⢠Equalities only solve square root and factor out the resulting terms (2(2x + y) + 1) and (2(x + y) + 0.878).
⢠We set α = α01 and multiplied out. Thereafter we also factored out x in the numerator. Finally a quadratic equations was solved.
16
=
The sub-function has its minimal value for minimal x = v7 = 1.5-0.8 = 1.2 and minimal y = pw = â1-0.1 = â0.1. We further minimize the function ww fu 12 0.1
ww fu 12 0.1 )) wwe 27 {2 âerfc > â0.le2T2 | 2 â erfc | â . 34 me (2-e (a) (-«(aym)) 0
Ëξ(µ, Ï, ν, Ï, λ, α) in Eq. (25):
We compute the minimum of the term in brackets of 2? pw wwe 27 | 2 âerfe | âââ } ] + ' ( (4)
We compute the minimum of the term in brackets of HEU, w,v,T, A, a) in Eq. (25):
# µ2 Ï2 2νÏ
2? pw wwe 27 | 2 âerfe | âââ } ] + 35 ' ( (4) o> wootur)? w+ VT petaur)? pwd + 2vT 2 aR (- (a) erfc (â*) - el a) erfc (â= +4/=VvtT > âi Vij ViVi * â 2 â 2-0. 2 . _ 02, (- (eC #12) erfe () â (AR) erfe A ))) - V2v1.2 V2.2 0.1? 0.1 2 0.le212 | 2 âerfc + V1.2 0.212234 . ( (av) Viz
Ëξ(µ, Ï, ν, Ï, λ, α) has the sign Therefore the term in brackets of Eq. (25) is larger than zero. Thus, â âµ of Ï. Since Ëξ is a function in ÂµÏ (these variables only appear as this product), we have for x = µÏ
â âν Ëξ = â âx Ëξ âx âµ = â âx Ëξ Ï (36)
and
â âÏ Ëξ = â âx Ëξ âx âÏ = â âx Ëξ µ . (37)
â âÏ Ëξ has the sign of Ï, â âµ
µ Ï â âµ Ëξ(µ, Ï, ν, Ï, λ01, α01) = Ëξ(µ, Ï, ν, Ï, λ01, α01) . (38)
Since â âµ Ëξ has the sign of µ. Therefore
â âÏ g(µ, Ï, ν, Ï, λ01, α01) = â âÏ Ëξ(µ, Ï, ν, Ï, λ01, α01) (39)
has the sign of ju. We now divide the ji-domain into â1 < ys < Oand0 < p < 1. Analogously we divide the w-domain into â0.1 <w < Oand0 <w < 0.1. In this domains g is strictly monotonically.
For all domains g is strictly monotonically decreasing in v and strictly monotonically increasing in T. Note that we now consider the range 3 < v < 16. For the maximal value of g we set v = 3 (we set it to 3!) and 7 = 1.25.
We consider now all combination of these domains:
e -l<yw<O0and-0.1<w<0:
g is decreasing in µ and decreasing in Ï. We set µ = â1 and Ï = â0.1.
g(â1, â0.1, 3, 1.25, λ01, α01) = â0.0180173 .
e -l<w<O0and0<w<01:
g is increasing in µ and decreasing in Ï. We set µ = 0 and Ï = 0.
g(0, 0, 3, 1.25, λ01, α01) = â0.148532 . (41)
e©0<w<land-0.l<w<0:
g is decreasing in µ and increasing in Ï. We set µ = 0 and Ï = 0.
g(0, 0, 3, 1.25, λ01, α01) = â0.148532 . (42)
17
(35)
(40)
e©0<w<land0<w<0.1:
g is increasing in µ and increasing in Ï. We set µ = 1 and Ï = 0.1.
g(1, 0.1, 3, 1.25, λ01, α01) = â0.0180173 . (43)
Therefore the maximal value of g is â0.0180173.
# A3.3 Proof of Theorem 3
First we recall TheoremB} Theorem (Increasing v). We consider X = Xo1, @ = agi and the two domains Qy {(u,w,v,T) | â01 < w < 0.1,-0.1 < w < 0.1,0.05 < v < 0.16,0.8 < r < 1.25} and OF = {(1,4,Â¥,7) | â0.1< p< 01,-0.1 <w <0.1,0.05 <v < 0.24,0.9 <7 < 1.25}.
The mapping of the variance Ëν(µ, Ï, ν, Ï, λ, α) given in Eq. (5) increases
D(U,W,V,T,Ao1,Q01) > Y (44) in both QF and Q5. All fixed points (41, v) of mapping Eq. (5) and Eq. (4) ensure for 0.8 < 7 that D > 0.16 and for 0.9 < 7 that D > 0.24. Consequently, the variance mapping Eq. 5) and Eq. (A ensures a lower bound on the variance v.
Proof. The mean value theorem states that there exists a t â [0, 1] for which Ëξ(µ, Ï, ν, Ï, λ01, α01) â Ëξ(µ, Ï, νmin, Ï, λ01, α01) = â âν
Ëξ(µ, Ï, ν + t(νmin â ν), Ï, λ01, α01) (ν â νmin) . (45)
Therefore
Ëξ(µ, Ï, ν, Ï, λ01, α01) = Ëξ(µ, Ï, νmin, Ï, λ01, α01) + â âν Ëξ(µ, Ï, ν + t(νmin â ν), Ï, λ01, α01) (ν â νmin) . (46)
Therefore we are interested to bound the derivative of the ξ-mapping Eq. (13) with respect to ν:
Ox Bye ews aT Aoi; 01) = (47) wwtvr \? ww. vr \? 12 (« (- (() erfc Gas + ) - oe( 7) erfc (â= + ~))) - 2 V2uT V2uT erfc ( We ) + 2) : V2VvT
The sub-term Eq. (308) enters the derivative Eq. with a negative sign! According to LemmalI8} the minimal value of sub-term Eq. (308) is obtained by the largest largest v, by the smallest 7, and the largest y = jw = 0.01. Also the positive term erfc (4) + 2 is multiplied by 7, which is minimized by using the smallest 7. Therefore we can use the smallest 7 in whole formula Eq. to lower bound it. First we consider the domain 0.05 < v < 0.16 and 0.8 < 7 < 1.25. The factor consisting of the 1.0.01 exponential in front of the brackets has its smallest value for e~ 0.05-0-8 , Since erfe is monotonically decreasing we inserted the smallest argument via erfc (- oats! in order to obtain the maximal negative contribution. Thus, applying LemmajI8} we obtain the lower bound on the derivative: 122 . wir)? mw +2u7\? 12 G (- (a) erfe (â*) 2 BEY ente (â*))) _ 2 V2 vt V2 /vT
122 . wir)? mw +2u7\? 12 G (- (a) erfe (â*) 2 BEY ente (â*))) _ 2 V2 vt V2 /vT (48)
18
f ( ad ) + 2) > eric J2./0T ; 2 . Jose Shi NB, (24 (- (canes) erfe (S . aoe) _ 2/016 -0.8 | 2:0.16-0.840.01)? 2-0.16-0.8+0.01 0.01 2el V2V0.16-.0.8 ) erfc eee) â erfe (-as) + 2) > 0.969231 . V2V0.16 - 0.8 V2V0.05 - 0.8 )
1 2
For applying the mean value theorem, we require the smallest (1). We follow the proof of Lemmals| which shows that at the minimum y = jw must be maximal and x = vt must be minimal. Thus, the smallest E(ju,w,v, 7, Aoi, 01) is â¬(0.01, 0.01, 0.05, 0.8, Ao1, 201) = 0.0662727 for 0.05 < v and 0.8 <7. Therefore the mean value theorem and the bound on (j1)? (Lemma[43} provide = E(,w,V,7, Nor, Q01) â (fi(u,w,Y,7, Aor, 01)â > (49) 0.0662727 + 0.969231(v â 0.05) â 0.005 = 0.01281115 + 0.969231v > 0.08006969 - 0.16 + 0.969231lv > 1.049301lv > Vv.
Next we consider the domain 0.05 < v < 0.24 and 0.9 < 7 < 1.25. The factor consisting of the exponential in front of the brackets has its smallest value for e~ 30.05-0-9 , Since erfe is monotonically . . se, (0.01 . . . decreasing we inserted the smallest argument via erfc ( Jave0e05 her | in order to obtain the maximal negative contribution.
Thus, applying Lemma 18, we obtain the lower bound on the derivative:
w+ur\? bw vr \2 tyre" tee (« (- (<9) ext (ââ*) ol HEY ente (ââ¢))) _ V2 vt V2 /vT (50)
( pu J2./0T
) 2)
# erfc
+ 2
>
# νÏ
2 10 ge PPbRR 2, (24 (- (clans TEE) rte (â 09+ a) _ 20.24 -0.9 2:0.24-0.9+0.01 )? 2-0.24-0.9+ 0.01 0.01 del V2V0.24.0.9 ) erfc Cao) âerfc (-aw) + 2) > 0.976952. V2V0.24-0.9 V2V0.05 - 0.9 )
1 2
For applying the mean value theorem, we require the smallest (1). We follow the proof of Lemmas] which shows that at the minimum y = jzw must be maximal and x = v7 must be minimal. Thus, the smallest â¬(,w,v,7, Ao1, 01) is â¬(0.01, 0.01, 0.05, 0.9, Ao1, @o1) = 0.0738404 for 0.05 < v and 0.9 < 7. Therefore the mean value theorem and the bound on (jz)? (Lemma|43} gives
v= E(p1,w, V,T, X01, 001) â (AH, w, YT, Ao1, ao1))? > (51) 0.0738404 + 0.976952(v â 0.05) â 0.005 = 0.0199928 + 0.976952 > 0.08330333 - 0.24 + 0.976952v > 1.060255v > v.
# A3.4 Lemmata and Other Tools Required for the Proofs
# A3.4.1 Lemmata for prooï¬ng Theorem 1 (part 1): Jacobian norm smaller than one
In this section, we show that the largest singular value of the Jacobian of the mapping g is smaller than one. Therefore, g is a contraction mapping. This is even true in a larger domain than the original â¦. We do not need to restrict Ï â [0.95, 1.1], but we can extend to Ï â [0.8, 1.25]. The range of the other variables is unchanged such that we consider the following domain throughout this section: µ â [â0.1, 0.1], Ï â [â0.1, 0.1], ν â [0.8, 1.5], and Ï â [0.8, 1.25].
19
(50)
Jacobian of the mapping. In the following, we denote two Jacobians: (1) the Jacobian 7 of the mapping h : (u,v) +> (jt, â¬), and (2) the Jacobian H of the mapping g : (4,1) +> (fi, 7) because the influence of ji on v is small, and many properties of the system can already be seen on 7.
a- a- âf{ Irn S2a\_ ope avlk T= ( Jar Ix ) â ( ee we ) ©)
# Irn ( Jar Ix Hu Haz ( Hai Hee
Hu Haz Ar Fr = = 7 . 53 H ( Hai Hee ) ( Ia â 2jtTi1 P22 â 22 ) 63)
The deï¬nition of the entries of the Jacobian J is:
A (54) op 1 vr pw + VT pw =X Het erfe fe + 2 aad (c« erfe ( Jiu ) erfe (4 â) ) Tra(,w,v,7, 4,0) = 2 filu.w.1.7, 0) = (55) OV 1 ve, 2 22 =r | aetâ* > erfe (â + 7) â(aâ1) eo a 4 V2 /vT TUT (2) Tar(U,0,U,7,A,0) = Deter Â¥T As) = (56) wpUt +UT Mw G âet#t 2) erfe (4) + 22m 207 ong, ( Hee + =") ( ââ ( plus )) [2 = 12a? ave erfe | âââ_ } + pw | 2 â erfe + VVTe ( V2 /vT M J2/0T Tw OQ: To2(p1,W,V,7, A, a) = By Slt WoT As) = (57) 1 ¢ per ~ [ pw +r =r (« âetâ+'D) erfe (â*) + uw +5 + 2vr pu Qa? e2HY+2"7 erfc (â*) â erfe ( ) + 2) J2/vT J2vT
Proof sketch: Bounding the largest singular value of the Jacobian. If the largest singular value of the Jacobian is smaller than 1, then the spectral norm of the Jacobian is smaller than 1. Then the mapping Eq. (4) and Eq. (5) of the mean and variance to the mean and variance in the next layer is contracting.
We show that the largest singular value is smaller than 1 by evaluating the function S(µ, Ï, ν, Ï, λ, α) on a grid. Then we use the Mean Value Theorem to bound the deviation of the function S between grid points. Toward this end we have to bound the gradient of S with respect to (µ, Ï, ν, Ï ). If all function values plus gradient times the deltas (differences between grid points and evaluated points) is still smaller than 1, then we have proofed that the function is below 1.
# The singular values of the 2 Ã 2 matrix
_ fan ay A= ( a2, a22 ) (68)
are
(Ven + G92)? + (aa1 â diz)? + V/(ar1 â 22)? + (a2 4 an)â) ; 5 (Ven + G99)? + (oq â 42)? â V(a11 = G22)? + (aio + anâ)
1 2 1 2
# SL
=
s2 = . (60)
20
(59)
We used an explicit formula for the singular values [4]. We now set H11 = a11, H12 = a12, H21 = a21, H22 = a22 to obtain a formula for the largest singular value of the Jacobian depending on (µ, Ï, ν, Ï, λ, α). The formula for the largest singular value for the Jacobian is:
(Va + Hoo)? + (Har â Hi2)? + V(Har â H22)? + (Hie + Haâ)
S(µ, Ï, ν, Ï, λ, α) = (61)
1 =35 (Va + Foz â 2jtFi2)? + (Par â 20a â Fiz)? + V(Tu = Jaz + 2tTi2)? + (Sia + Jar â 2iTuay?) ;
where J are deï¬ned in Eq. (54) and we left out the dependencies on (µ, Ï, ν, Ï, λ, α) in order to keep the notation uncluttered, e.g. we wrote J11 instead of J11(µ, Ï, ν, Ï, λ, α).
Bounds on the derivatives of the Jacobian entries. In order to bound the gradient of the singular value, we have to bound the derivatives of the Jacobian entries J11(µ, Ï, ν, Ï, λ, α), J12(µ, Ï, ν, Ï, λ, α), J21(µ, Ï, ν, Ï, λ, α), and J22(µ, Ï, ν, Ï, λ, α) with respect to µ, Ï, ν, and Ï . The values λ and α are ï¬xed to λ01 and α01. The 16 derivatives of the 4 Jacobian entries with respect to the 4 variables are:
; 2(a-1) 2,2 wwtvr)2 =(a OAu _ by? _# usury? (â**) - V2 (62) ee wr ae 27 Ou JQ /vT SVT
_ by? _# (â**) - ee wr ae 27 Ou JQ /vT SVT 2 a 1 a2 [ /Fla- Dew puter)? OSs = =) (-e ee | VE a + yeâ rte (⢠+ur Ow 2 UT V2/ur 2. /vT (am) ) erfc Vv2Vur OI _ 1 awa? (uwpury?® oe (po tut) v2 (a-l)pw a oy qrTwe (~ ete (Me) bale (rps? Vit OF _ \ywen ee (ae te ente (HEAT) | 2((a-l)w a Or 4 : â Vv2yor) | Va \ (vr)3/? JT Of2 _ OA Ou Ov OPi2 _ x â weet (usbvr)? fe ( He tur) | v2 (a â 1)pw a Fo 7 glee ae erfc Viger) Va nse Vit 2 we? pwr Oz â 1),-32 (arte" ve 2 erfe Ga + ) Ov 8 J2vT 2 ((-I(a-Dprw | Vr(at+apw-1)â ar3/? T v)2\/F ' p3/2 Vv Oz Lo Pw? ( (uwtyr)2 Ga + =) (uw+ur)? Ga + ) = ~=\e @ | 2ae 2 ~ erfe +avte 27 ~ erfe + Or 8 V2VuT V2VvT 2 ((-1)(a-1)pew â -atapwt 1 2 | â alt T (vr)3/2 UT OFar _ we (« (-« =) ets nfo Ga + ) in Ou V2/uT
21
=
gg (uwt2ur)? 2a? ud + QUT Qare ânr ea erfe ( + âerfe V2Vvr Oa =. Gi +1) (- Je we ene (Me Ow a(Quw +1)e 2 e~ = erfe (uwt2v7)2 pw? Ga + 2Qur V2 vt s(n) «Bem OJa yg Pu? ( 2 ( âa (4) = â}\ 0 Dur âe Qur rfc + Ov 3 TWE a e⬠er) Vivir doze 6 fe (SS) ,; Vente -1) OFa1 = 1 ewe (« (<8) ext â() * or 2 V2 /0T 2 dodo enfe (4 + =) ; V2(-1) )(e?-1 ae yi OP22, _ OTn Ou Ov OSa2 1 aire (« (-") erfe (â + ) 4 Ow 2 V2/uT dade enfe 2 (mee) | Vey OFa2 = 12, 26- co (« (-*) erfc (â*) + Ov 4 V2 sur 802 (wusbave)? erfe pw + 2vT v2 (a? â 1) pw are ur t - V2VvT (vr)3/2 OFo2 _ 1y2 (-20%- wu? wwreer)? a (â + â) _ Or 4 V2\/vT 2,2 2 2 9 new? (uwtur) Ga + VT g (wwtaer)? a2yTe wre wr erfe +4ateâ 27 eâ V2 /vT 3 (mwt2u7)? 2? pw + =) ( ( 8a°vTe 27 eB erfe + 2| 2-erfe ( V2 /vT (e% (3 â 3a? 7)
+ 2
+
# 3α2 â νÏ
# eâ µ2 Ï2
# 2Î½Ï erfc
# ( pw V2
# νÏ
# (pw + QUT | âââ V2/vT
â
))
+
Lemma 5 (Bounds on the Derivatives). The following bounds on the absolute values of the deriva- tives of the Jacobian entries J11(µ, Ï, ν, Ï, λ, α), J12(µ, Ï, ν, Ï, λ, α), J21(µ, Ï, ν, Ï, λ, α), and J22(µ, Ï, ν, Ï, λ, α) with respect to µ, Ï, ν, and Ï hold:
OF Ou OF Ow <_0.0031049101995398316 (63) <_ 1.055872374194189
22
+
(63)
OF <_0.031242911235461816 Ov a oF < 0.03749149348255419
a oie < 0.031242911235461816 (2a Siz < 0.031242911235461816 Ow a oie < 0.21232788238624354 os < 0.2124377655377270
a oF < 0.02220441024325437 (2a a Ja) â 1.146955401845684 Ow a oF < 0.14983446469110305 992] â 9.17980135762932363 Or
a 222) â 9 44983446469110305 Ou a S22) â 9 44983446469110305 Ow a 222) â 1 395740052651535 Ov OSes < 2.396685907216327
Proof. See proof 39.
Bounds on the entries of the Jacobian. Lemma 6 (Bound on J11). The absolute value of the function Ju = gdw (acme erfc (43) âerfe (4) + 2) is bounded by |Ji1| < 0.104497 in the domain -0.1 <u < 0.1, -0.1 ¢<w <0.L08 cv < 1.5, and0.8 <7 < 1.25 fora = a1 and X = Xo.
Proof.
1 ort og (Hw+tvr jus Ju| = |=Aw | aet"â+ ® erfe ( ) + 2 â erfe ( )) \Aa| F ( V2 /vT V2 /vT 1 < [5 ||Allvl (Jal0.587622 + 1.00584) < 0.104497,
23
where we used that (a) Ji; is strictly monotonically increasing in jw and |2 â erfc ( 9.01 ) |< V2V0T 1.00584 and (b) Lemmal47}hat jet +> erfe (4) | < O14 erfe (.giess) = 0.587622
Lemma 7 (Bound on J12). The absolute value of the function 2 ww? Jo = $Ar (acme erfc (44) â(aâ1) ae) is bounded by |J12| < 0.194145 in the domain -0.1< w<0.1,-0.1<w <0.1L0.8<y < 1.5, and0.8 <7 < 1.25 fora = api and X = Xo.
Proof.
|Ji2| < dale actâ+ > erfe Me Fer (a â 1) 2 se < BIS | : V2 /vT TUT â 1 qAlll |0.983247 â 0.392294| < 0.194035
V2/ur 2? . . . woe the second term 0.582677 < 4/ oe âx= < 0.997356, which can easily be seen by maximizing or minimizing the arguments of the exponential or the square root function. The first term scaled by a is 0.727780 < aeât © erfe (44) < 0.983247 and the second term scaled by a â 1 is (24,2 0.392294 < (a â 1),/ ae ne < 0.671484. Therefore, the absolute difference between these terms is at most 0.983247 â 0.392294 leading to the derived bound. For the first term we have 0.434947 < e#â+F erfe (424) < 0.587622 after Lemmal#7}and for
Bounds on mean, variance and second moment. For deriving bounds on ˵, Ëξ, and Ëν, we need the following lemma. Lemma 8 (Derivatives of the Mapping). We assume α = α01 and λ = λ01. We restrict the range of the variables to the domain µ â [â0.1, 0.1], Ï â [â0.1, 0.1], ν â [0.8, 1.5], and Ï â [0.8, 1.25].
# The derivative â
The derivative â ⵠ˵(µ, Ï, ν, Ï, λ, α) has the sign of Ï. The derivative â âν ˵(µ, Ï, ν, Ï, λ, α) is positive. The derivative â âµ Ëξ(µ, Ï, ν, Ï, λ, α) has the sign of Ï. The derivative â âν Ëξ(µ, Ï, ν, Ï, λ, α) is positive.
Proof. See 40. Lemma 9 (Bounds on mean, variance and second moment). The expressions ˵, Ëξ, and Ëν for α = α01 and λ = λ01 are bounded by â0.041160 < ˵ < 0.087653, 0.703257 < Ëξ < 1.643705 and 0.695574 < Ëν < 1.636023 in the domain µ â [â0.1, 0.1], ν â [0.8, 15], Ï â [â0.1, 0.1], Ï â [0.8, 1.25].
Proof. We use Lemmal§|which states that with given sign the derivatives of the mapping Eq. (4) and Eq. (5) with respect to v and y are either positive or have the sign of w. Therefore with given sign of w the mappings are strict monotonic and the their maxima and minima are found at the borders. The minimum of {i is obtained at zw = â0.01 and its maximum at jw = 0.01 and o and 7 at minimal or maximal values, respectively. It follows that â0.041160 < fi(â0.1, 0.1, 0.8, 0.8, Ao1, 201) <f < fa(0.1, 0.1, 1.5, 1.25, Aor, ao1) < 0.087653. (66)
24
(65)
Similarly, the maximum and minimum of Ëξ is obtained at the values mentioned above:
0.703257 < â¬(â0.1, 0.1, 0.8, 0.8, Aor, 01) <E < E(0.1, 0.1, 1.5, 1.25, Aor, 01) < 1.643705. (67)
Hence we obtain the following bounds on Ëν:
0.703257 â ˵2 < Ëξ â ˵2 < 1.643705 â ˵2 0.703257 â 0.007683 < Ëν < 1.643705 â 0.007682 0.695574 < Ëν < 1.636023. (68)
Upper Bounds on the Largest Singular Value of the Jacobian. Lemma 10 (Upper Bounds on Absolute Derivatives of Largest Singular Value). We set α = α01 and λ = λ01 and restrict the range of the variables to µ â [µmin, µmax] = [â0.1, 0.1], Ï â [Ïmin, Ïmax] = [â0.1, 0.1], ν â [νmin, νmax] = [0.8, 1.5], and Ï â [Ïmin, Ïmax] = [0.8, 1.25].
The absolute values of derivatives of the largest singular value S(µ, Ï, ν, Ï, λ, α) given in Eq. (61) with respect to (µ, Ï, ν, Ï ) are bounded as follows:
# âS âµ âS âÏ âS âν âS âÏ
< 0.32112 , (69)
< 2.63690 , (70)
< 2.28242 , (71)
< 2.98610 . (72)
Proof. The Jacobian of our mapping Eq. (4) and Eq. (5) is deï¬ned as
â~ (Hu He )\_ Su Tia H= ( Hoi Hee ) ~ ( Ja â 26tTi1 P22 â 22 ) (73)
and has the largest singular value
S(u,w,u,7,A,a) = 5 (Ven Hoo)? + (Haz + Hai)? + V(Hi + Hea)? 4 (ia â Hai)â) (74)
according to the formula of Blinn [4].
# We obtain | Os OH
| Os 1 Hi â Hoe Hi + H22 ~|\\< OH VJ (Hur â Ho2)? + (Hie + Ha)? (Har + Haz)? + (Hai â Hie)? (75)
1 141 t < =1 (HiztHa1)? | (HarâHiz)? 4 4 2 (HuâHa2)? | (Har FHa22)?
and analogously
| Os 1 Hiz + Hai _ Ha â Haz <1 OHA2 2 \ Jit â Hoe)? + (Hie + Har)? (Har + Hoa)? + (Har â Hi2)? (76)
|
25
,
and
| Os 1 Hoi â Hi2 Haz + Har = |5 <4 <1 OH21 2 \ /(Hir + Haz)? + (Hor â Haz)? \/(Haa â Ho2)? + (Hi2 + Hai)? (77)
and
| os 1 Hii + Ho Hi â Ho = _â <1. OH22 2 \ /(Hir + Haz)? + (Hor â Haz)? \/(Ha â Ho2)? + (Hi2 + Hai)? (78)
We have
# âS âµ âS âÏ âS âν âS âÏ
âS âH11 âS âH11 âS âH11 âS âH11
âH11 âµ âH11 âÏ âH11 âν âH11 âÏ
âS âH12 âS âH12 âS âH12 âS âH12
âH12 âµ âH12 âÏ âH12 âν âH12 âÏ
âS âH21 âS âH21 âS âH21 âS âH21
âH21 âµ âH21 âÏ âH21 âν âH21 âÏ
âS âH22 âS âH22 âS âH22 âS âH22
âH22 âµ âH22 âÏ âH22 âν âH22 âÏ
= + + + (79)
= + + + (80)
= + + + (81)
= + + + (82)
(83)
from which follows using the bounds from Lemma 5:
Derivative of the singular value w.r.t. ju: os (84) Ou OS ||OHu| | OS ||OHi2 AS ||OHal | OS ||AH2x» OH Ou "| OHa2 Ou "| OHo1 Ou "| OH» Ou OHu| , |AHi2| , |OHar OH22 ou | | Ou | | On Ou OF OSi2| | |OFo1 â 2nur| , |OSo2 â 2Ti2) â Ou On | Ou Ou > OF OPi2 OPar OF22 OA | | ~ 2 OAi2| | ~ t t +t t t t < Ou | Ou | Ou Ou 2 Ou Wil +2 |u| +2 Ou (| + 2| ial |Fan| 0.0031049101995398316 + 0.031242911235461816 + 0.02220441024325437 + 0.14983446469110305+ 2- 0.104497 - 0.087653 + 2 - 0.104497?+ 2 - 0.194035 - 0.087653 + 2 - 0.104497 - 0.194035 < 0.32112,
where we used the results from the lemmata 5, 6, 7, and 9.
Derivative of the singular value w.r.t. w: os ao| < OS ||AHu| | OS ||OH.2 AS ||OHal | OS ||AH2x» OH || Ow + aos dw | |OHa|| dw + ri Ow Hu] | |OMa2| _, |OHa| _ |PH22| dw | | dw | | dw dw | ~ Ofua| ee ; ee ; [ee < dw | | dw | Ow Ow ~
(85)
26
OJu1| ,|OA2| , | Oa OJ22| , 9 OF lal + 2|Tul Oft| , dw | | dw | | dw Ow | | Ow Blt MBG] | OAi2| | ~ Oj 2 Ow |jt| + 2| Ars] ao < (86)
2.38392 + 2 · 1.055872374194189 · 0.087653 + 2 · 0.1044972 + 2 · 0.031242911235461816 · 0.087653 + 2 · 0.194035 · 0.104497 < 2.63690 ,
where we used the results from the lemmata 5, 6, 7, and 9 and that ˵ is symmetric for µ, Ï.
Derivative of the singular value w.r.t. v: os ay < (87) aS ||OHu| , | aS ||| | OS ||OHa| | | OS | | Ha» OH || Ov | |OHi2|| Av |â |OHa|| Ov | * |OHs2|] dv OHA OHi2 OH21 OH22 < ov | | av ov | | av | ~ OFu ae ; | â Fr ; | â 2nFi2 < Ov Ov Ov Ov ~ OSs ee | Oat | OSes . 2|°oe |i] + 2|Fir| | Fiz| +2 Oia || +2|Tis|? < 2.19916 + 2 - 0.031242911235461816 - 0.087653 + 2 - 0.104497 - 0.194035+ 2 - 0.21232788238624354 - 0.087653 + 2- 0.194035? < 2.28242 ;
where we used the results from the lemmata 5, 6, 7, and 9.
Derivative of the singular value w.r.t. Ï :
os ar| < (88) OS ||OHu OS ||OHi2 OS ||OHa1| | OS ||OH22 OHu|| Or | |AHie2|| Ar |" |AHal| ar | | | ss ar OHi1| | |OHi2| , |OH21| | | OH22 < Or || ar | | dr | | ar | * OF Ofi2| , | = 2p | â 262 < Or Or |- Or Or ~ OS OD2 OJa1 OJ22 OFu|~ Oj Or Or | Or | Or Or Vel + 21 ual Or 2 OSs l#| + 2| Fra oh < (89)
2.82643 + 2 · 0.03749149348255419 · 0.087653 + 2 · 0.104497 · 0.194035+ 2 · 0.2124377655377270 · 0.087653 + 2 · 0.1940352 < 2.98610 , where we used the results from the lemmata 5, 6, 7, and 9 and that ˵ is symmetric for ν, Ï .
Lemma 11 (Mean Value Theorem Bound on Deviation from Largest Singular Value). We set α = α01 and λ = λ01 and restrict the range of the variables to µ â [µmin, µmax] = [â0.1, 0.1], Ï â [Ïmin, Ïmax] = [â0.1, 0.1], ν â [νmin, νmax] = [0.8, 1.5], and Ï â [Ïmin, Ïmax] = [0.8, 1.25]. The distance of the singular value at S(µ, Ï, ν, Ï, λ01, α01) and that at S(µ + âµ, Ï + âÏ, ν + âν, Ï + âÏ, λ01, α01) is bounded as follows:
|S(µ + âµ, Ï + âÏ, ν + âν, Ï + âÏ, λ01, α01) â S(µ, Ï, ν, Ï, λ01, α01)| <
27
0.32112 |âµ| + 2.63690 |âÏ| + 2.28242 |âν| + 2.98610 |âÏ | .
Proof. The mean value theorem states that a t â [0, 1] exists for which
S(µ + âµ, Ï + âÏ, ν + âν, Ï + âÏ, λ01, α01) â S(µ, Ï, ν, Ï, λ01, α01) = âS âµ âS âÏ âS âν âS âÏ
from which immediately follows that
S(ut+ Ap,w + Aw,v + Av,r + Ar, X01, 001) â S(u,w,v,7, A01, 001)| < (92) 8 (w+ tAp,w + tAw,v + tdv,r + tAr,ro1,001)| [Apel + i os 9) (w+ tAp,w + tAw,v + tdv,r + tAr, ro1,001)| |Aw| + Ow os 3 (w+ tAp,w + tAw,v + tAv,r + tAr, ro1,001)| [Av] + Vv os 5 (w+ tAp,w + tAw,v + tAdv,r + tAr,ro1,001)} |Az| . 7
We now apply Lemma 10 which gives bounds on the derivatives, which immediately gives the statement of the lemma.
Lemma 12 (Largest Singular Value Smaller Than One). We set α = α01 and λ = λ01 and restrict the range of the variables to µ â [â0.1, 0.1], Ï â [â0.1, 0.1], ν â [0.8, 1.5], and Ï â [0.8, 1.25].
The the largest singular value of the Jacobian is smaller than 1:
S(µ, Ï, ν, Ï, λ01, α01) < 1 . (93)
Therefore the mapping Eq. (4) and Eq. (5) is a contraction mapping.
Proof. We set âµ = 0.0068097371, âÏ = 0.0008292885, âν = 0.0009580840, and âÏ = 0.0007323095.
According to Lemma 11 we have
|S(µ + âµ, Ï + âÏ, ν + âν, Ï + âÏ, λ01, α01) â S(µ, Ï, ν, Ï, λ01, α01)| < 0.32112 · 0.0068097371 + 2.63690 · 0.0008292885+ 2.28242 · 0.0009580840 + 2.98610 · 0.0007323095 < 0.008747 .
For a grid with grid length âµ = 0.0068097371, âÏ = 0.0008292885, âν = 0.0009580840, and âÏ = 0.0007323095, we evaluated the function Eq. (61) for the largest singular value in the domain µ â [â0.1, 0.1], Ï â [â0.1, 0.1], ν â [0.8, 1.5], and Ï â [0.8, 1.25]. We did this using a computer. According to Subsection A3.4.5 the precision if regarding error propagation and precision of the implemented functions is larger than 10â13. We performed the evaluation on different operating systems and different hardware architectures including CPUs and GPUs. In all cases the function Eq. (61) for the largest singular value of the Jacobian is bounded by 0.9912524171058772.
We obtain from Eq. (94):
S(wt Ap,w + Aw,yv + Av,7 + At, Aoi, 01) < 0.9912524171058772 + 0.008747 < 1. (95)
28
(94)
# A3.4.2 Lemmata for prooï¬ng Theorem 1 (part 2): Mapping within domain
We further have to investigate whether the the mapping Eq. (4) and Eq. (5) maps into a predeï¬ned domains. Lemma 13 (Mapping into the domain). The mapping Eq. (4) and Eq. (5) map for α = α01 and λ = λ01 into the domain µ â [â0.03106, 0.06773] and ν â [0.80009, 1.48617] with Ï â [â0.1, 0.1] and Ï â [0.95, 1.1].
Proof. We use Lemma 8 which states that with given sign the derivatives of the mapping Eq. (4) and Eq. (5) with respect to α = α01 and λ = λ01 are either positive or have the sign of Ï. Therefore with given sign of Ï the mappings are strict monotonic and the their maxima and minima are found at the borders. The minimum of ˵ is obtained at ÂµÏ = â0.01 and its maximum at ÂµÏ = 0.01 and Ï and Ï at their minimal and maximal values, respectively. It follows that:
â0.03106 < ji(â0.1, 0.1, 0.8, 0.95, Ao1, a01) <f < fu(0.1, 0.1, 1.5, 1.1, Ao1, ao1) < 0.06773, (96)
and that ˵ â [â0.1, 0.1]. Similarly, the maximum and minimum of Ëξ( is obtained at the values mentioned above:
0.80467 < â¬(â0.1, 0.1, 0.8, 0.95, Ao1, 01) <E < E(0.1, 0.1, 1.5, 1.1, Ag, a1) < 1.48617. (97)
Since | Ëξ â Ëν| = |˵2| < 0.004597, we can conclude that 0.80009 < Ëν < 1.48617 and the variance remains in [0.8, 1.5].
Corollary 14. The image g(9â) of the mapping g : (u,v) + (jt,%) (Eq. B)) and the domain Y = {(p,v)|-0.1 <p < 0.1,0.8 <p < 1.5} is a subset of O':
gM) oe, (98)
for all Ï â [â0.1, 0.1] and Ï â [0.95, 1.1].
Proof. Directly follows from Lemma 13.
# A3.4.3 Lemmata for prooï¬ng Theorem 2: The variance is contracting
Main Sub-Function. We consider the main sub-function of the derivate of second moment, J22 (Eq. (54)):
26 = ty, (-creer erfc (â + 7) + 207620 42"7 erfe (4 + â) â erfe ( aad ) + 2) ave 2 Vijir Viyir Vi jir (99)
that depends on ÂµÏ and Î½Ï , therefore we set x = Î½Ï and y = µÏ. Algebraic reformulations provide the formula in the following form:
Oz 1,9 2(_-# jen? (yt@\ 4, Grew? (y+ Qa . y ; ays =r (a ( e \(¢ ente (T=) 2e este (=) arte (=) +2)
For A = Ao and a = ao , we consider the domain -1 <u < 1,-0.1 <w <01,15<v< 16, and, 0.8 <7 < 1.25. For x and y we obtain: 0.8-1.5 = 1.2 <2 < 20=1.25-16and0.1-(â1) =-0.1l<y<01= 0.1 - 1. In the following we assume to remain within this domain.
29
f(1.2,y)
# y
# Q
(x+y)2 2x
Q Figure A3: Left panel: Graphs of the main subfunction f(x,y) = ee erfe (#4) - (22+y)â ot . oo. . . . . 2eâ â erfe ( 2ery ) treated in Lenina The function is negative and monotonically increasing Vv2Va with x independent of y. Right panel: Graphs of the main subfunction at minimal x = 1.2. The graph shows that the function f (1.2, y) is strictly monotonically decreasing in y.
Lemma 15 (Main subfunction). For 1.2 <x < 20 andâ-0.1 < y < 0.1,
the function
x+y)? ety)? 2. eo enfe (F) â 26 erfe (4) (101)
is smaller than zero, is strictly monotonically increasing in x, and strictly monotonically decreasing in y for the minimal x = 12/10 = 1.2.
Proof. See proof 44.
The graph of the subfunction in the specified domain is displayed in Figure[A3| Theorem 16 (Contraction v-mapping). The mapping of the variance (p,w,v,T,,@) given in Eq. is contracting for X = Aoi, & = agi and the domain Qt: -0.1< w<0.1 -01<w<0.1 15<v< 16, and0.8 < Tr < 1.25, that is,
<1. (102) | oun, w,V,T, Ao1, a1)
Proof. In this domain â¦+ we have the following three properties (see further below): â âν ˵ > 0, and â
Oo -| ae < <1 (103) ln avâ Oz 5.0. ln - hay
Ëξ < 1 in an even larger domain that fully contains â¦+. According to
⢠We ï¬rst proof that â âν Eq. (54), the derivative of the mapping Eq. (5) with respect to the variance ν is
Ox Bp S(t Ws 4s T Aor 101) = (104) 1). 2 (pot YE [we bur sr («a ( e ) erfe âJa JaF = + + 2vt pu Qa? ePHot2U7 orfc (â*) â erfc ( ) + 2) . Jur V2 vt
30
For \ = Ani, a= a01, -l<uw<l,-01<w<0115<¢y < 16,and0.8 <7 < 1.25, we first show that the derivative is positive and then upper bound it.
According to Lemmal|I5] the expression (uwtur)? pis + UT on ane (Me)
(uwtur)? pis + UT 9 uuet2u7)2 exfe (â + =) (105) on ane (Me) - â Vie
is negative. This expression multiplied by positive factors is subtracted in the derivative Eq. (104), therefore, the whole term is positive. The remaining term
2 â erfc Î½Ï (106)
of the derivative Eq. (104) is also positive according to Lemma 21. All factors outside the brackets in Eq. (104) are positive. Hence, the derivative Eq. (104) is positive.
The upper bound of the derivative is:
1.5 2 cop UE ~ [pw + UT prot (<4 (-e +3 ) erfe Gas + (107) Qa 22? *2"7 erfe (â) â erfe (+) + 2) = shar (<8 (-- =) (se erfc (â*) - oe erfc (â*)) â erfc () + 2) < pian (a8 (8) (ee ee (MEO) - ys 9 (uw ave)? f (â 2) f ( pw ) 42 evr erfe | ââââ | } - erfc Jur Jur 1 : ratory? 1240.1 =1.2531 (<4 («! ina) erfe (5) â 2 v2Vv12 > 2- =+*)) ( | ( pu ) ) Qe\ v2vI2/ erfe ( âââ_â âe w= }| âerfe +2) < ( V2V1.2 V2 fur * 1 1.240.1)? 1.2+0.1 *1.95)2 (e008 (« Lets) erfc C3) _ 01 o1 V2V1.2 *12)) oe ha) « 1 120.1)? 1.2+0.1 =1.2531 (-e%08, (« ana) erfc (a) - 2 V2V1.2 9 aazsoa)? = (2-1.2+0.1 f 0.1 2) < eX v2vt2/ erfe | âââââ â eric | âââ= } + ( v2v12 )) (ava) ) - 0.995063 < 1.
We explain the chain of inequalities:
â First equality brings the expression into a shape where we can apply Lemma 15 for the the function Eq. (101).
â First inequality: The overall factor Ï is bounded by 1.25. â Second inequality: We apply Lemma 15. According to Lemma 15 the function Eq. (101) is negative. The largest contribution is to subtract the most negative value of the function Eq. (101), that is, the minimum of function Eq. (101). According to Lemma 15 the function Eq. (101) is strictly monotonically increasing in x and strictly monotonically decreasing in y for x = 1.2. Therefore the function Eq. (101) has its minimum at minimal x = Î½Ï = 1.5 · 0.8 = 1.2 and maximal y = ÂµÏ = 1.0 · 0.1 = 0.1. We insert these values into the expression.
31
(107)
â Third inequality: We use for the whole expression the maximal factor eâ µ2Ï2
2Î½Ï < 1 by setting this factor to 1.
â Fourth inequality: erfc is strictly monotonically decreasing. Therefore we maximize its argument to obtain the least value which is subtracted. We use the minimal x = Î½Ï = 1.5 · 0.8 = 1.2 and the maximal y = ÂµÏ = 1.0 · 0.1 = 0.1.
# â Sixth inequality: evaluation of the terms.
⢠We now show that ˵ > 0. The expression ˵(µ, Ï, ν, Ï ) (Eq. (4)) is strictly monoton- ically increasing im ÂµÏ and Î½Ï . Therefore, the minimal value in â¦+ is obtained at ˵(0.01, 0.01, 1.5, 0.8) = 0.008293 > 0.
⢠Last we show that â âν ˵ > 0. The expression â can we reformulated as follows: âν ˵(µ, Ï, ν, Ï ) = J12(µ, Ï, ν, Ï ) (Eq. (54))
(µÏ+Î½Ï )2 2Î½Ï Î»Ï eâ µ2Ï2 2(αâ1) â Î½Ï â erfc â 4 Ïαe 2Î½Ï J12(µ, Ï, ν, Ï, λ, α) = (108)
# ncaa (
â
â
is larger than is larger than zero when the term zero. This term obtains its minimal value at ÂµÏ = 0.01 and Î½Ï = 16 · 1.25, which can easily be shown using the Abramowitz bounds (Lemma 22) and evaluates to 0.16, therefore J12 > 0 in â¦+.
# A3.4.4 Lemmata for prooï¬ng Theorem 3: The variance is expanding
Main Sub-Function From Below. We consider functions in pw and v7, therefore we set x = pw and y = vr. For A = Xo1 and a = ao1, we consider the domain â0.1 < pw < 0.1, â0.1 < w < 0.1 0.00875 < vy < 0.7, and 0.8 < 7 < 1.25. For x and y we obtain: 0.8 - 0.00875 = 0.007 < x < 0.875 = 1.25-0.7 and 0.1-(â0.1) = â0.01 < y < 0.01 = 0.1 - 0.1. In the following we assume eto be within this domain.
In this domain, we consider the main sub-function of the derivate of second moment in the next layer, J22 (Eq. (54): O- 1 2
O- 1 urn , ; 2 . S¢ = =r (-crers erfc (<*) + 2076242" erfe () â erfc ( V2 /vT J2/vT Vir (109)
that depends on ÂµÏ and Î½Ï , therefore we set x = Î½Ï and y = µÏ. Algebraic reformulations provide the formula in the following form:
0: = 110 Ov (110) (8) ($8) a oe (8) a (gh) 2 V2/x Jr Lemma 17 (Main subfunction Below). For 0.007 < x < 0.875 and â0.01 < y < 0.01, the function
rw? â, (at+y (22+)? i) e 2 erfe | â-â ] â2e° =~ erfe | ââ 111 (5:2) - (St my
smaller than zero, is strictly monotonically increasing in x and strictly monotonically increasing in y for the minimal x = 0.007 = 0.00875 · 0.8, x = 0.56 = 0.7 · 0.8, x = 0.128 = 0.16 · 0.8, and x = 0.216 = 0.24 · 0.9 (lower bound of 0.9 on Ï ).
32
Proof. See proof|45] Lemma 18 (Monotone Derivative). For 4 = 01, @ = a1 and the domain â0.1 < w < 0.1, â0.1 <w < 0.1, 0.00875 < v < 0.7, and 0.8 < T < 1.25. We are interested of the derivative of
(484 2 [pwtur wot dur)? juw + 2vT 5 vat) re(! ) -2el Be) we(Sr)) . 112 r(e erfc Visor e erfc ViJur (112)
The derivative of the equation above with respect to
⢠ν is larger than zero;
e 7 is smaller than zero for maximal v = 0.7, v = 0.16, and v = 0.24 (with 0.9 < T);
⢠y = ÂµÏ is larger than zero for Î½Ï = 0.008750.8 = 0.007, Î½Ï = 0.70.8 = 0.56, Î½Ï = 0.160.8 = 0.128, and Î½Ï = 0.24 · 0.9 = 0.216.
Proof. See proof 46.
# A3.4.5 Computer-assisted proof details for main Lemma 12 in Section A3.4.1.
Error Analysis. We investigate the error propagation for the singular value (Eq. (61) if the function arguments jy, w, 1,7 suffer from numerical imprecisions up to e. To this end, we first derive error propagation rules based on the mean value theorem and then we apply these rules to the formula for the singular value. Lemma 19 (Mean value theorem). For a real-valued function f which is differentiable in the closed interval a, b], there exists t ⬠(0, 1] with
f (a) â f (b) = âf (a + t(b â a)) · (a â b) . (113)
It follows that for computation with error âx, there exists a t â [0, 1] with
[fl@+ Aa) â f(x)| < ||Vf(@+tAx)|| Aa] . (114)
Therefore the increase of the norm of the error after applying function f is bounded by the norm of the gradient ||V f(a + tAa)|l.
We now compute for the functions, that we consider their gradient and its 2-norm:
addition:
addition: f(a) =x, + x and Vf (a) = (1,1), which gives ||V f(a)|| = V2. We further know that |f(@+ Aw) â f(x)| = |ar +a + Av, + Avg â 2 â 2] < |Axi| + |Axo| . (115)
Adding n terms gives:
n n n So ai + An; - Soa < So Aa: < n|Aril nas « (116) i=1 i=l i=l
subtraction:
â
f(x) = a1 â 2 and V f(x) = (1,1), which gives ||V f(x)|| = V2. We further know that |f(w + Aa) â f(x) = |x) â 22 + Ary â Arg â 2 4+ 22| < [Axi] + |Aro| - (117)
Subtracting n terms gives:
n n So =(#i + Axi) + Son < So Asi < n|Azilnax + (118) i=1 i=l i=1
33
multiplication:
multiplication: f(x) = 1x2 and V f(a) = (x2, 21), which gives ||V f(a)|| = ||a]. We further know that |f(a + Aa) â f(a) |v, +g + Ary - v2 + Arg: 2, + Ary: Ars â 21+ 22| < (119) |Azy| |x| + [Ao] lai] + O(A?) .
|âx1| |x2| + |âx2| |x1| + O(â2) .
Multiplying n terms gives:
Io + Aci) âI Tans At + o(a%)| < (120) i=1 i i=l i=l â* n Ii i=l i=l + O(A2). âwt lmax
e division: f(z) = 2 and Vf (x) = (4,-3). which gives ||V f(a)|| = tl. We further know that a+Ar, v4 (a1 + Axy)a â 21 (x2 + Ara) + A - If(w + Aw) â f(x)| ta+Arg 22 (xo + Axe)x2 (121) An wg â Arg: 21 Ax, _ Ara r , o(A2) ; x3 + Axo - x2 XQ x3 @ square root: f(a) = Vand f'(z) = Eee which gives | fâ(x)| = we © exponential function: f(x) = exp(x) and fâ(x) = exp(zx), which gives | fâ(x)| = exp(z). e error function: f(a) =erf(x) and fâ(x) = ae exp(â2?), which gives | fâ(2)| = ae exp(â2?). e complementary error function: f(x) = erfe(x) and fâ(x) = -z exp(â2â), which gives | fâ(x)| = Fa exp(â# 2).
Lemma 20. /f the values j1,w,v,7 have a precision of ¢, the singular value (Eq. (61p) evaluated with the formulas given in Eq. 4) and Eq. (61) has a precision better than 292¢.
This means for a machine with a typical precision of 2~°? = 2.220446 - 10-16, we have the rounding error ⬠© 10~1%, the evaluation of the singular value (Eq. (61)) with the formulas given in Eq. and Eq. (61) (61) has a precision better than 10-18 > 292e.
Proof. We have the numerical precision ⬠of the parameters f1,w,v,7, that we denote by Ap, Aw, Av, Ar together with our domain 2.
With the error propagation rules that we derived in Subsection A3.4.5, we can obtain bounds for the numerical errors on the following simple expressions:
A (pw) < Ape |w| + Aw |p| < 0.2⬠(122) A(vr) < Av |r| + Ar |r| < BEE Le Se (A(vr)2 + A2 lr) 53 < (6 + 1.25 - 1.5â¬)/4 < 2e (ww) + A (vr) = 5 Qe A (uw) A (pw) + A (2) < 2.2 A de
â
34
(122)
A (v2) < A < x A (V2Ver) < V2A (Vor) + urd (V2) < V2-1875⬠+ 1.5-1.25- i < 3.5¢ (+) < (A (qs) V2V0F + |e A (v2vo7)) â+â, as < 0.2eV/2V0.64 + 0.01 - 3.5e) au < 0.25¢ A (â*) < (4 (ue + v7) V2V07 + |p + vT| A (v2ver)) wa < (3.2«v2V0.64 + 1.885 - 3.5¢) < 8¢. 2- 0.64
Using these bounds on the simple expressions, we can now calculate bounds on the numerical errors of compound expressions:
. plus 2 -(+4 yâ (4 pu )< A | erfe < ee \Vever] A (123) ( (=) Vr v2.07 2 yl d5e < 0.3¢ . (pw tr 2 ~(4g4z)â (ââ*) A fc < ee \Yev) A (| ââ] < 124 (ex â Ca J2/vT )) Vir J2vT (124) 2 var < 10e A (ehh) < (eM) A (MTT) < (125) 99479 De < 5.7⬠(126)
Subsequently, we can use the above results to get bounds for the numerical errors on the Jacobian entries (Eq. (54)), applying the rules from Subsection A3.4.5 again:
_ 1 wo (*) (4 pow )3 )) . A(Aiu) A (S (ce erfc Vive erfe jt 2 <6¢e, (127)
and we obtain A (Ji2) < 78¢, A (Jar) < 189¢, A (Jo2) < 405¢ and A (ji) < 52â¬. We also have bounds on the absolute values on Jj; and ji (see Lemma|6| Lemma}7| and Lemma)9), therefore we can propagate the error also through the function that calculates the singular value (Eq. (61).
A(S(u,w,V,7,A,0)) = (128) a(3 (Va + Jaz â 2ftFi2)? + (Joi â 2ftAir â Fiz)? + JV (Tu â Far + 2jt ia)? + (Tia + Tar = 2%iFu)?) ) < 292e.
Precision of Implementations. We will show that our computations are correct up to 3 ulps. For our implementation in GNU C library and the hardware architectures that we used, the precision of all mathematical functions that we used is at least one ulp. The term âulpâ (acronym for âunit in the last placeâ) was coined by W. Kahan in 1960. It is the highest precision (up to some factor smaller 1), which can be achieved for the given hardware and ï¬oating point representation.
Kahan deï¬ned ulp as [21]:
35
âUlp(x) is the gap between the two ï¬nite ï¬oating-point numbers nearest x, even if x is one of them. (But ulp(NaN) is NaN.)â
Harrison deï¬ned ulp as [15]:
âan ulp in x is the distance between the two closest straddling floating point numbers a and 8, i.e. those with a < x < band a Â¥ b assuming an unbounded exponent range.â
In the literature we ï¬nd also slightly different deï¬nitions [29].
According to [29] who refers to [11]:
âTEEE-754 mandates four standard rounding modes:â âRound-to-nearest: r(x) is the floating-point value closest to x with the usual distance; if two floating-point value are equally close to x, then r(x) is the one whose least significant bit is equal to zero.â âTEEE-754 standardises 5 operations: addition (which we shall note © in order to distinguish it from the operation over the reals), subtraction (©), multiplication (®), division (@), and also square root.â âTEEE-754 specifies em exact rounding [Goldberg, 1991, §1.5]: the result of a floating-point operation is the same as if the operation were performed on the real numbers with the given inputs, then rounded according to the rules in the preceding section. Thus, x @ y is defined as r(x + y), with x and y taken as elements of RU {-00, +00}; the same applies for the other operators.â
Consequently, the IEEE-754 standard guarantees that addition, subtraction, multiplication, division, and squared root is precise up to one ulp.
We have to consider transcendental functions. First the is the exponential function, and then the complementary error function erfc(x), which can be computed via the error function erf(x).
Intel states [29]:
âWith the Intel486 processor and Intel 387 math coprocessor, the worst- case, transcendental function error is typically 3 or 3.5 ulps, but is some- times as large as 4.5 ulps.â
According //man.openbsd.org/OpenBSD-current/man3/exp.3: to https://www.mirbsd.org/htman/i386/man3/exp.htm and http:
âexp(x), log(x), expm1(x) and log1p(x) are accurate to within an ulpâ
which is the same for freebsd https://www.freebsd.org/cgi/man.cgi?query=exp&sektion= 3&apropos=0&manpath=freebsd:
âThe values of exp(0), expm1(0), exp2(integer), and pow(integer, integer) are exact provided that they are representable. Otherwise the error in these functions is generally below one ulp.â
The same holds for âFDLIBMâ http://www.netlib.org/fdlibm/readme:
âFDLIBM is intended to provide a reasonably portable (see assumptions below), reference quality (below one ulp for major functions like sin,cos,exp,log) math library (libm.a).â
In http://www.gnu.org/software/libc/manual/html_node/ Errors-in-Math-Functions.html we ï¬nd that both exp and erf have an error of 1 ulp while erfc has an error up to 3 ulps depending on the architecture. For the most common architectures as used by us, however, the error of erfc is 1 ulp.
We implemented the function in the programming language C. We rely on the GNU C Library [26]. According to the GNU C Library manual which can be obtained from http://www.gnu.org/
36
Fanation > 050 00 os TO Ts
Figure A4: Graphs of the upper and lower bounds on erfc. The lower bound â Ï( 2eâx2 â x2+2+x) (red), the
2eâx2 x2+ 4
e727 p . : upper bound AV) (green) and the function erfc(a) (blue) as treated in Lemma)22
software/libc/manual/pdf/libc.pdf, the errors of the math functions exp, erf, and erfc are not larger than 3 ulps for all architectures [26, pp. 528]. For the architectures ix86, i386/i686/fpu, and m68k/fpmu68k/m680x0/fpu that we used the error are at least one ulp [26, pp. 528].
# Intermediate Lemmata and Proofs
Since we focus on the ï¬xed point (µ, ν) = (0, 1), we assume for our whole analysis that α = α01 and λ = λ01. Furthermore, we restrict the range of the variables µ â [µmin, µmax] = [â0.1, 0.1], Ï â [Ïmin, Ïmax] = [â0.1, 0.1], ν â [νmin, νmax] = [0.8, 1.5], and Ï â [Ïmin, Ïmax] = [0.8, 1.25].
For bounding different partial derivatives we need properties of different functions. We will bound a the absolute value of a function by computing an upper bound on its maximum and a lower bound on its minimum. These bounds are computed by upper or lower bounding terms. The bounds get tighter if we can combine terms to a more complex function and bound this function. The following lemmata give some properties of functions that we will use in bounding complex functions.
f i e~
Throughout this work, we use the error function erf(x) := 1â Ï function erfc(x) = 1 â erf(x). Lemma 21 (Basic functions). exp(x) is strictly monotonically increasing from 0 at ââ to â at â and has positive curvature.
According to its deï¬nition erfc(x) is strictly monotonically decreasing from 2 at ââ to 0 at â.
Next we introduce a bound on erfc: Lemma 22 (Erfc bound from Abramowitz).
nee enfe(s) < â (129) Va (Va? +242) ~ va (\/22+4 +2)
for x > 0.
Proof. The statement follows immediately from [1] (page 298, formula 7.1.13).
These bounds are displayed in ï¬gure A4.
37
x'exp('2)âerfotx)
# explx"2)"erfelx)
Figure A5: Graphs of the functions ex2 and Lemma 24, respectively. erfc(x) (left) and xex2 erfc(x) (right) treated in Lemma 23
Lemma 23 (Function ex2 has positive curvature (positive 2nd order derivative), that is, the decreasing slowes down.
A graph of the function is displayed in Figure A5.
# Proof. The derivative of ex2
erfc(x) is âex2 erfc(x) âx = 2ex2 x erfc(x) â 2 â Ï . (130)
erfc(x) is
Using Lemma 22, we get
deâ erfe(x) e.g x âââ = 2" rerfe(x) â < - <0 Ox (x) Jr vi ( +442) Jr Jr (131)
Thus ex2 The second order derivative of ex2
erfc(x) is strictly monotonically decreasing for x > 0.
â2ex2 erfc(x) âx2 = 4ex2 x2 erfc(x) + 2ex2 erfc(x) â 4x â Ï . (132)
Again using Lemma 22 (ï¬rst inequality), we get
: 2a 2( (2x? +1) ra erfe(x) â =.) > (133)
4 (22? + 1) 4a Vi(va2+2+2) vr 4 (2? â Va? + 22 +1) Vi (Va? +240 4 (a? â Vat + 22? +1) 5 Va (Va? +242) 4 (2? â Vat + 2224141) Va (va? +242)
# 4x â Ï
â
=
>
= 0
For the last inequality we added 1 in the numerator in the square root which is subtracted, that is, making a larger negative term in the numerator.
38
< 0
Lemma 24 (Properties of xex2 tonically increasing to 1â Ï . erfc(x)). The function xex2 erfc(x) has the sign of x and is mono-
# Proof. The derivative of xex2
erfc(x) is
2ex2 x2 erfc(x) + ex2 erfc(x) â 2x â Ï . (134)
This derivative is positive since
2ex2 x2 erfc(x) + ex2 erfc(x) â 2x â Ï = (135)
Qa 2 (2x? + 1) Qa 2((2a? +1) âa (Va? +2+2)) 2? (442 fol) â _ oF (20+ 1) erfe(a) Vn Vive +2) va Vi (Va? +242) 2(a? âaV2? +241) 2(2? -âaVa? +241) s 2 (x? -ny/2? + +2+1) Vi (Va? +242) Vit (Va? +242) Jit (Va? +242) 2 (x? â Vat + 20? +141) 2(2- V+? +1) Vi (Ve +240) Vi (Vi 4240) 0.
We apply Lemma 22 to x erfc(x)ex2
and divide the terms of the lemma by x, which gives
2 2 2 Flzvtni) < werfe(x)eâ < vt (Jae t1+1) : (136)
For limxââ both the upper and the lower bound go to 1â
# we
# Ï .
Lemma 25 (Function µÏ). h11(µ, Ï) = ÂµÏ is monotonically increasing in µÏ. It has minimal value t11 = â0.01 and maximal value T11 = 0.01.
# Proof. Obvious.
Lemma 26 (Function Î½Ï ). h22(ν, Ï ) = Î½Ï is monotonically increasing in Î½Ï and is positive. It has minimal value t22 = 0.64 and maximal value T22 = 1.875.
Proof. Obvious. Lemma 27 (Function µÏ+Î½Ï Î½Ï Î½Ï and µÏ. It has minimal value t1 = 0.5568 and maximal value T1 = 0.9734.
increasing in both
# Proof. The derivative of the function µÏ+xâ â x
with respect to x is
2
â 1 â 2 x â ÂµÏ + x â 2x3/2 2 = 2x â (ÂµÏ + x) â 2 2x3/2 = x â ÂµÏ â 2x3/2 2 > 0 , (137)
since x > 0.8 · 0.8 and ÂµÏ < 0.1 · 0.1. Lemma 28 (Function µÏ+2Î½Ï Î½Ï Î½Ï and µÏ. It has minimal value t2 = 1.1225 and maximal value T2 = 1.9417.
increasing in both
# Proof. The derivative of the function µÏ+2xâ â x â â
with respect to x is
2
2 x ÂµÏ + 2x â 2x3/2 2 â = 4x â (ÂµÏ + 2x) â 2 2x3/2 = 2x â ÂµÏ â 2x3/2 2 > 0 . (138)
39
=
µÏâ â 2 Lemma 29 (Function monotonically increasing in µÏ. T3 = 0.0088388. Î½Ï ). h3(µ, Ï, ν, Ï ) = µÏâ â monotonically decreasing in Î½Ï and It has minimal value t3 = â0.0088388 and maximal value 2 νÏ
Proof. Obvious.
2
has a minimum at 0 for µ = 0 or Lemma 30 (Function Ï = 0 and has a maximum for the smallest Î½Ï and largest |µÏ| and is larger or equal to zero. It has minimal value t4 = 0 and maximal value T4 = 0.000078126.
Proof. Obvious.
â
â
Lemma 31 (Function 2 Ï (αâ1) â Î½Ï ). 2 Ï (αâ1) â Î½Ï > 0 and decreasing in Î½Ï .
Proof. Statements follow directly from elementary functions square root and division.
Lemma 32 (Function 2 â ert ( > 0 and decreasing in vt and increasing in |ww. stig) 2â (i)
Proof. Statements follow directly from Lemma[21]and erfc. Lemma 33 (Function V2 ( eae â fz). For
Lemma 33 (Function V2 ( eae â fz). For X = X and a = ago, V2 (Ge - ts) < 0 and increasing in both vt and jw.
Proof. We consider the function V2 ( (oD - <z); which has the derivative with respect to x:
2 a 3(a â 1)pw V2 (3S ~ 99572) (139)
This derivative is larger than zero, since
V7 ( Q _ oh) > v2 (o- ee) > 0. (140) T 2(v7r)3/2 2(v7)5/2 2(v7)3/2
The last inequality follows from α â 3·0.1·0.1(αâ1) 0.8·0.8
# > 0 for a = aor.
We next consider the function V2 (8 - Sz) , which has the derivative with respect to x:
(8 - Sz) J 2a
Ï (α â 1) (Î½Ï )3/2 > 0 . (141)
Lemma 34 (Function V2 (< De se p âatepe tr] avr) ). The function (v UT ~1)(aâ1)p2w? _ fi . . . . . . V2 (â Youreâ 4 âatawwtl avr) < 0 is decreasing in vt and increasing in jw. (uT)3/2 UT
Proof. We deï¬ne the function
2 ((-1)(a- 1)p?w? _ Tat apwt1 Vo ( aa | Ja avr (142)
which has as derivative with respect to x: vz 3(aâ1)pPw? T 25/2
vz 3(aâ1)pPw? = -a+ ores +1 a (143) T 25/2 2a3/2 2/z
40
1 V2rx/2 (3(@ = 1)pPw? = x(-a + apw +1) â ax) .
The derivative of the term 3(α â 1)µ2Ï2 â x(âα + Î±ÂµÏ + 1) â αx2 with respect to x is â1 + α â µÏα â 2αx < 0, since 2αx > 1.6α. Therefore the term is maximized with the smallest value for x, which is x = Î½Ï = 0.8 · 0.8. For ÂµÏ we use for each term the value which gives maximal contribution. We obtain an upper bound for the term: 3(â0.1 · 0.1)2(α01 â 1) â (0.8 · 0.8)2α01 â 0.8 · 0.8((â0.1 · 0.1)α01 â α01 + 1) = â0.243569 . (144) Therefore the derivative with respect to x = Î½Ï is smaller than zero and the original function is decreasing in νÏ
We now consider the derivative with respect to x = µω. The derivative with respect to x of the function

√(2/π)(−(α−1)x²/(ντ)^{3/2} + (−α + αx + 1)/√(ντ) − α√(ντ))

is

√(2/π)(αντ − 2(α−1)x)/(ντ)^{3/2} .

Since −2x(α−1) + ντα ≥ −2 · 0.01 · (α01 − 1) + 0.8 · 0.8 α01 > 1.0574 > 0, the derivative is larger than zero. Consequently, the original function is increasing in µω.
The maximal value is obtained with the minimal ντ = 0.8 · 0.8 and the maximal µω = 0.1 · 0.1. The maximal value is

√(2/π)( 0.1² · 0.1² (−1)(α01 − 1)/(0.8 · 0.8)^{3/2} + (0.1 · 0.1 α01 − α01 + 1)/√(0.8 · 0.8) − √(0.8 · 0.8) α01 ) = −1.72296 .
Therefore the original function is smaller than zero.
Lemma 35 (Function √(2/π)((α²−1)µω/(ντ)^{3/2} − 3α²/√(ντ))). For λ = λ01 and α = α01, √(2/π)((α²−1)µω/(ντ)^{3/2} − 3α²/√(ντ)) < 0 and increasing in both ντ and µω.
Proof. The derivative of the function

√(2/π)((α²−1)µω/x^{3/2} − 3α²/√x)   (148)

with respect to x is

√(2/π)(3α²/(2x^{3/2}) − 3(α²−1)µω/(2x^{5/2})) = 3(α²x − (α²−1)µω)/(√(2π) x^{5/2}) > 0 ,   (149)

since α²x − µω(α²−1) ≥ α01² · 0.8 · 0.8 − 0.1 · 0.1 · (α01² − 1) > 1.77387 .
The derivative of the function

√(2/π)((α²−1)x/(ντ)^{3/2} − 3α²/√(ντ))   (150)

with respect to x is

√(2/π)(α²−1)/(ντ)^{3/2} > 0 .   (151)

The maximal function value is obtained by the maximal ντ = 1.5 · 1.25 and the maximal µω = 0.1 · 0.1. The maximal value is √(2/π)((α01²−1) · 0.1 · 0.1/(1.5 · 1.25)^{3/2} − 3α01²/√(1.5 · 1.25)) = −4.88869. Therefore the function is negative.
Lemma 36 (Function √(2/π)((α²−1)µω/√(ντ) − 3α²√(ντ))). The function √(2/π)((α²−1)µω/√(ντ) − 3α²√(ντ)) < 0 is decreasing in ντ and increasing in µω.
Proof. The derivative of the function

√(2/π)((α²−1)µω/√x − 3α²√x)   (152)

with respect to x is

√(2/π)(−(α²−1)µω/(2x^{3/2}) − 3α²/(2√x)) = (−(α²−1)µω − 3α²x)/(√(2π) x^{3/2}) < 0 ,   (153)

since −3α²x − µω(α²−1) ≤ −3α01² · 0.8 · 0.8 + 0.1 · 0.1 (α01² − 1) < −5.35764.
The derivative of the function

√(2/π)(x(α²−1)/√(ντ) − 3α²√(ντ))   (154)

with respect to x is

√(2/π)(α²−1)/√(ντ) > 0 .   (155)

The maximal function value is obtained for minimal ντ = 0.8 · 0.8 and the maximal µω = 0.1 · 0.1. The value is √(2/π)(0.1 · 0.1 (α01²−1)/√(0.8 · 0.8) − 3√(0.8 · 0.8) α01²) = −5.34347. Thus, the function is negative.
Lemma 37 (Function ντ e^{(µω+ντ)²/(2ντ)} erfc((µω+ντ)/(√2√(ντ)))). The function ντ e^{(µω+ντ)²/(2ντ)} erfc((µω+ντ)/(√2√(ντ))) > 0 is increasing in ντ and decreasing in µω.
Proof. The derivative of the function

x e^{(µω+x)²/(2x)} erfc((µω+x)/(√2√x))   (156)

with respect to x is

e^{(µω+x)²/(2x)} (x(x+2) − µ²ω²) erfc((µω+x)/(√2√x)) / (2x) + (µω − x)/(√(2π)√x) .   (157)
This derivative is larger than zero, since

e^{(µω+ντ)²/(2ντ)} (ντ(ντ+2) − µ²ω²) erfc((µω+ντ)/(√2√(ντ))) / (2ντ) + (µω − ντ)/(√(2π)√(ντ))   (158)
> 0.4349 (ντ(ντ+2) − µ²ω²)/(2ντ) + (µω − ντ)/(√(2π)√(ντ))
= ( (√(2π)/2) 0.4349 (ντ(ντ+2) − µ²ω²) + √(ντ)(µω − ντ) ) / (√(2π)ντ)
> ( 0.5 (ντ(ντ+2) − µ²ω²) + √(ντ)(µω − ντ) ) / (√(2π)ντ)
= ( −0.5µ²ω² + µω√(ντ) − ντ√(ντ) + 0.5(ντ)² + ντ ) / (√(2π)ντ)
= ( −0.5µ²ω² + µω√(ντ) + (0.5ντ − √(ντ))² + 0.25(ντ)² ) / (√(2π)ντ)
> 0 .
We explain this chain of inequalities:
• The first inequality follows by applying Lemma 23, which says that e^{(µω+ντ)²/(2ντ)} erfc((µω+ντ)/(√2√(ντ))) is strictly monotonically decreasing. The minimal value that is larger than 0.4349 is taken on at the maximal values ντ = 1.5 · 1.25 and µω = 0.1 · 0.1.

• The second inequality uses (√(2π)/2) · 0.4349 = 0.545066 > 0.5.

• The equalities are just algebraic reformulations.

• The last inequality follows from −0.5µ²ω² + µω√(ντ) + 0.25(ντ)² > 0.25(0.8 · 0.8)² − 0.5 · (0.1)²(0.1)² − 0.1 · 0.1 · √(0.8 · 0.8) = 0.09435 > 0.
Therefore the function is increasing in ντ. Decreasing in µω follows from the decrease of e^{x²} erfc(x) (Lemma 23). Positivity follows from the fact that erfc and the exponential function are positive and that ντ > 0.
Lemma 38 (Function ντ e^{(µω+2ντ)²/(2ντ)} erfc((µω+2ντ)/(√2√(ντ)))). The function ντ e^{(µω+2ντ)²/(2ντ)} erfc((µω+2ντ)/(√2√(ντ))) > 0 is increasing in ντ and decreasing in µω.
Proof. The derivative of the function

x e^{(µω+2x)²/(2x)} erfc((µω+2x)/(√2√x))   (159)

is

(1/(2x)) ( e^{(µω+2x)²/(2x)} (2x(2x+1) − µ²ω²) erfc((µω+2x)/(√2√x)) + √(2/π) √x (µω − 2x) ) .   (160)
We only have to determine the sign of e^{(µω+2x)²/(2x)} (2x(2x+1) − µ²ω²) erfc((µω+2x)/(√2√x)) + √(2/π) √x (µω − 2x), since all other factors are obviously larger than zero.
This derivative is larger than zero, since
(nw t2u7)? fiw + 2vT Vie ~ (2vr(2v7 + 1) â pw?) erfe Gree Dor \+ VT (ww â 2vr) > (161) 0.463979 (2v7(2v7 + 1) â pw?) + JvT(uw â 2vT) = â 0.463979)? w? + pw /VT + 1.85592(vT)? + 0.927958v7 â QTrVvT = pu (/vT â 0.463979) + 0.85592(v7)? + (ut â Sur * _ 0.0720421vr > 0.
We explain this chain of inequalities:
• The first inequality follows by applying Lemma 23, which says that e^{(µω+2ντ)²/(2ντ)} erfc((µω+2ντ)/(√2√(ντ))) is strictly monotonically decreasing. The minimal value that is larger than 0.261772 is taken on at the maximal values ντ = 1.5 · 1.25 and µω = 0.1 · 0.1, and 0.261772 √π > 0.463979.
⢠The equalities are just algebraic reformulations.
• The last inequality follows from µω(√(ντ) − 0.463979µω) + 0.85592(ντ)² − 0.0720421ντ > 0.85592 · (0.8 · 0.8)² − 0.1 · 0.1 (√(1.5 · 1.25) + 0.1 · 0.1 · 0.463979) − 0.0720421 · 1.5 · 1.25 > 0.201766.
Therefore the function is increasing in ντ. Decreasing in µω follows from the decrease of e^{x²} erfc(x) (Lemma 23). Positivity follows from the fact that erfc and the exponential function are positive and that ντ > 0.
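The monotonicity claims of Lemmata 37 and 38 can be spot-checked numerically. The sketch below is an illustration only; it uses SciPy's `erfcx(z) = e^{z²} erfc(z)` and scans the functions over the stated domain.

```python
import numpy as np
from scipy.special import erfcx   # erfcx(z) = exp(z^2) * erfc(z)

def f37(mw, nt):
    # nu*tau * e^{(mw+nt)^2/(2 nt)} * erfc((mw+nt)/(sqrt(2) sqrt(nt)))  (Lemma 37)
    return nt * erfcx((mw + nt) / np.sqrt(2.0 * nt))

def f38(mw, nt):
    # nu*tau * e^{(mw+2 nt)^2/(2 nt)} * erfc((mw+2 nt)/(sqrt(2) sqrt(nt)))  (Lemma 38)
    return nt * erfcx((mw + 2.0 * nt) / np.sqrt(2.0 * nt))

nt = np.linspace(0.8 * 0.8, 1.5 * 1.25, 500)
for f in (f37, f38):
    for mw in (-0.01, 0.0, 0.01):
        vals = f(mw, nt)
        assert np.all(vals > 0) and np.all(np.diff(vals) > 0)   # positive, increasing in nu*tau
    assert np.all(f(-0.01, nt) > f(0.01, nt))                   # decreasing in mu*omega
```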
Lemma 39 (Bounds on the Derivatives). The following bounds on the absolute values of the derivatives of the Jacobian entries J11(µ, ω, ν, τ, λ, α), J12(µ, ω, ν, τ, λ, α), J21(µ, ω, ν, τ, λ, α), and J22(µ, ω, ν, τ, λ, α) with respect to µ, ω, ν, and τ hold:
|∂J11/∂µ| < 0.0031049101995398316   (162)
|∂J11/∂ω| < 1.055872374194189
|∂J11/∂ν| < 0.031242911235461816
|∂J11/∂τ| < 0.03749149348255419
|∂J12/∂µ| < 0.031242911235461816
|∂J12/∂ω| < 0.031242911235461816
|∂J12/∂ν| < 0.21232788238624354
|∂J12/∂τ| < 0.2124377655377270
|∂J21/∂µ| < 0.02220441024325437
|∂J21/∂ω| < 1.146955401845684
|∂J21/∂ν| < 0.14983446469110305
|∂J21/∂τ| < 0.17980135762932363
|∂J22/∂µ| < 0.14983446469110305
|∂J22/∂ω| < 0.14983446469110305
|∂J22/∂ν| < 1.805740052651535
|∂J22/∂τ| < 2.396685907216327
Proof. For each derivative we compute a lower and an upper bound and take the maximum of the absolute value. A lower bound is determined by minimizing the single terms of the functions that represents the derivative. An upper bound is determined by maximizing the single terms of the functions that represent the derivative. Terms can be combined to larger terms for which the maximum and the minimum must be known. We apply many previous lemmata which state properties of functions representing single or combined terms. The more terms are combined, the tighter the bounds can be made.
Next we go through all the derivatives, where we use Lemma 25, Lemma 26, Lemma 27, Lemma 28, Lemma 29, Lemma 30, Lemma 21, and Lemma 23 without citing. Furthermore, we use the bounds on the simple expressions t11, t22, ..., and T4 as defined in the aforementioned lemmata:
∂J11/∂µ
We use Lemma 31 and consider the expression α e^{(µω+ντ)²/(2ντ)} erfc((µω+ντ)/(√2√(ντ))) − √(2/π)(α−1)/√(ντ) in brackets. An upper bound on the maximum is

α01 e^{t1²} erfc(t1) − √(2/π)(α01 − 1)/√(T22) = 0.591017 .   (163)

A lower bound on the minimum is

α01 e^{T1²} erfc(T1) − √(2/π)(α01 − 1)/√(t22) = 0.056318 .   (164)

Thus, an upper bound on the maximal absolute value is

½ λ01 ω_max² e^{t4} ( α01 e^{t1²} erfc(t1) − √(2/π)(α01 − 1)/√(T22) ) = 0.0031049101995398316 .   (165)
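The constants of Eqs. (163)–(165) can be reproduced numerically. The sketch below assumes the fixed point constants α01 ≈ 1.6733 and λ01 ≈ 1.0507 (their numerical values are not restated in this section) together with the bounds t1, T1, t22, T22, t4 from the lemmata above; it only illustrates how the bound is assembled and is not part of the proof.

```python
import numpy as np
from scipy.special import erfc

alpha01, lambda01 = 1.6732632, 1.0507010    # assumed values of the fixed point constants
t1, T1 = 0.5568, 0.9734                     # Lemma 27
t22, T22 = 0.64, 1.875                      # Lemma 26
t4 = 0.0                                    # Lemma 30
w_max = 0.1

upper = alpha01 * np.exp(t1**2) * erfc(t1) - np.sqrt(2 / np.pi) * (alpha01 - 1) / np.sqrt(T22)
lower = alpha01 * np.exp(T1**2) * erfc(T1) - np.sqrt(2 / np.pi) * (alpha01 - 1) / np.sqrt(t22)
bound = 0.5 * lambda01 * w_max**2 * np.exp(t4) * upper

print(upper, lower, bound)   # approximately 0.591, 0.056, 0.0031
```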
âJ11 âÏ
â
We use Lemma and consider the expression Vee ue = a(pw + wotur)2 Veâ a orfe (44) in brackets.
An upper bound on the maximum is
Ï (α01 â 1)T11 t22 â â α01(t11 + 1)eT 2 1 erfc(T1) = â0.713808 . (166)
A lower bound on the minimum is V2 (oo â tu
Ï (α01 â 1)t11 t22 â â α01(T11 + 1)et2 1 erfc(t1) = â0.99987 . (167)
This term is subtracted, and 2 â erfc(x) > 0, therefore we have to use the minimum and the maximum for the argument of erfc.
Thus, an upper bound on the maximal absolute value is
14, (eu (yf#on =D (Tis + Vell erfe(ts) | âerfe(Ts) +2 = âe -âa + L)e! eric(t â erfe(T: = 301 Vin ora 1 3
1.055872374194189 .
(168)
âJ11 âν
We consider the term in brackets
(ustvy?® (pw tut) | [2 ((aâ1)pw a ae ente (MZ) : V7 ( ors? =) : (169)
# αe
We apply Lemma 33 for the ï¬rst sub-term. An upper bound on the maximum is
. 2 ao. â LT; ay . ae! erfe(ty) 4 V2 Me a or) 0.0104167 . (170) 22
A lower bound on the minimum is â2D agie⢠erfe(T1) 4 / Tv
α01eT 2 1 erfc(T1) + (α01 â 1)t11 t3/2 22 â α01â t22 = â0.95153 . (171)
Thus, an upper bound on the maximal absolute value is
1 2) 2 ao. â L)t fay 7 TdovTimaxtmaxe!4 (oo erfe(Ti) + fo ( ate a oe) (172) 122 0.031242911235461816 .
âJ11 âÏ We use the results of item âJ11 bound on the maximal absolute value is
âν were the brackets are only differently scaled. Thus, an upper
1 t 72 _./2 f(a -Vtu aor _ dot maxWnax⬠4 (oo 1 erfc(T}) 4 V3 { BP - los (173) 0.03749149348255419 .
âJ12 âµ Since âJ12
âµ = âJ11
âν , an upper bound on the maximal absolute value is
1 2. 2 { (aoi â Lt a ~ Hor Tmax maxe!4 (ie erfc(T;) 4 Vi ( we a se.) = (174) P22 0.031242911235461816 .
âJ12 âÏ We use the results of item âJ11 bound on the maximal absolute value is
âν were the brackets are only differently scaled. Thus, an upper
1 2 aoi â L)t a ~ PoittmaxTmnaxel⢠(es erfe(T1) 4 â (â 1 a se.) (175) 22 0.031242911235461816 .
â
OTi2 av For the second term in brackets, we see that a1 72 O01 TZ age erfe(t;) = 1.53644. We now check different values for v2 (-1)(a = 1)p?w? _ VT(a + apw â( V2 /F ' 3/2
mineT 2 1 erfc(T1) = 0.465793 and maxet2 1 erfc(t1) = 1.53644.
â
v2 (-1)(a = 1)p?w? _ VT(a + apw â 1) ar3/2 176 â( V2 /F ' 3/2 Ve ) ; (176)
where we maximize or minimize all single terms.
A lower bound on the minimum of this expression is
[2 f (-1)(@01 = Dp iraxWrrax , VTmin (oon + aoitii â 1) exon Tate (177) 7 De Finn vale Vimin âmin V Tmin âmax â 1.83112.
An upper bound on the maximum of this expression is
3/2 v2 (-D(@01 = DeininWmin , VTmax(@o1 + 01711 â 1) _ ante (178) 7 Vines V/Tinax we VP maxx 0.0802158 .
An upper bound on the maximum is
5 3/2 1) ret vz (-1)(@01 â DpRinein _ CorTein , (179) 8 7 Vines Tina Vi nax
1 8 â
Ïmax(α01 + α01T11 â 1) ν3/2 min maxet2 + α01Ï 2 1 erfc(t1) = 0.212328 .
A lower bound on the minimum is
1 goes 2 7
mineT 2 α01Ï 2 1 erfc(T1) + (180)
â
(â1)(α01 â 1)µ2 ν5/2 Ïmin min maxÏ2 max â + Ïmin(α01 + α01t11 â 1) ν3/2 max â α01Ï 3/2 max â νmin = â 0.179318 .
Thus, an upper bound on the maximal absolute value is
5 3/2 dy rets v2 (=1)(a01 â Dpvinâmin Q01T nin t (181) 8 7 Uidese/ Tomas Vmax
1 8 â
Ïmax(α01 + α01T11 â 1) ν3/2 min maxet2 + α01Ï 2 1 erfc(t1) = 0.21232788238624354 .
âJ12 âÏ
We use Lemma B4]to obtain an upper bound on the maximum of the expression of the lemma: 2 (0.1? -0.1?(-1)(a01 - 1) (0.1-0.1)a01 â a1 + 1 = | ââ___+ 9 â V0.8 - 0.8001 4 â1.72296 Viz ( (0.8 -0.8)372 eo V08-0.8 )
2 (0.1? -0.1?(-1)(a01 - 1) (0.1-0.1)a01 â a1 + 1 = | ââ___+ 9 â V0.8 - 0.8001 4 â1.72296 . Viz ( (0.8 -0.8)372 eo V08-0.8 ) (182)
We use Lemma[34]|to obtain an lower bound on the minimum of the expression of the lemma: [2 (eo =) Vid 15a: 4 (â0.1 -0.1)ao1 â a1 + *) Tv (1.5 - 1.25)8/2 V1.5 - 1.25
[2 (eo =) Vid 15a: 4 (â0.1 -0.1)ao1 â a1 + *) 9.2302. Tv (1.5 - 1.25)8/2 V1.5 - 1.25 (183)
wwtur)? Next we apply Lamia the expression vTe oor erfe (4 V5 2). We use Lemma] to obtain an upper bound on the maximum of this expression:
Next we apply Lamia the expression vTe oor erfe to obtain an upper bound on the maximum of this expression: (4.5-1.25â0.1.0.1)2 1.5-1.25â-0.1-0.1
(1.5·1.25â0.1·0.1)2 2·1.5·1.25 â â 1.5 · 1.25e α01 erfc 2 1.5 · 1.25 = 1.37381 . (184)
We use Lemma 37 to obtain an lower bound on the minimum of this expression:
(0.8-0.8+0.1-0.1)? . {(9.8-0.8+0.1-0.1 0.8- 0.867 20503 ~~ ao; erfe | ââââââââ_ } = 0.620462. 185 on ( V2V08-08 ) 89)
wwtuT)? " Next we apply Lemma]23}for dae âa erfe (444). An upper bound on this expres- sion is
(0.8-0.8â0.1-0.1)? 26 SIO ert ( 0.8 â 0.1-0.1 ao) = 1.96664. (186)
A lower bound on this expression is
1.5-1.25+0.1-0.1
2e (1.5·1.25+0.1·0.1)2 2·1.5·1.25 α01 erfc â â 2 1.5 · 1.25 = 1.4556 . (187)
The sum of the minimal values of the terms is â2.23019+0.62046+1.45560 = â0.154133. The sum of the maximal values of the terms is â1.72295 + 1.37380 + 1.96664 = 1.61749.
Thus, an upper bound on the maximal absolute value is 1 (+722)? (ty, + Tho =o e4 (corte 272 erfc Ce 8 V2/To2
1 8 (t11+T22 )2 2T22 λ01et4 â α01T22e erfc + (188)
2 -)T, erfc(t,) 4 v2 _ (G01 -1) T ty) = 0.2124377655377270 .
(α01 â 1)T 2 11 t3/2 22 2α01et2 â 1 erfc(t1) + â + âα01 + α01T11 + 1 t22 â â
1 Vt22))
âJ21 âµ
An upper bound on the maximum is
N31 Wrvax a2yelt (-e7"â¢) erfe(T)) + 202, ele! erfc(t2) â erfe(T3) + 2) = (189) 0.0222044 .
A upper bound on the absolute minimum is do Wrrax azyelt (âe~"*) erfe(t,) + 203
do Wrrax azyelt (âe~"*) erfe(t,) + 203 eT? eM erfc(T2) â erfe(t3) + 2) = (190) 0.00894889 .
Thus, an upper bound on the maximal absolute value is
azyett (-e7â¢) erfe(Ty) + 202, et2 el erfc(t2) â erfe(T3) + 2) = (91) 0.02220441024325437 .
âJ21 âÏ
An upper bound on the maximum is
01(2T11 + 1)et2 α2 λ2 01 (192)
etent erfc(t2) + 2T\1(2 â erfe(T3)) + 2 (-e7â¢) erfe(T)) + \2vFae") =
2 a, (tu + ett (-e7â¢) erfe(T)) + \2vFae") = 1.14696.
A lower bound on the minimum is
rn (0: (Tn +1)e% (-e~"*) erfe(t1) + (193) a, (Qt + Det eT erfe(T2) + 2t11(2 â erfe(T3))+
2 |v") = â0.359403 .
Thus, an upper bound on the maximal absolute value is
01(2T11 + 1)et2 α2 λ2 01 (194)
ete erfe(tg) + 2711 (2 â erfc(T3)) + 2 (âe7â¢) erfe(T1) + (ivr) =
2 as (ti + Le⢠(âe7â¢) erfe(T1) + (ivr) = 1.146955401845684 .
âJ21 âν
An upper bound on the maximum is
v2) (i - 1) _ 2 e 2 c 501 Tmax max "1 ay (-e"*) erfe(Ty) + 409, e% erfe(t,) 4 VTn
1 2
0.149834 .
A lower bound on the minimum is
1 V/2(-1) (a8: - 1) 2 2. 501 Tmaxtmaxe 4 a6) (-e!?) erfc(t) + 402,e erfc(To) 4 Vto2
â 0.0351035 .
Thus, an upper bound on the maximal absolute value is
V/2(-1) (a8: - 1) 1 . . 501 Tmaxtmaxe 4 a, (-e"*) erfe(T)) + 4a? e!2 erfc(t2) 4 Vin
0.14983446469110305 .
âJ21 âÏ
An upper bound on the maximum is
1 /2(-1) (a8: - 1) 501 maxtmaxe" a64 (-e7') erfe(T1) + 4a®,e! erfc(t2) 4 *
0.179801 .
A lower bound on the minimum is
1 V2(-1) (61 â 1) 501M maxtmaxe a6) (-e"â) erfc(t,) + 4a2,et2 erfc(T2) + 4
1 2
â 0.0421242 .
Thus, an upper bound on the maximal absolute value is
1 V2(-1) (261 ~ 1) 501M maxtmaxe aa, (-e"â) erfe(T1) + 4a®,e! erfc(t2) 4 a
0.17980135762932363 .
(195)
(196)
(197)
(198)
(199)
(200)
âJ22 âµ We use the fact that âJ22
âµ = âJ21 α2 01
Jax | Thus, an upper bound on the maximal absolute value is 2 2 ; ; V2(-1) (a8, â 1)
We use the fact that Fae = Jax | Thus, an upper bound on the maximal absolute value is
2
2 2 ; ; V2(-1) (a8, â 1) p01 TinaxWinaxe aay (-e7*) erfe(T)) + 4a? e!2 erfc(t2) 4 Vin
1 2
0.14983446469110305 .
âJ22 âÏ
An upper bound on the maximum is
2 2 1 P 2-1) (a8, _ 1) 5 0iMmaxTmax® 4 apy (-e"') erfe(T,) + 4a2, el erfc(t2) 4 VTh2
0.149834 .
A lower bound on the minimum is
2 2 1 _ : ; 2-1) (1 â 1) 501 HmaxTimaxe 4 [a2 (-e'â) erfe(t1) + 4ae,et? erfe(T2) 4 Vin
â 0.0351035 .
Thus, an upper bound on the maximal absolute value is
2 1 v2 5 0iMmaxTinax® apy (-e') erfe(T)) + 402, e2 erfc(tz) +4
0.14983446469110305 .
âJ22 âν
21) ww We apply Lemma}35]to the expression 2 (Sts - 3). Using Lemna an upper bound on the maximum is
upper bound on the maximum is 1 DrorTinax® 2 ((a1-1Tu y2 (6 2) us TEL?
1 5 DrorTinax® (a3, (-e7') erfe(T)) + 802, e!2 erfc(t2) + (205) 2 ((a1-1Tu 308 y2 (6 2) 7a _ Soi 1.19441. us TEL? VT22
Using Lemma 35, a lower bound on the minimum is
1 5 5 DrorTinax® (a3, (-eâ) erfe(t) + 8a2,e7? erfc(T2) + (206) 2 ((a1-1)tu â 3a3 v2 (ââ =) 1 a )) = -1.80574. u t95 V 022
Thus, an upper bound on the maximal absolute value is
- PrTRawe Ga (-e"â) erfe(t1) + 802,62 erfce(T2) + (207) /2 (ag, - 1) ti = 302, 5 â = 1.805740052651535 . 7 ( Be Vi22
(201)
(202)
(203)
(204)
(207)
âJ22 âÏ
Or : FZ ( (e?=1)nw 4 9 We apply Lemma)|36jto the expression V2 Te 3a°VJUT }. pwtur)? We apply Lemma|37|to the expression ute âses erfe (424) . We apply Lemma|38}to . (Hw t2u7)? 2 the expression vre~ 2-7 _ erfe ( #24247 ) | P V2VuT
We combine the results of these lemmata to obtain an upper bound on the maximum:
1,. (Mitte)? ('T; tos âd2, (-editmee âFiaa ene ( ut =) + (208) 4 V2Vin . 4, Gart2722)? (ty +279 8ap,;Tose e277 erfe a â V2VTx 2027 e~⢠erfe(T,) + 402, ¢!2e~" erfc(t) + 2(2 â erfc(T3)) + 2 on, (ad - 1) Ti 2 wa yf ae ( OL _ 302, Vi = 2.39669 . 7 ( Vin
We combine the results of these lemmata to obtain an lower bound on the minimum:
1 (Ty +229)? (Ty 2tos po (Seditmee viaa ane ( ut =) + (209) V2Vin2 @ur+T22)?7 (ty, + Toe ae, Tee" e 2P2 ane ( - V2VTo2 a2, ett en's erfc(t,) + 4a? eT? eT erfc(T>) + 2 ta (og, â 1) t,o rc 2(2 â erfc(t3)) 4 xe Vln â 309, V T22 = -1.17154.
Thus, an upper bound on the maximal absolute value is
1 â Gitta? (Ty +t pn (-editmete âWaa ane ( ut 2) + (210) V2\/ta2 (142729)? f ( + =) e 22 erfc | â~ââ } â V2VTho 2a2,eT eT erfc(T,) + dad ele erfe(tz) + 2(2 â erfe(T3)) + 2-7, ( (oi -1) Ti 2 anne 7 =e â t. = 2.36 907216327 . / =e ( = 3091 Vio2 396685907216327 2 -t 8a91To2e Ny
Lemma 40 (Derivatives of the Mapping). We assume α = α01 and λ = λ01. We restrict the range of the variables to the domain µ ∈ [−0.1, 0.1], ω ∈ [−0.1, 0.1], ν ∈ [0.8, 1.5], and τ ∈ [0.8, 1.25].

The derivative ∂/∂µ µ̃(µ, ω, ν, τ, λ, α) has the sign of ω.

The derivative ∂/∂ν µ̃(µ, ω, ν, τ, λ, α) is positive.

The derivative ∂/∂µ ξ̃(µ, ω, ν, τ, λ, α) has the sign of ω.

The derivative ∂/∂ν ξ̃(µ, ω, ν, τ, λ, α) is positive.
# Proof.
â
ⵠ˵(µ, Ï, ν, Ï, λ, α)
2 − erfc(x) > 0 according to Lemma 21 and e^{x²} erfc(x) is also larger than zero according to Lemma 23. Consequently, ∂/∂µ µ̃(µ, ω, ν, τ, λ, α) has the sign of ω.
âν ˵(µ, Ï, ν, Ï, λ, α) Lemma 23 says ex2 erfc(x) is decreasing in µÏ+Î½Ï Î½Ï in Î½Ï since it is proportional to minus one over the squared root of Î½Ï .
we . (negative) is increasing
â jyo Mwtyr _ 1.5-1.2540.1-0.1 c We obtain a lower bound by setting Je > CAV for the e®â erfc(a) -5-1.2540.1-0.1 1 term. The term in brackets is larger than of V2V15-1.25 ) Qo1 erfc (25st) - V2V1.5-1.25
term. The term in brackets is larger than masa â 1) = 0.056
2 Ï0.8·0.8 (α01 â 1) = 0.056 Consequently, the function is larger than zero.
4/
Ëξ(µ, Ï, ν, Ï, λ, α)
â âµ
We consider the sub-function ry w+ur\? (ave âa? (a)
# ry ug
ry w+ur\? w+2ur\? (ave âa? (a) erfc (â*) â e( aH) erfc (â*)) : ug VUT
We set x = v7 and y = jw and obtain
v2 JE- 2 (« (oR) ene (EH) ~ GRAY ete (4) ) . (212)
The derivative of this sub-function with respect to y is
fz) (213) a? (a (2a + y) erfe (234) ~ (a + y) erfe (+= x @etuy? aa J20rya(* (ety) erfe( 2) _ ee ceo =) xr
The inequality follows from Lemma 24, which states that zez2 increasing in z. Therefore the sub-function is increasing in y. erfc(z) is monotonically
The derivative of this sub-function with respect to x is â
(Qr+y)? , ty a Vira? (c ae (42? â y?) erfe (2g) -e âe (a ây) (x+y) erfe ( om )) - V2 (a? -1) 3/2 2 /mx? , s Sy)
(214)
The sub-function is increasing in x, since the derivative is larger than zero: Jama? (Sa (42? â y?) erfe (24) ee (x â y)(a + y) erfc
Jama? (Sa (42? â y?) erfe (24) ee (x â y)(a + y) erfc (+4)) â V2xr°/? a Q/rx? (215) (2xây)(2a+y)2 (x=y)(x@+y)2 â V2x3/2 (a2 â1 ve(s age / (CG) ()') vi atu ( ety er ( ) Vive vive) te Q/mx? 2 ( (2x-y)(2x+y)2(V2Vz) (@y)(e+y)2(v2Vz) 3/2 (2 - â V2x -1 vra (4 Vi (Getut/@rtutie) Ji(etyt tue) v200/? (a? â1) 2 /nx? 2 (2aây)(2e+y)2 (ey) (@+y)2 _ 24 via (x (irtur /@r+y) as) atts) #(o*~1) > V2/n03/?
# Jama?
(2x+y)2 2x
V2xr°/? (a? â 1)
=
.
> =
a2 D (2%ây)(2a+y)2 (wy) (@+y)2 _¢ Va (2e+u+V/ Gatun) 242(2u+y)+1) Vi (tut (-+y)?+0.782-2(x+y)+0.782? ) V2) nx3/2 0° ( (2a~y)(2x+y)2 (wy) (x+y)2 ) x(a? 1) D vi( (ety t/Qx+y+) 7) ~ Vit ebut/@tut0.782)") VaVra3/? (Qaây)(2e+y)2 (a-y)(e+y)2 ) _ (c? _ 1) VaQQxty) tl) Vr@le+y)-+0.782)) V2 /r03/2 (2(e@+y)+0. Sess y)(Qa+y)2 _ â ovtenner sy) 12) a2 D 7 (2(2a + y) + aT 2a + y) + 0.782) V2/723/2 Vo? (a (a? = 1) (2(2% + y) + 1)(2(a + y) + 0.782) (2(20 + y) + 1)(2(@ + y) + 0.782) V2 /r03/? 8x3 + (12y + 2.68657)x? + (y(4y â 6.41452) â 1.40745) a + 1.22072y? (2(2a + y) + 1)(2(a + y) + 0.782) /2,./ra3/? 8x3 + (2.68657 â 120.01)x? + (0.01(â6.41452 â 40.01) â 1.40745)x + 1.22072(0.0)? (2(20 + y) + 1)(2(@ + y) + 0.782) V2 /r03/? 8x? + 2.56657x â 1.472 (2(2e + y) + 1)(2(a + y) + 0.782) V2Va/e 8x? + 2.56657x â 1.472 (2(2ax + y) +.1)(2(@ + y) + 0.782) V2VaVr 8(a + 0.618374) (a â 0.297553) (22a + y) +1)(2(e@+y) +0. 782) Vo/ava We explain this chain of inequalities:
_¢ (a2 _ 1)
=
â First inequality: We applied Lemma 22 two times. â 2 â Equalities factor out â Second inequality part 1: we applied
â
0 < 2y =â (2x + y)2 + 4x + 1 < (2x + y)2 + 2(2x + y) + 1 = (2x + y + 1)2 . (216)
Second inequality part 2: we show that for a = =n (\ / 204841697 â 13) following holds: 8* â (a? + 2a(a + y)) > 0. We have 2 8* â (a? + 2a(x +y)) = 8-2a>0 Da 7 and a 82 â (a? + 2a(a + y)) = â2a > 0. Therefore the minimum is at border for minimal x and maximal y:
holds: 8x and â 8x ây minimal x and maximal y:
2 8 - 0.64 2 / 2048 + 1697 1 / 2048 + 1697 â1 -64 + 0.01) 4 â1e T Al T 7 (0.6 0.01) (3( T *)) (217)
Thus
# Se Tv
Se > a? +2a(x+y). (218) Tv
# (/2setoez 13)
# for a = 1 20
â 13
> 0.782.
# Ï
â Equalities only solve square root and factor out the resulting terms (2(2x + y) + 1) and (2(x + y) + 0.782).
â We set α = α01 and multiplied out. Thereafter we also factored out x in the numerator. Finally a quadratic equations was solved.
=
= 0 .
The sub-function has its minimal value for minimal « and minimal y x = vt = 0.8 - 0.8 = 0.64 and y = pw = â0.1- 0.1 = â0.01. We further minimize the function rw . pw 9.017, 0.01 uwe 27 (2 âerfc > â0.01e 20-64 | 2 â erfe | ââ . (219 pet (rete (oe) (?-«« (Gam) °»
Ëξ(µ, Ï, ν, Ï, λ, α):
We compute the minimum of the term in brackets of â âµ
We compute the minimum of the term in brackets of ZEu, W,V,T, A, a):
p22 pow uwe 2Â¥* {2 âerfc + 220 ( ( V2/ =)) oo)
# µÏe
(â*))) + evr V2Vut 7 )? 2-0.64 â0.01 ) erfe C2") V2/0.64
a2 (- (CB) ene (ââ*) â (REY cote (â*))) + evr > 01 , Va Jor V2Vut 7
(ââ*) â Va Jor 0.64 â 0.01 rfc coe +) V2V0.64 0.01
; ~0. 0.64 â 0.01 20.64â0.01 )? 2-0.64 â0.01 a2 (- (C8) rfc coe +) - el Viv0-64 ) erfe C2") _ o V2V0.64 V2/0.64
2 â¢
0.01
# Tony)?
â
# (saa))
| (
0.012 20.64
â
# 2 â erfc
= 0.0923765 .
+
0.64
0.01e
0.64
Therefore the term in brackets is larger than zero.
Ëξ(µ, Ï, ν, Ï, λ, α) has the sign of Ï.
# Thus, â âµ
E(u,
ZE(U,w,v,7, A, 0) ov We look at the sub-term 20+y ae( 2)
20+y )\? Qa aty )? ae( 2) ere ( =) _ (#4) arte (4) . (221) Viva Vala
We obtain a chain of inequalities:
2ety \? Qn arty \? r ae( 24) ane ( at) _ (a) arte (4) > (222) ViVi ViVi 2-2 2 2 2 Qe+1 a+ + at 4 âlie (24) +2) ve (a+ (#4)'+4) eto tietinty ~ V(aty)?+ =) Vi 1 vive ( ap ea +1+42a+y SRT) Vi 2V2V/z (5 sass ~ Wary Tw. sao! (2V2V/z) (2( ee + â) + 0.782) â (2(2a + y) + 1)) Vi((2(a + y) + 0.782)(2(2x + y) + 1)) (2V2V2) (2y + 0.782 - 2-1) Vi ((2(a + y) + 0.782)(2(22 + y) + 1)) > 0.
2ety 24)
\?
(222)
We explain this chain of inequalities:
â First inequality: We applied Lemma 22 two times. â 2 â Equalities factor out â Second inequality part 1: we applied
â
# x and reformulate.
0 < 2y =â (2x + y)2 + 4x + 1 < (2x + y)2 + 2(2x + y) + 1 = (2x + y + 1)2 . (223)
â Second inequality part 2: we show that for a = 30 (\ / 204841697 â 13) following holds: 8* â (a? + 2a(a + y)) > 0. We have 2 8* â (a? + 2a(x +y)) = 8-2a>0 and i 82 _ (a? + 2a(x + y)) = â2a < 0. Therefore the minimum is at border for minimal x and maximal y:
holds: 8x and â 8x ây minimal x and maximal y:
2 0.64 2 / 2048 + 1697 1 / 2048 + 1697 -ik -64 + 0.01) 4 â1e - Al 7 2) 06 0.01) (3( 7 *)) (224)
8 · 0.64 Ï
Thus
# 8x â Tv
8x â > a? 4+2a(a+y). (225) Tv
for a = 1 20 Ï â 13 > 0.782.
â Equalities only solve square root and factor out the resulting terms (2(2x + y) + 1) and (2(x + y) + 0.782).
We know that (2 â erfc(x) > 0 according to Lemma 21. For the sub-term we derived
ae( AF) erfe (4) _ (He) erfc (<4) >0. (226)
Consequently, both terms in the brackets of â âν Ëξ(µ, Ï, ν, Ï, λ, α) is larger than zero. fore â âν Ëξ(µ, Ï, ν, Ï, λ, α) are larger than zero. There-
Lemma 41 (Mean at low variance). The mapping of the mean µ̃ (Eq. (4))

µ̃(µ, ω, ν, τ, λ, α) = ½ λ ( −(α + µω) erfc(µω/(√2√(ντ))) + α e^{µω+ντ/2} erfc((µω+ντ)/(√2√(ντ))) + √(2/π) √(ντ) e^{−(µω)²/(2ντ)} + 2µω )   (227)

in the domain −0.1 ≤ µ ≤ 0.1, −0.1 ≤ ω ≤ 0.1, and 0.02 ≤ ντ ≤ 0.5 is bounded by

|µ̃(µ, ω, ν, τ, λ01, α01)| < 0.289324   (228)

and

lim_{ν→0} |µ̃(µ, ω, ν, τ, λ01, α01)| = λ |µω| .   (229)
We can consider µ̃ with given µω as a function in x = ντ. We show the graph of this function at the maximal µω = 0.01 in the interval x ∈ [0, 1] in Figure A6.
Proof. Since µ̃ is strictly monotonically increasing with µω,

µ̃(µ, ω, ν, τ, λ, α) ≤ µ̃(0.1, 0.1, ν, τ, λ, α) ≤   (230)
1 . { 0.01 o.014Yt (on) v2 _ 0.012 =A | â(a@ + 0.01) erfe +ae⢠2 erfe | ââââ ]} + 4/âVvte @ +2-0.01] 2 ( ( ) (34) V2 vt T
{ 0.01 +ae⢠(34) 0.02 + 0.01 erfe Ca) yas
(on) | ââââ ]} + V2 vt 0.01 0.01) erfe (a0 FV ere \ a gas
1 0.05 0.02 + 0.01 0.01 _0.012 =X e 2 +9 la erfe Ca) â (ao1 + 0.01) erfe (a0 Jee 20.5 an 0.01 -2 a ( mente yas ) (om FV ere \ a gas voy < 0.21857,
< 0.21857,
< ~
where we have used the monotonicity of the terms in ντ.
= 0 .
Figure A6: The graph of the function µ̃ for low variances x = ντ at µω = 0.01, where x ∈ [0, 3], is displayed in yellow. Lower and upper bounds based on the Abramowitz bounds (Lemma 22) are displayed in green and blue, respectively.
Similarly, we can use the monotonicity of the terms in ντ to show that

µ̃(µ, ω, ν, τ, λ, α) ≥ µ̃(0.1, −0.1, ν, τ, λ, α) > −0.289324 ,   (231)

such that |µ̃| < 0.289324 at low variances.
Furthermore, when ντ → 0, the arguments of the complementary error functions erfc and of the exponential function go to infinity, therefore these three terms converge to zero. Hence, the remaining term is only ½ λ · 2µω = λµω.
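Both statements can be illustrated numerically. The sketch below uses the form of Eq. (227) as written above and the assumed fixed point constants λ01 ≈ 1.0507, α01 ≈ 1.6733 (not restated in this section); it is a sanity check, not part of the proof.

```python
import numpy as np
from scipy.special import erfc

lam, alpha = 1.0507010, 1.6732632          # assumed values of lambda_01 and alpha_01

def mu_tilde(mu, omega, nu, tau):
    """Mean mapping mu_tilde as written in Eq. (227)."""
    mw, nt = mu * omega, nu * tau
    s = np.sqrt(2.0 * nt)
    return 0.5 * lam * (-(alpha + mw) * erfc(mw / s)
                        + alpha * np.exp(mw + nt / 2.0) * erfc((mw + nt) / s)
                        + np.sqrt(2.0 / np.pi) * np.sqrt(nt) * np.exp(-mw**2 / (2.0 * nt))
                        + 2.0 * mw)

# Low-variance limit: for nu*tau -> 0 the mapping approaches lambda * mu * omega.
print(mu_tilde(0.1, 0.1, 1e-8, 1.0), lam * 0.1 * 0.1)

# Bound of Lemma 41 on 0.02 <= nu*tau <= 0.5 at the extremal mu*omega.
nt = np.linspace(0.02, 0.5, 1000)
print(max(np.abs(mu_tilde(0.1, 0.1, nt, 1.0)).max(),
          np.abs(mu_tilde(0.1, -0.1, nt, 1.0)).max()))   # stays below 0.289324
```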
Lemma 42 (Bounds on derivatives of µ̃ in Ω⁻). The derivatives of the function µ̃(µ, ω, ν, τ, λ01, α01) (Eq. (4)) with respect to µ, ω, ν, τ in the domain Ω⁻ = {µ, ω, ν, τ | −0.1 ≤ µ ≤ 0.1, −0.1 ≤ ω ≤ 0.1, 0.05 ≤ ν ≤ 0.24, 0.8 ≤ τ ≤ 1.25} can be bounded as follows:

|∂µ̃/∂µ| < 0.14   (232)
|∂µ̃/∂ω| < 0.14
|∂µ̃/∂ν| < 0.52
|∂µ̃/∂τ| < 0.11 .
Proof. The expression

∂µ̃/∂µ = J11 = ½ λ ω e^{−(µω)²/(2ντ)} ( 2 e^{(µω)²/(2ντ)} − e^{(µω)²/(2ντ)} erfc(µω/(√2√(ντ))) + α e^{(µω+ντ)²/(2ντ)} erfc((µω+ντ)/(√2√(ντ))) )   (233)
contains the terms e^{x²} erfc(x), which are monotonically decreasing in their arguments (Lemma 23). We can therefore obtain their minima and maxima at the minimal and maximal arguments. Since the first term has a negative sign in the expression, both terms reach their maximal value at µω = −0.01, ν = 0.05, and τ = 0.8.
(a) 1 ; rl < 5 |Al | (2 = 620989 erfe (0.0853553) + ae erfe (0.106066) ) | < 0.133 (a
Since µ̃ is symmetric in µ and ω, these bounds also hold for the derivative with respect to ω.
(234)
Figure A7: The graph of the function h(x) = µ̃²(0.1, −0.1, x, 1, λ01, α01) is displayed. It has a local maximum at x = ντ ≈ 0.187342 with h(x) ≈ 0.00451457 in the domain x ∈ [0, 1].
We use the argumentation that the term with the error function is monotonically decreasing (Lemma 23) again for the expression
(a) âfa=SJ2= 235 apt Si2 (235) 1 â 20? (uwter)? ffl + VT 2 = =)\re = [ae fo ( |) (a 1),/ | < pre? (~ zi erte (4 â) (a-1) a )< 1 ier (|1.1072 â 2.68593]) < 0.52.
wtvr)? We have used that the term 1.1072 < ageâ a7 erfe (4g) < 1.49042 and the term 0.942286 < (a â Wes < 2.68593. Since ji is symmetric in v and 7, we only have to chance outermost term | +Ar| to | Av to obtain the estimate | 2 ji| < 0.11.
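These bounds can be cross-checked numerically (illustration only). The sketch below reuses the `mu_tilde` reconstruction of Eq. (227) from the sketch after Lemma 41 and estimates the four partial derivatives by central finite differences on a grid over Ω⁻.

```python
import itertools
import numpy as np

# Assumes mu_tilde(mu, omega, nu, tau) from the sketch after Lemma 41 is in scope.
def max_abs_partial(idx, h=1e-6, n=7):
    """Crude grid estimate of max |d mu_tilde / d x_idx| over Omega^-."""
    grids = [np.linspace(-0.1, 0.1, n), np.linspace(-0.1, 0.1, n),
             np.linspace(0.05, 0.24, n), np.linspace(0.8, 1.25, n)]
    best = 0.0
    for point in itertools.product(*grids):
        lo, hi = list(point), list(point)
        lo[idx] -= h
        hi[idx] += h
        best = max(best, abs(mu_tilde(*hi) - mu_tilde(*lo)) / (2.0 * h))
    return best

print([round(max_abs_partial(i), 3) for i in range(4)])   # should stay below 0.14, 0.14, 0.52, 0.11
```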
Lemma 43 (Tight bound on µ̃² in Ω⁻). The function µ̃²(µ, ω, ν, τ, λ01, α01) (Eq. (4)) is bounded by

|µ̃²| < 0.005   (236)

in the domain Ω⁻ = {µ, ω, ν, τ | −0.1 ≤ µ ≤ 0.1, −0.1 ≤ ω ≤ 0.1, 0.05 ≤ ν ≤ 0.24, 0.8 ≤ τ ≤ 1.25}.

We visualize the function µ̃² at its maximal µω = −0.01 and for x = ντ in the form h(x) = µ̃²(0.1, −0.1, x, 1, λ01, α01) in Figure A7.
Proof. We use a similar strategy to the one we have used to show the bound on the singular value (Lemmata 10, 11, and 12), where we evaluated the function on a grid and used bounds on the derivatives together with the mean value theorem. Here we have
|µ̃²(µ, ω, ν, τ, λ01, α01) − µ̃²(µ + Δµ, ω + Δω, ν + Δν, τ + Δτ, λ01, α01)| ≤   (238)
|∂µ̃²/∂µ| |Δµ| + |∂µ̃²/∂ω| |Δω| + |∂µ̃²/∂ν| |Δν| + |∂µ̃²/∂τ| |Δτ| .

We use Lemma 42 and Lemma 41 to obtain

|∂µ̃²/∂µ| = 2|µ̃| |∂µ̃/∂µ| ≤ 2 · 0.289324 · 0.14 = 0.08101072   (239)
|∂µ̃²/∂ω| = 2|µ̃| |∂µ̃/∂ω| ≤ 2 · 0.289324 · 0.14 = 0.08101072
|∂µ̃²/∂ν| = 2|µ̃| |∂µ̃/∂ν| ≤ 2 · 0.289324 · 0.52 = 0.30089696
|∂µ̃²/∂τ| = 2|µ̃| |∂µ̃/∂τ| ≤ 2 · 0.289324 · 0.11 = 0.06365128 .
We evaluated the function µ̃² in a grid G of Ω⁻ with Δµ = 0.001498041, Δω = 0.001498041, Δν = 0.0004033190, and Δτ = 0.0019065994 using a computer and obtained the maximal value maxG(µ̃²) = 0.00451457, therefore the maximal value of µ̃² is bounded by

max_{(µ,ω,ν,τ)∈Ω⁻} µ̃² ≤ 0.00451457 + 0.001498041 · 0.08101072 + 0.001498041 · 0.08101072 + 0.0004033190 · 0.30089696 + 0.0019065994 · 0.06365128 < 0.005 .
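A coarser version of this grid evaluation can be sketched as follows (illustration only; it reuses the `mu_tilde` reconstruction of Eq. (227) from the sketch after Lemma 41).

```python
import numpy as np

# Assumes mu_tilde(mu, omega, nu, tau) from the sketch after Lemma 41 is in scope.
mu, om, nu, tau = np.meshgrid(np.linspace(-0.1, 0.1, 41), np.linspace(-0.1, 0.1, 41),
                              np.linspace(0.05, 0.24, 41), np.linspace(0.8, 1.25, 41),
                              indexing="ij", sparse=True)
grid_max = (mu_tilde(mu, om, nu, tau) ** 2).max()
print(grid_max)   # roughly 0.0045 on this coarse grid, consistent with 0.00451457
```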
Furthermore, we used error propagation to estimate the numerical error of the function evaluation. Using the error propagation rules derived in Subsection A3.4.5, we found that the numerical error is smaller than 10⁻¹³ in the worst case.

Lemma 44 (Main subfunction). For 1.2 ≤ x ≤ 20 and −0.1 ≤ y ≤ 0.1,
the function
e^{(x+y)²/(2x)} erfc((x+y)/(√2√x)) − 2 e^{(2x+y)²/(2x)} erfc((2x+y)/(√2√x))   (242)
is smaller than zero, is strictly monotonically increasing in x, and strictly monotonically decreasing in y for the minimal x = 12/10 = 1.2.
Proof. We ï¬rst consider the derivative of sub-function Eq. (101) with respect to x. The derivative of the function
tiv? , faty \- @zpy? (= + 7) e 2 erfe 2e~ 2 â erfc (243) (= 2x v2Ve
with respect to x is â
vi (es (ty uu) ) + V3Vz(30 â y) *(« ây)(a+y) erfe (424) 2 (4a? â y?) erfe (= 2/mx?
(244) vi (e a (a â y)(@ + y) erfc (4 al = (0 + y)(2% â y) erfe (2+) + V2\/z(3x â a ty)? (ety)? e (aâ w)(ety) erfe( S42) 2c (2a+y)(2xây) erfe( 2a) - ve Vivi Vivi + (Bx ây) 22 /ra2 Jt
We consider the numerator
(tw)? (224 ~ (244 Vi ee (7 wa + wert (FH) 20 om (20 + y)( Qa â y)erte (2542) - (80-4) Vv2vE VaVE ne (245)
â
For bounding this value, we use the approximation
e^{z²} erfc(z) ≈ 2.911 / ( √π (2.911 − 1) z + √(π z² + 2.911²) )   (246)
240 (240)
(241)
=
x(3x â y)
(245)
=
from Ren and MacKenzie [30]. We start with an error analysis of this approximation. According to Ren and MacKenzie [30] (Figure 1), the approximation error is positive in the range [0.7, 3.2]. This range contains all possible arguments of erfc that we consider. Numerically we maximized and minimized the approximation error of the whole expression
oe (a â y)(@ + y) erfc (4%) | 2 (2x â y)(2x + y) erfc Vive V2Ve 2.911(x â y)(a@+y) (V2Vz) (senguen + fo (=)" 201) 2-2.911(2x â y) (2x + y) (V2Vz) (sesygyes n (2eun)â zon)
E(x, y) =
(23)
â
2
# x
â
(247)
We numerically determined 0.0113556 < E(x, y) < 0.0169551 for 1.2 ≤ x ≤ 20 and −0.1 ≤ y ≤ 0.1. We used different numerical optimization techniques like gradient-based constrained BFGS algorithms and gradient-free Nelder-Mead methods with different start points. Therefore our approximation is smaller than the function that we approximate. We subtract an additional safety gap of 0.0131259 from our approximation to ensure that the inequality via the approximation holds true. With this safety gap the inequality would hold true even for negative x, where the approximation error becomes negative and the safety gap would compensate. Of course, the safety gap of 0.0131259 is not necessary for our analysis but may help future investigations.
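A simple grid scan reproduces this error analysis approximately. The sketch below is an illustration only (not the constrained optimization used here); `erfcx` is SciPy's stable e^{z²} erfc(z).

```python
import numpy as np
from scipy.special import erfcx

def ren_mackenzie(z):
    """Approximation of e^{z^2} erfc(z) from Eq. (246)."""
    return 2.911 / ((2.911 - 1.0) * np.sqrt(np.pi) * z + np.sqrt(np.pi * z**2 + 2.911**2))

def expression(x, y, f):
    """The expression of Eq. (247) with e^{z^2} erfc(z) replaced by f."""
    s = np.sqrt(2.0 * x)
    return ((x - y) * (x + y) * f((x + y) / s)
            - 2.0 * (2.0 * x - y) * (2.0 * x + y) * f((2.0 * x + y) / s)) / s

x, y = np.meshgrid(np.linspace(1.2, 20.0, 400), np.linspace(-0.1, 0.1, 101), indexing="ij")
E = expression(x, y, erfcx) - expression(x, y, ren_mackenzie)
print(E.min(), E.max())   # the text reports a range of about [0.0114, 0.0170]
```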
We have the sequences of inequalities using the approximation of Ren and MacKenzie [30]:
(x+y)2 2x
(2x+y)2 2x
; . aoe (x â y)(x + y) erfc (=) 2¢ (2a â y)(2% + y) erfe (24) (3a â y) 4 Viva â Vive (30 ây) 4 2.911(a â y)(a@+y) _ (\ (se) + 2.9112 4 sues (v2Vz) 2(2x â y)(2a + y)2.911 Vm â0.0131259 = 2 (30 ây) 4 (V2V/#2.911) (w â y)(w« +y) _ ( ma + y)? +2- 2.91122 + (2.911 â 1)(a + wv7) (v2V/z) 2(2x â y) (2a + y) (V2Vr2.911) Jy â 00131259 = (V2Vz) ( w(Qa + y)? +2- 2.91 12x + (2.911 â 1)(2a + vv)
# e
# (x â y)(x + y) erfc
# 2e
â
â
# x
(248)
â
# TZ
# VE
(3x ây) +2.911 (w= w(@ +9) (2.911 â La ty) + (ety)? + 225s 7 2(2x â y)(2x + y) (2.911 â1)(22 +-y) 4 (ee + y)? + 220122 T â 0.0131259 > (3a â y) + 2.911 (w= y)(e+y) (2.911 â1)(~+y)4 Jes) + (x+y)? 4 22.01)? » 2:2.9112y T 2(2x â y)(2x + y) (2.911 â1)(22 +-y) 4 (Qe + y)? + 220lPe T â 0.0131259 = (3a â y) + 2.911 (@= (e+) - (2.911-D(ety)t+ Vf (@ty+ 2.911? y? 2(2x â y)(2x + y) (2.911 â1)(22 +-y) 4 Ver + y)? + 220172 â 0.0131259 = (3a â y) + 2.911 (e-wety) 2(2x â y)(2a + y) 0.0131259 2.911 (x + y) + 29 (2.911 â1)(22 + y) + \/ (2x + y)? + 220s . xâyj(aty 2(2a â y)(2a + y)2.911 (3a â y) 4 eos ry) _ M âââ - 0.0131259 = TTY ST (2.911 â1)(2r +y) + y/ (2a + y)? + 22212 2.911 (222-9 2.911 ( y+ 22 ) eer ty) 4 T 2.911 2- 2.91122 me) (3a â y â 0.0131259) (em (Qe+y)+4/Qe+y)?24 :) : (x ây)(a +y) (em (Qr+y)+4/ Qr+y)? 4 2uire)) TT - â1 (Gc y+) (em 1)(22 + y) + yf (2x + y)? azure) = (( (x â y)(« + y) + (3x â y â 0.0131259)(x + y + 0.9266)) (\/Qx + y)? + 5.39467x + 3.8220 4 Lolly)
â 0.0131259 =
(249)
5.822(2a â y)(x + y + 0.9266) (2a + y))
> -1 ((« ty) + 2) (em 1)(2a + y) + 4/ (2a + y)? zars)) > 0.
We explain this sequence of inequalities:
⢠First inequality: The approximation of Ren and MacKenzie [30] and then subtracting a safety gap (which would not be necessary for the current analysis).
â
â 2
⢠Equalities: The factor x is factored out and canceled.
⢠Second inequality: adds a positive term in the ï¬rst root to obtain a binomial form. The term containing the root is positive and the root is in the denominator, therefore the whole term becomes smaller.
â
(3x â y) +
(x â y)(x + y)
(x + y) + 2.911
# Ï
â
2(2x â y)(2x + y)2.911
(2.911 â 1)(2x + y) +
(2x + y)2 + 2·2.9112x
# Ï
â 0.0131259 =
⢠Equalities: solve for the term and factor out.
)2 L 2-2.9112a t Bringing all terms to the denominator (( + y) + 2-244) (ou -1Qr+y)+VQrt+y T ).
Equalities: Multiplying out and expanding terms.
⢠Last inequality > 0 is proofed in the following sequence of inequalities.
We look at the numerator of the last expression of Eq. (248), which we show to be positive in order to show > 0 in Eq. (248). The numerator is
(Vee + y)? + 5.39467x + 3.8222 4 Lolly)
((x â y)(a@ + y) + (8a â y â 0.0131259)(a + y + 0.9266)) (Vee + y)? + 5.39467x + 3.8222 4 5.822(2x â y)(x + y + 0.9266) (2% + y) = â 5.822(2x â y)(a + y + 0.9266) (2a + y) + (3.822% + 1.911y)((a â y)(a + y)+ (3a â y â 0.0131259)(a + y + 0.9266)) + ((% â y)(a+y)+4 (3a â y â 0.0131259)(a + y + 0.9266))/ Qa + y)? + 5.394672 = â 8.023 + (4a? + 2xy + 2.76667x â 2y? â 0.939726y â 0.0121625) \/(2x + y)? 4 (250) + 5.39467xâ 8.0x?y â 11.0044? + 2.0ry? + 1.69548ary â 0.0464849x + 2.0y? + 3.59885y7 â 0.0232425y = â 8.003 + (4a? + 2ey + 2.76667x â 2y â 0.939726y â 0.0121625) \/(2x + y)? 4 + 5.39467xâ 8.0x?y â 11.0044? + 2.0ry? + 1.69548ary â 0.0464849x + 2.0y? + 3.59885y7 â 0.0232425y .
The factor in front of the root is positive. If the term, that does not contain the root, was positive, then the whole expression would be positive and we would have proofed that the numerator is positive. Therefore we consider the case that the term, that does not contain the root, is negative. The term that contains the root must be larger than the other term in absolute values. â (-8.02° â 8.02°y â 11.0044x? + 2.cy? + 1.69548ay â 0.0464849x + 2.1% + 3.59885y" â 0.0232425y) <
â (-8.02° â 8.02°y â 11.0044x? + 2.cy? + 1.69548ay â 0.0464849x + 2.1% + 3.59885y" â 0.0232425y) < (251)
(251)
(4a? + 2ay + 2.76667x â 2y? â 0.939726y â 0.0121625) \/(2x + y)? + 5.39467x Therefore the squares of the root term have to be larger than the square of the other term to show > 0 in Eq. (248). Thus, we have the inequality: (â8.02 â 8.02?y â 11.0044a? + 2.ay? + 1.69548xy â 0.04648492a + 2.y? + 3.59885y? â 0.0232425y)â
(252)
(4x? + 2ny + 2.766672 â 2y? â 0.939726y â 0.0121625)â (2x + y)? +5.394672) .
This is equivalent to 0 < (4a? + 2ay + 2.76667" â 2y? â 0.939726y â 0.0121625)â ((2a + y)? +.5.394672) â
(253)
(â8.02° â 8.02?y â 11.0044a? + 2.0ay? + 1.695482y â 0.04648492x + 2.0y? + 3.59885y? â 0.0232425y)â â 1.2227a° + 40.1006aty + 27.7897a4 + 41.0176 y? + 64.5799a%y + 39.4762a° + 10.9422a7y>â 13.543a7y? â 28.845527y â 0.364625a? + 0.611352ay* + 6.83183ay? + 5.46393ry?+ 0.121746xy + 0.000798008a â 10.6365y° â 11.927y* + 0.190151y? â 0.000392287y? .
We obtain the inequalities: â 1.2227x5 + 40.1006x4y + 27.7897x4 + 41.0176x3y2 + 64.5799x3y + 39.4762x3 + 10.9422x2y3â
(254)
13.543x2y2 â 28.8455x2y â 0.364625x2 + 0.611352xy4 + 6.83183xy3 + 5.46393xy2+ 0.121746xy + 0.000798008x â 10.6365y5 â 11.927y4 + 0.190151y3 â 0.000392287y2 =
â
<
=
.
â 1.2227x° + 27.7897a7 + 41.0176x°%y? + 39.4762x°° â 13.543x7y? â 0.364625x?+ y (40.10062* + 64.5799a° + 10.942227y? â 28.8455a? + 6.831832y? + 0.1217462 â 10.6365y* + 0.190151yâ) + 0.611352xry* + 5.46393xy? + 0.0007980082 â 11.927y* â 0.000392287y" > â 1.22272" + 27.78972* + 41.0176 - (0.0)?2* + 39.4762x°° â 13.543 - (0.1)?2? â 0.364625x? â 0.1 - (40.10062* + 64.5799x* + 10.9422 - (0.1)?a? â 28.8455x? + 6.83183 - (0.1)?a + 0.121746 + 10.6365 - (0.1)* + 0.190151 - (0.1)?) + 0.611352 - (0.0)4a + 5.46393 - (0.0)?a + 0.000798008a â 11.927 - (0.1)* â 0.000392287 - (0.1)? = â 1.22272° + 23.7796a* + (20 + 13.0182)x? + 2.373552? â 0.0182084x â 0.000194074 > â 1.22272° + 24.7796a* + 13.0182a° + 2.373552? â 0.0182084x â 0.000194074 > 13.01822° + 2.373552? â 0.01820842 â 0.000194074 > 0.
We used 24.7796 · 20⁴ − 1.2227 · 20⁵ = 52090.9 > 0 and x ≤ 20. We have proved the last inequality > 0 of Eq. (248).
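The final polynomial inequality can be corroborated by evaluating the polynomial of the last line of the chain on the domain (illustration only):

```python
import numpy as np

x = np.linspace(1.2, 20.0, 100001)
p = (-1.2227 * x**5 + 24.7796 * x**4 + 13.0182 * x**3
     + 2.37355 * x**2 - 0.0182084 * x - 0.000194074)
print(p.min() > 0)   # True for 1.2 <= x <= 20
```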
Consequently the derivative is always positive independent of y, thus

e^{(x+y)²/(2x)} erfc((x+y)/(√2√x)) − 2 e^{(2x+y)²/(2x)} erfc((2x+y)/(√2√x))   (255)

is strictly monotonically increasing in x.
The main subfunction is smaller than zero. Next we show that the sub-function Eq. (101) is smaller than zero. We consider the limit:

lim_{x→∞} ( e^{(x+y)²/(2x)} erfc((x+y)/(√2√x)) − 2 e^{(2x+y)²/(2x)} erfc((2x+y)/(√2√x)) ) = 0 .   (256)

The limit follows from Lemma 22. Since the function is monotonically increasing in x, it has to approach 0 from below. Thus,

e^{(x+y)²/(2x)} erfc((x+y)/(√2√x)) − 2 e^{(2x+y)²/(2x)} erfc((2x+y)/(√2√x))   (257)

is smaller than zero.
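Negativity and monotonicity in x of the main subfunction can be spot-checked numerically (sketch only; `erfcx(z) = e^{z²} erfc(z)`):

```python
import numpy as np
from scipy.special import erfcx

def main_subfunction(x, y):
    """e^{(x+y)^2/(2x)} erfc((x+y)/sqrt(2x)) - 2 e^{(2x+y)^2/(2x)} erfc((2x+y)/sqrt(2x))"""
    s = np.sqrt(2.0 * x)
    return erfcx((x + y) / s) - 2.0 * erfcx((2.0 * x + y) / s)

x = np.linspace(1.2, 20.0, 2000)
for y in (-0.1, 0.0, 0.1):
    vals = main_subfunction(x, y)
    assert np.all(vals < 0) and np.all(np.diff(vals) > 0)   # negative, increasing in x
```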
Behavior of the main subfunction with respect to y at minimal x. We now consider the derivative of sub-function Eq. (101) with respect to y. We proved that sub-function Eq. (101) is strictly monotonically increasing independent of y. In the proof of Theorem 16, we need the minimum of sub-function Eq. (101). Therefore we are only interested in the derivative of sub-function Eq. (101) with respect to y for the minimum x = 12/10 = 1.2.
Consequently, we insert the minimum x = 12/10 = 1.2 into the sub-function Eq. (101). The main terms become
â 1.2 â 2 x + y â â x 2 y + 1.2 â â 1.2 2 y â 5y + 6 â 15 2 â = = + = 2 1.2 (258)
and
2x + y â â x 2 = y + 1.2 · 2 â â 1.2 2 = â y â 2 1.2 + â 1.2 â 2 = 5y + 12 â 15 2 . (259)
Sub-function Eq. (101) becomes:
an) 8) aim) 2 vet to Yi tVv2,/2 la BO erfc y + VO | oe \ VR vi erfc y + va?
.
(260)
The derivative of this function with respect to y is
â
2V15 VI5 VI5m (c2r(v+)*(5y + 6) erfe (BEE) â 2ear(u+12)* (5y + 12) erfe (S42) ) + 30 (261) 6V 157
We again will use the approximation of Ren and MacKenzie [30]
e^{z²} erfc(z) ≈ 2.911 / ( √π (2.911 − 1) z + √(π z² + 2.911²) ) .   (262)
Therefore we ï¬rst perform an error analysis. We estimated the maximum and minimum of
â
â
2-2.911(5y + 12 2.911(5y +6 Vi50 o11(5y +12) - - 911(5y +6) : Vi(2.911-1)(5y+12) , syt12\* , 2 Vm(2.911-1)(5y+6) | sy+6\7 , 9 1 (3 2) + 2.911 1) m (248) + 2.911 (263) 5y +6 5 _ (Sy +12 V150 (cory + 6) erfe (2 + ) - Jens (u+12)" (59) + 12) erfe (4 )) + 30. 2/15 2V15
+ 30 +
We obtained for the maximal absolute error the value 0.163052. We added an approximation error of 0.2 to the approximation of the derivative. Since we want to show that the approximation upper bounds the true expression, the addition of the approximation error is required here. We get a sequence of inequalities:
5 6 5 _ {5 12 V150 (cory + 6) erfe (2 + ) - Jens (u+12)" (59) + 12) erfe (% + )) + 30. < 2/15 15
â
2/15 15 (264) Jibn 2.911(5y + 6) _ 2-2.911(5y + 12) 2 2 Vm(2.911-1)(5y+6) , / (5y+6\~ 4 2 vR(2.911-1)(5y+12) Syt12)\" | OTE 7(3 is) + 2.911 we | 7(2 2) + 2.911 30+0.2 = (30 - 2.911)(5y + 6) _ 2(30 - 2.911)(5y + 12) . . 2 (2.911 â 1)(5y + 6) 4 ou + 6)2 4 (2452011) (2.911 â 1)(5y + 12) 4 eu +12)? 4 30+0.2 = 2 (0.2 + 30) | (2.911 â 1)(5y + 12) + | (5y +12)? 4 (ae 2) Vi (2.911 â 1)(5y + 6) + 4| (5y + 6)? 4 2 (Ae) Wa 2 2-30-2.911(5y +12) | (2.911 â 1)(5y +6) 4 | (5y + 6)? 4 (=) | 2/15 - 2.911 ; 2.911 -30(5y +6) | (2.911 â 1)(5y + 12) +4) (5y +12)2 4 ( = ) Vi
+ 2.9112
+
â
(24520)
15·2.911 â Ï
2
+
2 2/15 - zu) (2.911 â 1)(5y + 6) + 4| (5y + 6)? 4 ( Va -1 <0. 2 2V15 - 2.911 Vi (2.911 â 1)(5y + 12) + 4} (5y + 12)? 4 (
We explain this sequence of inequalities.
⢠First inequality: The approximation of Ren and MacKenzie [30] and then adding the error bound to ensure that the approximation is larger than the true value.
â
â
⢠First equality: The factor 2 15 and 2 Ï are factored out and canceled.
⢠Second equality: Bringing all terms to the denominator
2 52.9 we) (265) (2.911 â 1)(5y + 6) + 4| (5y + 6)? 4 ( Ti 2 2V15 - 2.911 Vi (2.911 â 1)(5y + 12) + 4} (5y + 12)? 4 (
⢠Last inequality < 0 is proofed in the following sequence of inequalities.
We look at the numerator of the last term in Eq. (264). We have to proof that this numerator is smaller than zero in order to proof the last inequality of Eq. (264). The numerator is
2 2/15 - 2.911 VI5-2.9 ) (266) (0.2 + 30) | (2.911 â 1)(5y + 12) + 4} (5y + 12)? 4 ( Vi 2 2V15 - 2.911 _ Vi (2.911 â 1)(5y + 6) + ,| (5y + 6)? 4 ( 2 eo) 2-30 -2.911(5y + 12) | (2.911 â 1)(5y + 6) + 4} (5y + 6)? 4 ( Vi 2 2V15 ze) 2.911 - 30(5y + 6) | (2.911 â 1)(5y + 12) + | (Sy +12)? 4 ( Vi
We now compute upper bounds for this numerator:
(267) 2 2/15 - 2.911 Vr (0.2 + 30) | (2.911 â 1)(5y + 12) + | (5y + 12)? 4 ( 2 2,15 - a) (2.911 â 1)(5y + 6) + ,| (5y + 6)? 4 ( Vi
(266)
2 eo) 2-30 -2.911(5y + 12) | (2.911 â 1)(5y + 6) + 4} (5y + 6)? 4 ( Vi 5 2.911 - 30(5y +6) { (2.911 â 1)(5y + 12) + «| (5y +12)? 4 (=) â 1414.99? â 584.739 \/(5y + 6)? + 161.84y + 725.211 \/(5y + 12)? + 161.84yâ 5093.97y â 1403.37,\/ (Sy + 6)? + 161.84 + 30.2\/(5y + 6)? + 161.84,/(5y + 12)? + 161.844 870.253\/(5y + 12)? + 161.84 â 4075.17 < â 1414.99? â 584.739 \/(5y + 6)? + 161.84y + 725.211 \/(5y + 12)? + 161.84yâ 5093.97y â mw 6 +5-(â0.1))? + 161.84 + 30.2\/(6 + 5 - 0.1)? + 161.84,/(12 + 5 - 0.1)? + 161.844 870.253,/(12 +5 = + 161.84 â 4075.17 = â 1414.99y? â 584 ee 1/(y + 12)? + 161.84y â 5093.97y â 309.691 < y (â584.739/ By + 6)? + 161.84 + 725.211y/(5y + 12)? + 161.84 â 5093.97) â 309.691 < -0.1 (725.2112 +5- (0.1)? + 161.84 â 584.739\/(6 + 5 - 0.1)? + 161.84 5093.97) 309.691 â 208.604 . a
4} (5y + 6)? 4
For the ï¬rst inequality we choose y in the roots, so that positive terms maximally increase and negative terms maximally decrease. The second inequality just removed the y2 term which is always negative, therefore increased the expression. For the last inequality, the term in brackets is negative for all settings of y. Therefore we make the brackets as negative as possible and make the whole term positive by multiplying with y = â0.1.
Consequently

e^{(x+y)²/(2x)} erfc((x+y)/(√2√x)) − 2 e^{(2x+y)²/(2x)} erfc((2x+y)/(√2√x))   (268)

is strictly monotonically decreasing in y for the minimal x = 1.2.

Lemma 45 (Main subfunction below). For 0.007 ≤ x ≤ 0.875 and −0.01 ≤ y ≤ 0.01, the function

e^{(x+y)²/(2x)} erfc((x+y)/(√2√x)) − 2 e^{(2x+y)²/(2x)} erfc((2x+y)/(√2√x))   (269)

is smaller than zero, is strictly monotonically increasing in x and strictly monotonically increasing in y for the minimal x = 0.007 = 0.00875 · 0.8, x = 0.56 = 0.7 · 0.8, x = 0.128 = 0.16 · 0.8, and x = 0.216 = 0.24 · 0.9 (lower bound of 0.9 on τ).
Proof. We ï¬rst consider the derivative of sub-function Eq. (111) with respect to x. The derivative of the function
(ety)? wt) (22+y)? CG) e 2 erfc â2e =~ erfe (270) (Se Vive
with respect to x is
(ety? (22+y)? 6 2 vi (es * (eây)(u + y) erfe (HHL) â 2⬠oe (4a? â y?) exfe (254 ut) ) + VBVz(30 â y) 2/mx?
â
(271) â 2
a) â Qe a = on + y)(2x â y) erfe (34 2 /rx? vi (eae (x â y)( x+y)erte ($¢ +L) ) + v2val 3x â y) ae
â
=
=
oH a w(2ty)erfe( SH) ne aeâ (Qrty)Qxâyerte(2ee) \ ve Vivi Vivi + (Bx ây) V22/rSrx?
â
We consider the numerator
(ety)? : (2x41 . et Vi e (x â y)(x +y)erfe (+) 7 20a = (20 + y)(2x â y) erfe (24) - (80-4) Vv2vE VaVE ne (272)
â
For bounding this value, we use the approximation
ez2 erfc(z) â â 2.911 â Ï(2.911 â 1)z + Ïz2 + 2.9112 . (273)
from Ren and MacKenzie [30]. We start with an error analysis of this approximation. According to Ren and MacKenzie (Figure 1), the approximation error is both positive and negative in the range [0.175, 1.33]. This range contains all possible arguments of erfc that we consider in this subsection. Numerically we maximized and minimized the approximation error of the whole expression (ety? aty (2e4y)? E _ ( 2a e (x â y)(a + y) erfc (#4) 2e (2a â y)(2a + y) erfc (234)
(ety? aty (2e4y)? E _ ( 2a Bley) = e (x â y)(a + y) erfc (#4) - 2e (2a â y)(2a + y) erfc (234) Vive Ve
â
2.911(x â y)(a@+y) 2 viva ( vraouâiety) ; m (4) r20u1) 2-2.911(2x â y) (2x + y) Ve(2.911-1)(2e+y) | 2ety \* | 2 viva ( JET m (25x) r20u1)
(274)
We numerically determined −0.000228141 < E(x, y) < 0.00495688 for 0.08 ≤ x ≤ 0.875 and −0.01 ≤ y ≤ 0.01. We used different numerical optimization techniques like gradient-based constrained BFGS algorithms and gradient-free Nelder-Mead methods with different start points. Therefore our approximation is smaller than the function that we approximate.
We use an error gap of â0.0003 to countermand the error due to the approximation. We have the sequences of inequalities using the approximation of Ren and MacKenzie [30]:
(3x â y) +
# e
(x+y)2 2x
# . (EH
# y)erte (%)
# (x â y)(x + y) erfc
â
2
# x
â
â
2
# x
â
# 2e
(2x+y)2 2x
(2x â y)(2x + y) erfc â 2
â
# x
( 2ety (25
# Pall
â
2
# x
â
# Vr
(275)
(30 â y) 4 Tet _ 2 ( [= (sex) + oon emcpezien ) (vay
2(2x â y)(2a + y)2.911 2 Qaty : _ (2.911-1) VF(20-+y) (v2vz) (V=(2) + 2.9112 4 Re ) Vm â 0.0003 = (30 ây) 4 (V2V/#2.911) (w â y)(w« +y) _ ( n(x + yy? +2- 2.91122 + (2.911 â 1)(@ + wv7) (Vv2Vz) 2(2x â y) (2a + y) (V2Vr2.911) (V2Vz) ( TQe+ yj? +2- 291120 + (2.911 â 1)(2x + y)v7) (3a â y) +2.911 (c«-y)\(@+y) _ (2.911 â1)(a@+y)+/(a+y)? + 222 7 2(2x â 22 Cr=wCrty | _ 9.0903 5 (2.911 â 1)(2% + y) 4 (Qe fy)? + 2201122 T Vix â 0.0003 (32 â y) +2.911 (x= y)(@ +) (2.911 â1)(@+y)4 Jee) + (a fy)? + Zeolite 4 2-2.9112y ' Tw 220 y)@rty) |) _ n.q903 = (2.911 â 1)(22+y) 4 (ex | y)? + 22anite (32 ây) 4 vm (= y)(@+y) _ (2.911 â1)(a+y)+ (wt y+ zg)? 2(2x â y) (2a Qe-wr+y) | _ 9.9903 = (2.911 â 1)(22+y) 4 (ex | y)? + 22onlte (3 â y) + 2.911 (e=w(e+y) 2(2a â y)(2x + y) 2.911 (x + y) + 2212 (2.911 â1)(22 + y) + \/ (2x + y)? + 220e ; _ (c-y)(uty) 2(2x â y)(2x + y)2.911 : (Bx) + Cy Bam âââ ~ 0.0003 TT Y)F Me (2.911 â1)(Qa + y) + y/ (Qa + y)? + 2290: = = 2.9 (30 ây) 4 (x â y)(« ty) 2(2a â y)(2a + y)2.911 0.0003 e+y+ (2.911 -1)2Qn+y) + (Qe + y)? + awe (-222-y20n ( ty) 4 2) (Qe +y) 4 (e+ +2) (3a â y â 0.0003) (conve + y) 4 ex by)24 2m) T
# p2une)) canes)
# (won =) (em
2 · 2.9112x Ï
(2x + y)2 +
(x â y)(x + y)
(2.911 â 1)(2x + y) +
1
((«
2 · 2.9112x Ï
2.911 Ï
4/
(2x + y)2 +
(x + y) +
(2.911 â 1)(2x + y) +
=
â 0.0003 =
+
(â82° 8a2y + 4x? / (2x + y)? + 5.39467x â 10.9554? + 2ay? â 2y?\/(Qx + y)? + 5.394672 + 1.76901ay + Qayy/(2x + y)? + 5.394672 + 2.77952\/ (2x + y)? + 5.394672 â 0.9269y\/ (2x + y)? + 5.39467a â 0.00027798\/ (2x + y)? + 5.39467a â 0.00106244x + 2y? + 3.62336y? â 0.00053122y) - -1 ((« ty) 4 =) (em I(2e +y) + 4/ (Qe +y)24 vanes) (â82° + (4x? + 2xy + 2.77952 â 2y? â 0.9269y â 0.00027798) \/(2x + y)? + 5.39467a â 8a°y â 10.9554a? + 2ey? + 1.7690 Ly â 0.001062442 + 2y* + 3.62336y? â 0.00053122y) - -1 ((« ty) 4 =) (em 1)(2r + y) + Qe + y)2 4 canes) > 0. We explain this sequence of inequalities:
⢠First inequality: The approximation of Ren and MacKenzie [30] and then subtracting an error gap of 0.0003.
â
â 2
⢠Equalities: The factor x is factored out and canceled.
⢠Second inequality: adds a positive term in the ï¬rst root to obtain a binomial form. The term containing the root is positive and the root is in the denominator, therefore the whole term becomes smaller.
Equalities: solve for the term and factor out.
e Bringing all terms to the denominator ((x + y) + 2911) (ou â1)Qa+y)+/(Qr+y)? 4 sage),
⢠Equalities: Multiplying out and expanding terms.
⢠Last inequality > 0 is proofed in the following sequence of inequalities.
We look at the numerator of the last expression of Eq. (275), which we show to be positive in order to show > 0 in Eq. (275). The numerator is
82° + (42? + 2ey + 2.77952 â 2y? â 0.9269y â 0.00027798) 2x + y)? + 5.394672 â (276)
8x2y â 10.9554x2 + 2xy2 + 1.76901xy â 0.00106244x + 2y3 + 3.62336y2 â 0.00053122y . The factor 4x2 + 2xy + 2.7795x â 2y2 â 0.9269y â 0.00027798 in front of the root is positive:
The factor 4x? + 2ry + 2.7795a â 2y? â 0.9269y â 0.00027798 in front of the root is positive:
da? + 2ay + 2.77952 â 2yâ â 0.9269y â 0.00027798 > (277) â2y? + 0.007 - 2y â 0.9269y + 4 - 0.007? + 2.7795 - 0.007 â 0.00027798 = â2y? â 0.9129y + 2.77942 = â2(y + 1.42897)(y â 0.972523) >0. If the term that does not contain the root would be positive, then everything is positive and we have proofed the the numerator is positive. Therefore we consider the case that the term that does not contain the root is negative. The term that contains the root must be larger than the other term in absolute values. â (-827 â 8a?y â 10.9554x? + 2xyâ + 1.76901 xy â 0.001062442 + 2y° + 3.62336yâ â 0.00053122y) <
(277)
â (-827 â 8a?y â 10.9554x? + 2xyâ + 1.76901 xy â 0.001062442 + 2y° + 3.62336yâ â 0.00053122y) < (278)
(278)
(4a? + Qey + 2.7795a â 2y? â 0.9269y â 0.00027798) V/(2a + y)? + 5.394672 . Therefore the squares of the root term have to be larger than the square of the other term to show > 0 in Eq. (275). Thus, we have the inequality: (â82° â 82?y â 10.9554xâ + 2axy + 1.76901xy â 0001062442 + 2y° + 3.62336y? â 0.00053122y)â
<
(279)
.
(4a? + 2ay + 2.7795a â 2y? â 0.9269y â 0.00027798)â ((2x + y)? + 5.394672) .
This is equivalent to
0 < (42? + 2xy + 2.77952 â 2y? â 0.9269y â 0.00027798)â (2 + y)? + 5.394672) â (280)
â8x° â 827y â 10.9554x? + 2ary? + 1.76901 xy â 0.00106244a + 2y° + 3.62336y? â 0.00053122y)â x - 4168614250 - 10-â â y?2.049216091 - 10-7 â 0.0279456a°-+ 43.087524y + 30.81132* + 43.10842°%y? + 68.9892 y + 41.63572° + 10.792827y? â 13.172627y?â 27.814827y â 0.00833715x? + 0.0139728ay* + 5.47537xry>+ 4.65089xy? + 0.00277916xy â 10.7858y° â 12.2664y* + 0.00436492y° .
We obtain the inequalities:
x · 4.168614250 · 10â7 â y22.049216091 · 10â7 â 0.0279456x5+ 43.0875x4y + 30.8113x4 + 43.1084x3y2 + 68.989x3y + 41.6357x3 + 10.7928x2y3â 13.1726x2y2 â 27.8148x2y â 0.00833715x2+ 0.0139728xy4 + 5.47537xy3 + 4.65089xy2 + 0.00277916xy â 10.7858y5 â 12.2664y4 + 0.00436492y3 > x · 4.168614250 · 10â7 â (0.01)22.049216091 · 10â7 â 0.0279456x5+ 0.0 · 43.0875x4 + 30.8113x4 + 43.1084(0.0)2x3 + 0.0 · 68.989x3 + 41.6357x3+ 10.7928(0.0)3x2 â 13.1726(0.01)2x2 â 27.8148(0.01)x2 â 0.00833715x2+ 0.0139728(0.0)4x + 5.47537(0.0)3x + 4.65089(0.0)2x+ 0.0 · 0.00277916x â 10.7858(0.01)5 â 12.2664(0.01)4 + 0.00436492(0.0)3 = x · 4.168614250 · 10â7 â 1.237626189 · 10â7 â 0.0279456x5 + 30.8113x4 + 41.6357x3 â 0.287802x2 >
We used x ≥ 0.007 and x ≤ 0.875 (reducing the negative x⁵-term to an x⁴-term). We have proved the last inequality > 0 of Eq. (275).
Consequently the derivative is always positive independent of y, thus
e^{(x+y)²/(2x)} erfc((x+y)/(√2√x)) − 2 e^{(2x+y)²/(2x)} erfc((2x+y)/(√2√x))   (282)
is strictly monotonically increasing in x.
Next we show that the sub-function Eq. (111) is smaller than zero. We consider the limit:
lim_{x→∞} ( e^{(x+y)²/(2x)} erfc((x+y)/(√2√x)) − 2 e^{(2x+y)²/(2x)} erfc((2x+y)/(√2√x)) ) = 0 .   (283)
The limit follows from Lemma 22. Since the function is monotonic increasing in x, it has to approach 0 from below. Thus,
e^{(x+y)²/(2x)} erfc((x+y)/(√2√x)) − 2 e^{(2x+y)²/(2x)} erfc((2x+y)/(√2√x))   (284)
is smaller than zero.
We now consider the derivative of sub-function Eq. (111) with respect to y. We proved that sub-function Eq. (111) is strictly monotonically increasing independent of y. In the proof of Theorem 3, we need the minimum of sub-function Eq. (111). First, we are interested in the derivative of sub-function Eq. (111) with respect to y for the minimum x = 0.007 = 7/1000.
Consequently, we insert the minimum x = 0.007 = 7/1000 into the sub-function Eq. (111):
â
(ae) vais e\Â¥?V T0005 erfc (285) Yi4y 7 2 V3) ahs v2 2 eye =a) v2, /7 1000 1000 ee tute erfe ( + â) _ 2% (s00y-t7)? erfe (a + ") ; 20V'35 10V35
The derivative of this function with respect to y is
(~~ 4 1) ee tut ote erfe (= + â) _ (286) 7 20V35 1, coma? (500y + 7) erfe 500y+7)\ | 20 5 S 7 10/35 in (: + 1000 - cont) eA O.01+ ggg + OG HOD? (2 + 1000 + conn) _ 7 20/35 1 cruso0.0012 500 - 0.01 5 ode so (7+ 500 0.01) ere ( 00 0 ) +20,/=- > 3.56. 7 10V35 (Gs
For the ï¬rst inequality, we use Lemma 24. Lemma 24 says that the function xex2 erfc(x) has the sign of x and is monotonically increasing to 1â Ï . Consequently, we inserted the maximal y = 0.01 to make the negative term more negative and the minimal y = â0.01 to make the positive term less positive.
Consequently
e^{(x+y)²/(2x)} erfc((x+y)/(√2√x)) − 2 e^{(2x+y)²/(2x)} erfc((2x+y)/(√2√x))   (287)
is strictly monotonically increasing in y for the minimal x = 0.007.
Next, we consider x = 0.7 · 0.8 = 0.56, which is the maximal ν = 0.7 and minimal Ï = 0.8. We insert the minimum x = 0.56 = 56/100 into the sub-function Eq. (111):
â
(are) g(a. VE e\V7V tot ° erfc y + (288) V3, [56 V2 100
â
2
2 (ate ive) Qe\ ¥?V 105 erfc J2 56 100 00
The derivative with respect to y is:
solar) (24 +) ente (2% + Â¥2) : oy + Â¥ 27! 5 a - (289) Loe)â (2+) erte(S +) 5 A tae > pel âF~2R) (2 - 3255) erfe (4F - 2058) vi
â
27740015)"
For the first inequality we applied Lemma 24, which states that the function x e^{x²} erfc(x) is monotonically increasing. Consequently, we inserted the maximal y = 0.01 to make the negative term more negative and the minimal y = −0.01 to make the positive term less positive.
Consequently
e^{(x+y)²/(2x)} erfc((x+y)/(√2√x)) − 2 e^{(2x+y)²/(2x)} erfc((2x+y)/(√2√x))   (290)
is strictly monotonically increasing in y for x = 0.56.
Next, we consider x = 0.16 · 0.8 = 0.128, which is the minimal Ï = 0.8. We insert the minimum x = 0.128 = 128/1000 into the sub-function Eq. (111):
â
2 ( yy a) mee 128 . 000 e\ 2 V t600 ve erfc 128 aie (=e) 5 [BS #28. 1000 CBE ts enfe (tm) 20 ee oi + ~*) ; - (291)
2
The derivative with respect to y is:
1 125y? 125y + 16 â (et tut os 16 (« (125y + 16) erfe (Gus Ovi0 )- (292) wm) *? es ") ° 1 («x \ snt-oonpe-t0rde onfe (* + 5000) _ (125y+32)? G25y 432)" . (125 32 2e~ 40 (125y 4 22) ene (= a 16 20/10 : 5 2000 (32 + 1250.01) erfe (a) +20)/22) > o.446s . 20V10 T
For the ï¬rst inequality we applied Lemma 24 which states that the function xex2 erfc(x) is monotoni- cally increasing. Consequently, we inserted the maximal y = 0.01 to make the negative term more negative and the minimal y = â0.01 to make the positive term less positive.
Consequently
e^{(x+y)²/(2x)} erfc((x+y)/(√2√x)) − 2 e^{(2x+y)²/(2x)} erfc((2x+y)/(√2√x))   (293)
is strictly monotonically increasing in y for x = 0.128.
Next, we consider x = 0.24 · 0.9 = 0.216, which is the minimal Ï = 0.9 (here we consider 0.9 as lower bound for Ï ). We insert the minimum x = 0.216 = 216/1000 into the sub-function Eq. (111):
â
â\2 A y__, V toon ) 216 as, Vz . 1000 ve erfc - (294) 5 [26 1000
â 2
# (ae)
â
â
# y
216 1000
+
â
216 1000
2
# 2e
# erfc
â
zl 216
+
â
eo 1000
=
(291)
G25yt27)2, ( 125y + 27 G2zsyts4y2 â ( 125y + 54 e 6750 erfe ( 2 © =" ) _ 2¢ e750 â erfe ( â2 ES 15/30 15/30
The derivative with respect to y is:
1 (125y-427)2 125y + =") â { e3750 â (125y + 27) erfe ( ââ7> â ] â 295 7 ( (125y + 27) ( 15/30 >) a2syes4y? _ (125y + 2) 30 2e 6750 125y + 54) erfe +15 > (125y + 54) ( 15V30 7 1 (274125(-0.01))2 | (274 a) â | (274+ 125(â0.01))e 6750 erfe | ââââââââ_ ] - 7 (« (â0.01)) ( 15/30 5441280.01)? 5 . : 20 ee (54 4 1250.01) erfe (qa) + 15/9 ) > 0.211288 . 1530 T
For the ï¬rst inequality we applied Lemma 24 which states that the function xex2 erfc(x) is monotoni- cally increasing. Consequently, we inserted the maximal y = 0.01 to make the negative term more negative and the minimal y = â0.01 to make the positive term less positive.
Consequently
e^{(x+y)²/(2x)} erfc((x+y)/(√2√x)) − 2 e^{(2x+y)²/(2x)} erfc((2x+y)/(√2√x))   (296)
is strictly monotonically increasing in y for x = 0.216.
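The four cases above can be spot-checked with central differences in y at the fixed values of x (illustration only):

```python
import numpy as np
from scipy.special import erfcx

def main_subfunction(x, y):
    """Sub-function of Eq. (269)."""
    s = np.sqrt(2.0 * x)
    return erfcx((x + y) / s) - 2.0 * erfcx((2.0 * x + y) / s)

h = 1e-7
y = np.linspace(-0.01, 0.01, 201)
for x in (0.007, 0.56, 0.128, 0.216):
    dgdy = (main_subfunction(x, y + h) - main_subfunction(x, y - h)) / (2.0 * h)
    assert np.all(dgdy > 0), f"expected increasing in y at x={x}"
```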
Lemma 46 (Monotone Derivative). For λ = λ01, α = α01 and the domain −0.1 ≤ µ ≤ 0.1, −0.1 ≤ ω ≤ 0.1, 0.00875 ≤ ν ≤ 0.7, and 0.8 ≤ τ ≤ 1.25, we are interested in the derivative of

τ ( e^{(µω+ντ)²/(2ντ)} erfc((µω+ντ)/(√2√(ντ))) − 2 e^{(µω+2ντ)²/(2ντ)} erfc((µω+2ντ)/(√2√(ντ))) ) .   (297)
The derivative of the equation above with respect to
⢠ν is larger than zero;
• τ is smaller than zero for maximal ν = 0.7, ν = 0.16, and ν = 0.24 (with 0.9 ≤ τ );
⢠y = ÂµÏ is larger than zero for Î½Ï = 0.00875 · 0.8 = 0.007, Î½Ï = 0.7 · 0.8 = 0.56, Î½Ï = 0.16 · 0.8 = 0.128, and Î½Ï = 0.24 · 0.9 = 0.216.
Proof. We consider the domain −0.1 ≤ µ ≤ 0.1, −0.1 ≤ ω ≤ 0.1, 0.00875 ≤ ν ≤ 0.7, and 0.8 ≤ τ ≤ 1.25.
We use Lemma 17 to determine the derivatives. Consequently, the derivative of

τ ( e^{(µω+ντ)²/(2ντ)} erfc((µω+ντ)/(√2√(ντ))) − 2 e^{(µω+2ντ)²/(2ντ)} erfc((µω+2ντ)/(√2√(ντ))) )   (298)
with respect to ν is larger than zero, which follows directly from Lemma 17 using the chain rule. Consequently, the derivative of
(<(a") erfc (â*) _ ae a) erfc (â5 ~*)) (299)
# Ï
with respect to y = µω is larger than zero for ντ = 0.00875 · 0.8 = 0.007, ντ = 0.7 · 0.8 = 0.56, ντ = 0.16 · 0.8 = 0.128, and ντ = 0.24 · 0.9 = 0.216, which also follows directly from Lemma 17.
We now consider the derivative with respect to τ, which is not trivial since τ is a factor of the whole expression. The sub-expression should be maximized as it appears with negative sign in the mapping for ν.
First, we consider the function for the largest ν = 0.7 and the largest y = µω = 0.01 for determining the derivative with respect to τ.
The expression becomes
$$\tau\left(e^{\frac{\left(\frac{7\tau}{10}+\frac{1}{100}\right)^2}{2\cdot\frac{7\tau}{10}}}\operatorname{erfc}\left(\frac{\frac{7\tau}{10}+\frac{1}{100}}{\sqrt{2}\sqrt{\frac{7\tau}{10}}}\right) - 2e^{\frac{\left(\frac{14\tau}{10}+\frac{1}{100}\right)^2}{2\cdot\frac{7\tau}{10}}}\operatorname{erfc}\left(\frac{\frac{14\tau}{10}+\frac{1}{100}}{\sqrt{2}\sqrt{\frac{7\tau}{10}}}\right)\right) . \tag{300}$$
The derivative with respect to τ is
$$\left(\sqrt{\pi}\left(e^{\frac{(70\tau+1)^2}{14000\tau}}\left(700\tau(7\tau+20)-1\right)\operatorname{erfc}\left(\frac{70\tau+1}{20\sqrt{35}\sqrt{\tau}}\right) - 2e^{\frac{(140\tau+1)^2}{14000\tau}}\left(2800\tau(7\tau+5)-1\right)\operatorname{erfc}\left(\frac{140\tau+1}{20\sqrt{35}\sqrt{\tau}}\right)\right) + 20\sqrt{35}(210\tau-1)\sqrt{\tau}\right)\left(14000\sqrt{\pi}\tau\right)^{-1} . \tag{301}$$
We are considering only the numerator and use again the approximation of Ren and MacKenzie [30]. The error analysis on the whole numerator gives an approximation error 97 < E < 186. Therefore we add 200 to the numerator when we use the approximation of Ren and MacKenzie [30]. We obtain the inequalities:
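For reference, a small numerical sketch of the closed-form erfc approximation in the form we use it here (the constant 2.911 is taken from the text; the exact formula should be checked against Ren and MacKenzie [30]):

```python
import numpy as np
from scipy.special import erfc

A = 2.911  # constant used throughout the proofs

def erfc_approx(z):
    """Closed-form approximation erfc(z) ~ A e^{-z^2} / ((A-1) sqrt(pi) z + sqrt(pi z^2 + A^2))."""
    return A * np.exp(-z ** 2) / ((A - 1.0) * np.sqrt(np.pi) * z + np.sqrt(np.pi * z ** 2 + A ** 2))

z = np.linspace(0.05, 3.0, 60)
rel_err = np.abs(erfc_approx(z) - erfc(z)) / erfc(z)
print(f"max relative error on [0.05, 3]: {rel_err.max():.3%}")  # well below one percent
```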
We replace both erfc terms by the approximation
$$\operatorname{erfc}(z) \approx \frac{2.911\, e^{-z^2}}{(2.911-1)\sqrt{\pi}z + \sqrt{\pi z^2 + 2.911^2}} ,$$
add 200 to cover the approximation error, and factor out a common positive factor, which gives an upper bound on the numerator of Eq. (301) (Eq. (302)). We now consider this numerator (Eq. (303)). We multiply it out, group the summands that share a common square-root factor, insert the maximal τ = 1.25 into positive and the minimal τ = 0.8 into negative summands, and factor the polynomials under the square roots. After eliminating the square-root terms in this way, the numerator is bounded by
$$\tau^2\left(-1.46191\times 10^{9}\tau^{3/2} + 4.07198\times 10^{9}\sqrt{\tau} - 4.66103\times 10^{8}\tau - 2.26457\times 10^{9}\right) \leqslant$$
$$\left(-2.26457\times 10^{9} + 4.07198\times 10^{9}\sqrt{0.8} - 4.66103\times 10^{8}\cdot 0.8 - 1.46191\times 10^{9}\cdot 0.8^{3/2}\right)\tau^2 = -4.14199\times 10^{7}\tau^2 < 0 .$$
First we expanded the term (multiplied it out). Then we put the terms multiplied by the same square root into brackets. The next inequality sign stems from inserting the maximal value of 1.25 for τ into some positive terms and the value of 0.8 into negative terms. These terms are then expanded at the =-sign. The next equality factors the terms under the square roots. We decreased the negative term by replacing τ + 0.0000263835 by τ under the root. We increased positive terms by replacing τ + 0.0000262866 and τ + 0.0000263835 by 1.000033τ under the root, which is possible since (0.8 + 0.0000263835)/0.8 ≤ 1.000033, thus τ + 0.0000262866 ≤ τ + 0.0000263835 ≤ 1.000033τ. For the next inequality we decreased negative terms by inserting τ = 0.8 and increased positive terms by inserting τ = 1.25. The next equality expands the terms. We use the upper bound of 1.25 and the lower bound of 0.8 to obtain terms with corresponding exponents of τ. For the last ≤-sign we used the function
$$-1.46191\times 10^{9}\tau^{3/2} + 4.07198\times 10^{9}\sqrt{\tau} - 4.66103\times 10^{8}\tau - 2.26457\times 10^{9} . \tag{304}$$
The derivative of this function is
$$-2.19286\times 10^{9}\sqrt{\tau} + \frac{2.03599\times 10^{9}}{\sqrt{\tau}} - 4.66103\times 10^{8} \tag{305}$$
and the second order derivative is
$$-\frac{1.01799\times 10^{9}}{\tau^{3/2}} - \frac{1.09643\times 10^{9}}{\sqrt{\tau}} < 0 . \tag{306}$$
The derivative at 0.8 is smaller than zero:
$$-2.19286\times 10^{9}\sqrt{0.8} - 4.66103\times 10^{8} + \frac{2.03599\times 10^{9}}{\sqrt{0.8}} = -1.51154\times 10^{8} < 0 . \tag{307}$$
Since the second order derivative is negative, the derivative decreases with increasing τ. Therefore the derivative is negative for all values of τ that we consider, that is, the function Eq. (304) is strictly monotonically decreasing. The maximum of the function Eq. (304) is therefore at 0.8. We inserted 0.8 to obtain the maximum.
Consequently, the derivative of
$$\tau\left(e^{\frac{(\mu\omega+\nu\tau)^2}{2\nu\tau}}\operatorname{erfc}\left(\frac{\mu\omega+\nu\tau}{\sqrt{2}\sqrt{\nu\tau}}\right) - 2e^{\frac{(\mu\omega+2\nu\tau)^2}{2\nu\tau}}\operatorname{erfc}\left(\frac{\mu\omega+2\nu\tau}{\sqrt{2}\sqrt{\nu\tau}}\right)\right) \tag{308}$$
with respect to τ is smaller than zero for maximal ν = 0.7.
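This claim, too, can be cross-checked numerically (a sketch, not part of the proof), using the same erfcx reformulation as before and a central finite difference in τ:

```python
import numpy as np
from scipy.special import erfcx  # erfcx(z) = exp(z**2) * erfc(z)

def expression(tau, nu, y):
    """tau * (e^{(y+nu tau)^2/(2 nu tau)} erfc(.) - 2 e^{(y+2 nu tau)^2/(2 nu tau)} erfc(.))."""
    x = nu * tau
    h1 = (x + y) / (np.sqrt(2) * np.sqrt(x))
    h2 = (2 * x + y) / (np.sqrt(2) * np.sqrt(x))
    return tau * (erfcx(h1) - 2 * erfcx(h2))

tau = np.linspace(0.8, 1.25, 1000)
eps = 1e-6
d_dtau = (expression(tau + eps, 0.7, 0.01) - expression(tau - eps, 0.7, 0.01)) / (2 * eps)
print("derivative w.r.t. tau always negative:", bool(np.all(d_dtau < 0)))
```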
Next, we consider the function for the largest ν = 0.16 and the largest y = µω = 0.01 for determining the derivative with respect to τ.
The expression becomes
$$\tau\left(e^{\frac{\left(\frac{16\tau}{100}+\frac{1}{100}\right)^2}{2\cdot\frac{16\tau}{100}}}\operatorname{erfc}\left(\frac{\frac{16\tau}{100}+\frac{1}{100}}{\sqrt{2}\sqrt{\frac{16\tau}{100}}}\right) - 2e^{\frac{\left(\frac{32\tau}{100}+\frac{1}{100}\right)^2}{2\cdot\frac{16\tau}{100}}}\operatorname{erfc}\left(\frac{\frac{32\tau}{100}+\frac{1}{100}}{\sqrt{2}\sqrt{\frac{16\tau}{100}}}\right)\right) . \tag{309}$$
The derivative with respect to τ is
$$\left(\sqrt{\pi}\left(e^{\frac{(16\tau+1)^2}{3200\tau}}\left(128\tau(2\tau+25)-1\right)\operatorname{erfc}\left(\frac{16\tau+1}{40\sqrt{2}\sqrt{\tau}}\right) - 2e^{\frac{(32\tau+1)^2}{3200\tau}}\left(128\tau(8\tau+25)-1\right)\operatorname{erfc}\left(\frac{32\tau+1}{40\sqrt{2}\sqrt{\tau}}\right)\right) + 40\sqrt{2}(48\tau-1)\sqrt{\tau}\right)\left(3200\sqrt{\pi}\tau\right)^{-1} . \tag{310}$$
We are considering only the numerator and use again the approximation of Ren and MacKenzie [30]. The error analysis on the whole numerator gives an approximation error 1.1 < E < 12. Therefore we add 20 to the numerator when we use the approximation of Ren and MacKenzie [30]. We obtain the inequalities:
We again replace both erfc terms by the approximation of Ren and MacKenzie [30], add 20 to cover the approximation error, and factor out a common positive factor, which gives an upper bound on the numerator of Eq. (310) (Eq. (311)). We now consider this numerator (Eq. (312)). We multiply it out, group the summands that share a common square-root factor, insert the maximal τ = 1.25 into positive and the minimal τ = 0.8 into negative summands, and factor the polynomials under the square roots. After eliminating the square-root terms in this way, the numerator is bounded by
$$-3.1311\times 10^{6}\tau^{3/2} - 1.44725\times 10^{8}\tau^{2} + 8.01543\times 10^{7}\tau^{5/2} - 1.13691\times 10^{7}\tau^{3} - 8.34635\times 10^{6}\tau^{7/2} \leqslant$$
$$-3.1311\times 10^{6}\tau^{3/2} - 8.34635\times 10^{6}\tau^{7/2} - 1.13691\times 10^{7}\tau^{3} - 5.51094\times 10^{7}\tau^{2} < 0 ,$$
where the last inequality uses τ^{5/2} ≤ √1.25 · τ².
First we expanded the term (multiplied it out). Then we put the terms multiplied by the same square root into brackets. The next inequality sign stems from inserting the maximal value of 1.25 for τ into some positive terms and the value of 0.8 into negative terms. These terms are then expanded at the =-sign. The next equality factors the terms under the square roots. We decreased the negative term by replacing τ + 0.00011542 by τ under the root. We increased positive terms by replacing τ + 0.000115004 and τ + 0.00011542 by 1.000145τ under the root, which is possible since (0.8 + 0.00011542)/0.8 ≤ 1.000145, thus τ + 0.000115004 ≤ τ + 0.00011542 ≤ 1.000145τ. For the next inequality we decreased negative terms by inserting τ = 0.8 and increased positive terms by inserting τ = 1.25. The next equality expands the terms. We use the upper bound of 1.25 and the lower bound of 0.8 to obtain terms with corresponding exponents of τ.
Consequently, the derivative of
$$\tau\left(e^{\frac{(\mu\omega+\nu\tau)^2}{2\nu\tau}}\operatorname{erfc}\left(\frac{\mu\omega+\nu\tau}{\sqrt{2}\sqrt{\nu\tau}}\right) - 2e^{\frac{(\mu\omega+2\nu\tau)^2}{2\nu\tau}}\operatorname{erfc}\left(\frac{\mu\omega+2\nu\tau}{\sqrt{2}\sqrt{\nu\tau}}\right)\right) \tag{313}$$
with respect to τ is smaller than zero for maximal ν = 0.16.
Next, we consider the function for the largest ν = 0.24 and the largest y = µω = 0.01 for determining the derivative with respect to τ. However we assume 0.9 ≤ τ, in order to restrict the domain of τ.
The expression becomes
$$\tau\left(e^{\frac{\left(\frac{24\tau}{100}+\frac{1}{100}\right)^2}{2\cdot\frac{24\tau}{100}}}\operatorname{erfc}\left(\frac{\frac{24\tau}{100}+\frac{1}{100}}{\sqrt{2}\sqrt{\frac{24\tau}{100}}}\right) - 2e^{\frac{\left(\frac{48\tau}{100}+\frac{1}{100}\right)^2}{2\cdot\frac{24\tau}{100}}}\operatorname{erfc}\left(\frac{\frac{48\tau}{100}+\frac{1}{100}}{\sqrt{2}\sqrt{\frac{24\tau}{100}}}\right)\right) . \tag{314}$$
The derivative with respect to τ is
$$\left(\sqrt{\pi}\left(e^{\frac{(24\tau+1)^2}{4800\tau}}\left(192\tau(3\tau+25)-1\right)\operatorname{erfc}\left(\frac{24\tau+1}{40\sqrt{3}\sqrt{\tau}}\right) - 2e^{\frac{(48\tau+1)^2}{4800\tau}}\left(192\tau(12\tau+25)-1\right)\operatorname{erfc}\left(\frac{48\tau+1}{40\sqrt{3}\sqrt{\tau}}\right)\right) + 40\sqrt{3}(72\tau-1)\sqrt{\tau}\right)\left(4800\sqrt{\pi}\tau\right)^{-1} . \tag{315}$$
We are considering only the numerator and use again the approximation of Ren and MacKenzie [30]. The error analysis on the whole numerator gives an approximation error 14 < E < 32. Therefore we add 32 to the numerator when we use the approximation of Ren and MacKenzie [30]. We obtain the inequalities: r41)2 24 1
We again replace both erfc terms by the approximation of Ren and MacKenzie [30], add 32 to cover the approximation error, and factor out a common positive factor, which gives an upper bound on the numerator of Eq. (315) (Eq. (316)). We now consider this numerator (Eq. (317)). We multiply it out, group the summands that share a common square-root factor, insert the maximal τ = 1.25 into positive and the minimal τ = 0.9 into negative summands, and factor the polynomials under the square roots. After eliminating the square-root terms in this way, the numerator is bounded by
$$-5.09079\times 10^{6}\tau^{3/2} + 2.29933\times 10^{8}\tau^{5/2} - 3.44998\times 10^{7}\tau^{7/2} - 3.5539\times 10^{7}\tau^{3} - 3.19193\times 10^{8}\tau^{2} \leqslant$$
$$-5.09079\times 10^{6}\tau^{3/2} - 3.44998\times 10^{7}\tau^{7/2} - 3.5539\times 10^{7}\tau^{3} - 6.21197\times 10^{7}\tau^{2} < 0 ,$$
where the last inequality uses τ^{5/2} ≤ √1.25 · τ².
First we expanded the term (multiplied it out). Then we put the terms multiplied by the same square root into brackets. The next inequality sign stems from inserting the maximal value of 1.25 for τ into some positive terms and the value of 0.9 into negative terms. These terms are then expanded at the =-sign. The next equality factors the terms under the square roots. We decreased the negative term by replacing τ + 0.0000769518 by τ under the root. We increased positive terms by replacing τ + 0.0000766694 and τ + 0.0000769518 by 1.0000962τ under the root, which is possible since (0.8 + 0.0000769518)/0.8 ≤ 1.0000962, thus τ + 0.0000766694 ≤ τ + 0.0000769518 ≤ 1.0000962τ. For the next inequality we decreased negative terms by inserting τ = 0.9 and increased positive terms by inserting τ = 1.25. The next equality expands the terms. We use the upper bound of 1.25 and the lower bound of 0.9 to obtain terms with corresponding exponents of τ.
Consequently, the derivative of
$$\tau\left(e^{\frac{(\mu\omega+\nu\tau)^2}{2\nu\tau}}\operatorname{erfc}\left(\frac{\mu\omega+\nu\tau}{\sqrt{2}\sqrt{\nu\tau}}\right) - 2e^{\frac{(\mu\omega+2\nu\tau)^2}{2\nu\tau}}\operatorname{erfc}\left(\frac{\mu\omega+2\nu\tau}{\sqrt{2}\sqrt{\nu\tau}}\right)\right) \tag{318}$$
with respect to τ is smaller than zero for maximal ν = 0.24 and the domain 0.9 ≤ τ ≤ 1.25.
Lemma 47. In the domain −0.01 ≤ y ≤ 0.01 and 0.64 ≤ x ≤ 1.875, the function f(x, y) = e^{x/2+y} erfc((x + y)/(√2√x)) has a global maximum at x = 0.64 and y = −0.01 and a global minimum at x = 1.875 and y = 0.01.
Proof. The function f(x, y) = e^{x/2+y} erfc((x + y)/(√2√x)) is strictly monotonically decreasing in x, since its derivative with respect to x is negative:
$$\frac{e^{-\frac{y^2}{2x}}\left(\sqrt{\pi}x^{3/2}e^{\frac{(x+y)^2}{2x}}\operatorname{erfc}\left(\frac{x+y}{\sqrt{2}\sqrt{x}}\right) + \sqrt{2}(y-x)\right)}{2\sqrt{\pi}x^{3/2}} < 0 \iff \sqrt{\pi}x^{3/2}e^{\frac{(x+y)^2}{2x}}\operatorname{erfc}\left(\frac{x+y}{\sqrt{2}\sqrt{x}}\right) + \sqrt{2}(y-x) < 0 \impliedby$$
$$\frac{2x^{3/2}}{\frac{x+y}{\sqrt{2}\sqrt{x}} + \sqrt{\frac{(x+y)^2}{2x} + \frac{4}{\pi}}} + \sqrt{2}y - \sqrt{2}x \leqslant \frac{2\cdot 0.64^{3/2}}{\frac{0.01+0.64}{\sqrt{2}\sqrt{0.64}} + \sqrt{\frac{(0.01+0.64)^2}{2\cdot 0.64} + \frac{4}{\pi}}} + 0.01\sqrt{2} - 0.64\sqrt{2} = -0.334658 < 0 . \tag{319}$$
The two last inequalities come from applying the Abramowitz bounds [22] and from the fact that the expression 2x^{3/2}/((x+y)/(√2√x) + √((x+y)²/(2x) + 4/π)) + √2y − √2x does not change monotonicity in the domain and hence the maximum must be found at the border. For x = 0.64, which maximizes the function f(x, y) in x, the function is monotonically decreasing in y, because its derivative w.r.t. y at x = 0.64 is negative:
$$e^{y}\left(1.37713\operatorname{erfc}(0.883883y + 0.565685) - 1.37349e^{-0.78125(y+0.64)^2}\right) < 0 \iff$$
$$1.37713\operatorname{erfc}(0.883883y + 0.565685) - 1.37349e^{-0.78125(y+0.64)^2} \leqslant$$
$$1.37713\operatorname{erfc}(0.883883\cdot(-0.01) + 0.565685) - 1.37349e^{-0.78125(0.01+0.64)^2} = 0.5935272325870631 - 0.987354705867739 < 0 . \tag{320}$$
Therefore, the values x = 0.64 and y = −0.01 give a global maximum of the function f(x, y) in the domain −0.01 ≤ y ≤ 0.01 and 0.64 ≤ x ≤ 1.875 and the values x = 1.875 and y = 0.01 give the global minimum.
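A coarse grid evaluation can be used to double-check where the extrema of f lie; this sketch assumes the function as reconstructed above and is only a sanity check:

```python
import numpy as np
from scipy.special import erfc

def f(x, y):
    """f(x, y) = e^{x/2 + y} * erfc((x + y) / (sqrt(2) * sqrt(x)))."""
    return np.exp(x / 2.0 + y) * erfc((x + y) / (np.sqrt(2) * np.sqrt(x)))

xs = np.linspace(0.64, 1.875, 200)
ys = np.linspace(-0.01, 0.01, 200)
X, Y = np.meshgrid(xs, ys, indexing="ij")
F = f(X, Y)
i_max = np.unravel_index(np.argmax(F), F.shape)
i_min = np.unravel_index(np.argmin(F), F.shape)
print("maximum at x=%.3f, y=%.3f" % (X[i_max], Y[i_max]))  # expected: x=0.64,  y=-0.01
print("minimum at x=%.3f, y=%.3f" % (X[i_min], Y[i_min]))  # expected: x=1.875, y=0.01
```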
# A4 Additional information on experiments
In this section, we report the hyperparameters that were considered for each method and data set and give details on the processing of the data sets.
# A4.1 121 UCI Machine Learning Repository data sets: Hyperparameters
For the UCI data sets, the best hyperparameter setting was determined by a grid-search over all hyperparameter combinations using 15% of the training data as validation set. The early stopping parameter was determined on the smoothed learning curves of 100 epochs of the validation set. Smoothing was done using moving averages of 10 consecutive values. We tested "rectangular" and "conic" layers – rectangular layers have a constant number of hidden units in each layer, conic layers start with the given number of hidden units in the first layer and then decrease the number of hidden units to the size of the output layer according to the geometric progression. If multiple hyperparameter settings provided identical performance on the validation set, we preferred settings with a higher number of layers, lower learning rates and higher dropout rates. All methods had the chance to adjust their hyperparameters to the data set at hand.
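For illustration, conic layer sizes following such a geometric progression can be computed as in the sketch below (the helper name and the rounding scheme are our own and are not taken from the experiment code):

```python
import numpy as np

def conic_layer_sizes(n_first, n_out, n_layers):
    """Hidden-layer sizes that decay geometrically from n_first towards the output size n_out."""
    if n_layers == 1:
        return [n_first]
    ratio = (n_out / n_first) ** (1.0 / n_layers)
    return [max(int(round(n_first * ratio ** i)), n_out) for i in range(n_layers)]

print(conic_layer_sizes(n_first=1024, n_out=10, n_layers=4))  # e.g. [1024, 322, 101, 32]
```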
Table A4: Hyperparameters considered for self-normalizing networks in the UCI data sets.
Hyperparameter             Considered values
Number of hidden units     {1024, 512, 256}
Number of hidden layers    {2, 3, 4, 8, 16, 32}
Learning rate              {0.01, 0.1, 1}
Dropout rate               {0.05, 0}
Layer form                 {rectangular, conic}
Table A5: Hyperparameters considered for ReLU networks with MS initialization in the UCI data sets.
Hyperparameter             Considered values
Number of hidden units     {1024, 512, 256}
Number of hidden layers    {2, 3, 4, 8, 16, 32}
Learning rate              {0.01, 0.1, 1}
Dropout rate               {0.5, 0}
Layer form                 {rectangular, conic}
Table A6: Hyperparameters considered for batch normalized networks in the UCI data sets.
Hyperparameter             Considered values
Number of hidden units     {1024, 512, 256}
Number of hidden layers    {2, 3, 4, 8, 16, 32}
Learning rate              {0.01, 0.1, 1}
Normalization              {Batchnorm}
Layer form                 {rectangular, conic}
Table A7: Hyperparameters considered for weight normalized networks in the UCI data sets.
Hyperparameter             Considered values
Number of hidden units     {1024, 512, 256}
Number of hidden layers    {2, 3, 4, 8, 16, 32}
Learning rate              {0.01, 0.1, 1}
Normalization              {Weightnorm}
Layer form                 {rectangular, conic}
Table A8: Hyperparameters considered for layer normalized networks in the UCI data sets.
Hyperparameter             Considered values
Number of hidden units     {1024, 512, 256}
Number of hidden layers    {2, 3, 4, 8, 16, 32}
Learning rate              {0.01, 0.1, 1}
Normalization              {Layernorm}
Layer form                 {rectangular, conic}
Table A9: Hyperparameters considered for Highway networks in the UCI data sets.
Hyperparameter             Considered values
Number of hidden layers    {2, 3, 4, 8, 16, 32}
Learning rate              {0.01, 0.1, 1}
Dropout rate               {0, 0.5}
Table A10: Hyperparameters considered for Residual networks in the UCI data sets.
Hyperparameter              Considered values
Number of blocks            {2, 3, 4, 8, 16}
Number of neurons per block {1024, 512, 256}
Block form                  {rectangular, diavolo}
Bottleneck                  {25%, 50%}
Learning rate               {0.01, 0.1, 1}
# A4.2 121 UCI Machine Learning Repository data sets: detailed results
Methods compared. We used the data sets and preprocessing scripts of Fernández-Delgado et al. [10] for data preparation and for defining training and test sets. The authors compared 179 machine learning methods from 17 groups in their experiments, although their comparison had several flaws [37] that we avoided. The method groups were defined by Fernández-Delgado et al. [10] as follows: Support Vector Machines, RandomForest, Multivariate adaptive regression splines (MARS), Boosting, Rule-based, logistic and multinomial regression, Discriminant Analysis (DA), Bagging, Nearest Neighbour, DecisionTree, other Ensembles, Neural Networks, Bayesian, Other Methods, generalized linear models (GLM), Partial least squares and principal component regression (PLSR), and Stacking. However, many of the methods assigned to those groups were merely different implementations of the same method. Therefore, we selected one representative of each of the 17 groups for the method comparison, chosen as the group's method with the median performance across all tasks. To these 17 machine learning methods of Fernández-Delgado et al. [10] we added 6 FNNs – BatchNorm, WeightNorm, LayerNorm, Highway, Residual and MSRAinit networks – and self-normalizing neural networks (SNNs), giving a total of 24 compared methods.
Results of FNN methods for all 121 data sets. The results of the compared FNN methods can be found in Table A11.
Small and large data sets. We assigned each of the 121 UCI data sets to the group "large data sets" or "small data sets" depending on whether it had more than 1,000 data points or at most 1,000 data points, respectively. We expected that Deep Learning methods require large data sets to be competitive with other machine learning methods. This resulted in 75 small and 46 large data sets.
Results. The results of the method comparison are given in Tables A12 and A13 for small and large data sets, respectively. On small data sets, SVMs performed best followed by RandomForest and SNNs. On large data sets, SNNs are the best method followed by SVMs and Random Forest.
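The ranking scheme described in the table captions can be reproduced along the following lines (a sketch; `accuracies` is a hypothetical array of shape [number of data sets, number of methods]):

```python
import numpy as np
from scipy.stats import rankdata, wilcoxon

def average_ranks(accuracies):
    """Rank methods per data set (rank 1 = highest accuracy) and average the ranks over data sets."""
    ranks = np.apply_along_axis(lambda row: rankdata(-row), 1, accuracies)
    return ranks.mean(axis=0)

def p_value_vs_best(accuracies, best_idx, other_idx):
    """Paired Wilcoxon test on the per-data-set accuracies of two methods."""
    return wilcoxon(accuracies[:, best_idx], accuracies[:, other_idx]).pvalue

# Example with random data, only to show the call pattern:
rng = np.random.default_rng(0)
accuracies = rng.uniform(0.5, 1.0, size=(75, 24))
avg = average_ranks(accuracies)
best = int(np.argmin(avg))
print(avg[best], p_value_vs_best(accuracies, best, (best + 1) % 24))
```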
Table A11: Comparison of FNN methods on all 121 UCI data sets.. The table reports the accuracy of FNN methods at each individual task of the 121 UCI data sets. The ï¬rst column gives the name of the data set, the second the number of training data points N , the third the number of features M and the consecutive columns the accuracy values of self-normalizing networks (SNNs), ReLU networks without normalization and with MSRA initialization (MS), Highway networks (HW), Residual Networks (ResNet), networks with batch normalization (BN), weight normalization (WN), and layer normalization (LN).
dataset N M SNN MS HW ResNet BN abalone acute-inï¬ammation acute-nephritis adult annealing arrhythmia audiology-std balance-scale balloons bank blood breast-cancer breast-cancer-wisc breast-cancer-wisc-diag breast-cancer-wisc-prog breast-tissue car cardiotocography-10clases cardiotocography-3clases chess-krvk chess-krvkp congressional-voting conn-bench-sonar-mines-rocks conn-bench-vowel-deterding connect-4 contrac credit-approval cylinder-bands dermatology echocardiogram ecoli energy-y1 energy-y2 fertility ï¬ags glass haberman-survival hayes-roth heart-cleveland heart-hungarian heart-switzerland heart-va hepatitis hill-valley horse-colic ilpd-indian-liver 4177 120 120 48842 898 452 196 625 16 4521 748 286 699 569 198 106 1728 2126 2126 28056 3196 435 208 990 67557 1473 690 512 366 131 336 768 768 100 194 214 306 160 303 294 123 200 155 1212 368 583 9 7 7 15 32 263 60 5 5 17 5 10 10 31 34 10 7 22 22 7 37 17 61 12 43 10 16 36 35 11 8 9 9 10 29 10 4 4 14 13 13 13 20 101 26 10 0.6657 1.0000 1.0000 0.8476 0.7600 0.6549 0.8000 0.9231 1.0000 0.8903 0.7701 0.7183 0.9714 0.9789 0.6735 0.7308 0.9838 0.8399 0.9153 0.8805 0.9912 0.6147 0.7885 0.9957 0.8807 0.5190 0.8430 0.7266 0.9231 0.8182 0.8929 0.9583 0.9063 0.9200 0.4583 0.7358 0.7368 0.6786 0.6184 0.7945 0.3548 0.3600 0.7692 0.5248 0.8088 0.6986 0.6284 1.0000 1.0000 0.8487 0.7300 0.6372 0.6800 0.9231 0.5000 0.8876 0.7754 0.6901 0.9714 0.9718 0.7347 0.4615 0.9861 0.8418 0.8964 0.8606 0.9900 0.6055 0.8269 0.9935 0.8831 0.5136 0.8430 0.7656 0.9121 0.8485 0.8333 0.9583 0.8958 0.8800 0.4583 0.6038 0.7237 0.4643 0.6053 0.8356 0.3871 0.2600 0.7692 0.5116 0.8529 0.6644 0.6427 1.0000 1.0000 0.8453 0.3600 0.6283 0.7200 0.9103 0.2500 0.8885 0.7968 0.7465 0.9771 0.9789 0.8367 0.6154 0.9560 0.8456 0.9171 0.5255 0.9900 0.5872 0.8462 0.9784 0.8599 0.5054 0.8547 0.7969 0.9780 0.6061 0.8690 0.8802 0.9010 0.8800 0.4375 0.6415 0.6447 0.7857 0.6316 0.7945 0.5806 0.4000 0.6667 0.5000 0.7794 0.6781 0.6466 1.0000 1.0000 0.8484 0.2600 0.6460 0.8000 0.9167 1.0000 0.8796 0.8021 0.7465 0.9714 0.9507 0.8163 0.4231 0.9282 0.8173 0.9021 0.8543 0.9912 0.5963 0.8077 0.9935 0.8716 0.5136 0.8430 0.7734 0.9231 0.8485 0.8214 0.8177 0.8750 0.8400 0.3750 0.6415 0.6842 0.7143 0.5658 0.8082 0.3226 0.2600 0.7692 0.5396 0.8088 0.6712 0.6303 1.0000 1.0000 0.8499 0.1200 0.5929 0.6400 0.9231 1.0000 0.8823 0.7647 0.7324 0.9829 0.9789 0.7755 0.4615 0.9606 0.7910 0.9096 0.8781 0.9862 0.5872 0.7115 0.9610 0.8729 0.4538 0.8721 0.7500 0.9341 0.8485 0.8214 0.8646 0.8750 0.6800 0.4167 0.5849 0.7368 0.7500 0.5789 0.8493 0.3871 0.2800 0.8718 0.5050 0.8529 0.5959 WN 0.6351 1.0000 1.0000 0.8453 0.6500 0.6018 0.7200 0.9551 0.0000 0.8850 0.7594 0.6197 0.9657 0.9718 0.8367 0.5385 0.9769 0.8606 0.8945 0.7673 0.9912 0.5872 0.8269 0.9524 0.8833 0.4755 0.9070 0.7578 0.9451 0.7879 0.8452 0.9010 0.8906 0.6800 0.4167 0.6792 0.7500 0.5714 0.5658 0.7534 0.2581 0.2200 0.8462 0.4934 0.7059 0.6918 LN
image-segmentation ionosphere iris led-display lenses letter libras low-res-spect lung-cancer lymphography magic mammographic miniboone molec-biol-promoter molec-biol-splice monks-1 monks-2 monks-3 mushroom musk-1 musk-2 nursery oocytes_merluccius_nucleus_4d oocytes_merluccius_states_2f oocytes_trisopterus_nucleus_2f oocytes_trisopterus_states_5b optical ozone page-blocks parkinsons pendigits pima pittsburg-bridges-MATERIAL pittsburg-bridges-REL-L pittsburg-bridges-SPAN pittsburg-bridges-T-OR-D pittsburg-bridges-TYPE planning plant-margin plant-shape plant-texture post-operative primary-tumor ringnorm seeds semeion soybean spambase spect spectf statlog-australian-credit statlog-german-credit 2310 351 150 1000 24 20000 360 531 32 148 19020 961 130064 106 3190 556 601 554 8124 476 6598 12960 1022 1022 912 912 5620 2536 5473 195 10992 768 106 103 92 102 105 182 1600 1600 1599 90 330 7400 210 1593 683 4601 265 267 690 1000 19 34 5 8 5 17 91 101 57 19 11 6 51 58 61 7 7 7 22 167 167 9 42 26 26 33 63 73 11 23 17 9 8 8 8 8 8 13 65 65 65 9 18 21 8 257 36 58 23 45 15 25 0.9114 0.8864 0.9730 0.7640 0.6667 0.9726 0.7889 0.8571 0.6250 0.9189 0.8692 0.8250 0.9307 0.8462 0.9009 0.7523 0.5926 0.6042 1.0000 0.8739 0.9891 0.9978 0.8235 0.9529 0.7982 0.9342 0.9711 0.9700 0.9583 0.8980 0.9706 0.7552 0.8846 0.6923 0.6957 0.8400 0.6538 0.6889 0.8125 0.7275 0.8125 0.7273 0.5244 0.9751 0.8846 0.9196 0.8511 0.9409 0.6398 0.4973 0.5988 0.7560 0.9090 0.9091 0.9189 0.7200 1.0000 0.9712 0.8667 0.8496 0.3750 0.7297 0.8629 0.8083 0.9250 0.7692 0.8482 0.6551 0.6343 0.7454 1.0000 0.8655 0.9945 0.9988 0.8196 0.9490 0.8728 0.9430 0.9666 0.9732 0.9708 0.9184 0.9714 0.7656 0.8462 0.7692 0.5217 0.8800 0.6538 0.6667 0.8125 0.6350 0.7900 0.7273 0.5000 0.9843 0.8654 0.9296 0.8723 0.9461 0.6183 0.6043 0.6802 0.7280 0.9024 0.9432 0.8378 0.7040 1.0000 0.8984 0.8222 0.9023 0.1250 0.7297 0.8673 0.7917 0.9270 0.6923 0.8833 0.5833 0.6389 0.5880 1.0000 0.8992 0.9915 1.0000 0.7176 0.9490 0.8289 0.9342 0.9644 0.9716 0.9656 0.8367 0.9671 0.7188 0.9231 0.6923 0.5652 0.8800 0.5385 0.6000 0.8375 0.6325 0.7900 0.5909 0.4512 0.9692 0.9423 0.9447 0.8617 0.9435 0.6022 0.8930 0.6802 0.7760 0.8919 0.9545 0.9730 0.7160 0.6667 0.9762 0.7111 0.8647 0.2500 0.6757 0.8723 0.7833 0.9254 0.7692 0.8557 0.7546 0.6273 0.5833 1.0000 0.8739 0.9964 0.9994 0.8000 0.9373 0.7719 0.8947 0.9627 0.9669 0.9605 0.9184 0.9708 0.7135 0.9231 0.8462 0.5652 0.8800 0.6538 0.7111 0.7975 0.5150 0.8000 0.7273 0.3902 0.9811 0.8654 0.9146 0.8670 0.9461 0.6667 0.7005 0.6395 0.7720 0.8481 0.9432 0.9189 0.6280 0.8333 0.9796 0.7444 0.8571 0.5000 0.7568 0.8713 0.8167 0.9262 0.7692 0.8519 0.9074 0.3287 0.5278 0.9990 0.8235 0.9982 0.9994 0.8078 0.9333 0.7456 0.8947 0.9716 0.9669 0.9613 0.8571 0.9734 0.7188 0.8846 0.7692 0.5652 0.8800 0.1154 0.6222 0.7600 0.2850 0.8200 0.5909 0.5122 0.9843 0.8654 0.9372 0.8883 0.9426 0.6344 0.2299 0.6802 0.7520 0.8938 0.9318 1.0000 0.6920 0.8333 0.9580 0.8000 0.8872 0.5000 0.7568 0.8690 0.8292 0.9272 0.6923 0.8494 0.5000 0.6644 0.5231 0.9995 0.8992 0.9927 0.9966 0.8078 0.9020 0.7939 0.9254 0.9638 0.9748 0.9730 0.8163 0.9620 0.6979 0.8077 0.6538 0.6522 0.8800 0.4615 0.6444 0.8175 0.6575 0.8175 0.5455 0.5000 0.9719 0.8846 0.9322 0.8537 0.9504 0.6398 0.4545 0.6860 0.7400
statlog-heart statlog-image statlog-landsat statlog-shuttle statlog-vehicle steel-plates synthetic-control teaching thyroid tic-tac-toe titanic trains twonorm vertebral-column-2clases vertebral-column-3clases wall-following waveform waveform-noise wine wine-quality-red wine-quality-white yeast zoo 270 2310 6435 58000 846 1941 600 151 7200 958 2201 10 7400 310 310 5456 5000 5000 178 1599 4898 1484 101 14 19 37 10 19 28 61 6 22 10 4 0.9254 0.9549 0.9100 0.9990 0.8009 0.7835 0.9867 0.5000 0.9816 0.9665 0.7836 30 NA 21 7 7 25 22 41 14 12 12 9 17 0.9805 0.8312 0.8312 0.9098 0.8480 0.8608 0.9773 0.6300 0.6373 0.6307 0.9200 0.8358 0.9757 0.9075 0.9983 0.8294 0.7567 0.9800 0.6053 0.9770 0.9833 0.7909 NA 0.9778 0.8701 0.8052 0.9076 0.8312 0.8328 0.9318 0.6250 0.6479 0.6173 1.0000 0.7761 0.9584 0.9110 0.9977 0.7962 0.7608 0.9867 0.5263 0.9708 0.9749 0.7927 NA 0.9708 0.8571 0.7922 0.9230 0.8320 0.8696 0.9091 0.5625 0.5564 0.6065 0.8800 0.8657 0.9584 0.9055 0.9992 0.7583 0.7629 0.9600 0.5526 0.9799 0.9623 0.7727 NA 0.9735 0.8312 0.7532 0.9223 0.8360 0.8584 0.9773 0.6150 0.6307 0.5499 1.0000 0.7910 0.9671 0.9040 0.9988 0.7583 0.7031 0.9733 0.5000 0.9778 0.9833 0.7800 0.5000 0.9757 0.8312 0.7792 0.9333 0.8360 0.8480 0.9773 0.5450 0.5335 0.4906 0.7200 0.8657 0.9515 0.8925 0.9988 0.8009 0.7856 0.9867 0.3158 0.9807 0.9707 0.7818 0.5000 0.9730 0.6623 0.7403 0.9274 0.8376 0.8640 0.9773 0.5575 0.5482 0.5876 0.9600 0.7910 0.9757 0.9040 0.9987 0.7915 0.7588 0.9733 0.6316 0.9752 0.9791 0.7891 1.0000 0.9724 0.8442 0.8312 0.9128 0.8448 0.8504 0.9773 0.6100 0.6544 0.6092 0.9600
Table A12: UCI comparison reporting the average rank of a method on 75 classiï¬cation task of the UCI machine learning repository with less than 1000 data points. For each dataset, the 24 compared methods, were ranked by their accuracy and the ranks were averaged across the tasks. The ï¬rst column gives the method group, the second the method, the third the average rank , and the last the p-value of a paired Wilcoxon test whether the difference to the best performing method is signiï¬cant. SNNs are ranked third having been outperformed by Random Forests and SVMs.
methodGroup           method               avg. rank   p-value
SVM                   LibSVM_weka          9.3
RandomForest          RRFglobal_caret      9.6         2.5e-01
SNN                   SNN                  9.6         3.8e-01
LMR                   SimpleLogistic_weka  9.9         1.5e-01
NeuralNetworks        lvq_caret            10.1        1.0e-01
MARS                  gcvEarth_caret       10.7        3.6e-02
MSRAinit              MSRAinit             11.0        4.0e-02
LayerNorm             LayerNorm            11.3        7.2e-02
Highway               Highway              11.5        8.9e-03
DiscriminantAnalysis  mda_R                11.8        2.6e-03
Boosting              LogitBoost_weka      11.9        2.4e-02
Bagging               ctreeBag_R           12.1        1.8e-03
ResNet                ResNet               12.3        3.5e-03
BatchNorm             BatchNorm            12.6        4.9e-04
Rule-based            JRip_caret           12.9        1.7e-04
WeightNorm            WeightNorm           13.0        8.3e-05
DecisionTree          rpart2_caret         13.6        7.0e-04
OtherEnsembles        Dagging_weka         13.9        3.0e-05
Nearest Neighbour     NNge_weka            14.0        7.7e-04
OtherMethods          pam_caret            14.2        1.5e-04
PLSR                  simpls_R             14.3        4.6e-05
Bayesian              NaiveBayes_weka      14.6        1.2e-04
GLM                   bayesglm_caret       15.0        1.6e-06
Stacking              Stacking_weka        20.9        2.2e-12
Table A13: UCI comparison reporting the average rank of a method on 46 classiï¬cation task of the UCI machine learning repository with more than 1000 data points. For each dataset, the 24 compared methods, were ranked by their accuracy and the ranks were averaged across the tasks. The ï¬rst column gives the method group, the second the method, the third the average rank , and the last the p-value of a paired Wilcoxon test whether the difference to the best performing method is signiï¬cant. SNNs are ranked ï¬rst having outperformed diverse machine learning methods and other FNNs.
methodGroup           method               avg. rank   p-value
SNN                   SNN                  5.8
SVM                   LibSVM_weka          6.1         5.8e-01
RandomForest          RRFglobal_caret      6.6         2.1e-01
MSRAinit              MSRAinit             7.1         4.5e-03
LayerNorm             LayerNorm            7.2         7.1e-02
Highway               Highway              7.9         1.7e-03
ResNet                ResNet               8.4         1.7e-04
WeightNorm            WeightNorm           8.7         5.5e-04
BatchNorm             BatchNorm            9.7         1.8e-04
MARS                  gcvEarth_caret       9.9         8.2e-05
Boosting              LogitBoost_weka      12.1        2.2e-07
LMR                   SimpleLogistic_weka  12.4        3.8e-09
Rule-based            JRip_caret           12.4        9.0e-08
Bagging               ctreeBag_R           13.5        1.6e-05
DiscriminantAnalysis  mda_R                13.9        1.4e-10
Nearest Neighbour     NNge_weka            14.1        1.6e-10
DecisionTree          rpart2_caret         15.5        2.3e-08
OtherEnsembles        Dagging_weka         16.1        4.4e-12
NeuralNetworks        lvq_caret            16.3        1.6e-12
Bayesian              NaiveBayes_weka      17.9        1.6e-12
OtherMethods          pam_caret            18.3        2.8e-14
GLM                   bayesglm_caret       18.7        1.5e-11
PLSR                  simpls_R             19.0        3.4e-11
Stacking              Stacking_weka        22.5        2.8e-14
# A4.3 Tox21 challenge data set: Hyperparameters
For the Tox21 data set, the best hyperparameter setting was determined by a grid-search over all hyperparameter combinations using the validation set defined by the challenge winners [28]. The hyperparameter space was chosen to be similar to the hyperparameters that were tested by Mayr et al. [28]. The early stopping parameter was determined on the smoothed learning curves of 100 epochs of the validation set. Smoothing was done using moving averages of 10 consecutive values. We tested "rectangular" and "conic" layers – rectangular layers have a constant number of hidden units in each layer, conic layers start with the given number of hidden units in the first layer and then decrease the number of hidden units to the size of the output layer according to the geometric progression. All methods had the chance to adjust their hyperparameters to the data set at hand.
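The early-stopping rule on smoothed validation curves can be sketched as follows (window of 10 as in the text; the function names are ours):

```python
import numpy as np

def smooth(curve, window=10):
    """Moving average over `window` consecutive validation values."""
    kernel = np.ones(window) / window
    return np.convolve(curve, kernel, mode="valid")

def early_stopping_epoch(val_loss, window=10):
    """Epoch with the best (lowest) smoothed validation loss."""
    smoothed = smooth(np.asarray(val_loss), window)
    return int(np.argmin(smoothed)) + window - 1  # map back to an index of the original curve

val_loss = 1.0 / np.arange(1, 101) + 0.01 * np.random.randn(100)  # dummy 100-epoch curve
print(early_stopping_epoch(val_loss))
```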
Table A14: Hyperparameters considered for self-normalizing networks in the Tox21 data set.
Hyperparameter              Considered values
Number of hidden units      {1024, 2048}
Number of hidden layers     {2, 3, 4, 6, 8, 16, 32}
Learning rate               {0.01, 0.05, 0.1}
Dropout rate                {0.05, 0.10}
Layer form                  {rectangular, conic}
L2 regularization parameter {0.001, 0.0001, 0.00001}
Table A15: Hyperparameters considered for ReLU networks with MS initialization in the Tox21 data set.
Hyperparameter              Considered values
Number of hidden units      {1024, 2048}
Number of hidden layers     {2, 3, 4, 6, 8, 16, 32}
Learning rate               {0.01, 0.05, 0.1}
Dropout rate                {0.5, 0}
Layer form                  {rectangular, conic}
L2 regularization parameter {0.001, 0.0001, 0.00001}
Table A16: Hyperparameters considered for batch normalized networks in the Tox21 data set.
Hyperparameter              Considered values
Number of hidden units      {1024, 2048}
Number of hidden layers     {2, 3, 4, 6, 8, 16, 32}
Learning rate               {0.01, 0.05, 0.1}
Normalization               {Batchnorm}
Layer form                  {rectangular, conic}
L2 regularization parameter {0.001, 0.0001, 0.00001}
Table A17: Hyperparameters considered for weight normalized networks in the Tox21 data set.
Hyperparameter              Considered values
Number of hidden units      {1024, 2048}
Number of hidden layers     {2, 3, 4, 6, 8, 16, 32}
Learning rate               {0.01, 0.05, 0.1}
Normalization               {Weightnorm}
Dropout rate                {0, 0.5}
Layer form                  {rectangular, conic}
L2 regularization parameter {0.001, 0.0001, 0.00001}
Table A18: Hyperparameters considered for layer normalized networks in the Tox21 data set.
Hyperparameter              Considered values
Number of hidden units      {1024, 2048}
Number of hidden layers     {2, 3, 4, 6, 8, 16, 32}
Learning rate               {0.01, 0.05, 0.1}
Normalization               {Layernorm}
Dropout rate                {0, 0.5}
Layer form                  {rectangular, conic}
L2 regularization parameter {0.001, 0.0001, 0.00001}
Table A19: Hyperparameters considered for Highway networks in the Tox21 data set.
Hyperparameter              Considered values
Number of hidden layers     {2, 3, 4, 6, 8, 16, 32}
Learning rate               {0.01, 0.05, 0.1}
Dropout rate                {0, 0.5}
L2 regularization parameter {0.001, 0.0001, 0.00001}
Table A20: Hyperparameters considered for Residual networks in the Tox21 data set.
Hyperparameter              Considered values
Number of blocks            {2, 3, 4, 6, 8, 16}
Number of neurons per block {1024, 2048}
Block form                  {rectangular, diavolo}
Bottleneck                  {25%, 50%}
Learning rate               {0.01, 0.05, 0.1}
L2 regularization parameter {0.001, 0.0001, 0.00001}
Figure A8: Distribution of network inputs of an SNN for the Tox21 data set. The plots show the distribution of network inputs z of the second layer of a typical Tox21 network. The red curves display a kernel density estimator of the network inputs and the black curve is the density of a standard normal distribution. Left panel: At initialization time before learning. The distribution of network inputs is close to a standard normal distribution. Right panel: After 40 epochs of learning. The distributions of network inputs are close to a normal distribution.
Distribution of network inputs. We empirically checked the assumption that the distribution of network inputs can be well approximated by a normal distribution. To this end, we investigated the density of the network inputs before and during learning and found that these densities are close to normal distributions (see Figure A8).
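The density comparison of Figure A8 can be reproduced with a standard kernel density estimate; the sketch below assumes `z` holds the collected network inputs of one layer (here replaced by a placeholder sample):

```python
import numpy as np
from scipy.stats import gaussian_kde, norm

def compare_to_standard_normal(z, grid=np.linspace(-4, 4, 401)):
    """Return the KDE of the network inputs and the standard normal density on a common grid."""
    kde = gaussian_kde(z)
    return kde(grid), norm.pdf(grid)

z = np.random.randn(10000)  # placeholder for the actual second-layer inputs
kde_vals, normal_vals = compare_to_standard_normal(z)
print("max absolute density difference:", np.abs(kde_vals - normal_vals).max())
```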
# A4.4 HTRU2 data set: Hyperparameters
For the HTRU2 data set, the best hyperparameter setting was determined by a grid-search over all hyperparameter combinations using one of the 9 non-testing folds as validation fold in a nested cross-validation procedure. Concretely, if M was the testing fold, we used M − 1 as validation fold, and for M = 1 we used fold 10 for validation. The early stopping parameter was determined on the smoothed learning curves of 100 epochs of the validation set. Smoothing was done using moving averages of 10 consecutive values. We tested "rectangular" and "conic" layers – rectangular layers have a constant number of hidden units in each layer, conic layers start with the given number of hidden units in the first layer and then decrease the number of hidden units to the size of the output layer according to the geometric progression. All methods had the chance to adjust their hyperparameters to the data set at hand.
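The validation-fold assignment described above amounts to the following one-liner (folds numbered 1 to 10):

```python
def validation_fold(test_fold, n_folds=10):
    """Fold used for validation when `test_fold` is held out; fold 1 wraps around to fold 10."""
    return n_folds if test_fold == 1 else test_fold - 1

assert [validation_fold(m) for m in range(1, 11)] == [10, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```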
Table A21: Hyperparameters considered for self-normalizing networks on the HTRU2 data set.
Hyperparameter             Considered values
Number of hidden units     {256, 512, 1024}
Number of hidden layers    {2, 4, 8, 16, 32}
Learning rate              {0.1, 0.01, 1}
Dropout rate               {0, 0.05}
Layer form                 {rectangular, conic}
Table A22: Hyperparameters considered for ReLU networks with Microsoft initialization on the HTRU2 data set.
Hyperparameter             Considered values
Number of hidden units     {256, 512, 1024}
Number of hidden layers    {2, 4, 8, 16, 32}
Learning rate              {0.1, 0.01, 1}
Dropout rate               {0, 0.5}
Layer form                 {rectangular, conic}
Table A23: Hyperparameters considered for BatchNorm networks on the HTRU2 data set.
Hyperparameter             Considered values
Number of hidden units     {256, 512, 1024}
Number of hidden layers    {2, 4, 8, 16, 32}
Learning rate              {0.1, 0.01, 1}
Normalization              {Batchnorm}
Layer form                 {rectangular, conic}
Table A24: Hyperparameters considered for WeightNorm networks on the HTRU2 data set.
Hyperparameter             Considered values
Number of hidden units     {256, 512, 1024}
Number of hidden layers    {2, 4, 8, 16, 32}
Learning rate              {0.1, 0.01, 1}
Normalization              {Weightnorm}
Layer form                 {rectangular, conic}
Table A25: Hyperparameters considered for LayerNorm networks on the HTRU2 data set.
Hyperparameter             Considered values
Number of hidden units     {256, 512, 1024}
Number of hidden layers    {2, 4, 8, 16, 32}
Learning rate              {0.1, 0.01, 1}
Normalization              {Layernorm}
Layer form                 {rectangular, conic}
Table A26: Hyperparameters considered for Highway networks on the HTRU2 data set.
Hyperparameter             Considered values
Number of hidden layers    {2, 4, 8, 16, 32}
Learning rate              {0.1, 0.01, 1}
Dropout rate               {0, 0.5}
Table A27: Hyperparameters considered for Residual networks on the HTRU2 data set.
Hyperparameter             Considered values
Number of hidden units     {256, 512, 1024}
Number of residual blocks  {2, 3, 4, 8, 16}
Learning rate              {0.1, 0.01, 1}
Block form                 {rectangular, diavolo}
Bottleneck                 {0.25, 0.5}
# A5 Other ï¬xed points
A similar analysis with corresponding function domains can be performed for other fixed points, for example for µ = µ̃ = 0 and ν = ν̃ = 2, which leads to a SELU activation function with parameters α02 = 1.97126 and λ02 = 1.06071.
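A quick Monte Carlo check of this fixed point (our own sketch, only verifying the stated parameter values numerically): for network inputs z ~ N(0, ν) with ν = 2, the SELU activation with α02 and λ02 should again produce mean 0 and second moment 2.

```python
import numpy as np

ALPHA_02, LAMBDA_02 = 1.97126, 1.06071

def selu(z, alpha=ALPHA_02, lam=LAMBDA_02):
    """SELU activation: lam * z for z > 0, lam * alpha * (exp(z) - 1) otherwise."""
    return lam * np.where(z > 0, z, alpha * (np.exp(z) - 1.0))

rng = np.random.default_rng(0)
z = rng.normal(loc=0.0, scale=np.sqrt(2.0), size=10_000_000)  # variance nu = 2
a = selu(z)
print("mean       ~ 0:", a.mean())
print("2nd moment ~ 2:", (a ** 2).mean())
```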
# A6 Bounds determined by numerical methods
In this section we report bounds on previously discussed expressions as determined by numerical methods (min and max have been computed).
0(µ=0.06, ω=0, ν=1.35, τ=1.12) < ∂J11/∂µ < 0.00182415(µ=−0.1, ω=0.1, ν=1.47845, τ=0.883374)
0.905413(µ=0.1, ω=−0.1, ν=1.5, τ=1.25) < ∂J11/∂ω < 1.04143(µ=0.1, ω=0.1, ν=0.8, τ=0.8)
−0.0151177(µ=−0.1, ω=0.1, ν=0.8, τ=1.25) < ∂J11/∂ν < 0.0151177(µ=0.1, ω=−0.1, ν=0.8, τ=1.25)
−0.015194(µ=−0.1, ω=0.1, ν=0.8, τ=1.25) < ∂J11/∂τ < 0.015194(µ=0.1, ω=−0.1, ν=0.8, τ=1.25)
−0.0151177(µ=−0.1, ω=0.1, ν=0.8, τ=1.25) < ∂J12/∂µ < 0.0151177(µ=0.1, ω=−0.1, ν=0.8, τ=1.25)
−0.0151177(µ=0.1, ω=−0.1, ν=0.8, τ=1.25) < ∂J12/∂ω < 0.0151177(µ=0.1, ω=−0.1, ν=0.8, τ=1.25)
−0.00785613(µ=0.1, ω=−0.1, ν=1.5, τ=1.25) < ∂J12/∂ν < 0.0315805(µ=0.1, ω=0.1, ν=0.8, τ=0.8)
0.0799824(µ=0.1, ω=−0.1, ν=1.5, τ=1.25) < ∂J12/∂τ < 0.110267(µ=−0.1, ω=0.1, ν=0.8, τ=0.8)
0(µ=0.06, ω=0, ν=1.35, τ=1.12) < ∂J21/∂µ < 0.0174802(µ=0.1, ω=0.1, ν=0.8, τ=0.8)
0.0849308(µ=0.1, ω=−0.1, ν=0.8, τ=0.8) < ∂J21/∂ω < 0.695766(µ=0.1, ω=0.1, ν=1.5, τ=1.25)
−0.0600823(µ=0.1, ω=−0.1, ν=0.8, τ=1.25) < ∂J21/∂ν < 0.0600823(µ=−0.1, ω=0.1, ν=0.8, τ=1.25)
−0.0673083(µ=0.1, ω=−0.1, ν=1.5, τ=0.8) < ∂J21/∂τ < 0.0673083(µ=−0.1, ω=0.1, ν=1.5, τ=0.8)
−0.0600823(µ=0.1, ω=−0.1, ν=0.8, τ=1.25) < ∂J22/∂µ < 0.0600823(µ=−0.1, ω=0.1, ν=0.8, τ=1.25)
−0.0600823(µ=0.1, ω=−0.1, ν=0.8, τ=1.25) < ∂J22/∂ω < 0.0600823(µ=−0.1, ω=0.1, ν=0.8, τ=1.25)
−0.276862(µ=−0.01, ω=−0.01, ν=0.8, τ=1.25) < ∂J22/∂ν < −0.084813(µ=−0.1, ω=0.1, ν=1.5, τ=0.8)
0.562302(µ=0.1, ω=−0.1, ν=1.5, τ=1.25) < ∂J22/∂τ < 0.664051(µ=0.1, ω=0.1, ν=0.8, τ=0.8)
(321)
∂J11/∂µ ≤ 0.00182415 (0.0031049101995398316)
∂J11/∂ω ≤ 1.04143 (1.055872374194189)
∂J11/∂ν ≤ 0.0151177 (0.031242911235461816)
∂J11/∂τ ≤ 0.015194 (0.03749149348255419)
∂J12/∂µ ≤ 0.0151177 (0.031242911235461816)
∂J12/∂ω ≤ 0.0151177 (0.031242911235461816)
∂J12/∂ν ≤ 0.0315805 (0.21232788238624354)
∂J12/∂τ ≤ 0.110267 (0.2124377655377270)
∂J21/∂µ ≤ 0.0174802 (0.02220441024325437)
∂J21/∂ω ≤ 0.695766 (1.146955401845684)
∂J21/∂ν ≤ 0.0600823 (0.14983446469110305)
∂J21/∂τ ≤ 0.0673083 (0.17980135762932363)
∂J22/∂µ ≤ 0.0600823 (0.14983446469110305)
∂J22/∂ω ≤ 0.0600823 (0.14983446469110305)
∂J22/∂ν ≤ 0.562302 (1.805740052651535)
∂J22/∂τ ≤ 0.664051 (2.396685907216327)
(322)
# A7 References
[1] Abramowitz, M. and Stegun, I. (1964). Handbook of Mathematical Functions, volume 55 of Applied Mathematics Series. National Bureau of Standards, 10th edition.
[2] Ba, J. L., Kiros, J. R., and Hinton, G. (2016). Layer normalization. arXiv preprint arXiv:1607.06450.
[3] Bengio, Y. (2013). Deep learning of representations: Looking forward. In Proceedings of the First International Conference on Statistical Language and Speech Processing, pages 1â37, Berlin, Heidelberg.
[4] Blinn, J. (1996). Consider the lowly 2Ã2 matrix. IEEE Computer Graphics and Applications, pages 82â88.
[5] Bradley, R. C. (1981). Central limit theorems under weak dependence. Journal of Multivariate Analysis, 11(1):1â16.
[6] Cire¸san, D. and Meier, U. (2015). Multi-column deep neural networks for ofï¬ine handwritten chinese character classiï¬cation. In 2015 International Joint Conference on Neural Networks (IJCNN), pages 1â6. IEEE.
[7] Clevert, D.-A., Unterthiner, T., and Hochreiter, S. (2015). Fast and accurate deep network learning by exponential linear units (ELUs). 5th International Conference on Learning Representations, arXiv:1511.07289.
[8] Dugan, P., Clark, C., LeCun, Y., and Van Parijs, S. (2016). Phase 4: Dcl system using deep learning approaches for land-based or ship-based real-time recognition and localization of marine mammals-distributed processing and big data applications. arXiv preprint arXiv:1605.00982.
[9] Esteva, A., Kuprel, B., Novoa, R., Ko, J., Swetter, S., Blau, H., and Thrun, S. (2017). Nature, Dermatologist-level classiï¬cation of skin cancer with deep neural networks. 542(7639):115â118.
[10] Fernández-Delgado, M., Cernadas, E., Barro, S., and Amorim, D. (2014). Do we need hundreds of classiï¬ers to solve real world classiï¬cation problems. Journal of Machine Learning Research, 15(1):3133â3181.
[11] Goldberg, D. (1991). What every computer scientist should know about ï¬oating-point arithmetic. ACM Comput. Surv., 223(1):5â48.
[12] Graves, A., Mohamed, A., and Hinton, G. (2013). Speech recognition with deep recurrent neural networks. In IEEE International conference on acoustics, speech and signal processing (ICASSP), pages 6645â6649.
[13] Graves, A. and Schmidhuber, J. (2009). Ofï¬ine handwriting recognition with multidimensional recurrent neural networks. In Advances in neural information processing systems, pages 545â552.
[14] Gulshan, V., Peng, L., Coram, M., Stumpe, M. C., Wu, D., Narayanaswamy, A., Venugopalan, S., Widner, K., Madams, T., Cuadros, J., et al. (2016). Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA, 316(22):2402â2410.
[15] Harrison, J. (1999). A machine-checked theory of ï¬oating point arithmetic. In Bertot, Y., Dowek, G., Hirschowitz, A., Paulin, C., and Théry, L., editors, Theorem Proving in Higher Order Logics: 12th International Conference, TPHOLsâ99, volume 1690 of Lecture Notes in Computer Science, pages 113â130. Springer-Verlag.
[16] He, K., Zhang, X., Ren, S., and Sun, J. (2015a). Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[17] He, K., Zhang, X., Ren, S., and Sun, J. (2015b). Delving deep into rectiï¬ers: Surpassing human-level performance on imagenet classiï¬cation. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pages 1026â1034.
[18] Hochreiter, S. and Schmidhuber, J. (1997). Long short-term memory. Neural Computation, 9(8):1735â1780.
[19] Huval, B., Wang, T., Tandon, S., et al. (2015). An empirical evaluation of deep learning on highway driving. arXiv preprint arXiv:1504.01716.
[20] Ioffe, S. and Szegedy, C. (2015). Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of The 32nd International Conference on Machine Learning, pages 448â456.
[21] Kahan, W. (2004). A logarithm too clever by half. Technical report, University of California, Berkeley.
[22] Korolev, V. and Shevtsova, I. (2012). An improvement of the BerryâEsseen inequality with applications to Poisson and mixed Poisson random sums. Scandinavian Actuarial Journal, 2012(2):81â105.
[23] Krizhevsky, A., Sutskever, I., and Hinton, G. (2012). Imagenet classiï¬cation with deep convolu- tional neural networks. In Advances in Neural Information Processing Systems, pages 1097â1105.
[24] LeCun, Y. and Bengio, Y. (1995). Convolutional networks for images, speech, and time series. The handbook of brain theory and neural networks, 3361(10):1995.
[25] LeCun, Y., Bengio, Y., and Hinton, G. (2015). Deep learning. Nature, 521(7553):436â444.
[26] Loosemore, S., Stallman, R. M., McGrath, R., Oram, A., and Drepper, U. (2016). The GNU C Library: Application Fundamentals. GNU Press, Free Software Foundation, 51 Franklin St, Fifth Floor, Boston, MA 02110-1301, USA, 2.24 edition.
[27] Lyon, R., Stappers, B., Cooper, S., Brooke, J., and Knowles, J. (2016). Fifty years of pulsar candidate selection: From simple ï¬lters to a new principled real-time classiï¬cation approach. Monthly Notices of the Royal Astronomical Society, 459(1):1104â1123.
[28] Mayr, A., Klambauer, G., Unterthiner, T., and Hochreiter, S. (2016). DeepTox: Toxicity prediction using deep learning. Frontiers in Environmental Science, 3:80.
[29] Muller, J.-M. (2005). On the deï¬nition of ulp(x). Technical Report Research report RR2005-09, Laboratoire de lâInformatique du Parallélisme.
[30] Ren, C. and MacKenzie, A. R. (2007). Closed-form approximations to the error and comple- mentary error functions and their applications in atmospheric science. Atmos. Sci. Let., pages 70â73.
[31] Sak, H., Senior, A., Rao, K., and Beaufays, F. (2015). Fast and accurate recurrent neural network acoustic models for speech recognition. arXiv preprint arXiv:1507.06947.
[32] Salimans, T. and Kingma, D. P. (2016). Weight normalization: A simple reparameterization to accelerate training of deep neural networks. In Advances in Neural Information Processing Systems, pages 901â909.
[33] Schmidhuber, J. (2015). Deep learning in neural networks: An overview. Neural Networks, 61:85â117.
[34] Silver, D., Huang, A., Maddison, C., et al. (2016). Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484â489.
[35] Srivastava, R. K., Greff, K., and Schmidhuber, J. (2015). Training very deep networks. In Advances in Neural Information Processing Systems, pages 2377â2385.
[36] Sutskever, I., Vinyals, O., and Le, Q. V. (2014). Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104â3112.
[37] Wainberg, M., Alipanahi, B., and Frey, B. J. (2016). Are random forests truly the best classiï¬ers? Journal of Machine Learning Research, 17(110):1â5.
# List of Figures
1 FNN and SNN training error curves
2 Visualization of the mapping g
A3 Graph of the main subfunction of the derivative of the second moment
A4 erfc(x)
# List of Tables
1 Comparison of seven FNNs on 121 UCI tasks
2 Comparison of FNNs at the Tox21 challenge dataset
3 Comparison of FNNs and reference methods at HTRU2
A4 Hyperparameters considered for self-normalizing networks in the UCI data sets
A5 Hyperparameters considered for ReLU networks in the UCI data sets
A6 Hyperparameters considered for batch normalized networks in the UCI data sets
A7 Hyperparameters considered for weight normalized networks in the UCI data sets
A8 Hyperparameters considered for layer normalized networks in the UCI data sets
A9 Hyperparameters considered for Highway networks in the UCI data sets
A10 Hyperparameters considered for Residual networks in the UCI data sets
A11 Comparison of FNN methods on all 121 UCI data sets
A12 Method comparison on small UCI data sets
A13 Method comparison on large UCI data sets
A14 Hyperparameters considered for self-normalizing networks in the Tox21 data set
A15 Hyperparameters considered for ReLU networks in the Tox21 data set
A16 Hyperparameters considered for batch normalized networks in the Tox21 data set
A17 Hyperparameters considered for weight normalized networks in the Tox21 data set
A18 Hyperparameters considered for layer normalized networks in the Tox21 data set
A19 Hyperparameters considered for Highway networks in the Tox21 data set
A20 Hyperparameters considered for Residual networks in the Tox21 data set
A21 Hyperparameters considered for self-normalizing networks on the HTRU2 data set
A22 Hyperparameters considered for ReLU networks on the HTRU2 data set
A23 Hyperparameters considered for BatchNorm networks on the HTRU2 data set
A24 Hyperparameters considered for WeightNorm networks on the HTRU2 data set
A25 Hyperparameters considered for LayerNorm networks on the HTRU2 data set
A26 Hyperparameters considered for Highway networks on the HTRU2 data set
A27 Hyperparameters considered for Residual networks on the HTRU2 data set
# Brief index
Abramowitz bounds, 37
Banach Fixed Point Theorem, 13 bounds derivatives of Jacobian entries, 21 Jacobian entries, 23 mean and variance, 24 singular value, 25, 27
central limit theorem, 6 complementary error function bounds, 37 definition, 37 computer-assisted proof, 33 contracting variance, 29

definitions, 2 domain singular value, 19 Theorem 1, 12 Theorem 2, 12 Theorem 3, 13 dropout, 6

erf, 37 erfc, 37 error function bounds, 37 definition, 37 properties, 39 expanding variance, 32 experiments, 7, 85 astronomy, 8 HTRU2, 8, 95 hyperparameters, 95 methods compared, 7 Tox21, 7, 92 hyperparameters, 8, 92 UCI, 7, 85 details, 85 hyperparameters, 85 results, 86
initialization, 6
Jacobian, 20 bounds, 23 definition, 20 derivatives, 21 entries, 20, 23 singular value, 21 singular value bound, 25
lemmata, 19 Jacobian bound, 19
mapping g, 2, 4
definition, 11 mapping in domain, 29

self-normalizing neural networks, 2 SELU definition, 3 parameters, 4, 11
Theorem 1, 5, 12 proof, 13 proof sketch, 5 Theorem 2, 6, 12 proof, 14 Theorem 3, 6, 12 proof, 18
| {
"id": "1504.01716"
} |
1706.02677 | Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour | Deep learning thrives with large neural networks and large datasets. However,
larger networks and larger datasets result in longer training times that impede
research and development progress. Distributed synchronous SGD offers a
potential solution to this problem by dividing SGD minibatches over a pool of
parallel workers. Yet to make this scheme efficient, the per-worker workload
must be large, which implies nontrivial growth in the SGD minibatch size. In
this paper, we empirically show that on the ImageNet dataset large minibatches
cause optimization difficulties, but when these are addressed the trained
networks exhibit good generalization. Specifically, we show no loss of accuracy
when training with large minibatch sizes up to 8192 images. To achieve this
result, we adopt a hyper-parameter-free linear scaling rule for adjusting
learning rates as a function of minibatch size and develop a new warmup scheme
that overcomes optimization challenges early in training. With these simple
techniques, our Caffe2-based system trains ResNet-50 with a minibatch size of
8192 on 256 GPUs in one hour, while matching small minibatch accuracy. Using
commodity hardware, our implementation achieves ~90% scaling efficiency when
moving from 8 to 256 GPUs. Our findings enable training visual recognition
models on internet-scale data with high efficiency. | http://arxiv.org/pdf/1706.02677 | Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, Kaiming He | cs.CV, cs.DC, cs.LG | Tech report (v2: correct typos) | null | cs.CV | 20170608 | 20180430
# Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour
Priya Goyal Piotr Dollár Ross Girshick Pieter Noordhuis Lukasz Wesolowski Aapo Kyrola Andrew Tulloch Yangqing Jia Kaiming He
Facebook
# Abstract
Deep learning thrives with large neural networks and large datasets. However, larger networks and larger datasets result in longer training times that impede research and development progress. Distributed synchronous SGD offers a potential solution to this problem by dividing SGD minibatches over a pool of parallel workers. Yet to make this scheme efficient, the per-worker workload must be large, which implies nontrivial growth in the SGD minibatch size. In this paper, we empirically show that on the ImageNet dataset large minibatches cause optimization difficulties, but when these are addressed the trained networks exhibit good generalization. Specifically, we show no loss of accuracy when training with large minibatch sizes up to 8192 images. To achieve this result, we adopt a hyper-parameter-free linear scaling rule for adjusting learning rates as a function of minibatch size and develop a new warmup scheme that overcomes optimization challenges early in training. With these simple techniques, our Caffe2-based system trains ResNet-50 with a minibatch size of 8192 on 256 GPUs in one hour, while matching small minibatch accuracy. Using commodity hardware, our implementation achieves ~90% scaling efficiency when moving from 8 to 256 GPUs. Our findings enable training visual recognition models on internet-scale data with high efficiency.
Figure 1. ImageNet top-1 validation error vs. minibatch size. Error range of plus/minus two standard deviations is shown. We present a simple and general technique for scaling distributed synchronous SGD to minibatches of up to 8k images while maintaining the top-1 error of small minibatch training. For all minibatch sizes we set the learning rate as a linear function of the minibatch size and apply a simple warmup phase for the first few epochs of training. All other hyper-parameters are kept fixed. Using this simple approach, accuracy of our models is invariant to minibatch size (up to an 8k minibatch size). Our techniques enable a linear reduction in training time with ~90% efficiency as we scale to large minibatch sizes, allowing us to train an accurate 8k minibatch ResNet-50 model in 1 hour on 256 GPUs.
# 1. Introduction
Scale matters. We are in an unprecedented era in AI research history in which the increasing data and model scale is rapidly improving accuracy in computer vision [22, 41, 34, 35, 36, 16], speech [17, 40], and natural lan- guage processing [7, 38]. Take the profound impact in com- puter vision as an example: visual representations learned by deep convolutional neural networks [23, 22] show excel- lent performance on previously challenging tasks like Ima- geNet classiï¬cation [33] and can be transferred to difï¬cult perception problems such as object detection and segmen-
tation [8, 10, 28]. Moreover, this pattern generalizes: larger datasets and neural network architectures consistently yield improved accuracy across all tasks that beneï¬t from pre- training [22, 41, 34, 35, 36, 16]. But as model and data scale grow, so does training time; discovering the potential and limits of large-scale deep learning requires developing novel techniques to keep training time manageable.
The goal of this report is to demonstrate the feasibility of, and to communicate a practical guide to, large-scale train- ing with distributed synchronous stochastic gradient descent (SGD). As an example, we scale ResNet-50 [16] training, originally performed with a minibatch size of 256 images (using 8 Tesla P100 GPUs, training time is 29 hours), to larger minibatches (see Figure 1). In particular, we show that with a large minibatch size of 8192, we can train ResNet-50 in 1 hour using 256 GPUs while maintaining
the same level of accuracy as the 256 minibatch baseline. While distributed synchronous SGD is now commonplace, no existing results show that generalization accuracy can be maintained with minibatches as large as 8192 or that such high-accuracy models can be trained in such short time.
To tackle this unusually large minibatch size, we employ a simple and hyper-parameter-free linear scaling rule to ad- just the learning rate. While this guideline is found in ear- lier work [21, 4], its empirical limits are not well under- stood and informally we have found that it is not widely known to the research community. To successfully apply this rule, we present a new warmup strategy, i.e., a strategy of using lower learning rates at the start of training [16], to overcome early optimization difï¬culties. Importantly, not only does our approach match the baseline validation error, but also yields training error curves that closely match the small minibatch baseline. Details are presented in §2.
Our comprehensive experiments in §5 show that opti- mization difï¬culty is the main issue with large minibatches, rather than poor generalization (at least on ImageNet), in contrast to some recent studies [20]. Additionally, we show that the linear scaling rule and warmup generalize to more complex tasks including object detection and instance seg- mentation [9, 31, 14, 28], which we demonstrate via the recently developed Mask R-CNN [14]. We note that a ro- bust and successful guideline for addressing a wide range of minibatch sizes has not been presented in previous work. While the strategy we deliver is simple, its successful application requires correct implementation with respect to seemingly minor and often not well understood implemen- tation details within deep learning libraries. Subtleties in the implementation of SGD can lead to incorrect solutions that are difï¬cult to discover. To provide more helpful guidance we describe common pitfalls and the relevant implementa- tion details that can trigger these traps in §3. Our strategy applies regardless of
framework, but achieving efï¬cient linear scaling requires nontrivial com- munication algorithms. We use the open-source Caffe21 deep learning framework and Big Basin GPU servers [24], which operates efï¬ciently using standard Ethernet network- ing (as opposed to specialized network interfaces). We de- scribe the systems algorithms that enable our approach to operate near its full potential in §4.
The practical advances described in this report are help- ful across a range of domains. In an industrial domain, our system unleashes the potential of training visual models from internet-scale data, enabling training with billions of images per day. Of equal importance, in a research domain, we have found it to simplify migrating algorithms from a single-GPU to a multi-GPU implementation without requir- ing hyper-parameter search, e.g. in our experience migrat- ing Faster R-CNN [31] and ResNets [16] from 1 to 8 GPUs.
1http://www.caffe2.ai
# 2. Large Minibatch SGD
We start by reviewing the formulation of Stochastic Gradient Descent (SGD), which will be the foundation of our discussions in the following sections. We consider supervised learning by minimizing a loss L(w) of the form:

$$L(w) = \frac{1}{|X|} \sum_{x \in X} l(x, w). \quad (1)$$

Here w are the weights of a network, X is a labeled training set, and l(x, w) is the loss computed from samples x ∈ X and their labels y. Typically l is the sum of a classification loss (e.g., cross-entropy) and a regularization loss on w.

Minibatch Stochastic Gradient Descent [32], usually referred to simply as SGD in recent literature even though it operates on minibatches, performs the following update:

$$w_{t+1} = w_t - \eta \frac{1}{n} \sum_{x \in B} \nabla l(x, w_t). \quad (2)$$
Here B is a minibatch sampled from X and n = |B| is the minibatch size, η is the learning rate, and t is the iteration index. Note that in practice we use momentum SGD; we return to a discussion of momentum in §3.
# 2.1. Learning Rates for Large Minibatches
Our goal is to use large minibatches in place of small minibatches while maintaining training and generalization accuracy. This is of particular interest in distributed learn- ing, because it can allow us to scale to multiple workers2 us- ing simple data parallelism without reducing the per-worker workload and without sacriï¬cing model accuracy.
As we will show in comprehensive experiments, we found that the following learning rate scaling rule is sur- prisingly effective for a broad range of minibatch sizes:
Linear Scaling Rule: When the minibatch size is multiplied by k, multiply the learning rate by k.
All other hyper-parameters (weight decay, etc.) are kept un- changed. As we will show in §5, the linear scaling rule can help us to not only match the accuracy between using small and large minibatches, but equally importantly, to largely match their training curves, which enables rapid debugging and comparison of experiments prior to convergence.
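As a concrete restatement of the rule (our own sketch, not code from the paper; the 0.1 and 256 example values are the ones used later in §5):

```python
def scaled_lr(eta_base, minibatch_base, minibatch_new):
    """Linear Scaling Rule: when the minibatch grows by k = new/base,
    multiply the learning rate by the same factor k."""
    k = minibatch_new / minibatch_base
    return eta_base * k

# Example: eta = 0.1 at kn = 256 scales to eta = 3.2 at kn = 8192.
print(scaled_lr(0.1, 256, 8192))
```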
Interpretation. We present an informal discussion of the linear scaling rule and why it may be effective. Consider a network at iteration t with weights wt, and a sequence of k minibatches Bj for 0 ⤠j < k each of size n. We compare the effect of executing k SGD iterations with small minibatches Bj and learning rate η versus a single iteration with a large minibatch âªjBj of size kn and learning rate Ëη.
2We use the terms âworkerâ and âGPUâ interchangeably in this work, al- though other implementations of a âworkerâ are possible. âServerâ denotes a set of 8 GPUs that does not require communication over a network.
According to (2), after k iterations of SGD with learning rate η and a minibatch size of n we have:
$$w_{t+k} = w_t - \eta \frac{1}{n} \sum_{j<k} \sum_{x \in B_j} \nabla l(x, w_{t+j}). \quad (3)$$

On the other hand, taking a single step with the large minibatch ∪_j B_j of size kn and learning rate η̂ yields:

$$\hat{w}_{t+1} = w_t - \hat{\eta} \frac{1}{kn} \sum_{j<k} \sum_{x \in B_j} \nabla l(x, w_t). \quad (4)$$

As expected, the updates differ, and it is unlikely that ŵ_{t+1} = w_{t+k}. However, if we could assume ∇l(x, w_t) ≈ ∇l(x, w_{t+j}) for j < k, then setting η̂ = kη would yield ŵ_{t+1} ≈ w_{t+k}, and the updates from small and large minibatch SGD would be similar. Although this is a strong assumption, we emphasize that if it were true the two updates are similar only if we set η̂ = kη.
The above interpretation gives intuition for one case where we may hope the linear scaling rule to apply. In our experiments with η̂ = kη (and warmup), small and large minibatch SGD not only result in models with the same final accuracy, but also, the training curves match closely. Our empirical results suggest that the above approximation might be valid in large-scale, real-world data.

However, there are at least two cases when the condition ∇l(x, w_t) ≈ ∇l(x, w_{t+j}) will clearly not hold. First, in initial training when the network is changing rapidly, it does not hold. We address this by using a warmup phase, discussed in §2.2. Second, minibatch size cannot be scaled indefinitely: while results are stable for a large range of sizes, beyond a certain point accuracy degrades rapidly. Interestingly, this point is as large as ~8k in ImageNet experiments.
Discussion. The above linear scaling rule was adopted by Krizhevsky [21], if not earlier. However, Krizhevsky re- ported a 1% increase of error when increasing the minibatch size from 128 to 1024, whereas we show how to maintain accuracy across a much broader regime of minibatch sizes. Chen et al. [5] presented a comparison of numerous dis- tributed SGD variants, and although their work also em- ployed the linear scaling rule, it did not establish a small minibatch baseline. Li [25] (§4.6) showed distributed Ima- geNet training with minibatches up to 5120 without a loss in accuracy after convergence. However, their work did not demonstrate a hyper-parameter search-free rule for adjust- ing the learning rate as a function of minibatch size, which is a central contribution of our work.
In recent work, Bottou et al. [4] (§4.2) review theoretical tradeoffs of minibatching and show that with the linear scal- ing rule, solvers follow the same training curve as a function of number of examples seen, and suggest the learning rate should not exceed a maximum rate independent of mini- batch size (which justiï¬es warmup). Our work empirically tests these theories with unprecedented minibatch sizes.
# 2.2. Warmup
As we discussed, for large minibatches (e.g., 8k) the lin- ear scaling rule breaks down when the network is changing rapidly, which commonly occurs in early stages of train- ing. We ï¬nd that this issue can be alleviated by a properly designed warmup [16], namely, a strategy of using less ag- gressive learning rates at the start of training.
Constant warmup. The warmup strategy presented in [16] uses a low constant learning rate for the ï¬rst few epochs of training. As we will show in §5, we have found constant warmup particularly helpful for prototyping object detec- tion and segmentation methods [9, 31, 26, 14] that ï¬ne-tune pre-trained layers together with newly initialized layers.
In our ImageNet experiments with a large minibatch of size kn, we have tried to train with the low learning rate of η for the ï¬rst 5 epochs and then return to the target learn- ing rate of Ëη = kη. However, given a large k, we ï¬nd that this constant warmup is not sufï¬cient to solve the optimiza- tion problem, and a transition out of the low learning rate warmup phase can cause the training error to spike. This leads us to propose the following gradual warmup.
Gradual warmup. We present an alternative warmup that gradually ramps up the learning rate from a small to a large value. This ramp avoids a sudden increase of the learning rate, allowing healthy convergence at the start of training. In practice, with a large minibatch of size kn, we start from a learning rate of η and increment it by a constant amount at each iteration such that it reaches η̂ = kη after 5 epochs (results are robust to the exact duration of warmup). After the warmup, we go back to the original learning rate schedule.
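A minimal sketch of this ramp (our own illustration; only the start rate η, the factor k, and the 5-epoch duration come from the text above):

```python
def warmup_lr(iteration, iters_per_epoch, eta, k, warmup_epochs=5):
    """Gradual warmup: start at eta and increase the rate by a constant
    amount per iteration until it reaches k * eta after warmup_epochs;
    afterwards the original learning rate schedule takes over."""
    target = k * eta
    warmup_iters = warmup_epochs * iters_per_epoch
    if iteration >= warmup_iters:
        return target
    return eta + (target - eta) * iteration / warmup_iters
```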
# 2.3. Batch Normalization with Large Minibatches
Batch Normalization (BN) [19] computes statistics along the minibatch dimension: this breaks the independence of each sample's loss, and changes in minibatch size change the underlying definition of the loss function being optimized. In the following we will show that a commonly used "shortcut", which may appear to be a practical consideration to avoid communication overhead, is actually necessary for preserving the loss function when changing minibatch size.

We note that (1) and (2) assume the per-sample loss l(x, w) is independent of all other samples. This is not the case when BN is performed and activations are computed across samples. We write l_B(x, w) to denote that the loss of a single sample x depends on the statistics of all samples in its minibatch B. We denote the loss over a single minibatch B of size n as L(B, w) = (1/n) Σ_{x∈B} l_B(x, w). With BN, the training set can be thought of as containing all distinct subsets of size n drawn from the original training set X, which we denote as X^n. The training loss L(w) then becomes:

$$L(w) = \frac{1}{|X^n|} \sum_{B \in X^n} L(B, w). \quad (5)$$
If we view B as a âsingle sampleâ in X n, then the loss of each single sample B is computed independently.
Note that the minibatch size n over which the BN statis- tics are computed is a key component of the loss: if the per- worker minibatch sample size n is changed, it changes the underlying loss function L that is optimized. More specif- ically, the mean/variance statistics computed by BN with different n exhibit different levels of random variation.
In the case of distributed (and multi-GPU) training, if the per-worker sample size n is kept fixed and the total minibatch size is kn, it can be viewed as a minibatch of k samples with each sample B_j independently selected from X^n, so the underlying loss function is unchanged and is still defined in X^n. Under this point of view, in the BN setting after seeing k minibatches B_j, (3) and (4) become:

$$w_{t+k} = w_t - \eta \sum_{j<k} \nabla L(B_j, w_{t+j}), \quad (6)$$

$$\hat{w}_{t+1} = w_t - \hat{\eta} \frac{1}{k} \sum_{j<k} \nabla L(B_j, w_t). \quad (7)$$
Following similar logic as in §2.1, we set η̂ = kη and we keep the per-worker sample size n constant when we change the number of workers k.

In this work, we use n = 32 which has performed well for a wide range of datasets and networks [19, 16]. If n is adjusted, it should be viewed as a hyper-parameter of BN, not of distributed training. We also note that the BN statistics should not be computed across all workers, not only for the sake of reducing communication, but also for maintaining the same underlying loss function being optimized.
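As a small illustration of keeping BN local to each worker (our own sketch, not the paper's implementation):

```python
import numpy as np

def per_worker_batchnorm(x, eps=1e-5):
    """Normalize activations with statistics of the local worker batch only.
    x has shape (n, d): n per-worker samples, d features/channels."""
    mean = x.mean(axis=0)      # computed over the local n samples
    var = x.var(axis=0)        # never aggregated across workers
    return (x - mean) / np.sqrt(var + eps)

# Each worker normalizes its own n = 32 samples independently, so the
# underlying loss defined over X^n is unchanged as k grows.
workers = [np.random.randn(32, 8) for _ in range(2)]
outputs = [per_worker_batchnorm(b) for b in workers]
```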
# 3. Subtleties and Pitfalls of Distributed SGD
In practice a distributed implementation has many sub- tleties. Many common implementation errors change the deï¬nitions of hyper-parameters, leading to models that train but whose error may be higher than expected, and such is- sues can be difï¬cult to discover. While the remarks below are straightforward, they are important to consider explic- itly to faithfully implement the underlying solver.
Weight decay. Weight decay is actually the outcome of the gradient of an L2-regularization term in the loss function. More formally, the per-sample loss in (1) can be written as l(x, w) = (λ/2)||w||² + ε(x, w). Here (λ/2)||w||² is the sample-independent L2 regularization on the weights and ε(x, w) is a sample-dependent term such as the cross-entropy loss. The SGD update in (2) can be written as:

$$w_{t+1} = w_t - \eta \lambda w_t - \eta \frac{1}{n} \sum_{x \in B} \nabla \varepsilon(x, w_t). \quad (8)$$

In practice, usually only the sample-dependent term Σ ∇ε(x, w_t) is computed by backprop; the term λw_t is computed separately and added to the aggregated gradients
contributed by ε(x, wt). If there is no weight decay term, there are many equivalent ways of scaling the learning rate, including scaling the term ε(x, wt). However, as can be seen from (8), in general this is not the case. We summarize these observations in the following remark:
Remark 1: Scaling the cross-entropy loss is not equivalent to scaling the learning rate.

Momentum correction. Momentum SGD is a commonly adopted modification to the vanilla SGD in (2). A reference implementation of momentum SGD has the following form:

$$u_{t+1} = m\, u_t + \frac{1}{n} \sum_{x \in B} \nabla l(x, w_t), \qquad w_{t+1} = w_t - \eta\, u_{t+1}. \quad (9)$$
Here m is the momentum decay factor and u is the update tensor. A popular variant absorbs the learning rate η into the update tensor. Substituting vt for ηut in (9) yields:
$$v_{t+1} = m\, v_t + \eta \frac{1}{n} \sum_{x \in B} \nabla l(x, w_t), \qquad w_{t+1} = w_t - v_{t+1}. \quad (10)$$

For a fixed η, the two are equivalent. However, we note that while u only depends on the gradients and is independent of η, v is entangled with η. When η changes, to maintain equivalence with the reference variant in (9), the update for v should be: v_{t+1} = m (η_{t+1}/η_t) v_t + η_{t+1} (1/n) Σ_{x∈B} ∇l(x, w_t). We refer to the factor η_{t+1}/η_t as the momentum correction. We found that this is especially important for stabilizing training when η_{t+1} ≫ η_t, otherwise the history term v_t is too small which leads to instability (for η_{t+1} < η_t, momentum correction is less critical). This leads to our second remark:

Remark 2: Apply momentum correction after changing learning rate if using (10).

Gradient aggregation. For k workers each with a per-worker minibatch of size n, following (4), gradient aggregation must be performed over the entire set of kn examples according to (1/kn) Σ_j Σ_{x∈B_j} ∇l(x, w_t). Loss layers are typically implemented to compute an average loss over their local input, which amounts to computing a per-worker loss of Σ l(x, w_t)/n. Given this, correct aggregation requires averaging the k gradients in order to recover the missing 1/k factor. However, standard communication primitives like allreduce [11] perform summing, not averaging. Therefore, it is more efficient to absorb the 1/k scaling into the loss, in which case only the loss's gradient with respect to its input needs to be scaled, removing the need to scale the entire gradient vector. We summarize this as follows:
Remark 3: Normalize the per-worker loss by total minibatch size kn, not per-worker size n. We also note that it may be incorrect to âcancel kâ by setting Ëη = η (not kη) and normalizing the loss by 1/n (not 1/kn), which can lead to incorrect weight decay (see Remark 1).
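A compact sketch of Remarks 1-3 in code (our own illustration, not the paper's implementation; `grad` is assumed to be the mean gradient of the sample-dependent loss over the local minibatch):

```python
import numpy as np

def momentum_sgd_step(w, v, grad, lr, prev_lr, momentum=0.9, weight_decay=1e-4):
    """Eta-absorbed momentum variant of (10), with weight decay added to the
    gradient rather than folded into the loss (see Remark 1), and with the
    momentum correction lr/prev_lr applied when the rate changes (Remark 2)."""
    full_grad = grad + weight_decay * w
    v = momentum * (lr / prev_lr) * v + lr * full_grad
    return w - v, v

def per_worker_loss(per_sample_losses, k):
    """Remark 3: normalize the per-worker loss by kn, not by n, so that a
    summing allreduce of the k workers' gradients yields the correct mean."""
    n = len(per_sample_losses)
    return np.sum(per_sample_losses) / (k * n)
```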
Data shuffling. SGD is typically analyzed as a process that samples data randomly with replacement. In practice, common SGD implementations apply random shuffling of the training set during each SGD epoch, which can give better results [3, 13]. To provide fair comparisons with baselines that use shuffling (e.g., [16]), we ensure the samples in one epoch done by k workers are from a single consistent random shuffling of the training set. To achieve this, for each epoch we use a random shuffling that is partitioned into k parts, each of which is processed by one of the k workers. Failing to correctly implement random shuffling in multiple workers may lead to noticeably different behavior, which may contaminate results and conclusions. In summary:

Remark 4: Use a single random shuffling of the training data (per epoch) that is divided amongst all k workers.
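A minimal sketch of this scheme (our own illustration): one random shuffle per epoch, split into k disjoint shards, one per worker.

```python
import random

def epoch_shards(num_examples, k, epoch, seed=0):
    """Return k disjoint index lists that together form a single consistent
    random shuffling of the training set for this epoch (Remark 4)."""
    rng = random.Random(seed + epoch)        # same shuffle on every worker
    indices = list(range(num_examples))
    rng.shuffle(indices)
    shard = num_examples // k                # remainder dropped for simplicity
    return [indices[i * shard:(i + 1) * shard] for i in range(k)]

# Worker r processes epoch_shards(N, k, epoch)[r]; jointly the k shards cover
# one shuffle of the data, matching the single-worker baseline's behavior.
```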
# 4. Communication
In order to scale beyond the 8 GPUs in a single Big Basin server [24], gradient aggregation has to span across servers on a network. To allow for near perfect linear scaling, the aggregation must be performed in parallel with backprop. This is possible because there is no data dependency be- tween gradients across layers. Therefore, as soon as the gra- dient for a layer is computed, it is aggregated across work- ers, while gradient computation for the next layer continues (as discussed in [5]). We give full details next.
# 4.1. Gradient Aggregation
For every gradient, aggregation is done using an allre- duce operation (similar to the MPI collective operation MPI Allreduce [11]). Before allreduce starts every GPU has its locally computed gradients and after allreduce completes every GPU has the sum of all k gradients. As the number of parameters grows and compute performance of GPUs in- creases, it becomes harder to hide the cost of aggregation in the backprop phase. Training techniques to overcome these effects are beyond the scope of this work (e.g., quantized gradients [18], Block-Momentum SGD [6]). However, at the scale of this work, collective communication was not a bottleneck, as we were able to achieve near-linear SGD scaling by using an optimized allreduce implementation.
Our implementation of allreduce consists of three phases for communication within and across servers: (1) buffers from the 8 GPUs within a server are summed into a sin- gle buffer for each server, (2) the results buffers are shared and summed across all servers, and ï¬nally (3) the results are broadcast onto each GPU. For the local reduction and broadcast in phases (1) and (3) we used NVIDIA Collective Communication Library (NCCL)3 for buffers of size 256 KB or more and a simple implementation consisting of a
# 3https://developer.nvidia.com/nccl
number of GPU-to-host memory copies and a CPU reduc- tion otherwise. NCCL uses GPU kernels to accelerate in- traserver collectives, so this approach dedicates more time on the GPU to backprop while using the CPU resources that would otherwise have been idle to improve throughput.
For interserver allreduce, we implemented two of the best algorithms for bandwidth-limited scenarios: the recursive halving and doubling algorithm [30, 37] and the bucket algorithm (also known as the ring algorithm) [2]. For both, each server sends and receives 2(p−1)/p · b bytes of data, where b is the buffer size in bytes and p is the number of servers. While the halving/doubling algorithm consists of 2 log2(p) communication steps, the ring algorithm consists of 2(p − 1) steps. This generally makes the halving/doubling algorithm faster in latency-limited scenarios (i.e., for small buffer sizes and/or large server counts). In practice, we found the halving/doubling algorithm to perform much better than the ring algorithm for buffer sizes up to a million elements (and even higher on large server counts). On 32 servers (256 GPUs), using halving/doubling led to a speedup of 3× over the ring algorithm.
The halving/doubling algorithm consists of a reduce-scatter collective followed by an allgather. In the first step of reduce-scatter, servers communicate in pairs (rank 0 with 1, 2 with 3, etc.), sending and receiving for different halves of their input buffers. For example, rank 0 sends the second half of its buffer to 1 and receives the first half of the buffer from 1. A reduction over the received data is performed before proceeding to the next step, where the distance to the destination rank is doubled while the data sent and received is halved. After the reduce-scatter phase is finished, each server has a portion of the final reduced vector.

This is followed by the allgather phase, which retraces the communication pattern from the reduce-scatter in reverse, this time simply concatenating portions of the final reduced vector. At each server, the portion of the buffer that was being sent in the reduce-scatter is received in the allgather, and the portion that was being received is now sent. To support a non-power-of-two number of servers, we used the binary blocks algorithm [30]. This is a generalized version of the halving/doubling algorithm where servers are partitioned into power-of-two blocks and two additional communication steps are used, one immediately after the intrablock reduce-scatter and one before the intrablock allgather. Non-power-of-two cases have some degree of load imbalance compared to power-of-two, though in our runs we did not see significant performance degradation.
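The communication pattern just described can be checked with a toy simulation (our own code, for illustration only; it assumes a power-of-two number of servers and omits the binary blocks extension):

```python
import numpy as np

def allreduce_halving_doubling(buffers):
    """Simulate recursive halving/doubling allreduce. `buffers` holds one
    gradient vector per simulated server; returns the summed vector that
    every rank ends up with."""
    p, n = len(buffers), len(buffers[0])
    steps = p.bit_length() - 1                       # log2(p)
    data = [np.asarray(b, dtype=np.float64).copy() for b in buffers]
    seg = [(0, n)] * p                               # slice owned by each rank

    # Reduce-scatter: distance doubles, exchanged data halves at every step.
    for s in range(steps):
        new_data, new_seg = [d.copy() for d in data], list(seg)
        for r in range(p):
            partner = r ^ (1 << s)
            lo, hi = seg[r]
            mid = (lo + hi) // 2
            keep = (lo, mid) if (r >> s) & 1 == 0 else (mid, hi)
            # Add the partner's partial sums for the half this rank keeps.
            new_data[r][keep[0]:keep[1]] = (data[r][keep[0]:keep[1]] +
                                            data[partner][keep[0]:keep[1]])
            new_seg[r] = keep
        data, seg = new_data, new_seg

    # Allgather: retrace the pattern in reverse, concatenating reduced slices.
    for s in reversed(range(steps)):
        new_data, new_seg = [d.copy() for d in data], list(seg)
        for r in range(p):
            partner = r ^ (1 << s)
            plo, phi = seg[partner]
            new_data[r][plo:phi] = data[partner][plo:phi]
            new_seg[r] = (min(seg[r][0], plo), max(seg[r][1], phi))
        data, seg = new_data, new_seg
    return data

# Sanity check with 4 simulated servers.
grads = [np.arange(8.0) * (r + 1) for r in range(4)]
assert all(np.allclose(x, sum(grads)) for x in allreduce_halving_doubling(grads))
```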
# 4.2. Software
The allreduce algorithms described are implemented in Gloo4, a library for collective communication. It supports
# 4https://github.com/facebookincubator/gloo
multiple communication contexts, which means no addi- tional synchronization is needed to execute multiple allre- duce instances in parallel. Local reduction and broadcast (described as phases (1) and (3)) are pipelined with inter- server allreduce where possible.
Caffe2 supports multi-threaded execution of the compute graph that represents a training iteration. Whenever there is no data dependency between subgraphs, multiple threads can execute those subgraphs in parallel. Applying this to backprop, local gradients can be computed in sequence, without dealing with allreduce or weight updates. This means that during backprop, the set of runnable subgraphs may grow faster than we can execute them. For subgraphs that contain an allreduce run, all servers must choose to exe- cute the same subgraph from the set of runnable subgraphs. Otherwise, we risk distributed deadlock where servers are attempting to execute non-intersecting sets of subgraphs. With allreduce being a collective operation, servers would time out waiting. To ensure correct execution we impose a partial order on these subgraphs. This is implemented using a cyclical control input, where completion of the n-th allre- duce unblocks execution of the (n + c)-th allreduce, with c being the maximum number of concurrent allreduce runs. Note that this number should be chosen to be lower than the number of threads used to execute the full compute graph.
# 4.3. Hardware
We used Facebookâs Big Basin [24] GPU servers for our experiments. Each server contains 8 NVIDIA Tesla P100 GPUs that are interconnected with NVIDIA NVLink. For local storage, each server has 3.2TB of NVMe SSDs. the servers have a Mellanox For network connectivity, ConnectX-4 50Gbit Ethernet network card and are con- nected to Wedge100 [1] Ethernet switches.
We have found 50Gbit of network bandwidth sufficient for distributed synchronous SGD for ResNet-50, per the following analysis. ResNet-50 has approximately 25 million parameters. This means the total size of parameters is 25 · 10^6 · sizeof(float) = 100MB. Backprop for ResNet-50 on a single NVIDIA Tesla P100 GPU takes 120 ms. Given that allreduce requires ~2× bytes on the network compared to the value it operates on, this leads to a peak bandwidth requirement of 200MB/0.125s = 1600MB/s, or 12.8 Gbit/s, not taking into account communication overhead. When we add a smudge factor for network overhead, we reach a peak bandwidth requirement for ResNet-50 of ~15 Gbit/s.
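The arithmetic behind this estimate can be reproduced directly (our own back-of-the-envelope script using the numbers quoted above):

```python
params = 25e6                      # approximate ResNet-50 parameter count
grad_bytes = params * 4            # sizeof(float) -> ~100 MB of gradients
allreduce_bytes = 2 * grad_bytes   # allreduce moves ~2x the data it reduces
backprop_seconds = 0.125           # window in which to hide communication

gbit_per_s = allreduce_bytes / backprop_seconds * 8 / 1e9
print(f"peak bandwidth requirement ~ {gbit_per_s:.1f} Gbit/s")  # ~12.8
```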
As this peak bandwidth requirement only holds during backprop, the network is free to be used for different tasks that are less latency sensitive then aggregation (e.g. reading data or saving network snapshots) during the forward pass.
# 5. Main Results and Analysis
Our main result is that we can train ResNet-50 [16] on ImageNet [33] using 256 workers in one hour, while match- ing the accuracy of small minibatch training. Applying the linear scaling rule along with a warmup strategy allows us to seamlessly scale between small and large minibatches (up to 8k images) without tuning additional hyper-parameters or impacting accuracy. In the following subsections we: (1) describe experimental settings, (2) establish the effec- tiveness of large minibatch training, (3) perform a deeper experimental analysis, (4) show our ï¬ndings generalize to object detection/segmentation, and (5) provide timings.
# 5.1. Experimental Settings
The 1000-way ImageNet classiï¬cation task [33] serves as our main experimental benchmark. Models are trained on the â¼1.28 million training images and evaluated by top- 1 error on the 50,000 validation images.
We use the ResNet-50 [16] variant from [12], noting that the stride-2 convolutions are on 3Ã3 layers instead of on 1Ã1 layers as in [16]. We use Nesterov momentum [29] with m of 0.9 following [12] but note that standard mo- mentum as was used in [16] is equally effective. We use a weight decay λ of 0.0001 and following [16] we do not ap- ply weight decay on the learnable BN coefï¬cients (namely, γ and β in [19]). In order to keep the training objective ï¬xed, which depends on the BN batch size n as described in §2.3, we use n = 32 throughout, regardless of the overall minibatch size. As in [12], we compute the BN statistics using running average (with momentum 0.9).
All models are trained for 90 epochs regardless of minibatch sizes. We apply the linear scaling rule from §2.1 and use a learning rate of η = 0.1 · kn/256 that is linear in the minibatch size kn. With k = 8 workers (GPUs) and n = 32 samples per worker, η = 0.1 as in [16]. We call this number (0.1 · kn/256) the reference learning rate, and reduce it by 1/10 at the 30-th, 60-th, and 80-th epoch, similar to [16].
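Putting §2.1 and §2.2 together, the full schedule used in these experiments can be sketched as follows (our own code; the constants 0.1, 256, the 5 warmup epochs, and the 30/60/80 decay points are taken from the text above):

```python
def lr_at_epoch(epoch, k, n, base_lr=0.1, warmup_epochs=5,
                decay_epochs=(30, 60, 80), decay_factor=0.1):
    """Reference rate 0.1 * kn / 256 with gradual warmup over the first
    5 epochs (for large minibatches) and 1/10 decay at epochs 30, 60, 80."""
    ref_lr = base_lr * (k * n) / 256.0
    if k * n > 256 and epoch < warmup_epochs:
        # gradual warmup: ramp linearly from base_lr up to ref_lr
        return base_lr + (ref_lr - base_lr) * epoch / warmup_epochs
    drops = sum(1 for e in decay_epochs if epoch >= e)
    return ref_lr * decay_factor ** drops

# kn = 8192: warm up toward eta = 3.2, then decay to 0.32 / 0.032 / 0.0032.
for e in (0, 2, 5, 30, 60, 80):
    print(e, round(lr_at_epoch(e, k=256, n=32), 4))
```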
We adopt the initialization of [15] for all convolutional layers. The 1000-way fully-connected layer is initialized by drawing weights from a zero-mean Gaussian with standard deviation of 0.01. We have found that although SGD with a small minibatch is not sensitive to initialization due to BN, this is not the case for a substantially large minibatch. Addi- tionally we require an appropriate warmup strategy to avoid optimization difï¬culties in early training.
For BN layers, the learnable scaling coefï¬cient γ is ini- tialized to be 1, except for each residual blockâs last BN where γ is initialized to be 0. Setting γ = 0 in the last BN of each residual block causes the forward/backward signal ini- tially to propagate through the identity shortcut of ResNets, which we found to ease optimization at the start of training. This initialization improves all models but is particularly helpful for large minibatch training as we will show.
We use scale and aspect ratio data augmentation [36] as in [12]. The network input image is a 224Ã224 pixel ran- dom crop from an augmented image or its horizontal ï¬ip. The input image is normalized by the per-color mean and standard deviation, as in [12].
Handling random variation. As models are subject to random variation in training, we compute a modelâs error rate as the median error of the ï¬nal 5 epochs. Moreover, we report the mean and standard deviation (std) of the error from 5 independent runs. This gives us more conï¬dence in our results and also provides a measure of model stability.
The random variation of ImageNet models has generally not been reported in previous work (largely due to resource limitations). We emphasize that ignoring random variation may cause unreliable conclusions, especially if results are from a single trial, or the best of many.
Baseline. Under these settings, we establish a ResNet-50 baseline using k = 8 (8 GPUs in one server) and n = 32 images per worker (minibatch size of kn = 256), as in [16]. Our baseline has a top-1 validation error of 23.60% ±0.12. As a reference, ResNet-50 from fb.resnet.torch [12] has 24.01% error, and that of the original ResNet paper [16] has 24.7% under weaker data augmentation.
# 5.2. Optimization or Generalization Issues?
We establish our main results on large minibatch train- ing by exploring optimization and generalization behaviors. We will demonstrate that with a proper warmup strategy, large minibatch SGD can both match the training curves of small minibatch SGD and also match the validation error. In other words, in our experiments both optimization and generalization of large minibatch training matches that of small minibatch training. Moreover, in §5.4 we will show that these models exhibit good generalization behavior to the object detection/segmentation transfer tasks, matching the transfer quality of small minibatch models.
For the following results, we use k = 256 and n = 32, which results in a minibatch size kn = 8k (we use â1kâ to denote 1024). As discussed, our baseline has a mini- batch size of kn = 256 and a reference learning rate of η = 0.1. Applying the linear scaling rule gives η = 3.2 as the reference learning rate for our large minibatch runs. We test three warmup strategies as discussed in §2.2: no warmup, constant warmup with η = 0.1 for 5 epochs, and gradual warmup which starts with η = 0.1 and is linearly increased to η = 3.2 over 5 epochs. All models are trained from scratch and all other hyper-parameters are kept ï¬xed. We emphasize that while better results for any particular minibatch size could be obtained by optimizing hyper-parameters for that case; our goal is to match er- rors across minibatch sizes by using a general strategy that avoids hyper-parameter tuning for each minibatch size.
baseline (single server):   k=8,   n=32, kn=256, η=0.1, top-1 error 23.60 ±0.12
no warmup, Figure 2a:       k=256, n=32, kn=8k,  η=3.2, top-1 error 24.84 ±0.37
constant warmup, Figure 2b: k=256, n=32, kn=8k,  η=3.2, top-1 error 25.88 ±0.56
gradual warmup, Figure 2c:  k=256, n=32, kn=8k,  η=3.2, top-1 error 23.74 ±0.09
Table 1. Validation error on ImageNet using ResNet-50 (mean and std computed over 5 trials). We compare the small minibatch model (kn=256) with large minibatch models (kn=8k) with vari- ous warmup strategies. Observe that the top-1 validation error for small and large minibatch training (with gradual warmup) is quite close: 23.60% ±0.12 vs. 23.74% ±0.09, respectively.
Training error. Training curves are shown in Figure 2. With no warmup (2a), the training curve for large minibatch of kn = 8k is inferior to training with a small minibatch of kn = 256 across all epochs. A constant warmup strategy (2b) actually degrades results: although the small constant learning rate can decrease error during warmup, the error spikes immediately after and training never fully recovers.
Our main result is that with gradual warmup, large mini- batch training error matches the baseline training curve ob- tained with small minibatches, see Figure 2c. Although the large minibatch curve starts higher due to the low η in the warmup phase, it catches up shortly thereafter. Af- ter about 20 epochs, the small and large minibatch training curves match closely. The comparison between no warmup and gradual warmup suggests that large minibatch sizes are challenged by optimization difï¬culties in early training and if these difï¬culties are addressed, the training error and its curve can match a small minibatch baseline closely.
Validation error. Table 1 shows the validation error for the three warmup strategies. The no-warmup variant has â¼1.2% higher validation error than the baseline which is likely caused by the â¼2.1% increase in training error (Fig- ure 2a), rather than overï¬tting or other causes for poor gen- eralization. This argument is further supported by our grad- ual warmup experiment. The gradual warmup variant has a validation error within 0.14% of the baseline (noting that std of these estimates is â¼0.1%). Given that the ï¬nal train- ing errors (Figure 2c) match nicely in this case, it shows that if the optimization issues are addressed, there is no apparent generalization degradation observed using large minibatch training, even if the minibatch size goes from 256 to 8k.
Finally, Figure 4 shows both the training and valida- tion curves for the large minibatch training with gradual warmup. As can be seen, validation error starts to match the baseline closely after the second learning rate drop; ac- tually, the validation curves can match earlier if BN statis- tics are recomputed prior to evaluating the error instead of using the running average (see also caption in Figure 4).
(a) no warmup (b) constant warmup (c) gradual warmup
Figure 2. Warmup. Training error curves for minibatch size 8192 using various warmup strategies compared to minibatch size 256. Validation error (mean±std of 5 runs) is shown in the legend, along with minibatch size kn and reference learning rate η.
[Figure 3 panels: each plot shows training error (%) vs. epochs for the kn=256 baseline and one other minibatch size; panel legends report validation error (mean ±std of 5 runs): kn=128 (η=0.05) 23.49 ±0.12, kn=256 (η=0.1) 23.60 ±0.12, kn=512 (η=0.2) 23.48 ±0.09, kn=1k (η=0.4) 23.53 ±0.08, kn=2k (η=0.8) 23.49 ±0.11, kn=4k (η=1.6) 23.56 ±0.12, kn=8k (η=3.2) 23.74 ±0.09, kn=16k (η=6.4) 24.79 ±0.27, kn=32k (η=12.8) 27.55 ±0.28, kn=64k (η=25.6) 33.96 ±0.80.]
Figure 3. Training error vs. minibatch size. Training error curves for the 256 minibatch baseline and larger minibatches using gradual warmup and the linear scaling rule. Note how the training curves closely match the baseline (aside from the warmup period) up through 8k minibatches. Validation error (mean±std of 5 runs) is shown in the legend, along with minibatch size kn and reference learning rate η.
Figure 4. Training and validation curves for large minibatch SGD with gradual warmup vs. small minibatch SGD. Both sets of curves match closely after training for sufï¬cient epochs. We note that the BN statistics (for inference only) are computed us- ing running average, which is updated less frequently with a large minibatch and thus is noisier in early training (this explains the larger variation of the validation error in early epochs).
# 5.3. Analysis Experiments
Minibatch size vs. error. Figure 1 (page 1) shows top- 1 validation error for models trained with minibatch sizes ranging from of 64 to 65536 (64k). For all models we used the linear scaling rule and set the reference learning rate as η = 0.1 · kn 256 . For models with kn > 256, we used the gradual warmup strategy always starting with η = 0.1 and increasing linearly to the reference learning rate after 5 epochs. Figure 1 illustrates that validation error remains stable across a broad range of minibatch sizes, from 64 to 8k, after which it begins to increase. Beyond 64k training diverges when using the linear learning rate scaling rule.5
Training curves for various minibatch sizes. Each of the nine plots in Figure 3 shows the top-1 training error curve for the 256 minibatch baseline (orange) and a second curve corresponding to different size minibatch (blue). Valida- tion errors are shown in the plot legends. As minibatch size increases, all training curves show some divergence from the baseline at the start of training. However, in the cases where the ï¬nal validation error closely matches the base- line (kn ⤠8k), the training curves also closely match after the initial epochs. When the validation errors do not match (kn ⥠16k), there is a noticeable gap in the training curves for all epochs. This suggests that when comparing a new setting, the training curves can be used as a reliable proxy for success well before training ï¬nishes.
Alternative learning rate rules. Table 2a shows results for multiple learning rates. For small minibatches (kn = 256),
5We note that because of the availability of hardware, we simulated dis- tributed training of very large minibatches (â¥12k) on a single server by us- ing multiple gradient accumulation steps between SGD updates. We have thoroughly veriï¬ed that gradient accumulation on a single server yields equivalent results relative to distributed training.
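The gradient accumulation trick mentioned in this footnote can be sketched as follows (our own illustration; `grad_fn` is a hypothetical helper returning the mean gradient over one micro-batch):

```python
def accumulated_gradient(grad_fn, w, micro_batches):
    """Average the gradients of several equal-sized micro-batches to obtain
    the gradient of their union, i.e. of the simulated large minibatch,
    before applying a single SGD update to w."""
    accum = None
    for batch in micro_batches:
        g = grad_fn(batch, w)
        accum = g if accum is None else accum + g
    return accum / len(micro_batches)
```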
Figure 5. Training curves for small minibatches with different learning rates η. As expected, changing η results in curves that do not match. This is in contrast to changing batch-size (and linearly scaling η), which results in curves that do match, e.g. see Figure 3.
η = 0.1 gives best error but slightly smaller or larger η also work well. When applying the linear scaling rule with a minibatch of 8k images, the optimum error is also achieved with η = 0.1 · 32, showing the successful application of the linear scaling rule. However, in this case results are more sensitive to changing η. In practice we suggest to use a minibatch size that is not close to the breaking point.
Figure 5 shows the training curves of a 256 minibatch using η = 0.1 or 0.2. It shows that changing the learning rate η in general changes the overall shapes of the train- ing curves, even if the ï¬nal error is similar. Contrasting this result with the success of the linear scaling rule (that can match both the ï¬nal error and the training curves when minibatch sizes change) may reveal some underlying invari- ance maintained between small and large minibatches.
We also show two alternative strategies: keeping η fixed at 0.1 or using 0.1 · √32 according to the square root scaling rule that was justified theoretically in [21] on grounds that it scales η by the inverse amount of the reduction in the gradient estimator's standard deviation. For fair comparisons we also use gradual warmup for 0.1 · √32. Both policies work poorly in practice as the results show.
Batch Normalization γ initialization. Table 2b controls for the impact of the new BN γ initialization introduced in §5.1. We show results for minibatch sizes 256 and 8k with the standard BN initialization (γ = 1 for all BN layers) and with our initialization (γ = 0 for the ï¬nal BN layer of each residual block). The results show improved per- formance with γ = 0 for both minibatch sizes, and the improvement is slightly larger for the 8k minibatch size. This behavior also suggests that large minibatches are more easily affected by optimization difï¬culties. We expect that improved optimization and initialization methods will help push the boundary of large minibatch training.
ResNet-101. Results for ResNet-101 [16] are shown in Ta- ble 2c. Training ResNet-101 with a batch-size of kn = 8k
kn=256, η=0.05:        top-1 error 23.92 ±0.10
kn=256, η=0.10:        top-1 error 23.60 ±0.12
kn=256, η=0.20:        top-1 error 23.68 ±0.09
kn=8k,  η=0.05 · 32:   top-1 error 24.27 ±0.08
kn=8k,  η=0.10 · 32:   top-1 error 23.74 ±0.09
kn=8k,  η=0.20 · 32:   top-1 error 24.05 ±0.18
kn=8k,  η=0.10:        top-1 error 41.67 ±0.10
kn=8k,  η=0.10 · √32:  top-1 error 26.22 ±0.03
(a) Comparison of learning rate scaling rules. A reference learning rate of η = 0.1 works best for kn = 256 (23.68% error). The linear scal- ing rule suggests η = 0.1 · 32 when kn = 8k, which again gives best performance (23.74% error). Other ways of scaling η give worse results.
kn=256, η=0.1, γ-init=1.0: top-1 error 23.84 ±0.18
kn=256, η=0.1, γ-init=0.0: top-1 error 23.60 ±0.12
kn=8k,  η=3.2, γ-init=1.0: top-1 error 24.11 ±0.07
kn=8k,  η=3.2, γ-init=0.0: top-1 error 23.74 ±0.09
(b) Batch normalization γ initialization. Initializing γ = 0 in the last BN layer of each residual block improves results for both small and large minibatches. This initialization leads to better optimization behavior which has a larger positive impact when training with large minibatches.
ResNet-101, kn=256, η=0.1: top-1 error 22.08 ±0.06
ResNet-101, kn=8k,  η=3.2: top-1 error 22.36 ±0.09
(c) The linear scaling rule applied to ResNet-101. The difference in error is about 0.3% between small and large minibatch training.
Table 2. ImageNet classiï¬cation experiments. Unless noted all experiments use ResNet-50 and are averaged over 5 trials.
and a linearly scaled η = 3.2 results in an error of 22.36% vs. the kn = 256 baseline which achieves 22.08% with η = 0.1. In other words, ResNet-101 trained with mini- batch 8k has a small 0.28% increase in error vs. the baseline. It is likely that the minibatch size of 8k lies on the edge of the useful minibatch training regime for ResNet-101, simi- larly to ResNet-50 (see Figure 1).
The training time of ResNet-101 is 92.5 minutes in our implementation using 256 Tesla P100 GPUs and a mini- batch size of 8k. We believe this is a compelling result if the speed-accuracy tradeoff of ResNet-101 is preferred.
ImageNet-5k. Observing the sharp increase in validation error between minibatch sizes of 8k and 16k on ImageNet- 1k (Figure 1), a natural question is if the position of this âelbowâ in the error curve is a function of dataset infor- mation content. To investigate this question, we adopt the ImageNet-5k dataset suggested by Xie et al. [39] that extends ImageNet-1k to 6.8 million images (roughly 5à larger) by adding 4k additional categories from ImageNet- 22k [33]. We evaluate the 1k-way classiï¬cation error on the original ImageNet-1k validation set as in [39].
The minibatch size vs. validation error curve for ImageNet-5k is shown in Figure 6. Qualitatively, the curve
Figure 6. ImageNet-5k top-1 validation error vs. minibatch size with a ï¬xed 90 epoch training schedule. The curve is qualitatively similar to results on ImageNet-1k (Figure 1) showing that a 5à increase in training data does not lead to a signiï¬cant change in the maximum effective minibatch size.
ImageNet pre-training and COCO results:
kn=256  (η=0.1): top-1 error 23.60 ±0.12, box AP 35.9 ±0.1, mask AP 33.9 ±0.1
kn=512  (η=0.2): top-1 error 23.48 ±0.09, box AP 35.8 ±0.1, mask AP 33.8 ±0.2
kn=1k   (η=0.4): top-1 error 23.53 ±0.08, box AP 35.9 ±0.2, mask AP 33.9 ±0.2
kn=2k   (η=0.8): top-1 error 23.49 ±0.11, box AP 35.9 ±0.1, mask AP 33.9 ±0.1
kn=4k   (η=1.6): top-1 error 23.56 ±0.12, box AP 35.8 ±0.1, mask AP 33.8 ±0.1
kn=8k   (η=3.2): top-1 error 23.74 ±0.09, box AP 35.8 ±0.1, mask AP 33.9 ±0.2
kn=16k  (η=6.4): top-1 error 24.79 ±0.27, box AP 35.1 ±0.3, mask AP 33.2 ±0.3
(a) Transfer learning of large minibatch pre-training to Mask R-CNN. Box and mask AP (on COCO minival) are nearly identical for ResNet- 50 models pre-trained with minibatches from 256 to 8k examples. With a minibatch pre-training size of 16k both ImageNet validation error and COCO AP deteriorate. This indicates that as long as ImageNet error is matched, large minibatches do not degrade transfer learning performance.
GPUs=1, η=2.5:  box AP (%) 35.7, mask AP (%) 33.6
GPUs=2, η=5.0:  box AP (%) 35.7, mask AP (%) 33.7
GPUs=4, η=10.0: box AP (%) 35.7, mask AP (%) 33.5
GPUs=8, η=20.0: box AP (%) 35.6, mask AP (%) 33.6
(b) Linear learning rate scaling applied to Mask R-CNN. Using the sin- gle ResNet-50 model from [16] (thus no std is reported), we train Mask R-CNN using using from 1 to 8 GPUs following the linear learning rate scaling rule. Box and mask AP are nearly identical across all conï¬gurations showing the successful generalization of the rule beyond classiï¬cation.
# Table 3. Object detection on COCO with Mask R-CNN [14].
is very similar to the ImageNet-1k curve, showing that for practitioners it is unlikely that even a 5Ã increase in dataset size will automatically lead to a meaningful increase in use- able minibatch size. Quantitatively, using an 8k minibatch increases the validation error by 0.26% from 25.83% for a 256 minibatch to 26.09%. An understanding of the precise relationship between generalization error, minibatch size, and dataset information content is open for future work.
# 5.4. Generalization to Detection and Segmentation
A low error rate on ImageNet is not typically an end goal. Instead, the utility of ImageNet training lies in learn-
Figure 7. Distributed synchronous SGD timing. Time per itera- tion (seconds) and time per ImageNet epoch (minutes) for training with different minibatch sizes. The baseline (kn = 256) uses 8 GPUs in a single server , while all other training runs distribute training over (kn/256) server. With 352 GPUs (44 servers) our implementation completes one pass over all â¼1.28 million Ima- geNet training images in about 30 seconds.
ing good features that transfer, or generalize well, to related tasks. A question of key importance is if the features learned with large minibatches generalize as well as the features learned with small minibatches? To test this, we adopt
the object detection and in- stance segmentation tasks on COCO [27] as these advanced perception tasks beneï¬t substantially from ImageNet pre- training [10]. We use the recently developed Mask R-CNN [14] system that is capable of learning to detect and segment object instances. We follow all of the hyper-parameter set- tings used in [14] and only change the ResNet-50 model used to initialize Mask R-CNN training. We train Mask R- CNN on the COCO trainval35k split and report results on the 5k image minival split used in [14].
It is interesting to note that the concept of minibatch size in Mask R-CNN is different from the classiï¬cation setting. As an extension of the image-centric Fast/Faster R-CNN [9, 31], Mask R-CNN exhibits different minibatch sizes for different layers: the network backbone uses two images (per GPU), but each image contributes 512 Regions- of-Interest for computing classiï¬cation (multinomial cross- entropy), bounding-box regression (smooth-L1/Huber), and pixel-wise mask (28 à 28 binomial cross-entropy) losses. This diverse set of minibatch sizes and loss functions pro- vides a good test case to the robustness of our approach.
Transfer learning from large minibatch pre-training. To test how large minibatch pre-training effects Mask R- CNN, we take ResNet-50 models trained on ImageNet-1k with 256 to 16k minibatches and use them to initialize Mask R-CNN training. For each minibatch size we pre-train 5 models and then train Mask R-CNN using all 5 models on COCO (35 models total). We report the mean box and mask APs, averaged over the 5 trials, in Table 3a. The results show that as long as ImageNet validation error is kept low, which is true up to 8k batch size, generalization to object de-
Figure 8. Distributed synchronous SGD throughput. The small overhead when moving from a single server with 8 GPUs to multi- server distributed training (Figure 7, blue curve) results in linear throughput scaling that is marginally below ideal scaling (â¼90% efï¬ciency). Most of the allreduce communication time is hid- den by pipelining allreduce operations with gradient computation. Moreover, this is achieved with commodity Ethernet hardware.
detection matches the AP of the small minibatch baseline. We emphasize that we observed no generalization issues when transferring across datasets (from ImageNet to COCO) and across tasks (from classification to detection/segmentation) using models trained with large minibatches.
Linear scaling rule applied to Mask R-CNN. We also show evidence of the generality of the linear scaling rule using Mask R-CNN. In fact, this rule was already used without explicit discussion in [16] and was applied effectively as the default Mask R-CNN training scheme when using 8 GPUs. Table 3b provides experimental results showing that when training with 1, 2, 4, or 8 GPUs the linear learning rate rule results in constant box and mask AP. For these experiments, we initialize Mask R-CNN from the released MSRA ResNet-50 model, as was done in [14].
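The linear scaling rule, combined with gradual warmup, is straightforward to implement as a learning rate schedule. The sketch below is illustrative only: it assumes the reference configuration used in this paper (base learning rate 0.1 for kn = 256, 5 warmup epochs, decay steps at epochs 30, 60, and 80), and the function name and arguments are our own.

```python
def scaled_learning_rate(epoch, iteration, iters_per_epoch, minibatch_size,
                         base_lr=0.1, base_batch=256, warmup_epochs=5,
                         milestones=(30, 60, 80)):
    """Linear scaling rule with gradual warmup and stepwise decay."""
    # Linear scaling rule: multiply the reference learning rate by kn / 256.
    target_lr = base_lr * minibatch_size / base_batch
    if epoch < warmup_epochs:
        # Gradual warmup: ramp linearly from base_lr up to target_lr.
        progress = (epoch * iters_per_epoch + iteration) / float(warmup_epochs * iters_per_epoch)
        return base_lr + progress * (target_lr - base_lr)
    # After warmup, divide the learning rate by 10 at each milestone epoch.
    num_decays = sum(1 for m in milestones if epoch >= m)
    return target_lr / (10 ** num_decays)
```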
# 5.5. Run Time
Figure 7 shows two visualizations of the run time characteristics of our system. The blue curve is the time per iteration as minibatch size varies from 256 to 11264 (11k). Notably this curve is relatively flat and the time per iteration increases only 12% while scaling the minibatch size by 44×. Visualized another way, the orange curve shows the approximately linear decrease in time per epoch from over 16 minutes to just 30 seconds. Run time performance can also be viewed in terms of throughput (images / second), as shown in Figure 8. Relative to a perfectly efficient extrapolation of the 8 GPU baseline, our implementation achieves ∼90% scaling efficiency.
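The ∼90% figure follows directly from the per-iteration timings quoted above; the short calculation below simply restates that relationship in code (the normalized times are taken from the text, not measured by us).

```python
# Scaling efficiency from per-iteration times: with perfect scaling, time per
# iteration would stay constant as GPUs are added in proportion to minibatch size.
baseline_iter_time = 1.00   # normalized time per iteration at kn = 256 (8 GPUs)
large_iter_time = 1.12      # ~12% slower per iteration at kn = 11k (352 GPUs), per the text
efficiency = baseline_iter_time / large_iter_time
print(f"scaling efficiency ~ {efficiency:.0%}")  # ~89%, consistent with the ~90% reported
```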
Acknowledgements. We would like to thank Leon Bottou for helpful discussions on theoretical background, Jerry Pan and Christian Puhrsch for discussions on efficient data loading, Andrew Dye for help with debugging distributed training, and Kevin Lee, Brian Dodds, Jia Ning, Koh Yew Thoon, Micah Harris, and John Volk for Big Basin and hardware support.
# References
[1] J. Bagga, H. Morsy, and Z. Yao. Opening designs for 6-pack and Wedge 100. https://code.facebook.com/posts/203733993317833/opening-designs-for-6-pack-and-wedge-100, 2016.
[2] M. Barnett, L. Shuler, R. van De Geijn, S. Gupta, D. G. Payne, and J. Watts. Interprocessor collective communication library (InterCom). In Scalable High-Performance Computing Conference, 1994.
[3] L. Bottou. Curiously fast convergence of some stochastic gradient descent algorithms. Unpublished open problem offered to the attendance of the SLDS 2009 conference, 2009.
[4] L. Bottou, F. E. Curtis, and J. Nocedal. Opt. methods for large-scale machine learning. arXiv:1606.04838, 2016.
[5] J. Chen, X. Pan, R. Monga, S. Bengio, and R. Jozefowicz. Revisiting distributed synchronous SGD. arXiv:1604.00981, 2016.
[6] K. Chen and Q. Huo. Scalable training of deep learning ma- chines by incremental block training with intra-block par- allel optimization and blockwise model-update ï¬ltering. In ICASSP, 2016. [7] R. Collobert,
J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu, and P. Kuksa. Natural language pro- cessing (almost) from scratch. JMLR, 2011.
[8] J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T. Darrell. Decaf: A deep convolutional acti- vation feature for generic visual recognition. In ICML, 2014.
[9] R. Girshick. Fast R-CNN. In ICCV, 2015. [10] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich fea- ture hierarchies for accurate object detection and semantic segmentation. In CVPR, 2014.
[11] W. Gropp, E. Lusk, and A. Skjellum. Using MPI: Portable Parallel Programming with the Message-Passing Interface. MIT Press, Cambridge, MA, 1999.
[12] S. Gross and M. Wilber. Training and investigating Residual Nets. https://github.com/facebook/fb.resnet.torch, 2016.
[13] M. Gürbüzbalaban, A. Ozdaglar, and P. Parrilo. Why random reshuffling beats stochastic gradient descent. arXiv:1510.08560, 2015.
[14] K. He, G. Gkioxari, P. Doll´ar, and R. Girshick. Mask R- CNN. arXiv:1703.06870, 2017.
[15] K. He, X. Zhang, S. Ren, and J. Sun. Delving deep into rectiï¬ers: Surpassing human-level performance on imagenet classiï¬cation. In ICCV, 2015.
[16] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, 2016.
[17] G. Hinton, L. Deng, D. Yu, G. E. Dahl, A.-r. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. N. Sainath, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine, 2012.
[18] I. Hubara, M. Courbariaux, D. Soudry, R. El-Yaniv, and Y. Bengio. Quantized neural networks: Training neu- ral networks with low precision weights and activations. arXiv:1510.08560, 2016.
[19] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, 2015.
[20] N. S. Keskar, D. Mudigere, J. Nocedal, M. Smelyanskiy, and P. T. P. Tang. On large-batch training for deep learning: Gen- eralization gap and sharp minima. ICLR, 2017.
[21] A. Krizhevsky. One weird trick for parallelizing convolu- tional neural networks. arXiv:1404.5997, 2014.
[22] A. Krizhevsky, I. Sutskever, and G. Hinton. ImageNet classi- ï¬cation with deep convolutional neural nets. In NIPS, 2012. [23] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Backpropagation applied to handwritten zip code recognition. Neural compu- tation, 1989.
[24] K. Lee. Introducing Big Basin: Our next-generation AI hardware. https://code.facebook.com/posts/1835166200089399/introducing-big-basin, 2017.
[25] M. Li. Scaling Distributed Machine Learning with System and Algorithm Co-design. PhD thesis, Carnegie Mellon Uni- versity, 2017.
[26] T.-Y. Lin, P. Doll´ar, R. Girshick, K. He, B. Hariharan, and S. Belongie. Feature pyramid networks for object detection. In CVPR, 2017.
[27] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ra- manan, P. Doll´ar, and C. L. Zitnick. Microsoft COCO: Com- mon objects in context. In ECCV. 2014.
[28] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In CVPR, 2015. [29] Y. Nesterov. Introductory lectures on convex optimization: A
basic course. Springer, 2004.
[30] R. Rabenseifner. Optimization of collective reduction oper- ations. In ICCS. Springer, 2004.
[31] S. Ren, K. He, R. Girshick, and J. Sun. Faster R-CNN: To- wards real-time object detection with region proposal net- works. In NIPS, 2015.
[32] H. Robbins and S. Monro. A stochastic approximation method. The annals of mathematical statistics, 1951. [33] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. IJCV, 2015.
[34] P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus, and Y. LeCun. Overfeat: Integrated recognition, localization and detection using convolutional networks. In ICLR, 2014. [35] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015. [36] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In CVPR, 2015.
[37] R. Thakur, R. Rabenseifner, and W. Gropp. Optimization of collective comm. operations in MPICH. IJHPCA, 2005. [38] Y. Wu, M. Schuster, Z. Chen, Q. V. Le, M. Norouzi, W. Macherey, M. Krikun, Y. Cao, Q. Gao, K. Macherey, et al. Googleâs neural machine translation system: Bridg- ing the gap between human and machine translation. arXiv:1609.08144, 2016.
[39] S. Xie, R. Girshick, P. Doll´ar, Z. Tu, and K. He. Aggregated residual transformations for deep neural networks. In CVPR, 2017.
[40] W. Xiong, J. Droppo, X. Huang, F. Seide, M. Seltzer, A. Stol- cke, D. Yu, and G. Zweig. The Microsoft 2016 Conversa- tional Speech Recognition System. arXiv:1609.03528, 2016. [41] M. D. Zeiler and R. Fergus. Visualizing and understanding
convolutional neural networks. In ECCV, 2014. | {
"id": "1606.04838"
} |
1706.02633 | Real-valued (Medical) Time Series Generation with Recurrent Conditional GANs | Generative Adversarial Networks (GANs) have shown remarkable success as a
framework for training models to produce realistic-looking data. In this work,
we propose a Recurrent GAN (RGAN) and Recurrent Conditional GAN (RCGAN) to
produce realistic real-valued multi-dimensional time series, with an emphasis
on their application to medical data. RGANs make use of recurrent neural
networks in the generator and the discriminator. In the case of RCGANs, both of
these RNNs are conditioned on auxiliary information. We demonstrate our models
in a set of toy datasets, where we show visually and quantitatively (using
sample likelihood and maximum mean discrepancy) that they can successfully
generate realistic time-series. We also describe novel evaluation methods for
GANs, where we generate a synthetic labelled training dataset, and evaluate on
a real test set the performance of a model trained on the synthetic data, and
vice-versa. We illustrate with these metrics that RCGANs can generate
time-series data useful for supervised training, with only minor degradation in
performance on real test data. This is demonstrated on digit classification
from 'serialised' MNIST and by training an early warning system on a medical
dataset of 17,000 patients from an intensive care unit. We further discuss and
analyse the privacy concerns that may arise when using RCGANs to generate
realistic synthetic medical time series data. | http://arxiv.org/pdf/1706.02633 | Cristóbal Esteban, Stephanie L. Hyland, Gunnar Rätsch | stat.ML, cs.LG | 13 pages, 4 figures, 3 tables (update with differential privacy) | null | stat.ML | 20170608 | 20171204 | 7 1 0 2
c e D 4 ] L M . t a t s [ 2 v 3 3 6 2 0 . 6 0 7 1 : v i X r a
# REAL-VALUED (MEDICAL) TIME SERIES GENERATION WITH RECURRENT CONDITIONAL GANS
Stephanie L. Hyland∗ ETH Zurich, Switzerland; Tri-Institutional Training Program in Computational Biology and Medicine, Weill Cornell Medical. stephanie.hyland@inf.ethz.ch

Cristóbal Esteban∗ ETH Zurich, Switzerland. cristobal.esteban@inf.ethz.ch

Gunnar Rätsch ETH Zurich, Switzerland. raetsch@inf.ethz.ch
# ABSTRACT
Generative Adversarial Networks (GANs) have shown remarkable success as a framework for training models to produce realistic-looking data. In this work, we propose a Recurrent GAN (RGAN) and Recurrent Conditional GAN (RCGAN) to produce realistic real-valued multi-dimensional time series, with an emphasis on their application to medical data. RGANs make use of recurrent neural networks (RNNs) in the generator and the discriminator. In the case of RCGANs, both of these RNNs are conditioned on auxiliary information. We demonstrate our models in a set of toy datasets, where we show visually and quantitatively (using sample likelihood and maximum mean discrepancy) that they can successfully generate realistic time-series. We also describe novel evaluation methods for GANs, where we generate a synthetic labelled training dataset, and evaluate on a real test set the performance of a model trained on the synthetic data, and vice-versa. We illustrate with these metrics that RCGANs can generate time-series data useful for supervised training, with only minor degradation in performance on real test data. This is demonstrated on digit classiï¬cation from âserialisedâ MNIST and by training an early warning system on a medical dataset of 17,000 patients from an intensive care unit. We further discuss and analyse the privacy concerns that may arise when using RCGANs to generate realistic synthetic medical time series data, and demonstrate results from differentially private training of the RCGAN.
# INTRODUCTION
Access to data is one of the bottlenecks in the development of machine learning solutions to domain- speciï¬c problems. The availability of standard datasets (with associated tasks) has helped to advance the capabilities of learning systems in multiple tasks. However, progress appears to lag in other ï¬elds, such as medicine. It is tempting to suggest that tasks in medicine are simply harder - the data more complex, more noisy, the prediction problems less clearly deï¬ned. Regardless of this, the dearth of data accessible to researchers hinders model comparisons, reproducibility and ultimately scientiï¬c progress. However, due to the highly sensitive nature of medical data, its access is typically highly controlled, or require involved and likely imperfect de-identiï¬cation. The motivation for this work is therefore to exploit and develop the framework of generative adversarial networks (GANs) to generate realistic synthetic medical data. This data could be shared and published without privacy concerns, or even used to augment or enrich similar datasets collected in different or smaller cohorts of patients. Moreover, building a system capable of synthesizing realistic medical data implies modelling the processes that generates such information, and therefore it can represent the ï¬rst step towards developing a new approach for creating predictive systems in medical environments.
Beyond the utility to the machine learning research community, such a tool stands to beneï¬t the medical community for use in training simulators. In this work, we focus on synthesising real-valued
∗Authors contributed equally.
time-series data as from an Intensive Care Unit (ICU). In ICUs, doctors have to make snap decisions under time pressure, where they cannot afford to hesitate. It is already standard in medical training to use simulations to train doctors, but these simulations often rely on hand-engineered rules and physical props. Thus, a model capable of generating diverse and realistic ICU situations could have an immediate application, especially when given the ability to condition on underlying âstatesâ of the patient.
The success of GANs in generating realistic-looking images (Radford et al., 2015; Ledig et al., 2016; Gauthier, 2014; Reed et al., 2016) suggests their applicability for this task, however limited work has exploited them for generating time-series data. In addition, evaluation of GANs remains a largely-unsolved problem, with researchers often relying on visual evaluation of generated examples, an approach which is both impractical and inappropriate for multi-dimensional medical time series.
The primary contributions of this work are:
1. Demonstration of a method to generate real-valued sequences using adversarial training.
2. Showing novel approaches for evaluating GANs.
3. Generating synthetic medical time series data.
4. Empirical privacy analysis of both GANs and differential private GANs.
# 2 RELATED WORK
Since their inception in 2014 (Goodfellow et al., 2014), the GAN framework has attracted signiï¬cant attention from the research community, and much of this work has focused on image generation (Rad- ford et al., 2015; Ledig et al., 2016; Gauthier, 2014; Reed et al., 2016). Notably, (Choi et al., 2017) designed a GAN to generate synthetic electronic health record (EHR) datasets. These EHRs contain binary and count variables, such as ICD-9 billing codes, medication, and procedure codes. Their focus on discrete-valued data and generating snapshots of a patient is complementary to our real-valued, time series focus. Future work could combine these approaches to generate multi-modal synthetic medical time-series data.
The majority of sequential data generation with GANs has focused on discrete tokens useful for natural language processing (Yu et al., 2016), where an alternative approach based on Reinforcement Learning (RL) is used to train the GAN. We are aware of only one preliminary work using GANs to generate continuous-valued sequences, which aims to produce polyphonic music using a GAN with LSTM generator and discriminator (Mogren, 2016). The primary differences are architectural: we do not use a bidirectional discriminator, and outputs of the generator are not fed back as inputs at the next time step. Moreover, we introduce also a conditional version of this Recurrent GAN.
Conditional GANs (Mirza & Osindero, 2014; Gauthier, 2014) condition the model on additional information and therefore allow us to direct the data generation process. This approach has been mainly used for image generation tasks (Radford et al., 2015; Mirza & Osindero, 2014; Antipov et al., 2017). Recently, Conditional GAN architectures have been also used in natural language processing, including translation (Yang et al., 2017) and dialogue generation (Li et al., 2017), where none of them uses an RNN as the preferred choice for the discriminator and, as previously mentioned, a RL approach is used to train the models due to the discrete nature of the data.
In this work, we also introduce some novel approaches to evaluate GANs, using the capability of the generated synthetic data to train supervised models. In a related fashion, a GAN-based semi- supervised learning approach was introduced in (Salimans et al., 2016). However, our goal is to generate data that can be used to train models for tasks that are unknown at the moment the GAN is trained.
We brieï¬y explore the use of differentially private stochastic gradient descent (Abadi et al., 2016) to produce a RGAN with stronger privacy guarantees, which is especially relevant for sensitive medical data. An alternate method would be to use the PATE approach (Papernot et al., 2016) to train the discriminator. In this case, rather than introducing noise into gradients (as in (Abadi et al., 2016)), a student classiï¬er is trained to predict the noisy votes of an ensemble of teachers, each trained on disjoint sets of the data.
# 3 MODELS: RECURRENT GAN AND RECURRENT CONDITIONAL GAN
The model presented in this work follows the architecture of a regular GAN, where both the generator and the discriminator have been substituted by recurrent neural networks. Therefore, we present a Recurrent GAN (RGAN), which can generate sequences of real-valued data, and a Recurrent Conditional GAN (RCGAN), which can generate sequences of real-value data subject to some conditional inputs. As depicted in Figure 1a, the generator RNN takes a different random seed at each time step, plus an additional input if we want to condition the generated sequence with additional data. In Figure 1b, we show how the discriminator RNN takes the generated sequence, together with an additional input if it is a RCGAN, and produces a classiï¬cation as synthetic or real for each time step of the input sequence.
Specifically, the discriminator is trained to minimise the average negative cross-entropy between its predictions per time-step and the labels of the sequence. If we denote by RNN(X) the vector or matrix comprising the T outputs from a RNN receiving a sequence of T vectors {x_t}_{t=1}^T (x_t ∈ R^d), and by CE(a, b) the average cross-entropy between sequences a and b, then the discriminator loss for a pair {Xn, yn} (with Xn ∈ R^{T×d} and yn ∈ {1, 0}^T) is:

Dloss(Xn, yn) = −CE(RNN_D(Xn), yn)

For real sequences, yn is a vector of 1s, or 0s for synthetic sequences. In each training minibatch, the discriminator sees both real and synthetic sequences.
The objective for the generator is then to "trick" the discriminator into classifying its outputs as true, that is, it wishes to minimise the (average) negative cross-entropy between the discriminator's predictions on generated sequences and the "true" label, the vector of 1s (which we write as 1):

Gloss(Zn) = Dloss(RNN_G(Zn), 1) = −CE(RNN_D(RNN_G(Zn)), 1)

Here Zn is a sequence of T points {z_t}_{t=1}^T sampled independently from the latent/noise space Z, thus Zn ∈ R^{T×m} since Z = R^m. Initial experimentation with non-independent sampling did not indicate any obvious benefit, but would be a topic for further investigation.
In this work, the architecture selected for both discriminator and generator RNNs is the LSTM (Hochreiter & Schmidhuber, 1997).
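The per-time-step losses above translate directly into code. The paper's released implementation is in TensorFlow; the following is an independent, minimal PyTorch sketch of the unconditional RGAN, with module names, hidden sizes, and shapes chosen by us purely for illustration.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """LSTM that maps a noise vector z_t at every time step to a d-dimensional output x_t."""
    def __init__(self, latent_dim, hidden_dim, data_dim):
        super().__init__()
        self.rnn = nn.LSTM(latent_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, data_dim)

    def forward(self, z):                    # z: (batch, T, latent_dim)
        h, _ = self.rnn(z)
        return self.out(h)                   # (batch, T, data_dim)

class Discriminator(nn.Module):
    """LSTM that emits one real/synthetic logit for every time step of the input sequence."""
    def __init__(self, data_dim, hidden_dim):
        super().__init__()
        self.rnn = nn.LSTM(data_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, 1)

    def forward(self, x):                    # x: (batch, T, data_dim)
        h, _ = self.rnn(x)
        return self.out(h).squeeze(-1)       # per-time-step logits: (batch, T)

bce = nn.BCEWithLogitsLoss()                 # averages cross-entropy over batch and time steps

def discriminator_loss(D, real, fake):
    real_logits = D(real)
    fake_logits = D(fake.detach())
    return bce(real_logits, torch.ones_like(real_logits)) + \
           bce(fake_logits, torch.zeros_like(fake_logits))

def generator_loss(D, fake):
    fake_logits = D(fake)                    # generator tries to make D predict "real" (1)
    return bce(fake_logits, torch.ones_like(fake_logits))
```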
In the conditional case (RCGAN), the inputs to each RNN are augmented with some conditional information cn (for sample n, say) by concatenation at each time-step:

xnt → [xnt; cn]
znt → [znt; cn]
In this way the RNN cannot discount the conditional information through forgetting.
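In code, this conditioning amounts to repeating the condition vector across the time axis and concatenating it onto the feature dimension before the sequence enters either RNN. A small sketch follows (PyTorch, with a helper name of our own choosing).

```python
import torch

def append_condition(seq, cond):
    """seq: (batch, T, d) sequence; cond: (batch, c) condition -> (batch, T, d + c)."""
    # Repeat the per-sample condition at every time step so the RNN cannot "forget" it.
    cond_per_step = cond.unsqueeze(1).expand(-1, seq.size(1), -1)
    return torch.cat([seq, cond_per_step], dim=-1)
```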
Promising research into alternative GAN objectives, such as the Wasserstein GAN (Arjovsky et al., 2017; Gulrajani et al., 2017) unfortunately do not ï¬nd easy application to RGANs in our experiments. Enforcing the Lipschitz constraint on an RNN is a topic for further research, but may be aided by use of unitary RNNs (Arjovsky et al., 2016; Hyland & Rätsch, 2017).
All models and experiments were implemented in python with scikit-learn (Pedregosa et al., 2011) and Tensorï¬ow (Abadi et al., 2015), and the code is available in a public git repository: ANON.
3.1 EVALUATION
Evaluating the performance of a GAN is challenging. As illustrated in (Theis et al., 2015) and (Wu et al., 2016), evaluating likelihoods, with Parzen window estimates (Wu et al., 2016) or otherwise can be deceptive, and the generator and discriminator losses do not readily correspond to âvisual qualityâ. This nebulous notion of quality is best assessed by a human judge, but it is impractical and costly to do so. In the imaging domain, scores such as the Inception score (Salimans et al., 2016) have been developed to aid in evaluation, and Mechanical Turk exploited to distribute the human labour. However, in the case of real-valued sequential data, is not always easy or even possible to visually evaluate the generated data. For example, the ICU signals with which we work in this paper, could look completely random to a non-medical expert.
Therefore, in this work, we start by demonstrating our model with a number of toy datasets that can be visually evaluated. Next, we use a set of quantiï¬able methods (description below) that can be used as an indicator of the data quality.
(a) The generator RNN takes a different random seed at each temporal input, and produces a synthetic signal. In the case of the RCGAN, it also takes an additional input on each time step that conditions the output.
(b) The discriminator RNN takes real/synthetic se- quences and produces a classiï¬cation into real/synthetic for each time step. In the case of the RCGAN, it also takes an additional input on each time step that condi- tions the output.
Figure 1: Architecture of Recurrent GAN and Conditional Recurrent GAN models.
3.1.1 MAXIMUM MEAN DISCREPANCY
We consider a GAN successful if it implicitly learns the distribution of the true data. We assess this by studying the samples it generates. This is the ideal setting for maximum mean discrepancy (MMD) (Gretton et al., 2007), and has been used as a training objective for generative moment matching networks (Li et al., 2015). MMD asks if two sets of samples - one from the GAN, and one from the true data distribution, for example - were generated by the same distribution. It does this by comparing statistics of the samples. In practice, we consider the squared difference of the statistics between the two sets of samples (the MMD2), and replace inner products between (functions of) the two samples by a kernel. Given a kernel K : X Ã Y â R, and samples {xi}N j=1, an unbiased estimate of MMD2 is:
MMD²_u = 1/(n(n−1)) · Σ_{i=1}^{n} Σ_{j≠i} K(x_i, x_j) − 2/(nm) · Σ_{i=1}^{n} Σ_{j=1}^{m} K(x_i, y_j) + 1/(m(m−1)) · Σ_{i=1}^{m} Σ_{j≠i} K(y_i, y_j)
Defining appropriate kernels between time series is an area of active research. However, much of the challenge arises from the need to align time series. In our case, the generated and real samples are already aligned by our fixing of the "time" axis. We opt then to treat our time series as vectors (or matrices, in the multidimensional case) for comparisons, and use the radial basis function (RBF) kernel with the squared ℓ2-norm or Frobenius norm between vectors/matrices: K(x, y) = exp(−‖x − y‖²/(2σ²)). To select an appropriate kernel bandwidth σ we maximise the estimator of the t-statistic of the power of the MMD test between two distributions (Sutherland et al., 2016), t̂ = MMD²_u / √V̂, where V̂ is the asymptotic variance of the estimator of MMD². We do this using
a split of the validation set during training - the rest of the set is used to calculate the MMD² using the optimised bandwidth. Following (Sutherland et al., 2016), we define a mixed kernel as a sum of RBF kernels with two different σs, which we optimise simultaneously. We find the MMD² to be more informative than either generator or discriminator loss, and correlates well with quality as assessed by visualising.
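For concreteness, an unbiased MMD² estimate with an RBF kernel can be computed in a few lines of NumPy; this is a generic sketch (samples flattened to vectors, bandwidth passed in explicitly) rather than the paper's TensorFlow code.

```python
import numpy as np

def rbf_kernel(A, B, sigma):
    """Pairwise RBF kernel values between rows of A (n, d) and B (m, d)."""
    sq_dists = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2 * A @ B.T
    return np.exp(-sq_dists / (2 * sigma**2))

def mmd2_unbiased(X, Y, sigma):
    """Unbiased estimate of the squared MMD between samples X (n, d) and Y (m, d)."""
    n, m = len(X), len(Y)
    Kxx, Kyy, Kxy = rbf_kernel(X, X, sigma), rbf_kernel(Y, Y, sigma), rbf_kernel(X, Y, sigma)
    # Drop the diagonal (i == j) terms from the within-sample sums.
    term_xx = (Kxx.sum() - np.trace(Kxx)) / (n * (n - 1))
    term_yy = (Kyy.sum() - np.trace(Kyy)) / (m * (m - 1))
    return term_xx + term_yy - 2 * Kxy.mean()
```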
3.1.2 TRAIN ON SYNTHETIC, TEST ON REAL (TSTR)
We propose a novel method for evaluating the output of a GAN when a supervised task can be deï¬ned on the domain of the training data. We call it âTrain on Synthetic, Test on Realâ (TSTR). Simply put, we use a dataset generated by the GAN to train a model, which is then tested on a held-out set of true examples. This requires the generated data to have labels - we can either provide these to a conditional GAN, or use a standard GAN to generate them in addition to the data features. In this work we opted for the former, as we describe below. For using GANs to share synthetic âde-identiï¬edâ
data, this evaluation metric is ideal, because it demonstrates the ability of the synthetic data to be used for real applications. We present the pseudocode for this GAN evaluation strategy in Algorithm 1.
# Algorithm 1 (TSTR) Train on Synthetic, Test on Real
1: train, test = split(data)
2: discriminator, generator = train_GAN(train)
3: with labels from train:
4:     synthetic = generator.generate_synthetic(labels)
5:     classifier = train_classifier(synthetic, labels)
6:     if a validation set is available, optionally optimise the GAN over classifier performance
7: with labels and features from test:
8:     predictions = classifier.predict(features)
9:     TSTR_score = score(predictions, labels)
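A minimal TSTR harness in Python might look as follows; `generator.generate_synthetic` is the hypothetical interface from Algorithm 1, and the random forest mirrors the classifier used later for the eICU tasks (any supervised model could be substituted).

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

def tstr(generator, train_labels, test_features, test_labels):
    """Train on Synthetic, Test on Real: fit a classifier on GAN output, score it on real data."""
    synthetic_features = generator.generate_synthetic(train_labels)  # hypothetical GAN API
    clf = RandomForestClassifier(n_estimators=100)
    clf.fit(synthetic_features, train_labels)        # train on synthetic data only
    scores = clf.predict_proba(test_features)[:, 1]
    return roc_auc_score(test_labels, scores)        # evaluate on the real, held-out test set
```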
Train on Real, Test on Synthetic (TRTS): Similar to the TSTR method proposed above, we can consider the reverse case, called âTrain on Real, Test on Syntheticâ (TRTS). In this approach, we use real data to train a supervised model on a set of tasks. Then, we use the RCGAN to generate a synthetic test set for evaluation. In the case (as for MNIST) where the true classiï¬er achieves high accuracy, this serves to act as an evaluation of the RCGANâs ability to generate convincing examples of the labels, and that the features it generates are realistic. Unlike the TSTR setting however, if the GAN suffers mode collapse, TRTS performance will not degrade accordingly, so we consider TSTR the more interesting evaluation.
# 4 LEARNING TO GENERATE REALISTIC SEQUENCES
To demonstrate the modelâs ability to generate ârealistic-lookingâ sequences in controlled environ- ments, we consider several experiments on synthetic data. In the experiments that follow, unless otherwise speciï¬ed, the synthetic data consists of sequences of length 30. We focus on the non- conditional model RGAN in this section.
4.1 SINE WAVES
The quality of generated sine waves are easily conï¬rmed by visual inspection, but by varying the amplitudes and frequencies of the real data, we can create a dataset with nonlinear variations. We generate waves with frequencies in [1.0, 5.0], amplitudes in [0.1, 0.9], and random phases between [âÏ, Ï]. The left of Figure 2a shows examples of these signals, both real and generated (although they are hard to distinguish).
We found that, despite the absence of constraints to enforce semantics in the latent space (as in (Chen et al., 2016)), we could alter the frequency and phase of generated samples by varying the latent dimensions, although the representation was not âdisentangledâ, and one dimension of the latent space inï¬uenced multiple aspects of the signal.
At this point, we tried to train a recurrent version of the Variational Autoencoder (VAE) (Kingma & Welling, 2013) with the goal of comparing its performance with the RGAN. We tried the implemen- tation proposed in (Fabius & van Amersfoort, 2014), which is arguably the most straightforward solution to implement a Recurrent Variational Autoencoder (RVAE). It consists of replacing the encoder and decoder of a VAE with RNNs, and then using the last hidden state of the encoder RNN as the encoded representation of the input sequence. After performing the reparametrization trick, the resulting encoded representation is used to initialize the hidden state of the decoder RNN. Since in this simple dataset all sequences are of the same length, we also tried an alternative approach in which the encoding of the input sequence is computed as the concatenation of all the hidden states of the encoder RNN. Using these architechtures, we were only capable of generating sine waves with inconsistent amplitudes and frequencies, with a quality clearly inferior than the ones produced by the RGAN. The source code to reproduce these experiments is included in the git repository mentioned before. We believe that this approach needs further research, specially for the task of generating
        Accuracy
Real    0.991 ± 0.001
TSTR    0.975 ± 0.002
TRTS    0.988 ± 0.005
Table 1: Scores obtained by a convolutional neural network when: a) trained and tested on real data, b) trained on synthetic and tested on real data, and c) trained on real and tested on synthetic. In all cases, early stopping and (in the case of the synthetic data) epoch selection were determined using a validation set.
labeled data that will be presented later in this paper, which we also failed to accomplish with the RVAE so far.
4.2 SMOOTH FUNCTIONS
Sine waves are simple signals, easily reproduced by the model. In our ultimate medical application, we wish the model to reproduce complex physiological signals which may not follow simple dynamics. We therefore consider the harder task of learning arbitrary smooth signals. Gaussian processes offer a method to sample values of such smooth functions. We use an RBF kernel to specify a GP with a zero-valued mean function. We then draw 30 equally-spaced samples. This amounts to a single draw from a multivariate normal distribution with covariance function given by the RBF kernel evaluated on a grid of equally-spaced points. In doing so, we have specified exactly the probability distribution that generated the true data, which enables us to evaluate generated samples under this distribution. The right of Figure 2a shows examples (real and generated) of this experiment. The main feature of the real and generated time series is that they exhibit smoothness with local correlations, and this is rapidly captured by the RGAN.
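Such training sequences are easy to generate: a draw from the GP on a fixed grid is just a sample from a multivariate normal whose covariance matrix is the RBF kernel evaluated on that grid. The NumPy sketch below uses a lengthscale we chose for illustration; the kernel bandwidth used in the paper is not stated here.

```python
import numpy as np

def sample_smooth_signals(n_samples, length=30, lengthscale=0.2, seed=0):
    """Draw n_samples sequences from a zero-mean GP with an RBF kernel on an equally-spaced grid."""
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, 1.0, length)
    cov = np.exp(-(t[:, None] - t[None, :])**2 / (2 * lengthscale**2))
    cov += 1e-8 * np.eye(length)   # jitter for numerical stability of the MVN draw
    return rng.multivariate_normal(np.zeros(length), cov, size=n_samples)  # (n_samples, length)
```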
Because we have access to the data distribution, in Figure 3 we show how the average (log) likelihood of a set of generated samples increases under the data distribution during training. This is an imperfect measure, as it is blind to the diversity of the generated samples - the oft-observed mode collapse, or âHelvetica Scenarioâ (Goodfellow et al., 2014) of GANs - hence we prefer the MMD2 measure (see Figure 3). It is nonetheless encouraging to observe that, although the GAN objective is unaware of the underlying data distribution, the likelihood of the generated samples improves with training.
# 4.3 MNIST AS A TIME SERIES
The MNIST hand-written digit dataset is ubiquitous in machine learning research. Accuracy on MNIST digit classiï¬cation is high enough to consider the problem âsolvedâ, and generating MNIST digits seems an almost trivial task for traditional GANs. However, generating MNIST sequentially is less commonly done (notable examples are PixelRNN (Oord et al., 2016), and the serialisation of MNIST in the long-memory RNN literature (Le et al., 2015)). To serialise MNIST, each 28 à 28 digit forms a 784-dimensional vector, which is a sequence we can aim to generate with the RGAN. This gives the added beneï¬t of producing samples we can easily assess visually.
To make the task more tractable and to explore the RGANâs ability to generate multivariate sequences, we treat each 28x28 image as a sequence of 28, 28-dimensional outputs. We show two types of
[Figure 2 panel labels: sine waves and smooth signals (a); real MNIST digits, good RGAN samples, and bad RGAN samples (b).]
(a) Examples of real (coloured, top) and generated (black, lower two lines) samples.
(b) Left top: real MNIST digits. Left bottom: unrealistic digits generated at epoch 27. Right: digits with minimal distortion generated at epoch 100.
Figure 2: RGAN is capable of generating realistic-looking examples.
Figure 3: Trace of generator (dotted), discriminator (solid) loss, MMD² score and log likelihood of generated samples under the data distribution during training for RGAN generating smooth sequences (output in Figure 2a).
[Figure 4 plot: latent-space interpolation between two training sequences; the top panel shows sample-space distance from the endpoints.]
Figure 4: Back-projecting training examples into the latent space and linearly interpolating them produces smooth variation in the sample space. Top plot shows sample-space distance from top (green, dashed) sample to bottom (orange, dotted). Distance measure is RBF kernel with bandwidth chosen as median pairwise distance between training samples. The original training examples are shown in dotted lines in the bottom and second-from-top plots.
experiment with this dataset. In the ï¬rst one, we train a RGAN to generate MNIST digits in this sequential manner. Figure 2b demonstrates how realistic the generated digits appear.
For the second experiment, we downsample the MNIST digits to 14x14 pixels, and consider the ï¬rst three digits (0, 1, and 2). With this data we train a RCGAN and subsequently perform the TSTR (and TRTS) evaluations explained above, for the task of classifying the digits. That is, for the TSTR evaluation, we generate a synthetic dataset using the GAN, using the real training labels as input. We then train a classiï¬er (a convolutional neural network) on this data, and evaluate its performance on the real held-out test set. Conversely, for TRTS we train a classiï¬er on the real data, and evaluate it on a synthetic test dataset generated by the GAN. Results of this experiment are show in Table 1. To obtain error bars on the accuracies reported, we trained the RCGAN ï¬ve times with different random initialisations. The TSTR result shows that the RCGAN generates synthetic datasets realistic enough to train a classiï¬er which then achieves high performance on real test data. The TRTS result shows that the synthetic examples in the test set match their labels to a high degree, given the accuracy of the classiï¬er trained on real data is very high.
# 5 LEARNING TO GENERATE REALISTIC ICU DATA
One of the main goals of this paper is to build a model capable of generating realistic medical datasets, and speciï¬cally ICU data. For this purpose, we based our work on the recently-released Philips eICU database1. This dataset was collected by the critical care telehealth program provided by Philips. It contains around 200,000 patients from 208 care units across the US, with a total of 224,026,866 entries divided in 33 tables.
From this data, we focus on generating the four most frequently recorded, regularly-sampled variables measured by bedside monitors: oxygen saturation measured by pulse oximeter (SpO2), heart rate (HR), respiratory rate (RR) and mean arterial pressure (MAP). In the eICU dataset, these variables are measured every ï¬ve minutes. To reduce the length of the sequences we consider, we downsample to one measurement every ï¬fteen minutes, taking the median value in each window. This greatly speeds up the training of our LSTM-based GAN while still capturing the relevant dynamics of the data.
In the following experiments, we consider the beginning of the patient's stay in the ICU, considering this a critical time in their care. We focus on the first 4 hours of their stay, which results in 16 measurements of each variable. While medical data is typically fraught with missing values, in this work we circumvented the issue by discarding patients with missing data (after downsampling). After preprocessing the data this way, we end up with a cohort of 17,693 patients. Most restrictive was the requirement for non-missing MAP values, as these measurements are taken invasively.
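The downsampling step described above (5-minute measurements reduced to 15-minute medians over the first 4 hours) is essentially a one-liner with pandas; the column names and the assumption of a per-patient DatetimeIndex are ours, not the paper's.

```python
import pandas as pd

def downsample_vitals(patient_df):
    """patient_df: one patient's chart with a DatetimeIndex and columns
    ['spo2', 'hr', 'rr', 'map'] sampled every 5 minutes."""
    # Median within each 15-minute window, then keep the first 4 hours (16 rows).
    resampled = patient_df.resample('15min').median()
    first_four_hours = resampled.iloc[:16]
    # Patients with any missing value after downsampling are discarded upstream.
    return first_four_hours
```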
# 1https://eicu-crd.mit.edu/
Task        AUROC (real)      AUROC (TSTR)    AUPRC (real)      AUPRC (TSTR)    AUPRC (random)
SpO2 < 95   0.9587 ± 0.0004   0.88 ± 0.01     0.9059 ± 0.0005   0.66 ± 0.02     0.16
HR < 70     0.9908 ± 0.0005   0.96 ± 0.01     0.9855 ± 0.0002   0.90 ± 0.02     0.26
HR > 100    0.9919 ± 0.0002   0.95 ± 0.01     0.9778 ± 0.0002   0.84 ± 0.03     0.18
RR < 13     0.9735 ± 0.0001   0.86 ± 0.01     0.9557 ± 0.0002   0.73 ± 0.02     0.26
RR > 20     0.963 ± 0.001     0.84 ± 0.02     0.891 ± 0.001     0.50 ± 0.06     0.1
MAP < 70    0.9717 ± 0.0001   0.875 ± 0.007   0.9653 ± 0.0001   0.82 ± 0.02     0.39
MAP > 110   0.960 ± 0.001     0.87 ± 0.04     0.8629 ± 0.0007   0.42 ± 0.07     0.05
Table 2: Performance of random forest classiï¬er for eICU tasks when trained with real data and when trained with synthetic data (test set is real), including random prediction baselines. AUPRC stands for area under the precision-recall curve, and AUROC stands for area under ROC curve. Italics denotes those tasks whose performance were optimised in cross-validation.
5.1 TSTR TASKS IN EICU
The data generated in a ICU is complex, so it is challenging for non-medical experts to spot patterns or trends on it. Thus, one plot showing synthetic ICU data would not provide enough information to evaluate its actual similarity to the real data. Therefore, we evaluate the performance of the ICU RCGAN using the TSTR method.
To perform the TSTR evaluation, we need a supervised task (or tasks) on the data. A relevant question in the ICU is whether or not a patient will become "critical" in the near future - a kind of early warning system. For a model generating dynamic time-series data, this is especially appropriate, as trends in the data are likely most predictive. Based on our four variables (SpO2, HR, RR, MAP) we define "critical thresholds" and generate binary labels of whether or not that variable will exceed the threshold in the next hour of the patient's stay - that is, between hour 4 and 5, since we consider the first four hours "observed". The thresholds are shown in the columns of Table 2. There is no upper threshold for SpO2, as it is a percentage with 100% denoting ideal conditions.
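The label construction is simple once the hour-4-to-5 measurements are extracted; below is a sketch of one plausible reading (a label fires if the variable crosses its critical threshold at any point in that hour), with array names chosen by us.

```python
import numpy as np

def critical_labels(next_hour_values, threshold, above=True):
    """next_hour_values: (n_patients, n_steps) measurements between hour 4 and 5 of the stay.

    Returns a binary label per patient: 1 if the variable crosses the critical
    threshold at any point during that hour, 0 otherwise.
    """
    crossed = next_hour_values > threshold if above else next_hour_values < threshold
    return crossed.any(axis=1).astype(int)

# e.g. labels for the "HR > 100" task, given a hypothetical array hr_next_hour:
# y_hr_high = critical_labels(hr_next_hour, threshold=100, above=True)
```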
As for MNIST, we âsampleâ labels by drawing them from the real data labels, and use these as conditioning inputs for the RCGAN. This ensures the label distribution in the synthetic dataset and the real dataset is the same, respecting the fact that the labels are not independent (a patient is unlikely to simultaneously suffer from high and low blood pressure).
Following Algorithm 1, we train the RCGAN for 1000 epochs, saving one version of the dataset every 50 epochs. Afterwards, we evaluate the synthetic data using TSTR. We use cross validation to select the best synthetic dataset based on the classiï¬er performance, but since we assume that it might be also used for unknown tasks, we use only 3 of the 7 tasks of interest to perform this cross validation step (denoted in italics in Table 2). The results of this experiment are presented in Table 2, which compares the performance achieved by a random forest classiï¬er that has been trained to predict the 7 tasks of interest, in one experiment with real data and in a different experiment with the synthetically generated data.
# 6 IS THE GAN JUST MEMORISING THE TRAINING DATA?
One explanation for the TSTR performance in MNIST and eICU could be that the GAN is simply "memorising" the training data and reproducing it. If this were the case, then the (potentially private) data used to train the GAN would be leaked, raising privacy concerns when used on sensitive medical data. It is key that the training data for the model should not be recoverable by an adversary. In addition, while the typical GAN objective incentivises the generator to reproduce training examples, we hope that it does not overï¬t to the training data, and learn an implicit distribution which is peaked at training examples, and negligible elsewhere.
To answer this question we perform three tests - one qualitative, two statistical, outlined in the following subsections. While these evaluations are empirical in nature, we still believe that the proposed and tested privacy evaluation measures can be very useful to quickly check privacy properties of RGAN generated data â but without strong privacy guarantees.
6.1 COMPARING THE DISTRIBUTION OF RECONSTRUCTION ERRORS
To test if the generated samples look "too similar" to the training set, we could generate a large number of samples and calculate the distance to the nearest neighbour (in the training set) to each generated sample. We could compare the distribution of these distances with those comparing the generated samples and a held-out test set. However, to get an accurate estimate of the distances, we may need to generate many samples, and correspondingly calculate many pairwise distances. Instead, we intentionally generate the nearest neighbour to each training (or test) set point, and then compare the distances.
We generate these nearest neighbours by minimising the reconstruction error between target y and the generated point; Lrecon(y)(Z) = 1 − K(G(Z), y), where K is the RBF kernel described in Section 3.1.1, with bandwidth σ chosen using the median heuristic (Bounliphone et al., 2015). We find Z by minimising the error until approximate convergence (when the gradient norm drops below a threshold).
We can then ask if we can distinguish the distribution of reconstruction errors for different input data. Speciï¬cally, we ask if we can distinguish the distribution of errors between the training set and the test set. The intuition is that if the model has "memorised" training data, it will achieve identiï¬ably lower reconstruction errors than with the test set. We use the Kolmogorov-Smirnov two-sample test to test if these distributions differ. For the RGAN generating sine waves, the p-value is 0.2 ± 0.1, for smooth signals it is 0.09 ± 0.04, and for the MNIST experiment shown in Figure 2b it is 0.38 ± 0.06. For the MNIST trained with RCGAN (TSTR results in Table 1), the p-value is 0.57 ± 0.18. We conclude that the distribution of reconstruction errors is not signiï¬cantly different between training and test sets in any of these cases, and that the model does not appear to be biased towards reconstructing training set examples.
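The distributional comparison in this test reduces to a standard two-sample Kolmogorov-Smirnov test on the two sets of reconstruction errors; a SciPy sketch is below (how the error arrays are produced is described above and not repeated here).

```python
from scipy.stats import ks_2samp

def memorisation_test(train_errors, test_errors):
    """Two-sided KS test on reconstruction errors for training vs. held-out examples.

    A small p-value would suggest the GAN reconstructs training examples
    systematically better than unseen test examples, i.e. possible memorisation.
    """
    statistic, p_value = ks_2samp(train_errors, test_errors)
    return statistic, p_value
```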
6.2 INTERPOLATION
Suppose that the model has overï¬t (the implicit distribution is highly peaked in the region of training examples), and most points in latent space map to (or near) training examples. If we take a smooth path in the latent space, we expect that at each point, the corresponding generated sample will have the appearance of the "closest" (in latent space) training example, with little variation until we reach the attractor basin of another training example, at which point the samples switch appearance.
We test this qualitatively as follows: we sample a pair of training examples (we conï¬rm by eye that they donât look "too similar"), and then "back-project" them into the latent space to ï¬nd the closest corresponding latent point, as described above. We then linearly interpolate between those latent points, and produce samples from the generator at each point. Figure 4 shows an example of this procedure using the "smooth function" dataset. The samples show a clear incremental variation between start and input sequences, contrary to what we would expect if the model had simply memorised the data.
6.3 COMPARING THE GENERATED SAMPLES
Rather than using a nearest-neighbours approach (as in Section 6.1), we can use the MMD three- sample test (Bounliphone et al., 2015) to compare the full set of generated samples. With X being the generated samples, Y and Z being the test and training set respectively, we ask if the MMD between X and Y is less than the MMD between X and Z. The test is constructed in this way because we expect that if the model has memorised the training data, that the MMD between the synthetic data and the training data will be signiï¬cantly lower than the MMD between the synthetic data and test data. In this case, the hypothesis that MMD(synthetic, test) ⤠MMD(synthetic, train) will be false. We are therefore testing (as in Section 6.1) if our null hypothesis (that the model has not memorised the training data) can be rejected. The average p-values we observed were: for the eICU data in Section 5.1: 0.40 ± 0.05, for MNIST data in Section 4.3: 0.47 ± 0.16, for sine waves: 0.41 ± 0.07, for smooth signals: 0.07 ± 0.04, and for the higher-resolution MNIST RGAN experiments in Section 4: 0.59 ± 0.12 (before correction for multiple hypothesis testing). We conclude that we cannot reject the null hypothesis that the MMD between the synthetic set and test set is at most as large as the MMD between the synthetic set and training set, indicating that the synthetic samples do not look more similar to the training set than they do to the test set.
# 7 TRAINING RGANS WITH DIFFERENTIAL PRIVACY
Although the analyses described in Section 6 indicate that the GAN is not preferentially generating training data points, we are conscious that medical data is often highly sensitive, and that privacy breaches are costly. To move towards stronger guarantees of privacy for synthetic medical data, we investigated the use of a differentially private training procedure for the GAN. Differential privacy is concerned with the influence of the presence or absence of individual records in a database. Intuitively, differential privacy places bounds on the probability of obtaining the same result (in our case, an instance of a trained GAN) given a small perturbation to the underlying dataset. If the training procedure guarantees (ε, δ) differential privacy, then given two "adjacent" datasets (differing in one record) D, D′,

P[M(D) ∈ S] ≤ e^ε P[M(D′) ∈ S] + δ     (1)

where M(D) is the GAN obtained from training on D, S is any subset of possible outputs of the training procedure (any subset of possible GANs), and the probability P takes into account the randomness in the procedure M(D). Thus, differential privacy requires that the distribution over GANs produced by M must vary "slowly" as D varies, where ε and δ bound this "slowness". Inspired by a recent preprint (Beaulieu-Jones et al., 2017), we apply the differentially private stochastic gradient descent (DP-SGD) algorithm of (Abadi et al., 2016) to the discriminator (as the generator does not "see" the private data directly). For further details on the algorithm (and the above definition of differential privacy), we refer to (Abadi et al., 2016) and (Dwork et al., 2006).
In practice, DP-SGD operates by clipping per-example gradients and adding noise in batches. This means the signal obtained from any individual example is limited, providing differential privacy. Some privacy budget is "spent" every time the training procedure calculates gradients for the discriminator, which enables us to evaluate the effective values of ε and δ throughout training. We use the moments accountant method from (Abadi et al., 2016) to track this privacy spending. Finding hyperparameters which yield both acceptable privacy and realistic GAN samples proved challenging. We focused on the MNIST and eICU tasks with RCGAN, using the TSTR evaluation.
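The core clipping-and-noising step of DP-SGD is sketched below for intuition; this schematic assumes per-example gradients have already been computed and flattened, and it omits the moments-accountant bookkeeping that actual implementations (following Abadi et al., 2016) perform.

```python
import torch

def dp_noisy_gradient(per_example_grads, clip_norm, noise_multiplier):
    """per_example_grads: (batch, n_params) flattened gradients, one row per example.

    Clip each example's gradient to L2 norm <= clip_norm, sum the clipped gradients,
    add Gaussian noise scaled by noise_multiplier * clip_norm, and average.
    The result replaces the ordinary minibatch gradient in the SGD update.
    """
    norms = per_example_grads.norm(dim=1, keepdim=True)
    scale = torch.clamp(clip_norm / (norms + 1e-12), max=1.0)
    clipped_sum = (per_example_grads * scale).sum(dim=0)
    noise = noise_multiplier * clip_norm * torch.randn_like(clipped_sum)
    return (clipped_sum + noise) / per_example_grads.size(0)
```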
For MNIST, we clipped gradients to 0.05 and added Gaussian noise with mean zero and standard deviation 0.05 × 2. For ε = 1 and δ < 1.8 × 10^-3, we achieved an accuracy of 0.75 ± 0.03. Sacrificing more privacy, with ε = 2 and δ < 2.5 × 10^-4, the accuracy is 0.77 ± 0.03. These results are far below the performance reported by the non-private GAN (Table 1), highlighting the compounded difficulty of generating a realistic dataset while maintaining privacy. For comparison, (Abadi et al., 2016) report an accuracy of 0.95 for training an MNIST classifier (on the full task) on a real dataset in a differentially private manner. (Please note, however, that our GAN model had to solve the more challenging task of modeling digits as a time series.)
For eICU, the results are shown in Table 3. For this case, we clipped gradients to 0.1 and added noise with standard deviation 0.1 × 2. In surprising contrast to our findings on MNIST, we observe that performance on the eICU tasks remains high with differentially private training, even for a stricter privacy setting (ε = 0.5 and δ < 9.8 × 10^-5). Visual assessment of samples generated by the differentially-private GAN indicates that while it is prone to producing less-realistic sequences, the mistakes it introduces appear to be unimportant for the tasks we consider. In particular, the DP-GAN produces more extreme-valued sequences, but as the tasks are to predict extreme values, it may be that the most salient part of the sequence is preserved. The possibility to introduce privacy-preserving noise which nonetheless allows for the training of downstream models suggests interesting directions of research in the intersection of privacy and GANs.
# 8 CONCLUSION
We have described, trained and evaluated a recurrent GAN architecture for generating real-valued sequential data, which we call RGAN. We have additionally developed a conditional variant (RCGAN) to generate synthetic datasets, consisting of real-valued time-series data with associated labels. As this task poses new challenges, we have presented novel solutions to deal with evaluation and questions of privacy. By generating labelled training data - by conditioning on the labels and generating the corresponding samples, we can evaluate the quality of the model using the âTSTR techniqueâ, where we train a model on the synthetic data, and evaluate it on a real, held-out test set. We have demonstrated this approach using âserialisedâ multivariate MNIST, and on a dataset of real ICU
Task        AUROC TSTR (DP)   AUPRC TSTR (DP)   AUPRC (random)
SpO2 < 95   0.859 ± 0.004     0.582 ± 0.008     0.16
HR < 70     0.86 ± 0.01       0.77 ± 0.03       0.27
HR > 100    0.90 ± 0.01       0.75 ± 0.03       0.16
RR < 13     0.86 ± 0.01       0.72 ± 0.02       0.26
RR > 20     0.87 ± 0.01       0.48 ± 0.03       0.09
MAP < 70    0.78 ± 0.01       0.705 ± 0.005     0.39
MAP > 110   0.83 ± 0.06       0.26 ± 0.06       0.05
Table 3: Performance of random forest classifier trained on synthetic data generated by the differentially private GAN, tested on real data. Compare with Table 2. The epoch from which data is generated was selected using a validation set, considering performance on a subset of the tasks (SpO2 < 95, HR > 100, and RR < 13, denoted in italics). In each replicate, the GAN was trained with (ε, δ) differential privacy for ε = 0.5 and δ < 9.8 × 10^-5.
patients, where models trained on the synthetic dataset achieved performance at times comparable to that of the real data. In domains such as medicine, where privacy concerns hinder the sharing of data, this implies that with reï¬nement of these techniques, models could be developed on synthetic data that are still valuable for real tasks. This could enable the development of synthetic âbenchmarkingâ datasets for medicine (or other sensitive domains), of the kind which have enabled great progress in other areas. We have additionally illustrated that such a synthetic dataset does not pose a major privacy concern or constitute a data leak for the original sensitive training data, and that for stricter privacy guarantees, differential privacy can be used in training the RCGAN with some loss to performance.
# REFERENCES
MartÃn Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mané, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. URL http://tensorflow.org/. Software available from tensorï¬ow.org.
Martín Abadi, Andy Chu, Ian Goodfellow, H Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. Deep learning with differential privacy. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, pp. 308-318. ACM, 2016.
Grigory Antipov, Moez Baccouche, and Jean-Luc Dugelay. Face aging with conditional generative adversarial networks. arXiv preprint arXiv:1702.01983, 2017.
Martin Arjovsky, Amar Shah, and Yoshua Bengio. Unitary evolution recurrent neural networks. In International Conference on Machine Learning, pp. 1120â1128, 2016.
Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein GAN. 26 January 2017.
Brett K. Beaulieu-Jones, Zhiwei Steven Wu, Chris Williams, and Casey S. Greene. Privacy-preserving generative deep neural networks support clinical data sharing. bioRxiv, 2017. doi: 10.1101/159756. URL https://www.biorxiv.org/content/early/2017/07/05/159756.
Wacha Bounliphone, Eugene Belilovsky, Matthew B Blaschko, Ioannis Antonoglou, and Arthur Gretton. A test of relative similarity for model selection in generative models. 14 November 2015.
Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. InfoGAN: In- terpretable representation learning by information maximizing generative adversarial nets. 12 June 2016.
Edward Choi, Siddharth Biswal, Bradley Malin, Jon Duke, Walter F Stewart, and Jimeng Sun. Generating multi-label discrete electronic health records using generative adversarial networks. 19 March 2017.
Cynthia Dwork, Krishnaram Kenthapadi, Frank McSherry, Ilya Mironov, and Moni Naor. Our data, ourselves: Privacy via distributed noise generation. In Eurocrypt, volume 4004, pp. 486â503. Springer, 2006.
Otto Fabius and Joost R van Amersfoort. Variational recurrent auto-encoders. arXiv preprint arXiv:1412.6581, 2014.
Jon Gauthier. Conditional generative adversarial nets for convolutional face generation. Class Project for Stanford CS231N: Convolutional Neural Networks for Visual Recognition, Winter semester, 2014(5):2, 2014.
Ian J Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial networks. 10 June 2014.
Arthur Gretton, Karsten M Borgwardt, Malte Rasch, Bernhard Schölkopf, and Alex J Smola. A kernel method for the two-sample-problem. In Advances in neural information processing systems, pp. 513â520, 2007.
Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron Courville. Improved training of wasserstein GANs. 31 March 2017.
Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8): 1735â1780, 1997.
Stephanie L Hyland and Gunnar Rätsch. Learning unitary operators with help from u (n). In AAAI 2017, 2017.
Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
Quoc V Le, Navdeep Jaitly, and Geoffrey E Hinton. A simple way to initialize recurrent networks of rectiï¬ed linear units. arXiv preprint arXiv:1504.00941, 2015.
Christian Ledig, Lucas Theis, Ferenc Huszár, Jose Caballero, Andrew Cunningham, Alejandro Acosta, Andrew Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, et al. Photo-realistic single image super-resolution using a generative adversarial network. arXiv preprint arXiv:1609.04802, 2016.
Jiwei Li, Will Monroe, Tianlin Shi, Alan Ritter, and Dan Jurafsky. Adversarial learning for neural dialogue generation. arXiv preprint arXiv:1701.06547, 2017.
Yujia Li, Kevin Swersky, and Richard Zemel. Generative moment matching networks. 10 February 2015.
Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014.
Olof Mogren. C-RNN-GAN: Continuous recurrent neural networks with adversarial training. 29 November 2016.
Aaron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. arXiv preprint arXiv:1601.06759, 2016.
Nicolas Papernot, Martín Abadi, Úlfar Erlingsson, Ian Goodfellow, and Kunal Talwar. Semi-supervised knowledge transfer for deep learning from private training data. arXiv preprint arXiv:1610.05755, 2016.
F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Pretten- hofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825â2830, 2011.
Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, and Honglak Lee. Generative adversarial text to image synthesis. In Proceedings of The 33rd International Conference on Machine Learning, volume 3, 2016.
Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training GANs. 10 June 2016.
Dougal J Sutherland, Hsiao-Yu Tung, Heiko Strathmann, Soumyajit De, Aaditya Ramdas, Alex Smola, and Arthur Gretton. Generative models and model criticism via optimized maximum mean discrepancy. 14 November 2016.
Lucas Theis, Aäron van den Oord, and Matthias Bethge. A note on the evaluation of generative models. 5 November 2015.
Yuhuai Wu, Yuri Burda, Ruslan Salakhutdinov, and Roger Grosse. On the quantitative analysis of Decoder-Based generative models. 14 November 2016.
Zhen Yang, Wei Chen, Feng Wang, and Bo Xu. Improving neural machine translation with conditional sequence generative adversarial nets. arXiv preprint arXiv:1703.04887, 2017.
Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. SeqGAN: Sequence generative adversarial nets with policy gradient. 18 September 2016. | {
"id": "1511.06434"
} |
1705.08292 | The Marginal Value of Adaptive Gradient Methods in Machine Learning | Adaptive optimization methods, which perform local optimization with a metric
constructed from the history of iterates, are becoming increasingly popular for
training deep neural networks. Examples include AdaGrad, RMSProp, and Adam. We
show that for simple overparameterized problems, adaptive methods often find
drastically different solutions than gradient descent (GD) or stochastic
gradient descent (SGD). We construct an illustrative binary classification
problem where the data is linearly separable, GD and SGD achieve zero test
error, and AdaGrad, Adam, and RMSProp attain test errors arbitrarily close to
half. We additionally study the empirical generalization capability of adaptive
methods on several state-of-the-art deep learning models. We observe that the
solutions found by adaptive methods generalize worse (often significantly
worse) than SGD, even when these solutions have better training performance.
These results suggest that practitioners should reconsider the use of adaptive
methods to train neural networks. | http://arxiv.org/pdf/1705.08292 | Ashia C. Wilson, Rebecca Roelofs, Mitchell Stern, Nathan Srebro, Benjamin Recht | stat.ML, cs.LG | null | null | stat.ML | 20170523 | 20180522 |
# The Marginal Value of Adaptive Gradient Methods in Machine Learning
Ashia C. Wilson*, Rebecca Roelofs*, Mitchell Stern*, Nathan Srebro†, and Benjamin Recht*
{ashia,roelofs,mitchell}@berkeley.edu, nati@ttic.edu, brecht@berkeley.edu
*University of California, Berkeley; †Toyota Technological Institute at Chicago
# Abstract
Adaptive optimization methods, which perform local optimization with a metric constructed from the history of iterates, are becoming increasingly popular for training deep neural networks. Examples include AdaGrad, RMSProp, and Adam. We show that for simple overparameterized problems, adaptive methods often ï¬nd drastically different solutions than gradient descent (GD) or stochastic gradient descent (SGD). We construct an illustrative binary classiï¬cation problem where the data is linearly separable, GD and SGD achieve zero test error, and AdaGrad, Adam, and RMSProp attain test errors arbitrarily close to half. We additionally study the empirical generalization capability of adaptive methods on several state- of-the-art deep learning models. We observe that the solutions found by adaptive methods generalize worse (often signiï¬cantly worse) than SGD, even when these solutions have better training performance. These results suggest that practitioners should reconsider the use of adaptive methods to train neural networks.
# Introduction
An increasing share of deep learning researchers are training their models with adaptive gradient methods [3, 12] due to their rapid training time [6]. Adam [8] in particular has become the default algorithm used across many deep learning frameworks. However, the generalization and out-of- sample behavior of such adaptive gradient methods remains poorly understood. Given that many passes over the data are needed to minimize the training objective, typical regret guarantees do not necessarily ensure that the found solutions will generalize [17].
Notably, when the number of parameters exceeds the number of data points, it is possible that the choice of algorithm can dramatically inï¬uence which model is learned [15]. Given two different minimizers of some optimization problem, what can we say about their relative ability to generalize? In this paper, we show that adaptive and non-adaptive optimization methods indeed ï¬nd very different solutions with very different generalization properties. We provide a simple generative model for binary classiï¬cation where the population is linearly separable (i.e., there exists a solution with large margin), but AdaGrad [3], RMSProp [21], and Adam converge to a solution that incorrectly classiï¬es new data with probability arbitrarily close to half. On this same example, SGD ï¬nds a solution with zero error on new data. Our construction suggests that adaptive methods tend to give undue inï¬uence to spurious features that have no effect on out-of-sample generalization.
We additionally present numerical experiments demonstrating that adaptive methods generalize worse than their non-adaptive counterparts. Our experiments reveal three primary ï¬ndings. First, with the same amount of hyperparameter tuning, SGD and SGD with momentum outperform adaptive methods on the development/test set across all evaluated models and tasks. This is true even when the adaptive methods achieve the same training loss or lower than non-adaptive methods. Second, adaptive methods often display faster initial progress on the training set, but their performance quickly
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
plateaus on the development/test set. Third, the same amount of tuning was required for all methods, including adaptive methods. This challenges the conventional wisdom that adaptive methods require less tuning. Moreover, as a useful guide to future practice, we propose a simple scheme for tuning learning rates and decays that performs well on all deep learning tasks we studied.
# 2 Background
The canonical optimization algorithms used to minimize risk are either stochastic gradient methods or stochastic momentum methods. Stochastic gradient methods can generally be written
w_{k+1} = w_k - \alpha_k \tilde{\nabla} f(w_k), \qquad (2.1)
where \tilde{\nabla} f(w_k) := \nabla f(w_k; x_{i_k}) is the gradient of some loss function f computed on a batch of data x_{i_k}. Stochastic momentum methods are a second family of techniques that have been used to accelerate training. These methods can generally be written as
w_{k+1} = w_k - \alpha_k \tilde{\nabla} f(w_k + \gamma_k(w_k - w_{k-1})) + \beta_k(w_k - w_{k-1}). \qquad (2.2)
The sequence of iterates (2.2) includes Polyak's heavy-ball method (HB) with \gamma_k = 0, and Nesterov's Accelerated Gradient method (NAG) [19] with \gamma_k = \beta_k.
Notable exceptions to the general formulations (2.1) and (2.2) are adaptive gradient and adaptive momentum methods, which choose a local distance measure constructed using the entire sequence of iterates (w_1, \ldots, w_k). These methods (including AdaGrad [3], RMSProp [21], and Adam [8]) can generally be written as
w_{k+1} = w_k - \alpha_k H_k^{-1} \tilde{\nabla} f(w_k + \gamma_k(w_k - w_{k-1})) + \beta_k H_k^{-1} H_{k-1}(w_k - w_{k-1}), \qquad (2.3)
where H_k := H(w_1, \ldots, w_k) is a positive definite matrix. Though not necessary, the matrix H_k is usually defined as
H_k = \mathrm{diag}\!\left( \Big( \sum_{i=1}^{k} \eta_i\, g_i \circ g_i \Big)^{1/2} \right), \qquad (2.4)
where \circ denotes the entry-wise or Hadamard product, g_k = \tilde{\nabla} f(w_k + \gamma_k(w_k - w_{k-1})), and \eta_k is some set of coefficients specified for each algorithm. That is, H_k is a diagonal matrix whose entries are the square roots of a linear combination of squares of past gradient components. We will use the fact that H_k is defined in this fashion in the sequel. For the specific settings of the parameters for many of the algorithms used in deep learning, see Table 1. Adaptive methods attempt to adjust an algorithm to the geometry of the data. In contrast, stochastic gradient descent and related variants use the \ell_2 geometry inherent to the parameter space, and are equivalent to setting H_k = I in the adaptive methods.
|         | G_k                                                                    | \alpha_k                      | \beta_k                              | \gamma |
|---------|------------------------------------------------------------------------|-------------------------------|--------------------------------------|--------|
| SGD     | I                                                                      | \alpha                        | 0                                    | 0      |
| HB      | I                                                                      | \alpha                        | \beta                                | 0      |
| NAG     | I                                                                      | \alpha                        | \beta                                | \beta  |
| AdaGrad | G_{k-1} + D_k                                                          | \alpha                        | 0                                    | 0      |
| RMSProp | \beta_2 G_{k-1} + (1-\beta_2) D_k                                      | \alpha                        | 0                                    | 0      |
| Adam    | (\beta_2/(1-\beta_2^k)) G_{k-1} + ((1-\beta_2)/(1-\beta_2^k)) D_k      | \alpha (1-\beta_1)/(1-\beta_1^k) | \beta_1 (1-\beta_1^{k-1})/(1-\beta_1^k) | 0   |

Table 1: Parameter settings of algorithms used in deep learning. Here, D_k = \mathrm{diag}(g_k \circ g_k) and G_k := H_k \circ H_k. We omit the additional \epsilon added to the adaptive methods, which is only needed to ensure non-singularity of the matrices H_k.
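To make the generic update (2.3) and (2.4) concrete, here is a minimal NumPy sketch of the AdaGrad row of Table 1 (G_k = G_{k-1} + D_k, \beta_k = \gamma_k = 0); the function names and the toy least-squares objective are illustrative assumptions rather than anything from the paper.

```python
import numpy as np

def adagrad_step(w, grad_fn, G, alpha, eps=1e-8):
    """One step of (2.3)-(2.4) with G_k = G_{k-1} + D_k and beta_k = gamma_k = 0 (AdaGrad)."""
    g = grad_fn(w)                 # stochastic or full gradient at w_k
    G = G + g * g                  # accumulate D_k = g_k o g_k (eta_i = 1 for AdaGrad)
    H = np.sqrt(G) + eps           # diagonal of H_k; eps keeps it non-singular
    return w - alpha * g / H, G    # w_{k+1} = w_k - alpha_k H_k^{-1} g_k

# toy usage on the least-squares objective 0.5 * ||Xw - y||_2^2
rng = np.random.RandomState(0)
X, y = rng.randn(20, 5), rng.randn(20)
grad = lambda w: X.T @ (X @ w - y)
w, G = np.zeros(5), np.zeros(5)
for _ in range(500):
    w, G = adagrad_step(w, grad, G, alpha=0.5)
```

Replacing H by a vector of ones in this sketch recovers plain SGD, which is exactly the sense in which non-adaptive methods use the \ell_2 geometry of the parameter space.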
In this context, generalization refers to the performance of a solution w on a broader population. Performance is often deï¬ned in terms of a different loss function than the function f used in training. For example, in classiï¬cation tasks, we typically deï¬ne generalization in terms of classiï¬cation error rather than cross-entropy.
# 2.1 Related Work
Understanding how optimization relates to generalization is a very active area of current machine learning research. Most of the seminal work in this area has focused on understanding how early stopping can act as implicit regularization [22]. In a similar vein, Ma and Belkin [10] have shown that gradient methods may not be able to find complex solutions at all in any reasonable amount of time. Hardt et al. [17] show that SGD is uniformly stable, and therefore solutions with low training error found quickly will generalize well. Similarly, using a stability argument, Raginsky et al. [16] have shown that Langevin dynamics can find solutions that generalize better than ordinary SGD in non-convex settings. Neyshabur, Srebro, and Tomioka [15] discuss how algorithmic choices can act as an implicit regularizer. In a similar vein, Neyshabur, Salakhutdinov, and Srebro [14] show that a different algorithm, one which performs descent using a metric that is invariant to re-scaling of the parameters, can lead to solutions which sometimes generalize better than SGD. Our work supports the work of [14] by drawing connections between the metric used to perform local optimization and the ability of the training algorithm to find solutions that generalize. However, we focus primarily on the different generalization properties of adaptive and non-adaptive methods.
A similar line of inquiry has been pursued by Keskar et al. [7]. Hochreiter and Schmidhuber [4] showed that âsharpâ minimizers generalize poorly, whereas âï¬atâ minimizers generalize well. Keskar et al. empirically show that Adam converges to sharper minimizers when the batch size is increased. However, they observe that even with small batches, Adam does not ï¬nd solutions whose performance matches state-of-the-art. In the current work, we aim to show that the choice of Adam as an optimizer itself strongly inï¬uences the set of minimizers that any batch size will ever see, and help explain why they were unable to ï¬nd solutions that generalized particularly well.
# 3 The potential perils of adaptivity
The goal of this section is to illustrate the following observation: when a problem has multiple global minima, different algorithms can ï¬nd entirely different solutions when initialized from the same point. In addition, we construct an example where adaptive gradient methods ï¬nd a solution which has worse out-of-sample error than SGD.
To simplify the presentation, let us restrict our attention to the binary least-squares classification problem, where we can easily compute the closed-form solution found by different methods. In least-squares classification, we aim to solve
\min_w \; R_S[w] := \tfrac{1}{2}\|Xw - y\|_2^2. \qquad (3.1)
Here X is an n \times d matrix of features and y is an n-dimensional vector of labels in \{-1, 1\}. We aim to find the best linear classifier w. Note that when d > n, if there is a minimizer with loss 0 then there is an infinite number of global minimizers. The question remains: what solution does an algorithm find and how well does it perform on unseen data?
# 3.1 Non-adaptive methods
Most common non-adaptive methods will find the same solution for the least squares objective (3.1). Any gradient or stochastic gradient of R_S must lie in the span of the rows of X. Therefore, any method that is initialized in the row span of X (say, for instance at w = 0) and uses only linear combinations of gradients, stochastic gradients, and previous iterates must also lie in the row span of X. The unique solution that lies in the row span of X also happens to be the solution with minimum Euclidean norm. We thus denote w^{SGD} = X^T (XX^T)^{-1} y. Almost all non-adaptive methods like SGD, SGD with momentum, mini-batch SGD, gradient descent, Nesterov's method, and the conjugate gradient method will converge to this minimum norm solution. The minimum norm solutions have the largest margin out of all solutions of the equation Xw = y. Maximizing margin has a long and fruitful history in machine learning, and thus it is a pleasant surprise that gradient descent naturally finds a max-margin solution.
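This observation is easy to verify numerically; the following NumPy sketch (illustrative, with assumed variable names) computes w^{SGD} = X^T (XX^T)^{-1} y for an overparameterized problem and checks that gradient descent initialized at w = 0 converges to it.

```python
import numpy as np

rng = np.random.RandomState(0)
n, d = 10, 50                                   # d > n: infinitely many zero-loss solutions
X, y = rng.randn(n, d), rng.choice([-1.0, 1.0], size=n)

# minimum Euclidean-norm interpolating solution
w_min_norm = X.T @ np.linalg.solve(X @ X.T, y)

# gradient descent on R_S[w] = 0.5 * ||Xw - y||^2, started at 0, stays in the row span of X
w = np.zeros(d)
for _ in range(20000):
    w -= 0.005 * X.T @ (X @ w - y)

print(np.allclose(w, w_min_norm, atol=1e-4))    # True: GD finds the minimum-norm solution
```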
# 3.2 Adaptive methods
Next, we consider adaptive methods where H_k is diagonal. While it is difficult to derive the general form of the solution, we can analyze special cases. Indeed, we can construct a variety of instances where adaptive methods converge to solutions with low \ell_\infty norm rather than low \ell_2 norm. For a vector x \in \mathbb{R}^q, let \mathrm{sign}(x) denote the function that maps each component of x to its sign.
Lemma 3.1 Suppose there exists a scalar c such that X \,\mathrm{sign}(X^T y) = cy. Then, when initialized at w_0 = 0, AdaGrad, Adam, and RMSProp all converge to the unique solution w \propto \mathrm{sign}(X^T y).
In other words, whenever there exists a solution of Xw = y that is proportional to sign(X T y), this is precisely the solution to which all of the adaptive gradient methods converge.
Proof We prove this lemma by showing that the entire trajectory of the algorithm consists of iterates whose components have constant magnitude. In particular, we will show that
wk = λk sign(X T y) ,
for some scalar λk. The initial point w0 = 0 satisï¬es the assertion with λ0 = 0.
Now, assume the assertion holds for all k \le t. Observe that
\nabla R_S(w_k + \gamma_k(w_k - w_{k-1})) = X^T\big( X(w_k + \gamma_k(w_k - w_{k-1})) - y \big)
= X^T\big\{ (\lambda_k + \gamma_k(\lambda_k - \lambda_{k-1}))\, X \,\mathrm{sign}(X^T y) - y \big\}
= \big\{ (\lambda_k + \gamma_k(\lambda_k - \lambda_{k-1}))\, c - 1 \big\}\, X^T y = \mu_k X^T y,
where the last equation defines \mu_k. Hence, letting g_k = \nabla R_S(w_k + \gamma_k(w_k - w_{k-1})), we also have
H_k = \mathrm{diag}\!\left( \Big( \sum_{s=1}^{k} \eta_s\, g_s \circ g_s \Big)^{1/2} \right) = \mathrm{diag}\!\left( \Big( \sum_{s=1}^{k} \eta_s \mu_s^2 \Big)^{1/2} |X^T y| \right) = \nu_k \,\mathrm{diag}\big( |X^T y| \big),
where |u| denotes the component-wise absolute value of a vector and the last equation deï¬nes νk. In sum,
w_{k+1} = w_k - \alpha_k H_k^{-1} \nabla R_S(w_k + \gamma_k(w_k - w_{k-1})) + \beta_k H_k^{-1} H_{k-1}(w_k - w_{k-1})
= \Big\{ \lambda_k - \frac{\alpha_k \mu_k}{\nu_k} + \frac{\beta_k \nu_{k-1}}{\nu_k}(\lambda_k - \lambda_{k-1}) \Big\}\, \mathrm{sign}(X^T y),
proving the claim.1
This solution is far simpler than the one obtained by gradient methods, and it would be surprising if such a simple solution would perform particularly well. We now turn to showing that such solutions can indeed generalize arbitrarily poorly.
# 3.3 Adaptivity can overï¬t
Lemma 3.1 allows us to construct a particularly pernicious generative model where AdaGrad fails to find a solution that generalizes. This example uses infinite dimensions to simplify bookkeeping, but one could take the dimensionality to be 6n. Note that in deep learning, we often have a number of parameters equal to 25n or more [20], so this is not a particularly high dimensional example by contemporary standards. For i = 1, \ldots, n, sample the label y_i to be 1 with probability p and -1 with probability 1 - p for some p > 1/2. Let x_i be an infinite dimensional vector with entries
x_{ij} = \begin{cases} y_i & j = 1 \\ 1 & j = 2, 3 \\ 1 & j = 4 + 5(i-1), \ldots, 4 + 5(i-1) + 2(1 - y_i) \\ 0 & \text{otherwise.} \end{cases}
1In the event that X T y has a component equal to 0, we deï¬ne 0/0 = 0 so that the update is well-deï¬ned.
In other words, the first feature of x_i is the class label. The next 2 features are always equal to 1. After this, there is a set of features unique to x_i that are equal to 1. If the class label is 1, then there is 1 such unique feature. If the class label is -1, then there are 5 such features. Note that the only discriminative feature useful for classifying data outside the training set is the first one! Indeed, one can perform perfect classification using only the first feature. The other features are all useless. Features 2 and 3 are constant, and each of the remaining features only appears for one example in the data set. However, as we will see, algorithms without such a priori knowledge may not be able to learn these distinctions. Take n samples and consider the AdaGrad solution for minimizing \tfrac{1}{2}\|Xw - y\|_2^2. First we show that the conditions of Lemma 3.1 hold. Let b = \sum_{i=1}^{n} y_i and assume for the sake of simplicity that b > 0. This will happen with arbitrarily high probability for large enough n. Define u = X^T y and observe that
u_j = \begin{cases} n & j = 1 \\ b & j = 2, 3 \\ y_i & j > 3 \text{ and } x_{ij} = 1 \\ 0 & \text{otherwise} \end{cases} \qquad \text{and} \qquad \mathrm{sign}(u_j) = \begin{cases} 1 & j = 1 \\ 1 & j = 2, 3 \\ y_i & j > 3 \text{ and } x_{ij} = 1 \\ 0 & \text{otherwise.} \end{cases}
Thus we have \langle \mathrm{sign}(u), x_i \rangle = y_i + 2 + y_i(3 - 2y_i) = 4y_i as desired. Hence, the AdaGrad solution w^{ada} \propto \mathrm{sign}(u). In particular, w^{ada} has all of its components equal to \pm\tau for some positive constant \tau. Now since w^{ada} has the same sign pattern as u, the first three components of w^{ada} are equal to each other. But for a new data point, x^{test}, the only features that are nonzero in both x^{test} and w^{ada} are the first three. In particular, we have
\langle w^{ada}, x^{test} \rangle = \tau\,(y^{test} + 2) > 0.
Therefore, the AdaGrad solution will label all unseen data as a positive example!
Now, we turn to the minimum 2-norm solution. Let P and N denote the set of positive and negative examples respectively. Let n_+ = |P| and n_- = |N|. Assuming \alpha_i = \alpha_+ when y_i = 1 and \alpha_i = \alpha_- when y_i = -1, we have that the minimum norm solution will have the form w^{SGD} = X^T \alpha = \sum_{i \in P} \alpha_+ x_i + \sum_{j \in N} \alpha_- x_j. These scalars can be found by solving XX^T \alpha = y. In closed form we have

\alpha_+ = \frac{4 n_- + 3}{9 n_+ + 3 n_- + 8 n_+ n_- + 3} \quad \text{and} \quad \alpha_- = \frac{-(4 n_+ + 1)}{9 n_+ + 3 n_- + 8 n_+ n_- + 3}. \qquad (3.2)
The algebra required to compute these coefï¬cients can be found in the Appendix. For a new data point, xtest, again the only features that are nonzero in both xtest and wSGD are the ï¬rst three. Thus we have
\langle w^{SGD}, x^{test} \rangle = y^{test}(n_+\alpha_+ - n_-\alpha_-) + 2(n_+\alpha_+ + n_-\alpha_-). Using (3.2), we see that whenever n_+ > n_-/3, the SGD solution makes no errors.
A formal construction of this example using a data-generating distribution can be found in Appendix C. Though this generative model was chosen to illustrate extreme behavior, it shares salient features with many common machine learning instances. There are a few frequent features, where some predictor based on them is a good predictor, though these might not be easy to identify from ï¬rst inspection. Additionally, there are many other features which are sparse. On ï¬nite training data it looks like such features are good for prediction, since each such feature is discriminatory for a particular training example, but this is over-ï¬tting and an artifact of having fewer training examples than features. Moreover, we will see shortly that adaptive methods typically generalize worse than their non-adaptive counterparts on real datasets.
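The construction above is also straightforward to simulate with a finite dimensionality of 3 + 5n; the sketch below (illustrative names and parameters) builds the design matrix, forms the sign-pattern solution predicted by Lemma 3.1 for the adaptive methods and the minimum-norm solution found by non-adaptive methods, and compares their predictions on fresh positive and negative test points.

```python
import numpy as np

def make_example(y, idx, dim):
    """Feature vector from Section 3.3: label, two constant features, then unique features."""
    x = np.zeros(dim)
    x[0], x[1], x[2] = y, 1.0, 1.0
    start = 3 + 5 * idx
    x[start: start + 1 + 2 * (1 - int(y))] = 1.0   # 1 unique feature if y = +1, 5 if y = -1
    return x

rng = np.random.RandomState(0)
n, p = 200, 0.8
y = np.where(rng.rand(n) < p, 1.0, -1.0)
dim = 3 + 5 * n
X = np.stack([make_example(y[i], i, dim) for i in range(n)])

w_ada = np.sign(X.T @ y)                        # adaptive solution of Lemma 3.1, up to scale
w_sgd = X.T @ np.linalg.solve(X @ X.T, y)       # minimum-norm (SGD) solution

# a fresh test point shares only the first three features with the training set
for y_test in (+1.0, -1.0):
    x_test = np.zeros(dim)
    x_test[:3] = [y_test, 1.0, 1.0]
    print(y_test, np.sign(w_ada @ x_test), np.sign(w_sgd @ x_test))
# the sign-pattern solution labels both test points +1; the minimum-norm solution gets both right
```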
# 4 Deep Learning Experiments
Having established that adaptive and non-adaptive methods can ï¬nd different solutions in the convex setting, we now turn to an empirical study of deep neural networks to see whether we observe a similar discrepancy in generalization. We compare two non-adaptive methods â SGD and the heavy ball method (HB) â to three popular adaptive methods â AdaGrad, RMSProp and Adam. We study performance on four deep learning problems: (C1) the CIFAR-10 image classiï¬cation task, (L1)
| Name | Network type               | Architecture | Dataset       | Framework  |
|------|----------------------------|--------------|---------------|------------|
| C1   | Deep Convolutional         | cifar.torch  | CIFAR-10      | Torch      |
| L1   | 2-Layer LSTM               | torch-rnn    | War & Peace   | Torch      |
| L2   | 2-Layer LSTM + Feedforward | span-parser  | Penn Treebank | DyNet      |
| L3   | 3-Layer LSTM               | emnlp2016    | Penn Treebank | Tensorflow |

Table 2: Summaries of the models we use for our experiments.2
character-level language modeling on the novel War and Peace, and (L2) discriminative parsing and (L3) generative parsing on Penn Treebank. In the interest of reproducibility, we use a network architecture for each problem that is either easily found online (C1, L1, L2, and L3) or produces state-of-the-art results (L2 and L3). Table 2 summarizes the setup for each application. We take care to make minimal changes to the architectures and their data pre-processing pipelines in order to best isolate the effect of each optimization algorithm.
We conduct each experiment 5 times from randomly initialized starting points, using the initialization scheme speciï¬ed in each code repository. We allocate a pre-speciï¬ed budget on the number of epochs used for training each model. When a development set was available, we chose the settings that achieved the best peak performance on the development set by the end of the ï¬xed epoch budget. CIFAR-10 did not have an explicit development set, so we chose the settings that achieved the lowest training loss at the end of the ï¬xed epoch budget.
Our experiments show the following primary ï¬ndings: (i) Adaptive methods ï¬nd solutions that gener- alize worse than those found by non-adaptive methods. (ii) Even when the adaptive methods achieve the same training loss or lower than non-adaptive methods, the development or test performance is worse. (iii) Adaptive methods often display faster initial progress on the training set, but their performance quickly plateaus on the development set. (iv) Though conventional wisdom suggests that Adam does not require tuning, we ï¬nd that tuning the initial learning rate and decay scheme for Adam yields signiï¬cant improvements over its default settings in all cases.
# 4.1 Hyperparameter Tuning
Optimization hyperparameters have a large influence on the quality of solutions found by optimization algorithms for deep neural networks. The algorithms under consideration have many hyperparameters: the initial step size \alpha_0, the step decay scheme, the momentum value \beta_0, the momentum schedule \beta_k, the smoothing term \epsilon, the initialization scheme for the gradient accumulator, and the parameter controlling how to combine gradient outer products, to name a few. A grid search on a large space of hyperparameters is infeasible even with substantial industrial resources, and we found that the parameters that impacted performance the most were the initial step size and the step decay scheme. We left the remaining parameters with their default settings. We describe the differences between the default settings of Torch, DyNet, and Tensorflow in Appendix B for completeness.
To tune the step sizes, we evaluated a logarithmically-spaced grid of ï¬ve step sizes. If the best performance was ever at one of the extremes of the grid, we would try new grid points so that the best performance was contained in the middle of the parameters. For example, if we initially tried step sizes 2, 1, 0.5, 0.25, and 0.125 and found that 2 was the best performing, we would have tried the step size 4 to see if performance was improved. If performance improved, we would have tried 8 and so on. We list the initial step sizes we tried in Appendix D.
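A compact illustration of this expansion rule is given below (the function and variable names are assumptions, not the authors' tooling): evaluate a log-spaced grid and keep extending it whenever the best value sits on an edge.

```python
def tune_step_size(evaluate, grid=(0.125, 0.25, 0.5, 1.0, 2.0), max_rounds=5):
    """Expand a log-spaced step-size grid until the best point is interior.

    `evaluate(step)` returns a score to maximize, e.g. dev accuracy or negative train loss.
    """
    grid = sorted(grid)
    scores = {s: evaluate(s) for s in grid}
    for _ in range(max_rounds):
        best = max(scores, key=scores.get)
        if best == max(grid):          # best at the large end: try a bigger step size
            new = max(grid) * 2.0
        elif best == min(grid):        # best at the small end: try a smaller step size
            new = min(grid) / 2.0
        else:                          # best is interior: stop expanding
            break
        grid.append(new)
        scores[new] = evaluate(new)
    return max(scores, key=scores.get)
```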
For step size decay, we explored two separate schemes, a development-based decay scheme (dev- decay) and a ï¬xed frequency decay scheme (ï¬xed-decay). For dev-decay, we keep track of the best validation performance so far, and at each epoch decay the learning rate by a constant factor δ if the model does not attain a new best value. For ï¬xed-decay, we decay the learning rate by a constant factor δ every k epochs. We recommend the dev-decay scheme when a development set is available;
https://github.com/szagoruyko/cifar.torch; (2) torch-rnn: https://github.com/jcjohnson/torch-rnn; (3) span-parser: https://github.com/jhcross/span-parser; (4) emnlp2016: https://github.com/cdg720/emnlp2016.
(a) CIFAR-10 (Train) (b) CIFAR-10 (Test)
Figure 1: Training (left) and top-1 test error (right) on CIFAR-10. The annotations indicate where the best performance is attained for each method. The shading represents ± one standard deviation computed across ï¬ve runs from random initial starting points. In all cases, adaptive methods are performing worse on both train and test than non-adaptive methods.
not only does it have fewer hyperparameters than the ï¬xed frequency scheme, but our experiments also show that it produces results comparable to, or better than, the ï¬xed-decay scheme.
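For concreteness, the dev-decay rule can be written as a small scheduler like the one below (an illustrative sketch with assumed names, not the code used in the experiments).

```python
class DevDecay:
    """Decay the learning rate by a constant factor delta whenever dev performance stalls."""

    def __init__(self, lr, delta=0.9):
        self.lr, self.delta, self.best = lr, delta, float("-inf")

    def step(self, dev_score):
        """Call once per epoch with the current dev-set score (higher is better)."""
        if dev_score > self.best:
            self.best = dev_score      # new best: keep the current learning rate
        else:
            self.lr *= self.delta      # no improvement: decay by delta
        return self.lr

# usage: scheduler = DevDecay(lr=1.0, delta=0.9); lr = scheduler.step(dev_accuracy)
```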
# 4.2 Convolutional Neural Network
We used the VGG+BN+Dropout network for CIFAR-10 from the Torch blog [23], which in prior work achieves a baseline test error of 7.55%. Figure 1 shows the learning curve for each algorithm on both the training and test dataset.
We observe that the solutions found by SGD and HB do indeed generalize better than those found by adaptive methods. The best overall test error found by a non-adaptive algorithm, SGD, was 7.65 ± 0.14%, whereas the best adaptive method, RMSProp, achieved a test error of 9.60 ± 0.19%.
Early on in training, the adaptive methods appear to be performing better than the non-adaptive methods, but starting at epoch 50, even though the training error of the adaptive methods is still lower, SGD and HB begin to outperform adaptive methods on the test error. By epoch 100, the performance of SGD and HB surpass all adaptive methods on both train and test. Among all adaptive methods, AdaGradâs rate of improvement ï¬atlines the earliest. We also found that by increasing the step size, we could drive the performance of the adaptive methods down in the ï¬rst 50 or so epochs, but the aggressive step size made the ï¬atlining behavior worse, and no step decay scheme could ï¬x the behavior.
# 4.3 Character-Level Language Modeling
Using the torch-rnn library, we train a character-level language model on the text of the novel War and Peace, running for a ï¬xed budget of 200 epochs. Our results are shown in Figures 2(a) and 2(b).
Under the ï¬xed-decay scheme, the best conï¬guration for all algorithms except AdaGrad was to decay relatively late with regards to the total number of epochs, either 60 or 80% through the total number of epochs and by a large amount, dividing the step size by 10. The dev-decay scheme paralleled (within the same standard deviation) the results of the exhaustive search over the decay frequency and amount; we report the curves from the ï¬xed policy.
Overall, SGD achieved the lowest test loss at 1.212 ± 0.001. AdaGrad has fast initial progress, but ï¬atlines. The adaptive methods appear more sensitive to the initialization scheme than non-adaptive methods, displaying a higher variance on both train and test. Surprisingly, RMSProp closely trails SGD on test loss, conï¬rming that it is not impossible for adaptive methods to ï¬nd solutions that generalize well. We note that there are step conï¬gurations for RMSProp that drive the training loss
below that of SGD, but these conï¬gurations cause erratic behavior on test, driving the test error of RMSProp above Adam.
# 4.4 Constituency Parsing
A constituency parser is used to predict the hierarchical structure of a sentence, breaking it down into nested clause-level, phrase-level, and word-level units. We carry out experiments using two state- of-the-art parsers: the stand-alone discriminative parser of Cross and Huang [2], and the generative reranking parser of Choe and Charniak [1]. In both cases, we use the dev-decay scheme with δ = 0.9 for learning rate decay.
Discriminative Model. Cross and Huang [2] develop a transition-based framework that reduces constituency parsing to a sequence prediction problem, giving a one-to-one correspondence between parse trees and sequences of structural and labeling actions. Using their code with the default settings, we trained for 50 epochs on the Penn Treebank [11], comparing labeled F1 scores on the training and development data over time. RMSProp was not implemented in the used version of DyNet, and we omit it from our experiments. Results are shown in Figures 2(c) and 2(d).
We ï¬nd that SGD obtained the best overall performance on the development set, followed closely by HB and Adam, with AdaGrad trailing far behind. The default conï¬guration of Adam without learning rate decay actually achieved the best overall training performance by the end of the run, but was notably worse than tuned Adam on the development set.
Interestingly, Adam achieved its best development F1 of 91.11 quite early, after just 6 epochs, whereas SGD took 18 epochs to reach this value and didnât reach its best F1 of 91.24 until epoch 31. On the other hand, Adam continued to improve on the training set well after its best development performance was obtained, while the peaks for SGD were more closely aligned.
Generative Model. Choe and Charniak [1] show that constituency parsing can be cast as a language modeling problem, with trees being represented by their depth-ï¬rst traversals. This formulation requires a separate base system to produce candidate parse trees, which are then rescored by the generative model. Using an adapted version of their code base,3 we retrained their model for 100 epochs on the Penn Treebank. However, to reduce computational costs, we made two minor changes: (a) we used a smaller LSTM hidden dimension of 500 instead of 1500, ï¬nding that performance decreased only slightly; and (b) we accordingly lowered the dropout ratio from 0.7 to 0.5. Since they demonstrated a high correlation between perplexity (the exponential of the average loss) and labeled F1 on the development set, we explored the relation between training and development perplexity to avoid any conï¬ation with the performance of a base parser.
Our results are shown in Figures 2(e) and 2(f). On development set performance, SGD and HB obtained the best perplexities, with SGD slightly ahead. Despite having one of the best performance curves on the training dataset, Adam achieves the worst development perplexities.
# 5 Conclusion
Despite the fact that our experimental evidence demonstrates that adaptive methods are not advan- tageous for machine learning, the Adam algorithm remains incredibly popular. We are not sure exactly as to why, but hope that our step-size tuning suggestions make it easier for practitioners to use standard stochastic gradient methods in their research. In our conversations with other researchers, we have surmised that adaptive gradient methods are particularly popular for training GANs [18, 5] and Q-learning with function approximation [13, 9]. Both of these applications stand out because they are not solving optimization problems. It is possible that the dynamics of Adam are accidentally well matched to these sorts of optimization-free iterative search procedures. It is also possible that carefully tuned stochastic gradient methods may work as well or better in both of these applications.
3While the code of Choe and Charniak treats the entire corpus as a single long example, relying on the network to reset itself upon encountering an end-of-sentence token, we use the more conventional approach of resetting the network for each example. This reduces training efï¬ciency slightly when batches contain examples of different lengths, but removes a potential confounding factor from our experiments.
It is an exciting direction of future work to determine which of these possibilities is true and to understand better as to why.
# Acknowledgements
The authors would like to thank Pieter Abbeel, Moritz Hardt, Tomer Koren, Sergey Levine, Henry Milner, Yoram Singer, and Shivaram Venkataraman for many helpful comments and suggestions. RR is generously supported by DOE award AC02-05CH11231. MS and AW are supported by NSF Graduate Research Fellowships. NS is partially supported by NSF-IIS-13-02662 and NSF-IIS- 15-46500, an Inter ICRI-RI award and a Google Faculty Award. BR is generously supported by NSF award CCF-1359814, ONR awards N00014-14-1-0024 and N00014-17-1-2191, the DARPA Fundamental Limits of Learning (Fun LoL) Program, a Sloan Research Fellowship, and a Google Faculty Award.
[Figure 2 legend: SGD, HB, AdaGrad, RMSProp, Adam, Adam (Default)]
(a) War and Peace (Training Set) (b) War and Peace (Test Set) (c) Discriminative Parsing (Training Set) (d) Discriminative Parsing (Development Set) (e) Generative Parsing (Training Set) (f) Generative Parsing (Development Set)
Figure 2: Performance curves on the training data (left) and the development/test data (right) for three experiments on natural language tasks. The annotations indicate where the best performance is attained for each method. The shading represents one standard deviation computed across ï¬ve runs from random initial starting points.
# References
[1] Do Kook Choe and Eugene Charniak. Parsing as language modeling. In Jian Su, Xavier Carreras, and Kevin Duh, editors, Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016, pages 2331â2336. The Association for Computational Linguistics, 2016.
[2] James Cross and Liang Huang. Span-based constituency parsing with a structure-label system and provably optimal dynamic oracles. In Jian Su, Xavier Carreras, and Kevin Duh, editors, Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, Austin, Texas, pages 1â11. The Association for Computational Linguistics, 2016.
[3] John C. Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121â2159, 2011.
[4] Sepp Hochreiter and Jürgen Schmidhuber. Flat minima. Neural Computation, 9(1):1â42, 1997.
[5] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with conditional adversarial networks. arXiv:1611.07004, 2016.
[6] Andrej Karpathy. A peek at trends in machine learning. https://medium.com/@karpathy/a-peek-at-trends-in-machine-learning-ab8a1085a106. Accessed: 2017-05-17.
[7] Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping Tak Peter Tang. On large-batch training for deep learning: Generalization gap and sharp minima. In The International Conference on Learning Representations (ICLR), 2017.
[8] D.P. Kingma and J. Ba. Adam: A method for stochastic optimization. The International Conference on Learning Representations (ICLR), 2015.
[9] Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. In International Conference on Learning Representations (ICLR), 2016.
[10] Siyuan Ma and Mikhail Belkin. Diving into the shallows: a computational perspective on large-scale shallow learning. arXiv:1703.10622, 2017.
[11] Mitchell P. Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. Building a large annotated corpus of english: The penn treebank. COMPUTATIONAL LINGUISTICS, 19(2):313â330, 1993.
[12] H. Brendan McMahan and Matthew Streeter. Adaptive bound optimization for online convex optimization. In Proceedings of the 23rd Annual Conference on Learning Theory (COLT), 2010.
[13] Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lilli- crap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In International Conference on Machine Learning (ICML), 2016.
[14] Behnam Neyshabur, Ruslan Salakhutdinov, and Nathan Srebro. Path-SGD: Path-normalized optimization in deep neural networks. In Neural Information Processing Systems (NIPS), 2015.
[15] Behnam Neyshabur, Ryota Tomioka, and Nathan Srebro. In search of the real inductive bias: On the role of implicit regularization in deep learning. In International Conference on Learning Representations (ICLR), 2015.
[16] Maxim Raginsky, Alexander Rakhlin, and Matus Telgarsky. Non-convex learning via stochastic gradient Langevin dynamics: a nonasymptotic analysis. arXiv:1702.03849, 2017.
[17] Benjamin Recht, Moritz Hardt, and Yoram Singer. Train faster, generalize better: Stability of stochastic gradient descent. In Proceedings of the International Conference on Machine Learning (ICML), 2016.
[18] Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, and Honglak Lee. Generative adversarial text to image synthesis. In Proceedings of The International Conference on Machine Learning (ICML), 2016.
[19] Ilya Sutskever, James Martens, George Dahl, and Geoffrey Hinton. On the importance of initialization and momentum in deep learning. In Proceedings of the International Conference on Machine Learning (ICML), 2013.
[20] Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Re- thinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
[21] T. Tieleman and G. Hinton. Lecture 6.5âRmsProp: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 2012.
[22] Yuan Yao, Lorenzo Rosasco, and Andrea Caponnetto. On early stopping in gradient descent learning. Constructive Approximation, 26(2):289â315, 2007.
# [23] Sergey Zagoruyko. Torch blog. http://torch.ch/blog/2015/07/30/cifar.html, 2015.
# A Full details of the minimum norm solution from Section 3.3
Full Details. The simplest derivation of the minimum norm solution uses the kernel trick. We know that the optimal solution has the form w^{SGD} = X^T \alpha where \alpha = K^{-1} y and K = XX^T. Note that
K_{ij} = \begin{cases} 4 & i = j \text{ and } y_i = 1 \\ 8 & i = j \text{ and } y_i = -1 \\ 3 & i \neq j \text{ and } y_i y_j = 1 \\ 1 & i \neq j \text{ and } y_i y_j = -1. \end{cases}
Positing that \alpha_i = \alpha_+ if y_i = 1 and \alpha_i = \alpha_- if y_i = -1 leaves us with the equations
(3 n_+ + 1)\alpha_+ + n_- \alpha_- = 1, \qquad n_+ \alpha_+ + (3 n_- + 3)\alpha_- = -1.
Solving this system of equations yields (3.2).
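As an independent check (not part of the paper), the system can be solved symbolically; a short SymPy sketch with n_p and n_m standing for n_+ and n_-:

```python
import sympy as sp

a_p, a_m = sp.symbols("alpha_p alpha_m")
n_p, n_m = sp.symbols("n_p n_m", positive=True)

sol = sp.solve(
    [sp.Eq((3 * n_p + 1) * a_p + n_m * a_m, 1),
     sp.Eq(n_p * a_p + (3 * n_m + 3) * a_m, -1)],
    [a_p, a_m],
)
print(sp.factor(sol[a_p]))   # (4*n_m + 3) / (8*n_m*n_p + 9*n_p + 3*n_m + 3)
print(sp.factor(sol[a_m]))   # -(4*n_p + 1) / (8*n_m*n_p + 9*n_p + 3*n_m + 3)
```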
# B Differences between Torch, DyNet, and Tensorï¬ow
|                      | Torch | Tensorflow | DyNet |
|----------------------|-------|------------|-------|
| SGD Momentum         | 0     | No default | 0.9   |
| AdaGrad Initial Mean | 0     | 0.1        | 0     |
| AdaGrad \epsilon     | 1e-10 | Not used   | 1e-20 |
| RMSProp Initial Mean | 0     | 1.0        | -     |
| RMSProp \beta        | 0.99  | 0.9        | -     |
| RMSProp \epsilon     | 1e-8  | 1e-10      | -     |
| Adam \beta_1         | 0.9   | 0.9        | 0.9   |
| Adam \beta_2         | 0.999 | 0.999      | 0.999 |

Table 3: Default hyperparameters for algorithms in deep learning frameworks.
Table 3 lists the default values of the parameters for the various deep learning packages used in our experiments. In Torch, the Heavy Ball algorithm is callable simply by changing default momentum away from 0 with nesterov=False. In Tensorï¬ow and DyNet, SGD with momentum is implemented separately from ordinary SGD. For our Heavy Ball experiments we use a constant momentum of β = 0.9.
# C Data-generating distribution
We sketch here how the example from Section 3.3 can be modified to use a data-generating distribution. To start, let \mathcal{D} be a uniform distribution over N examples constructed as before, and let D = \{(x_1, y_1), \ldots, (x_n, y_n)\} be a training set consisting of n i.i.d. draws from \mathcal{D}. We will ultimately want to take N to be large enough so that the probability of a repeated training example is small.
Let E be the event that there is a repeated training example. We have by a simple union bound that
P[E] = P\Big[ \bigcup_{i=1}^{n} \bigcup_{j=i+1}^{n} \{x_i = x_j\} \Big] \le \sum_{i=1}^{n} \sum_{j=i+1}^{n} P[x_i = x_j] \le \frac{n^2}{2N}.
If the training set has no repeats, the result from Section 3.3 tells us that SGD will learn a perfect classifier, while AdaGrad will find a solution that correctly classifies the training examples but predicts \hat{y} = 1 for all unseen data points. Hence, conditioned on \neg E, the error for SGD is

P_{(x,y)\sim\mathcal{D}}\big[ \mathrm{sign}(\langle w^{SGD}, x\rangle) \neq y \mid \neg E \big] = 0,
while the error for AdaGrad is
P_{(x,y)\sim\mathcal{D}}\big[ \mathrm{sign}(\langle w^{ada}, x\rangle) \neq y \mid \neg E \big]
= P\big[ \mathrm{sign}(\langle w^{ada}, x\rangle) \neq y \mid (x,y) \in D, \neg E \big]\, P\big[ (x,y) \in D \mid \neg E \big] + P\big[ \mathrm{sign}(\langle w^{ada}, x\rangle) \neq y \mid (x,y) \notin D, \neg E \big]\, P\big[ (x,y) \notin D \mid \neg E \big]
\ge 0 \cdot \frac{n}{N} + (1 - p)\Big(1 - \frac{n}{N}\Big).
Otherwise, if there is a repeat, we have trivial bounds of 0 and 1 for the conditional error in each case:
0 \le P_{(x,y)\sim\mathcal{D}}\big[ \mathrm{sign}(\langle w^{SGD}, x\rangle) \neq y \mid E \big] \le 1, \qquad 0 \le P_{(x,y)\sim\mathcal{D}}\big[ \mathrm{sign}(\langle w^{ada}, x\rangle) \neq y \mid E \big] \le 1.
Putting these together, we find that the unconditional error for SGD is bounded above by

P_{(x,y)\sim\mathcal{D}}\big[ \mathrm{sign}(\langle w^{SGD}, x\rangle) \neq y \big]
= P\big[ \mathrm{sign}(\langle w^{SGD}, x\rangle) \neq y \mid \neg E \big] P[\neg E] + P\big[ \mathrm{sign}(\langle w^{SGD}, x\rangle) \neq y \mid E \big] P[E]
= 0 \cdot P[\neg E] + P\big[ \mathrm{sign}(\langle w^{SGD}, x\rangle) \neq y \mid E \big] P[E]
\le 0 \cdot P[\neg E] + 1 \cdot P[E]
\le \frac{n^2}{2N},
while the unconditional error for AdaGrad is bounded below by
P_{(x,y)\sim\mathcal{D}}\big[ \mathrm{sign}(\langle w^{ada}, x\rangle) \neq y \big]
= P\big[ \mathrm{sign}(\langle w^{ada}, x\rangle) \neq y \mid \neg E \big] P[\neg E] + P\big[ \mathrm{sign}(\langle w^{ada}, x\rangle) \neq y \mid E \big] P[E]
\ge (1 - p)\Big(1 - \frac{n}{N}\Big) P[\neg E] + 0 \cdot P[E].
Let \epsilon > 0 be a tolerance. For the error of SGD to be at most \epsilon, it suffices to take N \ge \frac{n^2}{2\epsilon}, in which case we have
P_{(x,y)\sim\mathcal{D}}\big[ \mathrm{sign}(\langle w^{SGD}, x\rangle) \neq y \big] \le \frac{n^2}{2N} \le \epsilon.
For the error of AdaGrad to be at least (1 - p)(1 - \epsilon), it suffices to take N \ge \frac{n^2}{\epsilon}, assuming n \ge 2, in which case we have
P_{(x,y)\sim\mathcal{D}}\big[ \mathrm{sign}(\langle w^{ada}, x\rangle) \neq y \big] \ge (1 - p)\Big(1 - \frac{n}{N}\Big)\Big(1 - \frac{n^2}{2N}\Big) \ge (1 - p)\Big(1 - \frac{\epsilon}{2}\Big)\Big(1 - \frac{\epsilon}{2}\Big) = (1 - p)\Big(1 - \epsilon + \frac{\epsilon^2}{4}\Big) \ge (1 - p)(1 - \epsilon).
Both of these conditions will be satisfied by taking N \ge \max\big\{ \frac{n^2}{2\epsilon}, \frac{n^2}{\epsilon} \big\} = \frac{n^2}{\epsilon}.
Since the choice of \epsilon was arbitrary, taking \epsilon \to 0 drives the SGD error to 0 and the AdaGrad error to 1 - p, matching the original result in the non-i.i.d. setting.
# D Step sizes used for parameter tuning
# Cifar-10
⢠SGD: {2, 1, 0.5 (best), 0.25, 0.05, 0.01}
⢠HB: {2, 1, 0.5 (best), 0.25, 0.05, 0.01}
⢠AdaGrad: {0.1, 0.05, 0.01 (best, def.), 0.0075, 0.005}
⢠RMSProp: {0.005, 0.001, 0.0005, 0.0003 (best), 0.0001}
⢠Adam: {0.005, 0.001 (default), 0.0005, 0.0003 (best), 0.0001, 0.00005}
The default Torch step sizes for SGD (0.001) , HB (0.001), and RMSProp (0.01) were outside the range we tested.
# War & Peace
⢠SGD: {2, 1 (best), 0.5, 0.25, 0.125}
⢠HB: {2, 1 (best), 0.5, 0.25, 0.125}
⢠AdaGrad: {0.4, 0.2, 0.1, 0.05 (best), 0.025}
⢠RMSProp: {0.02, 0.01, 0.005, 0.0025, 0.00125, 0.000625, 0.0005 (best), 0.0001}
⢠Adam: {0.005, 0.0025, 0.00125, 0.000625 (best), 0.0003125, 0.00015625}
Under the ï¬xed-decay scheme, we selected learning rate decay frequencies from the set {10, 20, 40, 80, 120, 160, â} and learning rate decay amounts from the set {0.1, 0.5, 0.8, 0.9}.
# Discriminative Parsing
⢠SGD: {1.0, 0.5, 0.2, 0.1 (best), 0.05, 0.02, 0.01}
⢠HB: {1.0, 0.5, 0.2, 0.1, 0.05 (best), 0.02, 0.01, 0.005, 0.002}
⢠AdaGrad: {1.0, 0.5, 0.2, 0.1, 0.05, 0.02 (best), 0.01, 0.005, 0.002, 0.001, 0.0005, 0.0002, 0.0001}
⢠RMSProp: Not implemented in DyNet at the time of writing.
⢠Adam: {0.01, 0.005, 0.002 (best), 0.001 (default), 0.0005, 0.0002, 0.0001}
# Generative Parsing
⢠SGD: {1.0, 0.5 (best), 0.25, 0.1, 0.05, 0.025, 0.01}
⢠HB: {0.25, 0.1, 0.05, 0.02, 0.01 (best), 0.005, 0.002, 0.001}
⢠AdaGrad: {5.0, 2.5, 1.0, 0.5, 0.25 (best), 0.1, 0.05, 0.02, 0.01}
⢠RMSProp: {0.05, 0.02, 0.01, 0.005, 0.002 (best), 0.001, 0.0005, 0.0002, 0.0001}
⢠Adam: {0.005, 0.002, 0.001 (default), 0.0005 (best), 0.0002, 0.0001}
| {
"id": "1703.10622"
} |
1705.07565 | Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon | How to develop slim and accurate deep neural networks has become crucial for
real-world applications, especially for those employed in embedded systems.
Though previous work along this research line has shown some promising results,
most existing methods either fail to significantly compress a well-trained deep
network or require a heavy retraining process for the pruned deep network to
re-boost its prediction performance. In this paper, we propose a new layer-wise
pruning method for deep neural networks. In our proposed method, parameters of
each individual layer are pruned independently based on second order
derivatives of a layer-wise error function with respect to the corresponding
parameters. We prove that the final prediction performance drop after pruning
is bounded by a linear combination of the reconstructed errors caused at each
layer. Therefore, there is a guarantee that one only needs to perform a light
retraining process on the pruned network to resume its original prediction
performance. We conduct extensive experiments on benchmark datasets to
demonstrate the effectiveness of our pruning method compared with several
state-of-the-art baseline methods. | http://arxiv.org/pdf/1705.07565 | Xin Dong, Shangyu Chen, Sinno Jialin Pan | cs.NE, cs.CV, cs.LG | null | null | cs.NE | 20170522 | 20171109 |
# Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon
# Xin Dong Nanyang Technological University, Singapore n1503521a@e.ntu.edu.sg
# Shangyu Chen Nanyang Technological University, Singapore schen025@e.ntu.edu.sg
# Sinno Jialin Pan Nanyang Technological University, Singapore sinnopan@ntu.edu.sg
# Abstract
How to develop slim and accurate deep neural networks has become crucial for real- world applications, especially for those employed in embedded systems. Though previous work along this research line has shown some promising results, most existing methods either fail to signiï¬cantly compress a well-trained deep network or require a heavy retraining process for the pruned deep network to re-boost its prediction performance. In this paper, we propose a new layer-wise pruning method for deep neural networks. In our proposed method, parameters of each individual layer are pruned independently based on second order derivatives of a layer-wise error function with respect to the corresponding parameters. We prove that the ï¬nal prediction performance drop after pruning is bounded by a linear combination of the reconstructed errors caused at each layer. By controlling layer-wise errors properly, one only needs to perform a light retraining process on the pruned network to resume its original prediction performance. We conduct extensive experiments on benchmark datasets to demonstrate the effectiveness of our pruning method compared with several state-of-the-art baseline methods. Codes of our work are released at: https://github.com/csyhhu/L-OBS.
# 1 Introduction
Intuitively, deep neural networks [1] can approximate predictive functions of arbitrary complexity well when they have a huge number of parameters, i.e., many layers and neurons. In practice, the size of deep neural networks has increased tremendously, from LeNet-5 with less than 1M parameters [2] to VGG-16 with 133M parameters [3]. Such a large number of parameters not only makes deep models memory intensive and computationally expensive, but also urges researchers to dig into the redundancy of deep neural networks. On one hand, in neuroscience, recent studies point out that there are significant redundant neurons in the human brain, and memory may be related to the vanishing of specific synapses [4]. On the other hand, in machine learning, both theoretical analysis and empirical experiments have shown evidence of redundancy in several deep models [5, 6]. Therefore, it is possible to compress deep neural networks without or with little loss in prediction accuracy by pruning parameters with carefully designed criteria.
However, ï¬nding an optimal pruning solution is NP-hard because the search space for pruning is exponential in terms of parameter size. Recent work mainly focuses on developing efï¬cient algorithms to obtain a near-optimal pruning solution [7, 8, 9, 10, 11]. A common idea behind most exiting approaches is to select parameters for pruning based on certain criteria, such as increase in training error, magnitude of the parameter values, etc. As most of the existing pruning criteria are
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
designed heuristically, there is no guarantee that prediction performance of a deep neural network can be preserved after pruning. Therefore, a time-consuming retraining process is usually needed to boost the performance of the trimmed neural network.
Instead of spending effort on a whole deep network, a layer-wise pruning method, Net-Trim, was proposed to learn sparse parameters by minimizing the reconstructed error for each individual layer [6]. A theoretical analysis is provided showing that the overall performance drop of the deep network is bounded by the sum of the reconstructed errors for each layer. In this way, the pruned deep network has a theoretical guarantee on its error. However, as Net-Trim adopts the \ell_1-norm to induce sparsity for pruning, it fails to obtain a high compression ratio compared with other methods [9, 11].
In this paper, we propose a new layer-wise pruning method for deep neural networks, aiming to achieve the following three goals: 1) For each layer, parameters can be highly compressed after pruning, while the reconstructed error is small. 2) There is a theoretical guarantee on the overall prediction performance of the pruned deep neural network in terms of reconstructed errors for each layer. 3) After the deep network is pruned, only a light retraining process is required to resume its original prediction performance.
To achieve our ï¬rst goal, we borrow an idea from some classic pruning approaches for shallow neural networks, such as optimal brain damage (OBD) [12] and optimal brain surgeon (OBS) [13]. These classic methods approximate a change in the error function via functional Taylor Series, and identify unimportant weights based on second order derivatives. Though these approaches have proven to be effective for shallow neural networks, it remains challenging to extend them for deep neural networks because of the high computational cost on computing second order derivatives, i.e., the inverse of the Hessian matrix over all the parameters. In this work, as we restrict the computation on second order derivatives w.r.t. the parameters of each individual layer only, i.e., the Hessian matrix is only over parameters for a speciï¬c layer, the computation becomes tractable. Moreover, we utilize characteristics of back-propagation for fully-connected layers in well-trained deep networks to further reduce computational complexity of the inverse operation of the Hessian matrix.
To achieve our second goal, based on the theoretical results in [6], we provide a proof on the bound of performance drop before and after pruning in terms of the reconstructed errors for each layer. With such a layer-wise pruning framework using second-order derivatives for trimming parameters for each layer, we empirically show that after signiï¬cantly pruning parameters, there is only a little drop of prediction performance compared with that before pruning. Therefore, only a light retraining process is needed to resume the performance, which achieves our third goal.
The contributions of this paper are summarized as follows. 1) We propose a new layer-wise pruning method for deep neural networks, which is able to signiï¬cantly trim networks and preserve the prediction performance of networks after pruning with a theoretical guarantee. In addition, with the proposed method, a time-consuming retraining process for re-boosting the performance of the pruned network is waived. 2) We conduct extensive experiments to verify the effectiveness of our proposed method compared with several state-of-the-art approaches.
# 2 Related Works and Preliminary
Pruning methods have been widely used for model compression in early neural networks [7] and modern deep neural networks [6, 8, 9, 10, 11]. In the past, with relatively small sizes of training data, pruning was crucial to avoid overfitting. Classical methods include OBD and OBS. These methods aim to prune parameters with the least increase of error approximated by second order derivatives. However, computation of the Hessian inverse over all the parameters is expensive. In OBD, the Hessian matrix is restricted to be a diagonal matrix to make it computationally tractable. However, this approach implicitly assumes parameters have no interactions, which may hurt the pruning performance. Different from OBD, OBS makes use of the full Hessian matrix for pruning. It obtains better performance while being much more computationally expensive, even using the Woodbury matrix identity [14], which is an iterative method to compute the Hessian inverse. For example, using OBS on VGG-16 naturally requires computing the inverse of a Hessian matrix of size 133M x 133M.
Regarding pruning for modern deep models, Han et al. [9] proposed to delete unimportant parameters based on magnitude of their absolute values, and retrain the remaining ones to recover the original prediction performance. This method achieves considerable compression ratio in practice. However,
as pointed out by pioneering research work [12, 13], parameters with a low magnitude of their absolute values can be necessary for low error. Therefore, magnitude-based approaches may eliminate the wrong parameters, resulting in a big prediction performance drop right after pruning, and poor robustness before retraining [15]. Though some variants have tried to find better magnitude-based criteria [16, 17], the significant drop of prediction performance after pruning still remains. To avoid pruning wrong parameters, Guo et al. [11] introduced a mask matrix to indicate the state of network connections for dynamic pruning after each gradient descent step. Jin et al. [18] proposed an iterative hard thresholding approach to re-activate the pruned parameters after each pruning phase.
Besides Net-trim, which is a layer-wise pruning method discussed in the previous section, there is some other work proposed to induce sparsity or low-rank approximation on certain layers for pruning [19, 20]. However, as the \ell_0-norm or the \ell_1-norm sparsity-induced regularization term increases the difficulty of optimization, the pruned deep neural networks using these methods either obtain a much smaller compression ratio [6] compared with direct pruning methods or require retraining of the whole network to prevent accumulation of errors [10].
Optimal Brain Surgeon As our proposed layer-wise pruning method is an extension of OBS to deep neural networks, we briefly review the basics of OBS here. Consider a network in terms of parameters w trained to a local minimum in error. The functional Taylor series of the error w.r.t. w is: \delta E = \big(\frac{\partial E}{\partial w}\big)^T \delta w + \frac{1}{2}\,\delta w^T H\, \delta w + O(\|\delta w\|^3), where \delta denotes a perturbation of a corresponding variable, H = \partial^2 E / \partial w^2 \in \mathbb{R}^{m \times m} is the Hessian matrix, where m is the number of parameters, and O(\|\delta w\|^3) is the third and all higher order terms. For a network trained to a local minimum in error, the first term vanishes, and the term O(\|\delta w\|^3) can be ignored. In OBS, the goal is to set one of the parameters to zero, denoted by w_q (scalar), to minimize \delta E in each pruning iteration. The resultant optimization problem is written as follows,
\min_q \Big\{ \min_{\delta w} \frac{1}{2}\,\delta w^T H\, \delta w \Big\}, \quad \text{s.t.} \ \ e_q^T \delta w + w_q = 0, \qquad (1)
where e_q is the unit selecting vector whose q-th element is 1 and otherwise 0. As shown in [21], the optimization problem (1) can be solved by the Lagrange multipliers method. Note that a computational bottleneck of OBS is to calculate and store the non-diagonal Hessian matrix and its inverse, which makes it impractical for pruning deep models, which usually have a huge number of parameters.
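For reference, one OBS pruning step can be written in a few lines once the inverse Hessian is available; the NumPy sketch below (all names are illustrative assumptions) makes the bottleneck explicit: even this single update needs the full m x m inverse Hessian.

```python
import numpy as np

def obs_prune_one(w, H_inv):
    """One OBS step: zero out the weight with the smallest sensitivity L_q.

    L_q = w_q^2 / (2 [H^{-1}]_qq); the surviving weights are adjusted by
    delta_w = -(w_q / [H^{-1}]_qq) * H^{-1} e_q.
    """
    sensitivities = w ** 2 / (2.0 * np.diag(H_inv))
    q = int(np.argmin(sensitivities))
    delta_w = -(w[q] / H_inv[q, q]) * H_inv[:, q]
    return w + delta_w, q

# tiny example with a known positive definite Hessian
rng = np.random.RandomState(0)
A = rng.randn(6, 6)
H_inv = np.linalg.inv(A @ A.T + np.eye(6))      # inverse Hessian of a toy quadratic error
w = rng.randn(6)
w_pruned, q = obs_prune_one(w, H_inv)
print(q, w_pruned[q])                           # the pruned coordinate is (numerically) zero
```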
# 3 Layer-wise Optimal Brain Surgeon
# 3.1 Problem Statement
Given a training set of n instances, \{(x_j, y_j)\}_{j=1}^n, and a well-trained deep neural network of L layers (excluding the input layer)1, denote the input and the output of the whole deep neural network by X = [x_1, \ldots, x_n] \in \mathbb{R}^{d \times n} and Y \in \mathbb{R}^{m_L \times n}, respectively. For a layer l, we denote the input and output of the layer by Y^{l-1} = [y_1^{l-1}, \ldots, y_n^{l-1}] \in \mathbb{R}^{m_{l-1} \times n} and Y^l = [y_1^l, \ldots, y_n^l] \in \mathbb{R}^{m_l \times n}, respectively, where y_i^l can be considered as a representation of x_i in layer l, and Y^0 = X, Y^L = Y, and m_0 = d. Using one forward-pass step, we have Y^l = \sigma(Z^l), where Z^l = W_l^T Y^{l-1} with W_l \in \mathbb{R}^{m_{l-1} \times m_l} being the matrix of parameters for layer l, and \sigma(\cdot) is the activation function. For convenience in presentation and proof, we define the activation function \sigma(\cdot) as the rectified linear unit (ReLU) [22]. We further denote by \Theta_l \in \mathbb{R}^{m_{l-1} m_l \times 1} the vectorization of W_l. For a well-trained neural network, Y^l, Z^l and \Theta_l^* are all fixed matrices and contain most information of the neural network. The goal of pruning is to set the values of some elements in \Theta_l to be zero.
# 3.2 Layer-Wise Error
During layer-wise pruning in layer l, the input Y^{l-1} is fixed to be the same as in the well-trained network. Suppose we set the q-th element of \Theta_l, denoted by \Theta_{l,[q]}, to be zero, and get a new parameter vector, denoted by \hat{\Theta}_l. With Y^{l-1}, we obtain a new output for layer l, denoted by \hat{Y}^l. Consider the root of
1For simplicity in presentation, we suppose the neural network is a feed-forward (fully-connected) network. In Section 3.4, we will show how to extend our method to ï¬lter layers in Convolutional Neural Networks.
mean square error between Ŷ^l and Y^l over the whole training data as the layer-wise error:

ε^l = sqrt( (1/n) Σ_{j=1}^{n} (ŷ_j^l − y_j^l)^T (ŷ_j^l − y_j^l) ) = (1/√n) ‖Ŷ^l − Y^l‖_F,   (2)
where ‖·‖_F is the Frobenius norm. Note that for any single parameter pruning, one can compute its error ε_q^l, 1 ≤ q ≤ m_{l−1}m_l, and use it as a pruning criterion. This idea has been adopted by some existing methods [15]. However, in this way, for each parameter at each layer, one has to pass the whole training data once to compute its error measure, which is very computationally expensive. A more efficient approach is to make use of the second order derivatives of the error function to help identify the importance of each parameter.
We first define an error function E(·) as

E(Ẑ^l) = (1/n) ‖Ẑ^l − Z^l‖_F²,   (3)

where Z^l is the outcome of the weighted sum operation right before performing the activation function σ(·) at layer l of the well-trained neural network, and Ẑ^l is the outcome of the weighted sum operation after pruning at layer l. Note that Z^l is considered as the desired output of layer l before activation. The following lemma shows that the layer-wise error is bounded by the error defined in (3).
Lemma 3.1. With the error function (3) and Y^l = σ(Z^l), the following holds: ε^l ≤ sqrt(E(Ẑ^l)).
Therefore, finding parameters whose deletion (setting them to zero) minimizes (2) can be translated to finding parameters whose deletion minimizes the error function (3). Following [12, 13], the error function can be approximated by a functional Taylor series as follows,
E(Ẑ^l) − E(Z^l) = δE^l = (∂E^l/∂Θ_l)^T δΘ_l + (1/2) δΘ_l^T H_l δΘ_l + O(‖δΘ_l‖³),   (4)

where δ denotes a perturbation of a corresponding variable, H_l = ∂²E^l/∂Θ_l² is the Hessian matrix w.r.t. Θ_l, and O(‖δΘ_l‖³) is the third and all higher order terms. It can be proven that with the error function defined in (3), the first (linear) term ∂E^l/∂Θ_l evaluated at Θ_l = Θ*_l and the term O(‖δΘ_l‖³) are equal to 0.
Suppose that each time we aim to find one parameter Θ_l[q] to set to zero such that the change δE^l is minimal. Similar to OBS, we can formulate this as the following optimization problem:
min_q min_{δΘ_l} (1/2) δΘ_l^T H_l δΘ_l,   s.t.  e_q^T δΘ_l + Θ_l[q] = 0,   (5)
where e_q is the unit selecting vector whose q-th element is 1 and all others are 0. By using the Lagrange multipliers method as suggested in [21], we obtain the closed-form solutions of the optimal parameter change and the resultant minimal change in the error function as follows,

δΘ_l = − (Θ_l[q] / [H_l^{−1}]_{qq}) H_l^{−1} e_q,   L_q = δE^l = (Θ_l[q])² / (2 [H_l^{−1}]_{qq}).   (6)
Here L_q is referred to as the sensitivity of parameter Θ_l[q]. We then select parameters to prune based on their sensitivity scores instead of their magnitudes. As mentioned in Section 2, magnitude-based criteria, which merely consider the numerator in (6), are a poor estimate of the sensitivity of parameters. Moreover, since the inverse Hessian matrix over the training data is involved in (6), it is able to capture the data distribution when measuring the sensitivities of parameters. After pruning the parameter Θ_l[q] with the smallest sensitivity, the parameter vector is updated via Θ̂_l = Θ_l + δΘ_l. With Lemma 3.1 and (6), we have that the layer-wise error for layer l is bounded by
ε^l ≤ sqrt(E(Ẑ^l)) = sqrt(E(Ẑ^l) − E(Z^l)) = sqrt(δE^l) = |Θ_l[q]| / sqrt(2 [H_l^{−1}]_{qq}).   (7)
Note that the first equality is obtained because E(Z^l) = 0. It is worth mentioning that though we merely focus on layer l, the Hessian matrix is still a square matrix of size m_{l−1}m_l for each layer; we show how to compute it and its inverse efficiently in Section 3.4.
# 3.3 Layer-Wise Error Propagation and Accumulation
So far, we have shown how to prune parameters for each layer and estimate their introduced errors independently. However, our aim is to control the consistency of the network's final output Y^L before and after pruning. To do this, in the following, we show how the layer-wise errors propagate to the final output layer, and that the accumulated error over multiple layers will not explode.

Theorem 3.2. Given a deep network pruned via the layer-wise pruning introduced in Section 3.2, each layer has its own layer-wise error ε^l for 1 ≤ l ≤ L; then the accumulated error of the ultimate network output ε̃^L = (1/√n) ‖Ỹ^L − Y^L‖_F obeys:

ε̃^L ≤ Σ_{k=1}^{L−1} ( Π_{l=k+1}^{L} ‖Θ̂_l‖_F ) sqrt(δE^k) + sqrt(δE^L),   (8)

where Ỹ^l = σ(Ŵ_l^T Ỹ^{l−1}), for 2 ≤ l ≤ L, denotes the "accumulated pruned output" of layer l, and Ỹ^1 = σ(Ŵ_1^T X).
Theorem 3.2 shows that: 1) the layer-wise error of a layer l is scaled by a continued multiplication of the parameters' Frobenius norms over the following layers when it propagates to the final output, i.e., over the L − l layers after the l-th layer; 2) the final error of the ultimate network output is bounded by the weighted sum of the layer-wise errors. The proof of Theorem 3.2 can be found in the Appendix. Consider a general case with (6) and (8): the parameter Θ_l[q] with the smallest sensitivity in layer l is pruned by the i-th pruning operation, and this finally adds ( Π_{k=l+1}^{L} ‖Θ̂_k‖_F ) sqrt(δE^l) to the ultimate network output error. It is worth mentioning that although the layer-wise error seems to be scaled by a quite large product factor S_l = Π_{k=l+1}^{L} ‖Θ̂_k‖_F when it propagates to the final layer, this scaling is still tractable in practice, because the ultimate network output is scaled by the same product factor compared with the output of layer l. For example, we can roughly estimate the norm of the ultimate network output via ‖Y^L‖_F ≈ S_l ‖Y^l‖_F. If one pruning operation in the 1st layer causes the layer-wise error sqrt(δE^1), then the relative ultimate output error is

ε̃^L_r = ‖Ỹ^L − Y^L‖_F / ‖Y^L‖_F ≈ sqrt(δE^1) / ‖Y^1‖_F.

Thus, we can see that even though S_1 may be quite large, the relative ultimate output error would still be about sqrt(δE^1)/‖Y^1‖_F, which is controllable in practice, especially since most modern deep networks adopt a maxout layer as the ultimate output. S_0 is called the network gain, representing the ratio of the magnitude of the network output to the magnitude of the network input.
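As a quick numerical illustration of the bound in (8), the following sketch (ours, with made-up per-layer statistics rather than numbers from any experiment) accumulates the layer-wise contributions:

```python
import numpy as np

def accumulated_error_bound(sqrt_dE, theta_norms):
    """Right-hand side of (8): bound on the final-output error after layer-wise pruning.

    sqrt_dE     : list of sqrt(delta E^l) for l = 1..L (layer-wise error contributions)
    theta_norms : list of ||Theta_hat_l||_F for l = 1..L (Frobenius norms of pruned parameters)
    """
    L = len(sqrt_dE)
    bound = sqrt_dE[-1]                        # last layer enters unscaled
    for k in range(L - 1):                     # layers 1..L-1 (0-indexed here)
        scale = np.prod(theta_norms[k + 1:])   # product of norms of the layers above layer k+1
        bound += scale * sqrt_dE[k]
    return bound

# hypothetical per-layer errors and parameter norms for a 3-layer network
print(accumulated_error_bound([1e-3, 2e-3, 1e-3], [4.0, 3.0, 2.0]))
```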
# 3.4 The Proposed Algorithm
# 3.4.1 Pruning on Fully-Connected Layers
To selectively prune parameters, our approach needs to compute the inverse Hessian matrix at each layer to measure the sensitivities of each parameter of the layer, which is still computationally expensive though tractable. In this section, we present an efï¬cient algorithm that can reduce the size of the Hessian matrix and thus speed up computation on its inverse.
For each layer l, according to the definition of the error function used in Lemma 3.1, the first derivative of the error function with respect to Θ̂_l is

∂E^l/∂Θ̂_l = −(1/n) Σ_{j=1}^{n} (∂ẑ_j^l/∂Θ̂_l)(z_j^l − ẑ_j^l),

where ẑ_j^l and z_j^l are the j-th columns of the matrices Ẑ^l and Z^l, respectively, and the Hessian matrix is defined as:

H_l ≡ ∂²E^l/∂Θ̂_l² = (1/n) Σ_{j=1}^{n} [ (∂ẑ_j^l/∂Θ̂_l)(∂ẑ_j^l/∂Θ̂_l)^T − (∂²ẑ_j^l/∂Θ̂_l²)(z_j^l − ẑ_j^l)^T ].

Note that in most cases ẑ_j^l is quite close to z_j^l, so we simply ignore the term containing z_j^l − ẑ_j^l. Even in the late stage of pruning when this difference is not small, we can still ignore the corresponding term [13]. For layer l with m_l output units z_j^l = [z_{j1}^l, ..., z_{jm_l}^l]^T, we have

H_l = (1/n) Σ_{j=1}^{n} H_l^j = (1/n) Σ_{j=1}^{n} (∂z_j^l/∂Θ̂_l)(∂z_j^l/∂Θ̂_l)^T,   (9)
Figure 1: Illustration of the shape of the Hessian, where H ∈ R^{12×12} and H_{11}, H_{22}, H_{33} ∈ R^{4×4}. For feed-forward neural networks, unit z_i gets its activation via forward propagation: z = W^T y, where W ∈ R^{4×3}, y = [y_1, y_2, y_3, y_4]^T ∈ R^{4×1}, and z = [z_1, z_2, z_3]^T ∈ R^{3×1}. The Hessian matrix of z_1 w.r.t. all parameters is denoted by H^{(1)}. As illustrated in the figure, H^{(1)}'s elements are zero except for those corresponding to W_{·1} (the 1st column of W), which are denoted by H_{11}. H^{(2)} and H^{(3)} are similar. More importantly, H = diag(H_{11}, H_{22}, H_{33}), and H_{11} = H_{22} = H_{33}. As a result, one only needs to compute H_{11} to obtain H and its inverse, which significantly reduces the computational complexity.
where the Hessian matrix for a single instance j at layer l, H_l^j, is a block diagonal square matrix of size m_{l−1}m_l. Specifically, the gradient of the first output unit z_{j1}^l w.r.t. Θ̂_l is ∂z_{j1}^l/∂Θ̂_l = [∂z_{j1}^l/∂w_1, ..., ∂z_{j1}^l/∂w_{m_l}], where w_i is the i-th column of W_l. As z_{j1}^l is the layer output before the activation function, its gradient is simple to calculate, and more importantly all output units' gradients are equal to the layer input: ∂z_{ji}^l/∂w_k = y_j^{l−1} if k = i, otherwise ∂z_{ji}^l/∂w_k = 0. An illustrated example is shown in Figure 1, where we ignore the scripts j and l for simplicity in presentation.
It can be shown that the diagonal blocks H_{l,ii}^j ∈ R^{m_{l−1}×m_{l−1}} of the block diagonal square matrix H_l^j, where 1 ≤ i ≤ m_l, are all equal to y_j^{l−1}(y_j^{l−1})^T, and the inverse Hessian matrix H_l^{−1} is also a block diagonal square matrix with its diagonal blocks being ( (1/n) Σ_{j=1}^{n} y_j^{l−1}(y_j^{l−1})^T )^{−1}. In addition, Ψ_l = (1/n) Σ_{j=1}^{n} y_j^{l−1}(y_j^{l−1})^T is normally degenerate, and its pseudo-inverse can be calculated recursively via the Woodbury matrix identity [13]:

(Ψ_l^{j+1})^{−1} = (Ψ_l^j)^{−1} − [ (Ψ_l^j)^{−1} y_{j+1}^{l−1} (y_{j+1}^{l−1})^T (Ψ_l^j)^{−1} ] / [ n + (y_{j+1}^{l−1})^T (Ψ_l^j)^{−1} y_{j+1}^{l−1} ],

where Ψ_l^j = (1/n) Σ_{k=1}^{j} y_k^{l−1}(y_k^{l−1})^T, with (Ψ_l^0)^{−1} = αI, α ∈ [10^4, 10^5], and (Ψ_l)^{−1} = (Ψ_l^n)^{−1}. The size of the matrix to invert is thus reduced to m_{l−1}, and the computational complexity of calculating H_l^{−1} is O(n m_{l−1}²).
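A compact numpy sketch of this recursion (our own illustration; variable names are ours) that builds (Ψ_l)^{−1} directly from the layer inputs, without ever forming the full m_{l−1}m_l × m_{l−1}m_l Hessian:

```python
import numpy as np

def layer_hessian_inverse(Y_prev, alpha=1e5):
    """Recursive pseudo-inverse of Psi_l = (1/n) * sum_j y_j y_j^T via rank-one Woodbury updates.

    Y_prev : (m_prev, n) layer inputs y_j^{l-1} stacked as columns
    alpha  : initial scale, (Psi^0)^{-1} = alpha * I
    Because H_l is block diagonal with identical blocks Psi_l, this (m_prev x m_prev)
    matrix is all that is needed to read off [H_l^{-1}]_{qq} for every parameter q.
    """
    m_prev, n = Y_prev.shape
    P = alpha * np.eye(m_prev)                 # (Psi^0)^{-1}
    for j in range(n):
        y = Y_prev[:, j:j + 1]                 # column vector y_j^{l-1}
        Py = P @ y
        P = P - (Py @ Py.T) / (n + (y.T @ Py).item())
    return P

# sanity check against a direct pseudo-inverse on random data (should agree closely)
Y = np.random.randn(8, 200)
Psi = Y @ Y.T / 200.0
print(np.allclose(layer_hessian_inverse(Y, alpha=1e8), np.linalg.pinv(Psi), atol=1e-2))
```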
To make the estimated minimal change of the error function in (6) optimal, the layer-wise Hessian matrices need to be exact. Since the layer-wise Hessian matrices only depend on the corresponding layer inputs, they remain exact even after several pruning operations. The only quantity we need to control is the layer-wise error ε^l. Note that there may be a "pruning inflection point" after which the layer-wise error would increase dramatically. In practice, the user can incrementally increase the number of pruned parameters based on the sensitivity L_q, and make a trade-off between the pruning ratio and the performance drop to set a proper tolerable error threshold or pruning ratio.
The procedure of our pruning algorithm for a fully-connected layer l is summarized as follows.
Step 1: Get the layer input Y^{l−1} from a well-trained deep network.
Step 2: Calculate the Hessian matrix H_{l,ii}, for i = 1, ..., m_l, and its pseudo-inverse over the dataset, and get the whole pseudo-inverse of the Hessian matrix.
Step 3: Compute the optimal parameter change δΘ_l and the sensitivity L_q for each parameter at layer l. Set the tolerable error threshold ε.
Step 4: Pick the parameters Θ_l[q] with the smallest sensitivity scores.
Step 5: If sqrt(L_q) ≤ ε, prune the parameters Θ_l[q], get the new parameter values via Θ̂_l = Θ_l + δΘ_l, and repeat Step 4; otherwise stop pruning.
# 3.4.2 Pruning on Convolutional Layers
It is straightforward to generalize our method to a convolutional layer and its variants if we vectorize the filters of each channel and consider them as a special fully-connected layer that has multiple inputs (patches) from a single instance. Consider a vectorized filter w_i of channel i, 1 ≤ i ≤ m_l; it acts similarly to the parameters connected to the same output unit in a fully-connected layer. The difference is that, for a single input instance j, every step of the sliding window across it extracts a patch C_{jn} from the input volume. Similarly, each pixel z_{ijn}^l in the 2-dimensional activation map that gives the response to each patch corresponds to one output unit in a fully-connected layer. Hence, for convolutional layers, (9) is generalized as

H_l = (1/n) Σ_j Σ_n ( ∂z_{ijn}^l / ∂[w_1, ..., w_{m_l}] ) ( ∂z_{ijn}^l / ∂[w_1, ..., w_{m_l}] )^T,

where H_l is a block diagonal square matrix whose diagonal blocks are all the same. Then, we can slightly revise the computation of the Hessian matrix, and extend the algorithm for fully-connected layers to convolutional layers.
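A sketch (ours) of how a convolutional layer can be reduced to the fully-connected case: each sliding-window patch becomes one "input sample", so the same outer-product Hessian block can be accumulated over patches. A real implementation would use an optimized im2col; this loop version is only for illustration:

```python
import numpy as np

def conv_layer_inputs_as_patches(X, kh, kw, stride=1):
    """Extract all kh x kw patches so a conv layer looks like a fully-connected one.

    X : (n, C, H, W) input feature maps of the layer
    returns : (C*kh*kw, total_patches) matrix whose columns play the role of y^{l-1}
    """
    n, C, H, W = X.shape
    cols = []
    for j in range(n):
        for r in range(0, H - kh + 1, stride):
            for c in range(0, W - kw + 1, stride):
                cols.append(X[j, :, r:r + kh, c:c + kw].reshape(-1))
    return np.stack(cols, axis=1)

# the patch matrix can then be fed to the fully-connected routines above,
# e.g. Psi = patches @ patches.T / patches.shape[1]
patches = conv_layer_inputs_as_patches(np.random.randn(2, 3, 8, 8), 3, 3)
print(patches.shape)   # (27, 72) for this toy input
```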
Note that the accumulated error of the ultimate network output can be linearly bounded by the layer-wise errors as long as the model is feed-forward. Thus, L-OBS is a general pruning method that works with most feed-forward neural networks whose layer-wise Hessians can be computed expediently with slight modifications. However, if models have sizable layers like ResNet-101, L-OBS may not be economical because of the computational cost of the Hessian, which will be studied in our future work.
# 4 Experiments
In this section, we verify the effectiveness of our proposed Layer-wise OBS (L-OBS) using various architectures of deep neural networks in terms of compression ratio (CR), error rate before retraining, and the number of iterations required for retraining to resume satisfactory performance. CR is deï¬ned as the ratio of the number of preserved parameters to that of original parameters, lower is better. We conduct comparison results of L-OBS with the following pruning approaches: 1) Randomly pruning, 2) OBD [12], 3) LWC [9], 4) DNS [11], and 5) Net-Trim [6]. The deep architectures used for experiments include: LeNet-300-100 [2] and LeNet-5 [2] on the MNIST dataset, CIFAR-Net2 [24] on the CIFAR-10 dataset, AlexNet [25] and VGG-16 [3] on the ImageNet ILSVRC-2012 dataset. For experiments, we ï¬rst well-train the networks, and apply various pruning approaches on networks to evaluate their performance. The retraining batch size, crop method and other hyper-parameters are under the same setting as used in LWC. Note that to make comparisons fair, we do not adopt any other pruning related methods like Dropout or sparse regularizers on MNIST. In practice, L-OBS can work well along with these techniques as shown on CIFAR-10 and ImageNet.
# 4.1 Overall Comparison Results
The overall comparison results are shown in Table 1. In the ï¬rst set of experiments, we prune each layer of the well-trained LeNet-300-100 with compression ratios: 6.7%, 20% and 65%, achieving slightly better overall compression ratio (7%) than LWC (8%). Under comparable compression ratio, L-OBS has quite less drop of performance (before retraining) and lighter retraining compared with LWC whose performance is almost ruined by pruning. Classic pruning approach OBD is also compared though we observe that Hessian matrices of most modern deep models are strongly non-diagonal in practice. Besides relative heavy cost to obtain the second derivatives via the chain rule, OBD suffers from drastic drop of performance when it is directly applied to modern deep models.
To properly prune each layer of LeNet-5, we increase the tolerable error threshold ε from a relatively small initial value to incrementally prune more parameters, monitor the model performance, and stop pruning and fix ε once we encounter the "pruning inflection point" mentioned in Section 3.4. In practice, we prune each layer of LeNet-5 with compression ratios 54%, 43%, 6% and 25%, and retrain the pruned model with
2A revised AlexNet for CIFAR-10 containing three convolutional layers and two fully connected layers.
Table 1: Overall comparison results. (For iterative L-OBS, err. after pruning regards the last pruning stage.)
Method Networks Original error CR Err. after pruning Re-Error #Re-Iters. Random OBD LWC DNS L-OBS L-OBS (iterative) LeNet-300-100 LeNet-300-100 LeNet-300-100 LeNet-300-100 LeNet-300-100 LeNet-300-100 1.76% 1.76% 1.76% 1.76% 1.76% 1.76% 8% 8% 8% 1.8% 7% 1.5% 85.72% 86.72% 81.32% - 3.10% 2.43% 2.25% 1.96% 1.95% 1.99% 1.82% 1.96% 3.50 Ã 105 8.10 Ã 104 1.40 Ã 105 3.40 Ã 104 510 643 OBD LWC DNS L-OBS L-OBS (iterative) LeNet-5 LeNet-5 LeNet-5 LeNet-5 LeNet-5 1.27% 1.27% 1.27% 1.27% 1.27% 8% 8% 0.9% 7% 0.9% 86.72% 89.55% - 3.21% 2.04% 2.65% 1.36% 1.36% 1.27% 1.66% 2.90 Ã 105 9.60 Ã 104 4.70 Ã 104 740 841 LWC L-OBS CIFAR-Net CIFAR-Net 18.57% 18.57% 9% 9% 87.65% 21.32% 19.36% 18.76% 1.62 Ã 105 1020 DNS LWC L-OBS AlexNet (Top-1 / Top-5 err.) AlexNet (Top-1 / Top-5 err.) AlexNet (Top-1 / Top-5 err.) 43.30 / 20.08% 43.30 / 20.08% 43.30 / 20.08% 5.7% 11% 11% 43.91 / 20.72% - 76.14 / 57.68% 44.06 / 20.64% 50.04 / 26.87% 43.11 / 20.01% 7.30 Ã 105 5.04 Ã 106 1.81 Ã 104 DNS LWC L-OBS (iterative) VGG-16 (Top-1 / Top-5 err.) VGG-16 (Top-1 / Top-5 err.) VGG-16 (Top-1 / Top-5 err.) 31.66 / 10.12% 31.66 / 10.12% 31.66 / 10.12% 7.5% 7.5% 7.5% 63.38% / 38.69% 1.07 Ã 106 - 2.35 Ã 107 73.61 / 52.64% 32.43 / 11.12% 8.63 Ã 104 37.32 / 14.82% 32.02 / 10.97%
much fewer iterations compared with other methods (around 1 : 1000). As DNS retrains the pruned network after every pruning operation, we are not able to report its error rate of the pruned network before retraining. However, as can be seen, similar to LWC, the total number of iterations used by DNS for rebooting the network is very large compared with L-OBS. Results of retraining iterations of DNS are reported from [11] and the other experiments are implemented based on TensorFlow [26]. In addition, in the scenario of requiring high pruning ratio, L-OBS can be quite ï¬exibly adopted to an iterative version, which performs pruning and light retraining alternatively to obtain higher pruning ratio with relative higher cost of pruning. With two iterations of pruning and retraining, L-OBS is able to achieve as the same pruning ratio as DNS with much lighter total retraining: 643 iterations on LeNet-300-100 and 841 iterations on LeNet-5.
Regarding comparison experiments on CIFAR-Net, we ï¬rst well-train it to achieve a testing error of 18.57% with Dropout and Batch-Normalization. We then prune the well-trained network with LWC and L-OBS, and get the similar results as those on other network architectures. We also observe that LWC and other retraining-required methods always require much smaller learning rate in retraining. This is because representation capability of the pruned networks which have much fewer parameters is damaged during pruning based on a principle that number of parameters is an important factor for representation capability. However, L-OBS can still adopt original learning rate to retrain the pruned networks. Under this consideration, L-OBS not only ensures a warm-start for retraining, but also ï¬nds important connections (parameters) and preserve capability of representation for the pruned network instead of ruining model with pruning.
Regarding AlexNet, L-OBS achieves an overall compression ratio of 11% without loss of accuracy, taking 2.9 hours on 48 Intel Xeon(R) CPU E5-1650 cores to compute the Hessians and 3.1 hours on an NVIDIA Titan X GPU to retrain the pruned model (i.e., 18.1K iterations). The computation cost of the Hessian inverse in L-OBS is negligible compared with that of the heavy retraining in other methods. This claim can also be supported by an analysis of the time complexity. As mentioned in Section 3.4, the time complexity of calculating H_l^{−1} is O(n m_{l−1}²). Assume that neural networks are retrained via SGD; then the approximate time complexity of retraining is O(IdM), where d is the size of the mini-batch, and M and I are the total numbers of parameters and iterations, respectively. Considering that M ≥ Σ_{l=1}^{L} m_{l−1}², and that retraining in other methods always requires millions of iterations (Id ≫ n) as shown in the experiments, the complexity of calculating the Hessian (inverse) in L-OBS is quite economical. More interestingly, there is a trade-off between compression ratio and pruning (including retraining) cost. Compared with other methods, L-OBS is able to provide fast compression: it prunes AlexNet to 16% of its original size without substantively impacting accuracy (pruned top-5 error 20.98%) even without any retraining. We further apply L-OBS to VGG-16, which has 138M parameters. To achieve a more promising compression ratio, we perform pruning and retraining alternately twice. As can be seen from the table, L-OBS achieves an overall compression ratio of 7.5% without loss
Figure 2: (a) Top-5 test accuracy of L-OBS on ResNet-50 under different compression ratios. (b) Memory comparison between L-OBS and Net-Trim on MNIST.
Table 2: Comparison of Net-Trim and Layer-wise OBS on the second layer of LeNet-300-100.
Method     ξ^2_r   Pruned Error   CR      Method     ξ^2_r   Pruned Error   CR
Net-Trim   0.13    13.24%         19%     Net-Trim   0.62    28.45%         7.4%
L-OBS      0.70    11.34%         3.4%    L-OBS      0.37    4.56%          7.4%
L-OBS      0.71    10.83%         3.8%    Net-Trim   0.71    47.69%         4.2%
of accuracy taking 10.2 hours in total on 48 Intel Xeon(R) CPU E5-1650 to compute the Hessian inverses and 86.3K iterations to retrain the pruned model.
We also apply L-OBS to ResNet-50 [27]. To the best of our knowledge, this is the first work to perform pruning on ResNet. We perform pruning on all the layers: all layers share the same compression ratio, and we change this compression ratio in each experiment. The results are shown in Figure 2(a). As we can see, L-OBS is able to maintain ResNet's accuracy (above 85%) when the compression ratio is larger than or equal to 45%.
# 4.2 Comparison between L-OBS and Net-Trim
As our proposed L-OBS is inspired by Net-Trim, which adopts ℓ1-norm regularization to induce sparsity, we conduct comparison experiments between these two methods. In Net-Trim, networks are pruned by formulating layer-wise pruning as the optimization: min_{W_l} ‖W_l‖_1  s.t.  ‖σ(W_l^T Y^{l−1}) − Y^l‖_F ≤ ξ^l, where ξ^l plays the role of the tolerable layer-wise error in L-OBS. Due to the memory limitation of Net-Trim, we only prune the middle layer of LeNet-300-100 with L-OBS and Net-Trim under the same setting. As shown in Table 2, under the same pruned error rate, the CR of L-OBS outnumbers that of Net-Trim by about six times. In addition, Net-Trim encounters an explosion of memory and time on large-scale datasets and large-size parameters. Specifically, the space complexity of the positive semidefinite matrix Q in the quadratic constraints used in Net-Trim for optimization is O(2n m_l m_{l−1}²). For example, Q requires about 65.7 GB for 1,000 samples on MNIST, as illustrated in Figure 2(b). Moreover, Net-Trim is designed for multi-layer perceptrons, and it is not clear how to deploy it on convolutional layers.
# 5 Conclusion
We have proposed a novel L-OBS pruning framework to prune parameters based on second order derivative information of the layer-wise error function, and provided a theoretical guarantee on the overall error in terms of the reconstruction errors of each layer. Our proposed L-OBS can prune a considerable number of parameters with a tiny drop of performance and reduce or even omit retraining. More importantly, compared with previous methods, it identifies and preserves the truly important parts of the network when pruning, which may help in understanding the nature of neural networks.
# Acknowledgements
This work is supported by NTU Singapore Nanyang Assistant Professorship (NAP) grant M4081532.020, Singapore MOE AcRF Tier-2 grant MOE2016-T2-2-060, and Singapore MOE AcRF Tier-1 grant 2016-T1-001-159.
# References
[1] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436â444, 2015.
[2] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278â2324, 1998.
[3] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
[4] Luisa de Vivo, Michele Bellesi, William Marshall, Eric A Bushong, Mark H Ellisman, Giulio Tononi, and Chiara Cirelli. Ultrastructural evidence for synaptic scaling across the wake/sleep cycle. Science, 355(6324):507â510, 2017.
[5] Misha Denil, Babak Shakibi, Laurent Dinh, Nando de Freitas, et al. Predicting parameters in deep learning. In Advances in Neural Information Processing Systems, pages 2148–2156, 2013.
[6] A. Aghasi, N. Nguyen, and J. Romberg. Net-trim: A layer-wise convex pruning of deep neural networks. Journal of Machine Learning Research, 2016.
[7] Russell Reed. Pruning algorithms-a survey. IEEE transactions on Neural Networks, 4(5):740â 747, 1993.
[8] Yunchao Gong, Liu Liu, Ming Yang, and Lubomir Bourdev. Compressing deep convolutional networks using vector quantization. arXiv preprint arXiv:1412.6115, 2014.
[9] Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efï¬cient neural network. In Advances in Neural Information Processing Systems, pages 1135â1143, 2015.
[10] Yi Sun, Xiaogang Wang, and Xiaoou Tang. Sparsifying neural network connections for face recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4856â4864, 2016.
[11] Yiwen Guo, Anbang Yao, and Yurong Chen. Dynamic network surgery for efï¬cient dnns. In Advances In Neural Information Processing Systems, pages 1379â1387, 2016.
[12] Yann LeCun, John S Denker, Sara A Solla, Richard E Howard, and Lawrence D Jackel. Optimal brain damage. In NIPs, volume 2, pages 598â605, 1989.
[13] Babak Hassibi, David G Stork, et al. Second order derivatives for network pruning: Optimal brain surgeon. Advances in neural information processing systems, pages 164â164, 1993. [14] Thomas Kailath. Linear systems, volume 156. Prentice-Hall Englewood Cliffs, NJ, 1980. [15] Nikolas Wolfe, Aditya Sharma, Lukas Drude, and Bhiksha Raj. The incredible shrinking neural network: New perspectives on learning representations through the lens of pruning. arXiv preprint arXiv:1701.04465, 2017.
[16] Hengyuan Hu, Rui Peng, Yu-Wing Tai, and Chi-Keung Tang. Network trimming: A data-driven neuron pruning approach towards efï¬cient deep architectures. arXiv preprint arXiv:1607.03250, 2016.
[17] Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, and Hans Peter Graf. Pruning ï¬lters for efï¬cient convnets. arXiv preprint arXiv:1608.08710, 2016.
[18] Xiaojie Jin, Xiaotong Yuan, Jiashi Feng, and Shuicheng Yan. Training skinny deep neural networks with iterative hard thresholding methods. arXiv preprint arXiv:1607.05423, 2016.
[19] Cheng Tai, Tong Xiao, Yi Zhang, Xiaogang Wang, et al. Convolutional neural networks with low-rank regularization. arXiv preprint arXiv:1511.06067, 2015.
[20] Baoyuan Liu, Min Wang, Hassan Foroosh, Marshall Tappen, and Marianna Pensky. Sparse convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 806â814, 2015.
[21] R Tyrrell Rockafellar. Convex analysis. princeton landmarks in mathematics, 1997. [22] Xavier Glorot, Antoine Bordes, and Yoshua Bengio. Deep sparse rectiï¬er neural networks. In
Aistats, volume 15, page 275, 2011.
[23] Ian J Goodfellow, David Warde-Farley, Mehdi Mirza, Aaron C Courville, and Yoshua Bengio. Maxout networks. ICML (3), 28:1319â1327, 2013.
[24] Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. 2009.
[25] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classiï¬cation with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097â1105, 2012.
[26] MartÃn Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. Tensorï¬ow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.
[27] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770â778, 2016.
# APPENDIX
# Proof of Theorem 3.2
We prove Theorem 3.2 via induction. First, for l = 1, (8) holds as a special case of (2). Then suppose that Theorem 3.2 holds up to layer l:

ε̃^l ≤ Σ_{h=1}^{l−1} ( Π_{k=h+1}^{l} ‖Θ̂_k‖_F ) sqrt(δE^h) + sqrt(δE^l).   (10)
In order to show that (10) holds for layer l + 1 as well, we refer to Ŷ^{l+1} = σ(Ŵ_{l+1}^T Y^l) as the "layer-wise pruned output", where the input Y^l is fixed to be the same as in the originally well-trained network rather than the accumulated input Ỹ^l, and have the following theorem.

Theorem 5.1. Consider layer l + 1 in a pruned deep network; the difference between its accumulated pruned output, Ỹ^{l+1}, and its layer-wise pruned output, Ŷ^{l+1}, is bounded by:

‖Ỹ^{l+1} − Ŷ^{l+1}‖_F ≤ √n ‖Θ̂_{l+1}‖_F ε̃^l.   (11)
Proof sketch: Consider one arbitrary element of the layer-wise pruned output Ŷ^{l+1}:

ŷ^{l+1}_{ij} = σ( ŵ_i^T ỹ^l_j + ŵ_i^T (y^l_j − ỹ^l_j) )
            ≤ ỹ^{l+1}_{ij} + σ( ŵ_i^T (y^l_j − ỹ^l_j) )
            ≤ ỹ^{l+1}_{ij} + | ŵ_i^T (y^l_j − ỹ^l_j) |,

where ŵ_i is the i-th column of Ŵ_{l+1}. The first inequality is obtained because we suppose the activation function σ(·) is ReLU. Similarly, it holds for the accumulated pruned output:

ỹ^{l+1}_{ij} ≤ ŷ^{l+1}_{ij} + | ŵ_i^T (y^l_j − ỹ^l_j) |.

By combining the above two inequalities, we have

| ŷ^{l+1}_{ij} − ỹ^{l+1}_{ij} | ≤ | ŵ_i^T (y^l_j − ỹ^l_j) |,

and thus have the following inequality in matrix form,

‖ Ŷ^{l+1} − Ỹ^{l+1} ‖_F ≤ ‖ Ŵ_{l+1}^T (Y^l − Ỹ^l) ‖_F ≤ ‖ Θ̂_{l+1} ‖_F ‖ Y^l − Ỹ^l ‖_F.

As ε̃^l is defined as ε̃^l = (1/√n) ‖ Ỹ^l − Y^l ‖_F, we have

‖ Ŷ^{l+1} − Ỹ^{l+1} ‖_F ≤ √n ‖ Θ̂_{l+1} ‖_F ε̃^l.

This completes the proof of Theorem 5.1.
By using (2), (11) and the triangle inequality, we are now able to extend (10) to layer l + 1:

ε̃^{l+1} = (1/√n) ‖Ỹ^{l+1} − Y^{l+1}‖_F
        ≤ (1/√n) ‖Ỹ^{l+1} − Ŷ^{l+1}‖_F + (1/√n) ‖Ŷ^{l+1} − Y^{l+1}‖_F
        ≤ Σ_{h=1}^{l} ( Π_{k=h+1}^{l+1} ‖Θ̂_k‖_F ) sqrt(δE^h) + sqrt(δE^{l+1}).
Finally, we prove that (10) holds up for all layers, and Theorem 3.2 is a special case when l = L.
# Extensive Experiments and Details
# Redundancy of Networks
LeNet-300-100 is a classical feed-forward network, which has three fully connected layers, with 267K learnable parameters. LeNet-5 is a convolutional neural network that has two convolutional
Figure 2: Test accuracy on MNIST using LeNet-300-100 when continually pruning the ï¬rst layer until pruning ratio is 100%. Comparison on ability to preserve prediction between LWC, ApoZ and our proposed L-OBS.
Figure 3: Distribution of sensitivity of parameters in LeNet-300-100âs ï¬rst layer. More than 90% of parametersâ sensitivity scores are smaller than 0.001.
layers and two fully connected layers, with 431K learnable parameters. CIFAR-Net is a revised AlexNet for CIFAR-10 containing three convolutional layers and two fully connected layers.
We first validate the redundancy of networks and the ability of our proposed Layer-wise OBS to find the parameters with the smallest sensitivity scores, using LeNet-300-100 on MNIST. In all cases, we first obtain a well-trained network without dropout or regularization terms. Then, we use four kinds of pruning criteria: Random, LWC [9], ApoZW, and Layer-wise OBS to prune parameters, and evaluate the performance of the whole network after every 100 pruning operations. Here, LWC is a magnitude-based criterion proposed in [9], which prunes the parameters with the smallest absolute values. ApoZW is a revised version of ApoZ [16], which measures the importance of each parameter W_l^{ij} in layer l via ψ_l^{ij} = (1/n) Σ_{p=1}^{n} |y_{i,p}^{l−1} W_l^{ij}|. In this way, both the magnitude of the parameter and that of its inputs are taken into consideration.
Originally well-trained model LeNet-300-100 achieves 1.8% error rate on MNIST without dropout. Four pruning criteria are respectively conducted on the well-trained modelâs ï¬rst layer which has 235K parameters by ï¬xing the other two layersâ parameters, and test accuracy of the whole network is recorded every 100 pruning operations without any retraining. Overall comparison results are summarized in Figure 2.
We also visualize the distribution of parametersâ sensitivity scores Lqâs estimated by Layer-wise OBS in Figure 3, and ï¬nd that parameters of little impact on the layer output dominate. This further veriï¬es our hypothesis that deep neural networks usually contain a lot of redundant parameters. As shown in the ï¬gure, the distribution of parametersâ sensitivity scores in Layer-wise OBS are heavy-tailed. This means that a lot of parameters can be pruned with minor impact on the prediction outcome.
Figure 4: Retraining pattern of LWC and L-OBS. L-OBS has a better start point and totally resume original performance after 740 iterations for LeNet-5.
Random pruning gets the poorest result as expected but can still preserve prediction accuracy when the pruning ratio is smaller than 30%. This also indicates the high redundancy of the network.
Compared with LWC and ApoZW, L-OBS is able to preserve original accuracy until pruning ratio reaches about 96% which we call as âpruning inï¬ection pointâ. As mentioned in Section 3.4, the reason on this âpruning inï¬ection pointâ is that the distribution of parametersâ sensitivity scores is heavy-tailed and sensitivity scores after âpruning inï¬ection pointâ would be considerable all at once. The percentage of parameters with sensitivity smaller than 0.001 is about 92% which matches well with pruning ratio at inï¬ection point.
L-OBS can not only preserve modelsâ performance when pruning one single layer, but also ensures tiny drop of performance when pruning all layers in a model. This claim holds because of the theoretical guarantee on the overall prediction performance of the pruned deep neural network in terms of reconstructed errors for each layer in Section 3.3. As shown in Figure 4, L-OBS is able to resume original performance after 740 iterations for LeNet-5 with compression ratio of 7%.
# How To Set Tolerable Error Threshold
One of the most important bounds we proved is the theoretical guarantee on the overall prediction performance of the pruned deep neural network in terms of the reconstruction errors of each pruning operation in each layer. This bound enables us to prune a whole model layer by layer without concern, because the accumulated error of the ultimate network output is bounded by the weighted sum of layer-wise errors. As long as we control the layer-wise errors, we can control the accumulated error. Although L-OBS allows users to control the accumulated error of the ultimate network output ε̃^L = (1/√n) ‖Ỹ^L − Y^L‖_F, this error measures the difference between the network outputs before and after pruning, and is not strictly inversely proportional to the final accuracy. In practice, one can increase the tolerable error threshold ε from a relatively small initial value to incrementally prune more and more parameters while monitoring the model performance, and make a trade-off between compression ratio and performance drop. The corresponding relation (in the first layer of LeNet-300-100) between the tolerable error threshold and the pruning ratio is shown in Figure 5.
# Iterative Layer-wise OBS
As mentioned in Section 4.1, to achieve a better compression ratio, L-OBS can be quite flexibly adapted into an iterative version, which performs pruning and light retraining alternately. Specifically, the two-stage iterative L-OBS applied to LeNet-300-100, LeNet-5 and VGG-16 in this work follows the work flow: pre-train a well-trained model, prune, lightly retrain the model to reboot its performance to a degree, then prune and lightly retrain again. In practice, if the required compression ratio is beyond the "pruning inflection point", users have to deploy iterative L-OBS, though the ultimate compression ratio is not of too much importance. Experimental results are shown in Tables 3, 4 and 5,
Figure 5: The corresponding relation between tolerable error threshold and pruning ratio.
where CR(n) means ratio of the number of preserved parameters to the number of original parameters after the n-th pruning.
Table 3: For LeNet-300-100, iterative L-OBS(two-stage) achieves compression ratio of 1.5%
Layer Weights CR1 CR2 fc1 fc2 fc3 235K 30K 1K 7% 20% 70% 1% 4% 54% Total 266K 8.7% 1.5%
Table 4: For LeNet-5, iterative L-OBS(two-stage) achieves compression ratio of 0.9%
Layer Weights CR1 CR2 conv1 conv2 fc1 fc2 0.5K 25K 400K 5K 60% 60% 6% 30% 20% 1% 0.9% 8% Total 431K 9.5% 0.9%
Table 5: For VGG-16, iterative L-OBS (two-stage) achieves a compression ratio of 7.5%

Layer     conv1_1  conv1_2  conv2_1  conv2_2  conv3_1  conv3_2  conv3_3  conv4_1
Weights   2K       37K      74K      148K     295K     590K     590K     1M
CR1       70%      50%      70%      70%      60%      60%      60%      50%
CR2       58%      36%      42%      32%      53%      34%      39%      43%
Layer     conv4_2  conv4_3  conv5_1  conv5_2  conv5_3  fc6      fc7      fc8
Weights   2M       2M       2M       2M       2M       103M     17M      4M
CR1       50%      50%      70%      70%      60%      8%       10%      30%
CR2       24%      30%      35%      43%      32%      2%       5%       17%
15 | {
"id": "1607.03250"
} |
1705.07485 | Shake-Shake regularization | The method introduced in this paper aims at helping deep learning
practitioners faced with an overfit problem. The idea is to replace, in a
multi-branch network, the standard summation of parallel branches with a
stochastic affine combination. Applied to 3-branch residual networks,
shake-shake regularization improves on the best single shot published results
on CIFAR-10 and CIFAR-100 by reaching test errors of 2.86% and 15.85%.
Experiments on architectures without skip connections or Batch Normalization
show encouraging results and open the door to a large set of applications. Code
is available at https://github.com/xgastaldi/shake-shake | http://arxiv.org/pdf/1705.07485 | Xavier Gastaldi | cs.LG, cs.CV | null | null | cs.LG | 20170521 | 20170523 |
# Shake-Shake regularization
# Xavier Gastaldi xgastaldi.mba2011@london.edu
# Abstract
The method introduced in this paper aims at helping deep learning practition- ers faced with an overï¬t problem. The idea is to replace, in a multi-branch network, the standard summation of parallel branches with a stochastic afï¬ne combination. Applied to 3-branch residual networks, shake-shake regularization improves on the best single shot published results on CIFAR-10 and CIFAR- 100 by reaching test errors of 2.86% and 15.85%. Experiments on architec- tures without skip connections or Batch Normalization show encouraging re- sults and open the door to a large set of applications. Code is available at https://github.com/xgastaldi/shake-shake.
# Introduction
Deep residual nets (He et al., 2016a) were ï¬rst introduced in the ILSVRC & COCO 2015 competitions (Russakovsky et al., 2015; Lin et al., 2014), where they won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation. Since then, signiï¬cant effort has been put into trying to improve their performance. Scientists have investigated the impact of pushing depth (He et al., 2016b; Huang et al., 2016a), width (Zagoruyko & Komodakis, 2016) and cardinality (Xie et al., 2016; Szegedy et al., 2016; Abdi & Nahavandi, 2016).
While residual networks are powerful models, they still overï¬t on small datasets. A large number of techniques have been proposed to tackle this problem, including weight decay (Nowlan & Hinton, 1992), early stopping, and dropout (Srivastava et al., 2014). While not directly presented as a regularization method, Batch Normalization (Ioffe & Szegedy, 2015) regularizes the network by computing statistics that ï¬uctuate with each mini-batch. Similarly, Stochastic Gradient Descent (SGD) (Bottou, 1998; Sutskever et al., 2013) can also be interpreted as Gradient Descent using noisy gradients and the generalization performance of neural networks often depends on the size of the mini-batch (see Keskar et al. (2017)).
Pre-2015, most computer vision classiï¬cation architectures used dropout to combat overï¬t but the introduction of Batch Normalization reduced its effectiveness (see Ioffe & Szegedy (2015); Zagoruyko & Komodakis (2016); Huang et al. (2016b)). Searching for other regularization methods, researchers started to look at the possibilities speciï¬cally offered by multi-branch networks. Some of them noticed that, given the right conditions, it was possible to randomly drop some of the information paths during training (Huang et al., 2016b; Larsson et al., 2016).
Like these last 2 works, the method proposed in this document aims at improving the generalization ability of multi-branch networks by replacing the standard summation of parallel branches with a stochastic afï¬ne combination.
# 1.1 Motivation
Data augmentation techniques have traditionally been applied to input images only. However, for a computer, there is no real difference between an input image and an intermediate representation. As a consequence, it might be possible to apply data augmentation techniques to internal representations.
Shake-Shake regularization was created as an attempt to produce this sort of effect by stochastically "blending" 2 viable tensors.
# 1.2 Model description on 3-branch ResNets
Let x_i denote the tensor of inputs into residual block i. W_i^{(1)} and W_i^{(2)} are the sets of weights associated with the 2 residual units. F denotes the residual function, e.g. a stack of two 3x3 convolutional layers. x_{i+1} denotes the tensor of outputs from residual block i.
A typical pre-activation ResNet with 2 residual branches would follow this equation:
xi+1 = xi + F(xi, W (1) i ) + F(xi, W (2) i ) (1)
Proposed modiï¬cation: If αi is a random variable following a uniform distribution between 0 and 1, then during training:
xi+1 = xi + αiF(xi, W (1) i ) + (1 â αi)F(xi, W (2) i ) (2)
Following the same logic as for dropout, all αi are set to the expected value of 0.5 at test time.
This method can be seen as a form of drop-path (Larsson et al., 2016) where residual branches are scaled-down instead of being completely dropped (i.e. multiplied by 0).
Replacing binary variables with enhancement or reduction coefï¬cients is also explored in dropout variants like shakeout (Kang et al., 2016) and whiteout (Yinan et al., 2016). However, where these methods perform an element-wise multiplication between an input tensor and a noise tensor, shake-shake regularization multiplies the whole image tensor with just one scalar αi (or 1 â αi).
1.3 Training procedure
Figure 1: Left: Forward training pass. Center: Backward training pass. Right: At test time.
As shown in Figure 1, all scaling coefï¬cients are overwritten with new random numbers before each forward pass. The key to making this work is to repeat this coefï¬cient update operation before each backward pass. This results in a stochastic blend of forward and backward ï¬ows during training.
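A minimal PyTorch-style sketch of this procedure (our own illustration; the official implementation linked above is in Lua Torch, and the names here are ours). A custom autograd function blends the two branches with a coefficient α in the forward pass and redistributes the gradient with an independently drawn β in the backward pass:

```python
import torch

class ShakeShake(torch.autograd.Function):
    """Blend two branch outputs: alpha in the forward pass, an independent beta in the backward pass."""

    @staticmethod
    def forward(ctx, x1, x2):
        alpha = torch.rand(1, device=x1.device)          # new coefficient before every forward pass
        return alpha * x1 + (1.0 - alpha) * x2

    @staticmethod
    def backward(ctx, grad_output):
        beta = torch.rand(1, device=grad_output.device)  # fresh coefficient before every backward pass
        return beta * grad_output, (1.0 - beta) * grad_output

# usage inside a residual block at training time; the identity path x_i is added outside,
# and at test time one would use 0.5 * (branch1 + branch2) instead.
branch1 = torch.randn(4, 8, requires_grad=True)
branch2 = torch.randn(4, 8, requires_grad=True)
out = ShakeShake.apply(branch1, branch2)
out.sum().backward()
```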
Related to this idea are the works of An (1996) and Neelakantan et al. (2015). These authors showed that adding noise to the gradient during training helps training and generalization of complicated neural networks. Shake-Shake regularization can be seen as an extension of this concept where gradient noise is replaced by a form of gradient augmentation.
# Improving on the best single shot published results on CIFAR
# 2.1 CIFAR-10
# 2.1.1 Implementation details
The Shake-Shake code is based on fb.resnet.torch1 and is available at https://github.com/ xgastaldi/shake-shake. The ï¬rst layer is a 3x3 Conv with 16 ï¬lters, followed by 3 stages each having 4 residual blocks. The feature map size is 32, 16 and 8 for each stage. Width is doubled when downsampling. The network ends with a 8x8 average pooling and a fully connected layer (total 26 lay- ers deep). Residual paths have the following structure: ReLU-Conv3x3-BN-ReLU-Conv3x3-BN-Mul. The skip connections represent the identity function except during downsampling where a slightly customized structure consisting of 2 concatenated ï¬ows is used. Each of the 2 ï¬ows has the following components: 1x1 average pooling with step 2 followed by a 1x1 convolution. The input of one of the two ï¬ows is shifted by 1 pixel right and 1 pixel down to make the average pooling sample from a different position. The concatenation of the two ï¬ows doubles the width. Models were trained on the CIFAR-10 (Krizhevsky, 2009) 50k training set and evaluated on the 10k test set. Standard translation and ï¬ipping data augmentation is applied on the 32x32 input image. Due to the introduced stochasticity, all models were trained for 1800 epochs. Training starts with a learning rate of 0.2 and is annealed using a Cosine function without restart (see Loshchilov & Hutter (2016)). All models were trained on 2 GPUs with a mini-batch size of 128. Other implementation details are as in fb.resnet.torch.
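For reference, the residual-branch structure described above could be written as the following sketch (ours, in PyTorch; the bias=False choice and exact module arguments are assumptions, not taken from the released Torch code):

```python
import torch.nn as nn

def residual_branch(c_in, c_out, stride=1):
    """One residual branch: ReLU-Conv3x3-BN-ReLU-Conv3x3-BN (the final Mul is the shake coefficient)."""
    return nn.Sequential(
        nn.ReLU(inplace=False),
        nn.Conv2d(c_in, c_out, kernel_size=3, stride=stride, padding=1, bias=False),
        nn.BatchNorm2d(c_out),
        nn.ReLU(inplace=False),
        nn.Conv2d(c_out, c_out, kernel_size=3, stride=1, padding=1, bias=False),
        nn.BatchNorm2d(c_out),
    )
```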
# Inï¬uence of Forward and Backward training procedures
The base network is a 26 2x32d ResNet (i.e. the network has a depth of 26, 2 residual branches and the ï¬rst residual block has a width of 32). "Shake" means that all scaling coefï¬cients are overwritten with new random numbers before the pass. "Even" means that all scaling coefï¬cients are set to 0.5 before the pass. "Keep" means that we keep, for the backward pass, the scaling coefï¬cients used during the forward pass. "Batch" means that, for each residual block i, we apply the same scaling coefï¬cient for all the images in the mini-batch. "Image" means that, for each residual block i, we apply a different scaling coefï¬cient for each image in the mini-batch (see Image level update procedure below).
Image level update procedure: Let x0 denote the original input mini-batch tensor of dimensions 128x3x32x32. The ï¬rst dimension « stacks » 128 images of dimensions 3x32x32. Inside the second stage of a 26 2x32d model, this tensor is transformed into a mini-batch tensor xi of dimensions 128x64x16x16. Applying Shake-Shake regularization at the Image level means slicing this tensor along the ï¬rst dimension and, for each of the 128 slices, multiplying the jth slice (of dimensions 64x16x16) with a scalar αi.j (or 1 â αi.j).
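A hedged sketch (ours) of the difference between "Batch" and "Image" level coefficients: only the shape of α changes, one scalar shared by the whole mini-batch versus one scalar per image broadcast over the remaining dimensions.

```python
import torch

def shake_blend(x1, x2, level="Image"):
    """Blend two branch tensors of shape (N, C, H, W) with batch-level or image-level coefficients."""
    if level == "Batch":
        alpha = torch.rand(1, 1, 1, 1, device=x1.device)           # one scalar for the whole mini-batch
    else:  # "Image"
        alpha = torch.rand(x1.size(0), 1, 1, 1, device=x1.device)  # one scalar per image, broadcast over C, H, W
    return alpha * x1 + (1.0 - alpha) * x2

# the 128 x 64 x 16 x 16 example from the text
x1 = torch.randn(128, 64, 16, 16)
x2 = torch.randn(128, 64, 16, 16)
out = shake_blend(x1, x2, level="Image")
```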
The numbers in Table 1 represent the average of 3 runs except for the 96d models which were run 5 times. What can be observed in Table 1 and Figure 2 is that "Shake-Keep" or "S-K" models (i.e. "Shake" Backward) do not have a particularly strong effect on the error rate. The network seems to be able to see through the perturbations when the weight update is done with the same ratios as during the forward pass. "Even-Shake" only works when applied at the "Image" level. "Shake-Even" and "Shake-Shake" models all produce strong results at 32d but the better training curves of "Shake-Shake" models start to make a difference when the number of ï¬lters of the ï¬rst residual block is increased to 64d. Applying coefï¬cients at the "Image" level seems to improve regularization.
# 2.2 CIFAR-100
The network architecture chosen for CIFAR-100 is a ResNeXt without pre-activation (this model gives slightly better results on CIFAR-100 than the model used for CIFAR-10). Hyperparameters are the same as in Xie et al. (2016) except for the learning rate which is annealed using a Cosine function and the number of epochs which is increased to 1800. The network in Table 2 is a ResNeXt-29 2x4x64d (2 residual branches with 4 grouped convolutions, each with 64 channels). Due to the
# 1https://github.com/facebook/fb.resnet.torch
Table 1: Error rates (%) on CIFAR-10. Results that surpass all competing methods by more than 0.1% are bold and the overall best result is blue.
Model Forward Backward Level 26 2x32d 26 2x64d 26 2x96d Even Even n/a 4.27 3.76 3.58 Even Shake Shake Shake Shake Keep Even Shake Batch Batch Batch Batch 4.44 4.11 3.47 3.67 - - 3.30 3.07 - - - - Even Shake Shake Shake Shake Keep Even Shake Image Image Image Image 4.11 4.09 3.47 3.55 - - 3.20 2.98 - - - 2.86
Figure 2: Left: Training curves of a selection of 32d models. Right: Training curves (dark) and test curves (light) of the 96d models.
combination of the larger model (34.4M parameters) and the long training time, fewer tests were performed than on CIFAR-10.
Table 2: Error rates (%) on CIFAR-100. Results that surpass all competing methods by more than 0.5% are bold and the overall best result is blue.
Model Forward Backward Level Runs 29 2x4x64d Even Even n/a 2 16.34 Shake Shake Even Shake Image Image 3 1 15.85 15.97
Interestingly, a key hyperparameter on CIFAR-100 is the batch size which, compared to CIFAR-10, has to be reduced from 128 to 32 if using 2 GPUs.2 Without this reduction, the E-E-B network does not produce competitive results. As shown in Table 2, the increased regularization produced by the smaller batch size impacts the training procedure selection and makes S-E-I a slightly better choice.
# 2As per notes in https://github.com/facebookresearch/ResNeXt
# 2.3 Comparisons with state-of-the-art results
At the time of writing, the best single shot model on CIFAR-10 is a DenseNet-BC k=40 (3.46% error rate) with 25.6M parameters. The second best model is a ResNeXt-29, 16x64d (3.58% error rate) with 68.1M parameters. A small 26 2x32d "Shake-Even-Image" model with 2.9M parameters obtains approximately the same error rate. This is roughly 9 times less parameters than the DenseNet model and 23 times less parameters than the ResNeXt model. A 26 2x96d "Shake-Shake-Image" ResNet with 26.2M parameters, reaches a test error of 2.86% (Average of 5 runs - Median 2.87%, Min = 2.72%, Max = 2.95%).
On CIFAR-100, a few hyperparameter modiï¬cations of a standard ResNeXt-29 8x64d (batchsize, no pre-activation, longer training time and cosine annealing) lead to a test error of 16.34%. Adding shake-even regularization reduces the test error to 15.85% (Average of 3 runs - Median 15.85%, Min = 15.66%, Max = 16.04%).
Table 3: Test error (%) and model size on CIFAR. Best results are blue.
Method               Depth   Params   C10    C100
Wide ResNet          28      36.5M    3.8    18.3
ResNeXt-29, 16x64d   29      68.1M    3.58   17.31
DenseNet-BC (k=40)   190     25.6M    3.46   17.18
C10 Model S-S-I      26      26.2M    2.86   -
C100 Model S-E-I     29      34.4M    -      15.85
# 3 Correlation between residual branches
To check whether the correlation between the 2 residual branches is increased or decreased by the regularization, the following test was performed:
For each residual block:
1. Forward a mini-batch tensor x_i through residual branch 1 (ReLU-Conv3x3-BN-ReLU-Conv3x3-BN-Mul(0.5)) and store the output tensor in y_i^(1). Do the same for residual branch 2 and store the output in y_i^(2).

2. Flatten these 2 tensors into vectors flat_i^(1) and flat_i^(2). Calculate the covariance between each corresponding item in the 2 vectors using an online version of the covariance algorithm.

3. Calculate the variances of flat_i^(1) and flat_i^(2).

4. Repeat until all the images in the test set have been forwarded. Use the resulting covariance and variances to calculate the correlation.
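A simplified sketch (ours) of this correlation test for one residual block; unlike the online algorithm mentioned in step 2, it just accumulates plain running sums over mini-batches, which is numerically cruder but shows the same computation:

```python
import numpy as np

def branch_correlation(flat1_batches, flat2_batches):
    """Correlation between two branch outputs accumulated over mini-batches.

    flat1_batches, flat2_batches : iterables of equally shaped 1-D numpy arrays,
    one pair per forwarded mini-batch (flattened branch outputs y^(1), y^(2)).
    """
    n = 0
    s1 = s2 = s11 = s22 = s12 = 0.0
    for f1, f2 in zip(flat1_batches, flat2_batches):
        n += f1.size
        s1 += f1.sum();  s2 += f2.sum()
        s11 += (f1 * f1).sum();  s22 += (f2 * f2).sum()
        s12 += (f1 * f2).sum()
    cov = s12 / n - (s1 / n) * (s2 / n)
    var1 = s11 / n - (s1 / n) ** 2
    var2 = s22 / n - (s2 / n) ** 2
    return cov / np.sqrt(var1 * var2)

# two correlated dummy "branch outputs"
a = [np.random.randn(1000) for _ in range(5)]
b = [x + 0.5 * np.random.randn(1000) for x in a]
print(branch_correlation(a, b))   # close to 0.9 for this toy data
```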
This algorithm was run on CIFAR-10 for 3 EEB models and 3 S-S-I models both 26 2x32d. The results are presented in Figure 3. The correlation between the output tensors of the 2 residual branches seems to be reduced by the regularization. This would support the assumption that the regularization forces the branches to learn something different.
One problem to be mindful of is the issue of alignment (see Li et al. (2016)). The method above assumes that the summation at the end of the residual blocks forces an alignment of the layers on the left and right residual branches. This can be veriï¬ed by calculating the layer wise correlation for each conï¬guration of the ï¬rst 3 layers of each block.
The results are presented in Figure 4. L1R3 for residual block i means the correlation between the activations of the first layer in y_i^(1) (left branch) and those of the third layer in y_i^(2) (right branch). Figure 4 shows that the correlation between the same layers on the left and right branches (i.e. L1R1, L2R2, etc..) is higher than in the other configurations, which is consistent with the assumption that the summation forces alignment.
# Figure 3: Correlation results on E-E-B and S-S-I models.
Figure 4: Layer-wise correlation between the ï¬rst 3 layers of each residual block.
# 4 Regularization strength
This section looks at what would happen if we give, during the backward pass, a large weight to a branch that received a small weight in the forward pass (and vice-versa).
Let αi.j be the coefï¬cient used during the forward pass for image j in residual block i. Let βi.j be the coefï¬cient used during the backward pass for the same image at the same position in the network.
The ï¬rst test (method 1) is to set βi.j = 1 - αi.j. All the tests in this section were performed on CIFAR-10 using 26 2x32d models at the Image level. These models are compared to a 26 2x32d Shake-Keep-Image model. The results of M1 can be seen on the left part of Figure 5 (blue curve). The effect is quite drastic and the training error stays really high.
Tests M2 to M5 in Table 4 were designed to understand why Method 1 (M1) has such a strong effect. The right part of Figure 5 illustrates Table 4 graphically.
What can be seen is that:
1. The regularization effect seems to be linked to the relative position of βi.j compared to αi.j
2. The further away βi.j is from αi.j, the stronger the regularization effect
3. There seems to be a jump in strength when 0.5 is crossed
These insights could be useful when trying to control with more accuracy the strength of the regularization.
# Table 4: Update rules for βi.j.
Model    α_{i.j} < 0.5                              α_{i.j} ≥ 0.5
S-S-I    rand(0, 1)                                 rand(0, 1)
S-E-I    0.5                                        0.5
M1       1 − α_{i.j}                                1 − α_{i.j}
M2       rand(0, 1) × α_{i.j}                       rand(0, 1) × (1 − α_{i.j}) + α_{i.j}
M3       rand(0, 1) × (0.5 − α_{i.j}) + α_{i.j}     rand(0, 1) × (α_{i.j} − 0.5) + 0.5
M4       rand(0, 1) × (0.5 − α_{i.j}) + 0.5         rand(0, 1) × (0.5 − (1 − α_{i.j})) + (1 − α_{i.j})
M5       rand(0, 1) × α_{i.j} + (1 − α_{i.j})       rand(0, 1) × (1 − α_{i.j})
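Assuming the reading of Table 4 above (the operators between rand(0, 1) and the parenthesized terms are multiplications), the update rules can be written as a small sampling function; this is our own sketch for illustration, and makes it easy to verify the intervals that β_{i.j} falls into under each method:

```python
import random

def sample_beta(alpha, method):
    """Sample the backward coefficient beta_ij from alpha_ij under the rules of Table 4."""
    r = random.random()
    if method == "S-S-I":
        return r
    if method == "S-E-I":
        return 0.5
    if method == "M1":
        return 1.0 - alpha
    if alpha < 0.5:
        rules = {"M2": r * alpha,
                 "M3": r * (0.5 - alpha) + alpha,
                 "M4": r * (0.5 - alpha) + 0.5,
                 "M5": r * alpha + (1.0 - alpha)}
    else:
        rules = {"M2": r * (1.0 - alpha) + alpha,
                 "M3": r * (alpha - 0.5) + 0.5,
                 "M4": r * (0.5 - (1.0 - alpha)) + (1.0 - alpha),
                 "M5": r * (1.0 - alpha)}
    return rules[method]

print([round(sample_beta(0.3, m), 3) for m in ["M1", "M2", "M3", "M4", "M5"]])
```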
Figure 5: Left: Training curves (dark) and test curves (light) of models M1 to M5. Right: Illustration of the different methods in Table 4.
# 5 Removing skip connections / Removing Batch Normalization
One interesting question is whether the skip connection plays a role. A lot of deep learning systems donât use ResNets and making this type of regularization work without skip connections could extend the number of potential applications.
Table 5 and Figure 6 present the results of removing the skip connection. The ï¬rst variant (A) is exactly like the 26 2x32d used on CIFAR-10 but without the skip connection (i.e. 2 branches with the following components ReLU-Conv3x3-BN-ReLU-Conv3x3-BN-Mul). The second variant (B) is the same as A but with only 1 convolutional layer per branch (ReLU-Conv3x3-BN-Mul) and twice the number of blocks. Models using architecture A were tested once and models using architecture B were tested twice.
The results of architecture A clearly show that shake-shake regularization can work even without a skip connection. On that particular architecture and on a 26 2x32d model, S-S-I is too strong and the model underï¬ts. The softer effect of S-E-I works better but this could change if the capacity is increased (e.g. 64d or 96d).
The results of architecture B are actually the most surprising. The ï¬rst point to notice is that the regularization no longer works. This, in itself, would indicate that the regularization happens thanks to the interaction between the 2 convolutions in each branch. The second point is that the train and test curves of the S-E-I and E-E-B models are absolutely identical. This would indicate that, for architecture B, the shake operation of the forward pass has no effect on the cost function. The third point is that even with a really different training curve, the test curve of the S-S-I model is nearly identical to the test curves of the E-E-B and S-E-I models (albeit with a smaller variance).
Table 5: Error rates (%) on CIFAR-10.

Model | αi.j | Arch. A | Arch. B | Arch. C
26 2x32d E-E-B | n/a | 4.84 | 5.17 | -
26 2x32d S-E-I | rand(0,1) | 4.05 | 5.09 | -
26 2x32d S-S-I | rand(0,1) | 4.59 | 5.20 | -
14 2x32d E-E-B | n/a | - | - | 9.65
14 2x32d S-E-I v1 | rand(0.4,0.6) | - | - | 8.7
14 2x32d S-E-I v2 | rand(0.35,0.65) | - | - | 7.73
14 2x32d S-E-I v3 | rand(0.30,0.70) | - | - | diverges
Figure 6: Training curves (dark) and test curves (light). Left: Architecture A. Center: Architecture B. Right: Architecture C.
Finally, it would be interesting to see whether this method works without Batch Normalization. While batchnorm is commonly used on computer vision datasets, it is not necessarily the case for other types of problems (e.g. NLP, etc.). Architecture C is the same as architecture A but without Batch Normalization (i.e. no skip, 2 branches with the following structure ReLU-Conv3x3-ReLU-Conv3x3-Mul). To allow the E-E-B model to converge the depth was reduced from 26 to 14 and the initial learning rate was set to 0.05 after a warm start at 0.025 for 1 epoch. The absence of Batch Normalization makes the model a lot more sensitive and applying the same methods as before makes the model diverge. To soften the effect a S-E-I model was chosen and the interval covered by αi.j was reduced from [0,1] to [0.4,0.6]. Models using architecture C and different intervals were tested once on CIFAR-10. As shown in Table 5 and Figure 6, this method works quite well but it is also really easy to make the model diverge (see model 14 2x32d S-E-I v3).
# 6 Conclusion
A series of experiments seems to indicate an ability to combat overfitting by decorrelating the branches of multi-branch networks. This method leads to state of the art results on CIFAR datasets and could potentially improve the accuracy of architectures that do not use ResNets or Batch Normalization. While these results are encouraging, questions remain on the exact dynamics at play. Understanding these dynamics could help expand the field of application to a wider variety of complex architectures.
| {
"id": "1612.01490"
} |
1705.06950 | The Kinetics Human Action Video Dataset | We describe the DeepMind Kinetics human action video dataset. The dataset
contains 400 human action classes, with at least 400 video clips for each
action. Each clip lasts around 10s and is taken from a different YouTube video.
The actions are human focussed and cover a broad range of classes including
human-object interactions such as playing instruments, as well as human-human
interactions such as shaking hands. We describe the statistics of the dataset,
how it was collected, and give some baseline performance figures for neural
network architectures trained and tested for human action classification on
this dataset. We also carry out a preliminary analysis of whether imbalance in
the dataset leads to bias in the classifiers. | http://arxiv.org/pdf/1705.06950 | Will Kay, Joao Carreira, Karen Simonyan, Brian Zhang, Chloe Hillier, Sudheendra Vijayanarasimhan, Fabio Viola, Tim Green, Trevor Back, Paul Natsev, Mustafa Suleyman, Andrew Zisserman | cs.CV | null | null | cs.CV | 20170519 | 20170519 |

arXiv:1705.06950v1 [cs.CV] 19 May 2017
# The Kinetics Human Action Video Dataset
# Will Kay wkay@google.com
# João Carreira joaoluis@google.com
# Karen Simonyan simonyan@google.com
# Brian Zhang brianzhang@google.com
# Chloe Hillier chillier@google.com
# Sudheendra Vijayanarasimhan svnaras@google.com
# Fabio Viola fviola@google.com
# Tim Green tfgg@google.com
# Trevor Back back@google.com
# Paul Natsev natsev@google.com
# Mustafa Suleyman mustafasul@google.com
# Andrew Zisserman zisserman@google.com
# Abstract
We describe the DeepMind Kinetics human action video dataset. The dataset contains 400 human action classes, with at least 400 video clips for each action. Each clip lasts around 10s and is taken from a different YouTube video. The actions are human focussed and cover a broad range of classes including human-object interactions such as playing instruments, as well as human-human interactions such as shaking hands. We describe the statistics of the dataset, how it was collected, and give some baseline performance figures for neural network architectures trained and tested for human action classification on this dataset. We also carry out a preliminary analysis of whether imbalance in the dataset leads to bias in the classifiers.
# 1. Introduction
purposes, including multi-modal analysis. Our inspiration in providing a dataset for classification is ImageNet [18], where the significant benefits of first training deep networks on this dataset for classification, and then using the trained network for other purposes (detection, image segmentation, non-visual modalities (e.g. sound, depth), etc) are well known.
The Kinetics dataset can be seen as the successor to the two human action video datasets that have emerged as the standard benchmarks for this area: HMDB-51 [15] and UCF-101 [20]. These datasets have served the community very well, but their usefulness is now expiring. This is because they are simply not large enough, nor do they have sufficient variation, to train and test the current generation of human action classification models based on deep learning. Coincidentally, one of the motivations for introducing the HMDB dataset was that the then current generation of action datasets was too small. The increase then was from 10 to 51 classes, and we in turn increase this to 400 classes.
In this paper we introduce a new, large, video dataset for human action classification. We developed this dataset principally because there is a lack of such datasets for human action classification, and we believe that having one will facilitate research in this area – both because the dataset is large enough to train deep networks from scratch, and also because the dataset is challenging enough to act as a performance benchmark where the advantages of different architectures can be teased apart.
Our aim is to provide a large scale high quality dataset, covering a diverse range of human actions, that can be used for human action classification, rather than temporal localization. Since the use case is classification, only short clips of around 10s containing the action are included, and there are no untrimmed videos. However, the clips also contain sound so the dataset can potentially be used for many
Table 1 compares the size of Kinetics to a number of recent human action datasets. In terms of variation, although the UCF-101 dataset contains 101 actions with 100+ clips for each action, all the clips are taken from only 2.5k distinct videos. For example there are 7 clips from one video of the same person brushing their hair. This means that there is far less variation than if the action in each clip was performed by a different person (and different viewpoint, lighting, etc). This problem is avoided in Kinetics as each clip is taken from a different video.
The clips are sourced from YouTube videos. Consequently, for the most part, they are not professionally videoed and edited material (as in TV and film videos). There can be considerable camera motion/shake, illumination variations, shadows, background clutter, etc.
Year | Actions | Clips | Total | Videos
2011 | 51 | min 102 | 6,766 | 3,312
2012 | 101 | min 101 | 13,320 | 2,500
2015 | 200 | avg 141 | 28,108 | 19,994
2017 | 400 | min 400 | 306,245 | 306,245

Table 1: Statistics for recent human action recognition datasets. "Actions" specifies the number of action classes; "Clips", the number of clips per class; "Total", the total number of clips; and "Videos", the total number of videos from which these clips are extracted.
More importantly, there are a great variety of performers (since each clip is from a different video) with differences in how the action is performed (e.g. its speed), clothing, body pose and shape, age, and camera framing and viewpoint.
(ballet, macarena, tap, . . . ); Cooking (cutting, frying, peel- ing, . . . ). The full list of classes is given in the appendix, together with parent-child groupings. Figure 1 shows clips from a sample of classes.
Our hope is that the dataset will enable a new generation of neural network architectures to be developed for video. For example, architectures including multiple streams of in- formation (RGB/appearance, optical ï¬ow, human pose, ob- ject category recognition), architectures using attention, etc. That will enable the virtues (or otherwise) of the new archi- tectures to be demonstrated. Issues such as the tension be- tween static and motion prediction, and the open question of the best method of temporal aggregation in video (recurrent vs convolutional) may ï¬nally be resolved.
Statistics: The dataset has 400 human action classes, with 400–1150 clips for each action, each from a unique video. Each clip lasts around 10s. The current version has 306,245 videos, and is divided into three splits, one for training having 250–1000 videos per class, one for validation with 50 videos per class and one for testing with 100 videos per class. The statistics are given in table 2. The clips are from YouTube videos and have a variable resolution and frame rate.
The rest of the paper is organized as: Section 2 gives an overview of the new dataset; Section 3 describes how it was collected and discusses possible imbalances in the data and their consequences for classiï¬er bias. Section 4 gives the performance of a number of ConvNet architectures that are trained and tested on the dataset. Our companion paper [5] explores the beneï¬t of pre-training an action classiï¬cation network on Kinetics, and then using the features from the network for action classiï¬cation on other (smaller) datasets. The URLs of the YouTube videos and temporal intervals of the dataset can be obtained from http://deepmind. com/kinetics.
# 2. An Overview of the Kinetics Dataset
Content: The dataset is focused on human actions (rather than activities or events). The list of action classes covers: Person Actions (singular), e.g. drawing, drinking, laughing, pumping ï¬st; Person-Person Actions, e.g. hugging, kissing, shaking hands; and, Person-Object Actions, e.g. opening present, mowing lawn, washing dishes. Some actions are ï¬ne grained and require temporal reasoning to distinguish, for example different types of swimming. Other actions re- quire more emphasis on the object to distinguish, for exam- ple playing different types of wind instruments.
Train | Validation | Test
250–1000 | 50 | 100

Table 2: Kinetics Dataset Statistics. The number of clips for each class in the train/val/test partitions.
Non-exhaustive annotation. Each class contains clips illustrating that action. However, a particular clip can contain several actions. Interesting examples in the dataset include: "texting" while "driving a car"; "Hula hooping" while "playing ukulele"; "brushing teeth" while "dancing" (of some type). In each case both of the actions are Kinetics classes, and the clip will probably appear under only one of these classes, not both, i.e. clips do not have complete (exhaustive) annotation. For this reason when evaluating classification performance, a top-5 measure is more suitable than top-1. This is similar to the situation in ImageNet [18], where one of the reasons for using a top-5 measure is that images are only labelled for a single class, although it may contain multiple classes.
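Since evaluation therefore favours a top-5 measure, a small helper for computing top-k accuracy is sketched below; it is generic NumPy code rather than anything specific to released evaluation tools, and the function name is our own.

```python
import numpy as np

def topk_accuracy(scores, labels, k=5):
    """Fraction of clips whose true label is among the k highest-scoring
    classes. `scores` has shape (num_clips, num_classes) and `labels`
    has shape (num_clips,)."""
    topk = np.argsort(-scores, axis=1)[:, :k]
    return (topk == labels[:, None]).any(axis=1).mean()
```

For Kinetics, top-1 and top-5 performance would then be reported as topk_accuracy(scores, labels, 1) and topk_accuracy(scores, labels, 5).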
There is not a deep hierarchy, but instead there are several (non-exclusive) parent-child groupings, e.g. Music (playing drums, trombone, violin, . . . ); Personal Hygiene (brushing teeth, cutting nails, washing hands, . . . ); Dancing
# 3. How the Dataset was Built
In this section we describe the collection process: how candidate videos were obtained from YouTube, and then the processing pipeline that was used to select the candidates
(a) headbanging
(b) stretching leg
(c) shaking hands
(d) tickling
(e) robot dancing
(f) salsa dancing
(g) riding a bike
(h) riding unicycle
(i) playing violin

(j) playing trumpet

(k) braiding hair

(l) brushing hair
(m) dribbling basketball
(n) dunking basketball
Figure 1: Example classes from the Kinetics dataset. Best seen in colour and with zoom. Note that in some cases a single image is not enough for recognizing the action (e.g. "headbanging") or distinguishing classes ("dribbling basketball" vs "dunking basketball"). The dataset contains: Singular Person Actions (e.g. "robot dancing", "stretching leg"); Person-Person Actions (e.g. "shaking hands", "tickling"); Person-Object Actions (e.g. "riding a bike"); same verb different objects (e.g. "playing violin", "playing trumpet"); and same object different verbs (e.g. "dribbling basketball", "dunking basketball"). These are realistic (amateur) videos – there is often significant camera shake, for instance.
and clean up the dataset. We then discuss possible biases in the dataset due to the collection process.
Overview: clips for each class were obtained by ï¬rst searching on YouTube for candidates, and then using Ama- zon Mechanical Turkers (AMT) to decide if the clip con- tains the action or not. Three or more conï¬rmations (out of ï¬ve) were required before a clip was accepted. The dataset was de-duped, by checking that only one clip is taken from each video, and that clips do not contain common video material. Finally, classes were checked for overlap and de- noised.
We now describe these stages in more detail.
# 3.1. Stage 1: Obtaining an action list
Curating a large list of human actions is challenging, as there is no single listing available at this scale with suitable visual action classes. Consequently, we had to combine numerous sources together with our own observations of actions that surround us. These sources include: (i) Action datasets – existing datasets like ActivityNet [3], HMDB [15], UCF101 [20], MPII Human Pose [2], ACT [25] have useful classes and a suitable subset of these were used; (ii) Motion capture – there are a number of motion capture datasets which we looked through and extracted file titles. These titles described the motion within the file and were often quite creative; and, (iii) Crowdsourced – we asked Mechanical Turk workers to come up with a more appropriate action if the label we had presented to them for a clip was incorrect.
# 3.2. Stage 2: Obtaining candidate clips
The chosen method and steps are detailed below which combine a number of different internal efforts:
Step 1: obtaining videos. Videos are drawn from the YouTube corpus by matching video titles with the Kinetics actions list.
Step 2: temporal positioning within a video. Image classiï¬ers are available for a large number of human ac- tions. These classiï¬ers are obtained by tracking user ac- tions on Google Image Search. For example, for a search query âclimbing treeâ, user relevance feedback on images is collected by aggregating across the multiple times that that search query is issued. This relevance feedback is used to select a high-conï¬dence set of images that can be used to train a âclimbing treeâ image classiï¬er. These classiï¬ers are run at the frame level over the videos found in step 1, and clips extracted around the top k responses (where k = 2).
It was found that the action list had a better match to relevant classiï¬ers if action verbs are formatted to end with
âingâ. Thinking back to image search, this makes sense as typically if you are searching for an example of someone performing an action you would issue queries like ârunning manâ or âbrushing hairâ over other tenses like âman ranâ or âbrush hairâ.
The output of this stage is a large number of videos and a position in all of them where one of the actions is potentially occurring. 10 second clips are created by taking 5 seconds either side of that position (there are length exceptions when the position is within 5 seconds of the start or end of the video leading to a shorter clip length). The clips are then passed onto the next stage of cleanup through human labelling.
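As a rough sketch, the clip proposal step just described might look like the following; the function and argument names are ours, and the per-frame scores stand in for the image classifier responses discussed above.

```python
def extract_candidate_clips(frame_scores, fps, video_length, k=2, half_window=5.0):
    """Return up to k candidate (start, end) intervals of roughly 10 seconds,
    centred on the k highest-scoring frames. Intervals are clamped to the
    video boundaries, which is why some clips end up shorter than 10 s."""
    top_frames = sorted(range(len(frame_scores)),
                        key=lambda i: frame_scores[i], reverse=True)[:k]
    clips = []
    for f in top_frames:
        centre = f / fps
        start = max(0.0, centre - half_window)
        end = min(video_length, centre + half_window)
        clips.append((start, end))
    return clips
```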
# 3.3. Stage 3: Manual labelling process
The key aim of this stage was to identify whether the supposed action was actually occurring during a clip or not. A human was required in the loop for this phase and we chose to use Amazon's Mechanical Turk (AMT) for the task due to the large numbers of high quality workers using the platform.
A single-page webapp was built for the labelling task and optimised to maximise the number of clips presented to the workers whilst maintaining a high quality of annotation. The labelling interface is shown in ï¬gure 2. The user inter- face design and theme were chosen to differentiate the task from many others on the platform as well as make the task as stimulating and engaging as possible. This certainly paid off as the task was one of the highest rated on the platform and would frequently get more than 400 distinct workers as soon as a new run was launched.
The workers were given clear instructions at the begin- ning. There were two screens of instruction, the second re- inforcing the ï¬rst. After acknowledging they understood the task they were presented with a media player and several response icons. The interface would fetch a set of videos from the available pool for the worker at that moment and embed the ï¬rst clip. The task consisted of 20 videos each with a different class where possible; we randomised all the videos and classes to make it more interesting for the work- ers and prevent them from becoming stuck on classes with low yields. Two of the video slots were used by us to in- ject groundtruth clips. This allowed us to get an estimate of the accuracy for each worker. If a worker fell below a 50% success rating on these, we showed them a âlow accuracyâ warning screen. This helped address many low accuracies. In the labelling interface, workers were asked the question âCan you see a human performing the action class-name?â. The following response options were available on the interface as icons:
⢠Yes, this contains a true example of the action
⢠No, this does not contain an example of the action
Figure 2: Labeling interface used in Mechanical Turk.
⢠You are unsure if there is an example of the action
⢠Replay the video
Following annotating, the video ids, clip times and labels were exported from the database and handed on to be used for model training.
⢠Video does not play, does not contain a human, is an image, cartoon or a computer game.
When a worker responded with âYesâ we also asked the question âDoes the action last for the whole clip?â in or- der to use this signal later during model training.
Note, the AMT workers didn't have access to the audio to ensure that the video can be classified purely based on its visual content.
In order for a clip to be added to the dataset, it needed to receive at least 3 positive responses from workers. We allowed each clip to be annotated 5 times except if it had been annotated by more than 2 of a specific response. For example, if 3 out of 3 workers had said it did not contain an example of the action we would immediately remove it from the pool and not continue until 5 workers had annotated it. Due to the large scale of the task it was necessary to quickly remove classes that were made up of low quality or completely irrelevant candidates. Failing to do this would have meant that we spent a lot of money paying workers to mark videos as negative or bad. Accuracies for each class were calculated after 20 clips from that class had been annotated. We adjusted the accuracy threshold between runs but would typically start at a high accuracy of 50% (1 in 2 videos were expected to contain the action).
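The acceptance rule can be summarised with a small aggregation function. This is a paraphrase of the rule above rather than the actual pipeline code; the response strings and return values are illustrative.

```python
from collections import Counter

def clip_status(responses, required_yes=3, max_responses=5):
    """Aggregate worker answers for one clip. A clip is accepted once it has
    at least three 'yes' answers; it is dropped early once any other answer
    has been given more than twice (e.g. three 'no' answers), and otherwise
    it keeps being annotated until five answers have been collected."""
    counts = Counter(responses)
    if counts['yes'] >= required_yes:
        return 'accept'
    if any(c > 2 for answer, c in counts.items() if answer != 'yes'):
        return 'reject'
    if len(responses) >= max_responses:
        return 'reject'
    return 'needs more annotation'
```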
What we learnt: We found that more specific classes like "riding mule" were producing much less noise than more general classes like "riding". However, occasionally using more general classes was a benefit as they could subsequently be split into a few distinct classes that were not previously present and the candidates re-sent out to workers, e.g. "gardening" was split into "watering plants", "trimming trees" and "planting trees".
The amount of worker trafï¬c that the task generated meant that we could not rely on direct fetching and writes to the database even with appropriate indexes and optimised queries. We therefore created many caches which were made up of groups of clips for each worker. When a worker started a new task, the interface would fetch a set of clips for that speciï¬c worker. The cache was replenished often by background processes as clips received a sufï¬cient num- ber of annotations. This also negated labelling collisions where previously > 1 worker might pick up the same video to annotate and we would quickly exceed 5 responses for any 1 clip.
# 3.4. Stage 4: Cleaning up and de-noising
One of the dataset design goals was having a single clip from each given video sequence, different from ex- isting datasets which slice videos containing repetitive ac- tions into many (correlated) training examples. We also employed mechanisms for identifying structural problems as we grew the dataset, such as repeated classes due to syn- onymy or different word order (e.g. riding motorbike, riding motorcycle), classes that are too general and co-occur with many others (e.g. talking) and which are problematic for typical 1-of-K classiï¬cation learning approaches (instead of multi-label classiï¬cation). We will now describe these pro- cedures.
De-duplicating videos. We de-duplicated videos using two complementary approaches. First, in order to have only one clip from each YouTube link, we randomly selected a single clip from amongst those validated by Turkers for that video. This stage ï¬ltered out around 20% of Turker- approved examples, but we visually found that it still left many duplicates. The reason is that YouTube users often create videos reusing portions of other videos, for example as part of video compilations or promotional adverts. Some- times they are cropped, resized and generally pre-processed in different ways (but, nevertheless, the image classiï¬er could localize the same clip). So even though each clip is from a distinct video there were still duplications.
We devised a process for de-duplicating across YouTube links which operated independently for each class. First we computed Inception-V1 [12] feature vectors (taken after the last average pooling layer) on 224 × 224 center crops of 25 uniformly sampled frames from each video, which we then averaged. Afterwards we built a class-wise matrix having all cosine similarities between these feature vectors and thresholded it. Finally, we computed connected components and kept a random example from each. We found this to work well for most classes using the same threshold of 0.97, but adjusted it in a few cases where classes were visually similar, such as some taking place in the snow or in the water. This process reduced the number of Turker-approved examples by a further 15%.
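A minimal sketch of this de-duplication step is given below, assuming the averaged Inception-V1 descriptors have already been computed and using SciPy for the connected-components step (the actual implementation is not specified here).

```python
import numpy as np
from scipy.sparse.csgraph import connected_components

def deduplicate_class(features, threshold=0.97, rng=None):
    """De-duplicate the clips of one class. `features` is an (n, d) array with
    one averaged frame descriptor per clip. Clips whose cosine similarity
    exceeds the threshold are linked, and one random clip is kept from each
    connected component."""
    rng = rng or np.random.default_rng()
    normed = features / np.linalg.norm(features, axis=1, keepdims=True)
    similarity = normed @ normed.T
    adjacency = (similarity > threshold).astype(np.int8)
    n_components, labels = connected_components(adjacency, directed=False)
    keep = [int(rng.choice(np.where(labels == c)[0])) for c in range(n_components)]
    return sorted(keep)
```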
Detecting noisy classes. Classes can be ânoisyâ in that they may overlap with other classes or they may contain several quite distinct (in terms of the action) groupings due to an ambiguity in the class name. For example, âskippingâ can be âskipping with a ropeâ and also âskipping stones across waterâ. We trained two-stream action classiï¬ers [19] repeatedly throughout the dataset development to identify these noise classes. This allowed us to ï¬nd the top con- fusions for each class, which sometimes were clear even by just verifying the class names (but went unnoticed due
to the scale of the dataset), and other times required eye- balling the data to understand if the confusions were alright and the classes were just difï¬cult to distinguish because of shortcomings of the model. We merged, split or outright removed classes based on these detected confusions.
Final ï¬ltering. After all the data was collected, de- duplicated and the classes were selected, we ran a ï¬nal man- ual clip ï¬ltering stage. Here the class scores from the two- stream model were again useful as they allowed sorting the examples from most conï¬dent to least conï¬dent â a mea- sure of how prototypical they were. We found that noisy ex- amples were often among the lowest ranked examples and focused on those. The ranking also made adjacent any re- maining duplicate videos, which made it easier to ï¬lter out those too.
# 3.5. Discussion: dataset bias I
We are familiar with the notion of dataset bias leading to lack of generalization: where a classiï¬er trained on one dataset, e.g. Caltech 256 [10], does not perform well when tested on another, e.g. PASCAL VOC [8]. Indeed it is even possible to train a classiï¬er to identify which dataset an im- age belongs to [22].
There is another sense of bias which could arise from un- balanced categories within a dataset. For example, gender imbalance in a training set could lead to a corresponding performance bias for classiï¬ers trained on this set. There are precedents for this, e.g. in publicly available face detec- tors not being race agnostic1, and more recently in learning a semantic bias in written texts [4]. It is thus an important question as to whether Kinetics leads to such bias.
To this end we carried out a preliminary study on (i) whether the data for each action class of Kinetics is gender balanced, and (ii) if there is an imbalance, whether it leads to a biased performance of the action classifiers.
The outcome of (i) is that in 340 action classes out of the 400, the data is either not dominated by a single gender, or it is mostly not possible to determine the gender â the latter arises in classes where, for example, only hands appear, or the âactorsâ are too small or heavily clothed. The classes that do show gender imbalance include âshaving beardâ and âdunking basketballâ, that are mostly male, and âï¬lling eye- browsâ and âcheerleadingâ, that are mostly female.
The outcome of (ii) for these classes we found little evi- dence of classiï¬er bias for action classes with gender imbal- ance. For example in âplaying pokerâ, which tends to have more male players, all videos with female players are cor- rectly classiï¬ed. The same happens for âHammer throwâ. We can conjecture that this lack of bias is because the clas- siï¬er is able to make use of both the objects involved in
1 https://www.media.mit.edu/posts/media-lab-student-recognized-for-fighting-bias-in-machine-learning/
an action as well as the motion patterns, rather than simply physical appearance.
Imbalance can also be examined on other âaxesâ, for ex- ample age and race. Again, in a preliminary investigation we found very little clear bias. There is one exception where there is clear bias to babies â in âcryingâ, where many of the videos of non-babies crying are misclassiï¬ed; another ex- ample is âwrestlingâ, where the opposite happens: adults wrestling in a ring seem to be better classiï¬ed than children wrestling in their homes, but it is hard to tell whether the deciding factor is age or the scenes where the actions hap- pen. Nevertheless, these issues of dataset imbalance and any resulting classiï¬er bias warrant a more thorough inves- tigation, and we return to this in section 5.
# 3.6. Discussion: dataset bias II
Another type of bias could arise because classiï¬ers are involved in the dataset collection pipeline: it could be that these classiï¬ers lead to a reduction in the visual variety of the clips obtained, which in turn leads to a bias in the action classiï¬er trained on these clips. In more detail, although the videos are selected based on their title (which is provided by the person uploading the video to YouTube), the position of the candidate clip within the video is provided by an image (RGB) classiï¬er, as described above. In practice, using a classiï¬er at this point does not seem to constrain the variety of the clips â since the video is about the action, the par- ticular frame chosen as part of the clip may not be crucial; and, in any case, the clip contains hundreds of more frames where the appearance (RGB) and motion can vary consid- erably. For these reasons we are not so concerned about the intermediate use of image classiï¬ers.
# 4. Benchmark Performance
In this section we ï¬rst brieï¬y describe three standard ConvNet architectures for human action recognition in video. We then use these architectures as baselines and compare their performance by training and testing on the Kinetics dataset. We also include their performance on UCF-101 and HMDB-51.
We consider three typical approaches for video classiï¬- cation: ConvNets with an LSTM on top [7, 26]; two-stream networks [9, 19]; and a 3D ConvNet [13, 21, 23]. There have been many improvements over these basic architec- tures, e.g. [9], but our intention here is not to perform a thorough study on what is the very best architecture on Ki- netics, but instead to provide an indication of the level of difï¬culty of the dataset. A rough graphical overview of the three types of architectures we compare is shown in ï¬gure 3, and the speciï¬cation of their temporal interfaces is given in table 3.
For the experiments on the Kinetics dataset all three ar- chitectures are trained from scratch using Kinetics. How-
ever, for the experiments on UCF-101 and HMDB-51 the architectures (apart from the 3D ConvNet) are pre-trained on ImageNet (since these datasets are too small to train the architectures from scratch).
# 4.1. ConvNet+LSTM
The high performance of image classiï¬cation networks makes it appealing to try to reuse them with as minimal change as possible for video. This can be achieved by using them to extract features independently from each frame then pooling their predictions across the whole video [14]. This is in the spirit of bag of words image modeling approaches [16, 17, 24], but while convenient in practice, it has the issue of entirely ignoring temporal structure (e.g. models canât potentially distinguish opening from closing a door).
In theory, a more satisfying approach is to add a recurrent layer to the model [7, 26], such as an LSTM, which can encode state, and capture temporal ordering and long range dependencies. We position an LSTM layer with batch normalization (as proposed by Cooijmans et al. [6]) after the last average pooling layer of a ResNet-50 model [11], with 512 hidden units. We then add a fully connected layer on top of the output of the LSTM for the multi-way classification. At test time the classification is taken from the model output for the last frame.
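A minimal PyTorch sketch of this baseline is given below. It assumes 2048-dimensional ResNet-50 features and, for simplicity, a plain LSTM rather than the batch-normalized LSTM of Cooijmans et al.; the class names and variable names are ours.

```python
import torch
import torch.nn as nn
import torchvision

class ConvNetLSTM(nn.Module):
    """Per-frame ResNet-50 features, an LSTM with 512 hidden units, and a
    fully connected layer for multi-way classification."""
    def __init__(self, num_classes=400, hidden=512):
        super().__init__()
        backbone = torchvision.models.resnet50()
        self.features = nn.Sequential(*list(backbone.children())[:-1])  # up to average pooling
        self.lstm = nn.LSTM(input_size=2048, hidden_size=hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, frames):                                  # frames: (batch, time, 3, H, W)
        b, t = frames.shape[:2]
        x = self.features(frames.flatten(0, 1)).flatten(1)      # (batch*time, 2048)
        outputs, _ = self.lstm(x.view(b, t, -1))
        return self.classifier(outputs[:, -1])                  # prediction from the last frame
```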
# 4.2. Two-Stream networks
LSTMs on features from the last layers of ConvNets can model high-level variation, but may not be able to capture ï¬ne low-level motion which is critical in many cases. It is also expensive to train as it requires unrolling the network through multiple frames for backpropagation-through-time. A different, very practical approach, introduced by Si- monyan and Zisserman [19], models short temporal snap- shots of videos by averaging the predictions from a single RGB frame and a stack of 10 externally computed opti- cal ï¬ow frames, after passing them through two replicas of an ImageNet-pretrained ConvNet. The ï¬ow stream has an adapted input convolutional layer with twice as many input channels as ï¬ow frames (because ï¬ow has two channels, horizontal and vertical), and at test time multiple snapshots are sampled from the video and the action prediction is av- eraged. This was shown to get very high performance on existing benchmarks, while being very efï¬cient to train and test.
# 4.3. 3D ConvNets
3D ConvNets [13, 21, 23] seem like a natural approach to video modeling. They are just like standard 2D convo- lutional networks, but with spatio-temporal ï¬lters, and have a very interesting characteristic: they directly create hier- archical representations of spatio-temporal data. One issue with these models is that they have many more parameters
a) LSTM   b) Two-Stream   c) 3D ConvNet
Figure 3: Video architectures used as baseline human action classifiers.
than 2D ConvNets because of the additional kernel dimen- sion, and this makes them harder to train. Also, they seem to preclude the beneï¬ts of ImageNet pre-training and pre- vious work has deï¬ned relatively shallow custom architec- tures and trained them from scratch [13, 14, 21, 23]. Re- sults on benchmarks have shown promise but have not yet matched the state-of-the-art, possibly because they require more training data than their 2D counterparts. Thus 3D ConvNets are a good candidate for evaluation on our larger dataset.
# 4.4. Implementation details
The ConvNet+LSTM and Two-Stream architectures use ResNet-50 as the base architecture. In the case of the Two-Stream architecture, a separate ResNet-50 is trained independently for each stream. As noted earlier, for these architectures the ResNet-50 model is pre-trained on ImageNet for the experiments on UCF-101 and HMDB-51, and trained from scratch for experiments on Kinetics. The 3D-ConvNet is not pre-trained.
For this paper we implemented a small variation of C3D [23], which has 8 convolutional layers, 5 pooling layers and 2 fully connected layers at the top. The inputs to the model are short 16-frame clips with 112 × 112-pixel crops. Differently from the original paper we use batch normalization after all convolutional and fully connected layers. Another difference to the original model is in the first pooling layer, where we use a temporal stride of 2 instead of 1, which reduces the memory footprint and allows for bigger batches – this was important for batch normalization (especially after the fully connected layers, where there is no weight tying). Using this stride we were able to train with 15 videos per batch per GPU using standard K40 GPUs.
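For illustration, the layer pattern described above could be sketched in PyTorch as follows. Only the layer counts, the 16 × 112 × 112 input, the batch normalization after every convolutional and fully connected layer, and the temporal stride of 2 in the first pooling layer follow the description; the channel widths are taken from the original C3D design and, like the class names, are assumptions of this sketch.

```python
import torch
import torch.nn as nn

def conv_bn(in_ch, out_ch):
    """3x3x3 convolution followed by batch normalization and ReLU."""
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm3d(out_ch),
        nn.ReLU(inplace=True),
    )

class C3DVariant(nn.Module):
    """C3D-like baseline: 8 convolutional layers, 5 pooling layers and 2 fully
    connected layers, plus a final classification layer."""
    def __init__(self, num_classes=400):
        super().__init__()
        self.features = nn.Sequential(
            conv_bn(3, 64),    nn.MaxPool3d((2, 2, 2)),   # temporal stride 2 in the first pool
            conv_bn(64, 128),  nn.MaxPool3d((2, 2, 2)),
            conv_bn(128, 256), conv_bn(256, 256), nn.MaxPool3d((2, 2, 2)),
            conv_bn(256, 512), conv_bn(512, 512), nn.MaxPool3d((2, 2, 2)),
            conv_bn(512, 512), conv_bn(512, 512), nn.MaxPool3d((1, 2, 2)),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(512 * 3 * 3, 4096), nn.BatchNorm1d(4096), nn.ReLU(inplace=True),
            nn.Linear(4096, 4096), nn.BatchNorm1d(4096), nn.ReLU(inplace=True),
            nn.Linear(4096, num_classes),
        )

    def forward(self, clips):        # clips: (batch, 3, 16, 112, 112)
        return self.classifier(self.features(clips))
```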
We trained the models on videos using standard SGD with momentum in all cases, with synchronous paralleliza- tion across 64 GPUs for all models. We trained models on Kinetics for up to 100k steps, with a 10x reduction of learn- ing rate when validation loss saturated, and tuned weight decay and learning rate hyperparameters on the validation set of Kinetics. All the models were implemented in Ten- sorFlow [1].
The original clips have variable resolution and frame rate. In our experiments they are all normalized so that the larger image side is 340 pixels wide for models using ResNet-50 and 128 pixels wide for the 3D ConvNet. We also resample the videos so they have 25 frames per sec- ond.
At test time, we split the video uniformly into crops of 16 frames and apply the classiï¬er separately on each. We then average the class scores, as in the original paper.
Data augmentation is known to be of crucial importance for the performance of deep architectures. We used random cropping both spatially â randomly cropping a 299 Ã 299
Method | #Params | Training: # Input Frames | Training: Temporal Footprint | Testing: # Input Frames | Testing: Temporal Footprint
(a) ConvNet+LSTM | 29M | 25 rgb | 5s | 50 rgb | 10s
(b) Two-Stream | 48M | 1 rgb, 10 flow | 0.4s | 25 rgb, 250 flow | 10s
(c) 3D-ConvNet | 79M | 16 rgb | 0.64s | 240 rgb | 9.6s

Table 3: Number of parameters and temporal input sizes of the models. ConvNet+LSTM and Two-Stream use ResNet-50 ConvNet modules.
Architecture | UCF-101 RGB | UCF-101 Flow | UCF-101 RGB+Flow | HMDB-51 RGB | HMDB-51 Flow | HMDB-51 RGB+Flow | Kinetics RGB | Kinetics Flow
(a) ConvNet+LSTM | 84.3 | - | - | 43.9 | - | - | 57.0 / 79.0 | -
(b) Two-Stream | 84.2 | 85.9 | 92.5 | 51.0 | 56.9 | 63.7 | 56.0 / 77.3 | 49.5 / 71.9
(c) 3D-ConvNet | 51.6 | - | - | 24.3 | - | - | 56.1 / 79.5 | -

Table 4: Baseline comparisons across datasets: (left) training and testing on split 1 of UCF-101; (middle) training and testing on split 1 of HMDB-51; (right) training and testing on Kinetics (showing top-1/top-5 performance). ConvNet+LSTM and Two-Stream use ResNet-50 ConvNet modules, pretrained on ImageNet for UCF-101 and HMDB-51 examples but not for the Kinetics experiments. Note that the Two-Stream architecture numbers on individual RGB and Flow streams can be interpreted as a simple baseline which applies a ConvNet independently on 25 uniformly sampled frames then averages the predictions.
patch (respectively 112 × 112 for the 3D ConvNet) – and temporally, when picking the starting frame among those early enough to guarantee a desired number of frames. For shorter videos, we looped the video as many times as necessary to satisfy each model's input interface. We also applied random left-right flipping consistently for each video during training.
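A simple sketch of this augmentation is shown below, assuming frames are NumPy arrays of shape (height, width, channels); the function name and the 50% flip probability are our own choices.

```python
import random

def sample_training_clip(video_frames, num_frames, crop_size):
    """Temporal and spatial random cropping: loop short videos, pick a start
    frame early enough to yield `num_frames` frames, take one random square
    crop, and apply a single left-right flip decision to every frame."""
    while len(video_frames) < num_frames:
        video_frames = list(video_frames) + list(video_frames)  # loop short videos
    start = random.randint(0, len(video_frames) - num_frames)
    clip = video_frames[start:start + num_frames]

    h, w = clip[0].shape[:2]
    top = random.randint(0, h - crop_size)
    left = random.randint(0, w - crop_size)
    flip = random.random() < 0.5
    out = []
    for frame in clip:
        patch = frame[top:top + crop_size, left:left + crop_size]
        out.append(patch[:, ::-1] if flip else patch)
    return out
```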
At test time, we sample from up to 10 seconds of video, again looping if necessary. Better performance could be obtained by also considering left-right ï¬ipped videos at test time and by adding additional augmentation, such as photo- metric, during training. We leave this to future work.
# 4.5. Baseline evaluations
unlike the other baselines. This translates into poor per- formance on all datasets but especially on UCF-101 and HMDB-51 â on Kinetics it is much closer to the perfor- mance of the other models, thanks to the much larger train- ing set of Kinetics.
⢠Class difï¬culty. We include a full list of Kinetics classes sorted by classiï¬cation accuracy under the two- stream model in ï¬gure 4. Eating classes are among the hardest, as they sometimes require distinguishing what is being eaten, such as hotdogs, chips and doughnuts â and these may appear small and already partially con- sumed, in the video. Dancing classes are also hard, as well as classes centered on a speciï¬c body part, such as âmassaging feetâ, or âshaking headâ.
In this section we compare the performance of the three baseline architectures whilst varying the dataset used for training and testing.
Table 4 shows the classification accuracy when training and testing on either UCF-101, HMDB-51 or Kinetics. We train and test on split 1 of UCF-101 and HMDB-51, and on the train/val set and held-out test set of Kinetics.
There are several noteworthy observations. First, the per- formance is far lower on Kinetics than on UCF-101, an indi- cation of the different levels of difï¬culty of the two datasets. On the other hand, the performance on HMDB-51 is worse than on Kinetics â it seems to have a truly difï¬cult test set, and it was designed to be difï¬cult to appearance-centered methods, while having little training data. The parameter- rich 3D-ConvNet model is not pre-trained on ImageNet,
⢠Class confusion. The top 10 class confusions are provided in table 5. They mostly correspond to ï¬ne- grained distinctions that one would expect to be hard, for example âlong jumpâ and âtriple jumpâ, confusing burger with doughnuts. The confusion between âswing dancingâ and âsalsa dancingâ raises the question of how accurate motion modeling is in the two-stream model, since âswing dancingâ is typically much faster-paced and has a peculiar style that makes it easy for humans to distinguish from salsa.
⢠Classes where motion matters most. We tried to an- alyze for which classes motion is more important and
Figure 4: List of 20 easiest and 20 hardest Kinetics classes sorted by class accuracies obtained using the two-stream model.
which ones were recognized correctly using just ap- pearance information, by comparing the recognition accuracy ratios when using the ï¬ow and RGB streams of the two-stream model in isolation. We show the ï¬ve classes where this ratio is largest and smallest in ta- ble 6.
# 5. Conclusion
We have described the Kinetics Human Action Video dataset, which has an order of magnitude more videos than previous datasets of its type. We have also discussed the procedures we employed collecting the data and for ensur- ing its quality. We have shown that the performance of stan- dard existing models on this dataset is much lower than on UCF-101 and on par with HMDB-51, whilst allowing large models such as 3D ConvNets to be trained from scratch, unlike the existing human action datasets.
We have also carried out a preliminary analysis of dataset imbalance and whether this leads to bias in the classiï¬ers trained on the dataset. We found little evidence that the resulting classiï¬ers demonstrate bias along sensitive axes, such as across gender. This is however a complex area that deserves further attention. We leave a thorough analysis for future work, in collaboration with specialists from comple- mentary areas, namely social scientists and critical human- ists.
We will release trained baseline models (in TensorFlow), so that they can be used, for example, to generate features for new action classes.
# Acknowledgements:
The collection of this dataset was funded by DeepMind. We are very grateful for help from Andreas Kirsch, John- Paul Holt, Danielle Breen, Jonathan Fildes, James Besley and Brian Carver. We are grateful for advice and comments from Tom Duerig, Juan Carlos Niebles, Simon Osindero, Chuck Rosenberg and Sean Legassick; we would also like to thank Sandra and Aditya for data clean up.
# References
[1] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, et al. Tensorï¬ow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016. 8
[2] M. Andriluka, L. Pishchulin, P. Gehler, and B. Schiele. 2d human pose estimation: New benchmark and state of the In Computer Vision and Pattern Recognition art analysis. (CVPR), 2014 IEEE Conference on. IEEE, 2014. 4
[3] F. Caba Heilbron, V. Escorcia, B. Ghanem, and J. C. Niebles. Activitynet: A large-scale video benchmark for human activ- ity understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015. 2, 4
[4] A. Caliskan, J. J. Bryson, and A. Narayanan. Semantics de- rived automatically from language corpora contain human- like biases. Science, 356(6334):183â186, 2017. 6
[5] J. Carreira and A. Zisserman. Quo vadis, action recogni- tion? new models and the kinetics dataset. In IEEE Interna- tional Conference on Computer Vision and Pattern Recogni- tion CVPR, 2017. 2
[6] T. Cooijmans, N. Ballas, C. Laurent, and A. Courville.
Class 1 | Class 2 | confusion
riding mule | riding or walking with horse | 40%
hockey stop | ice skating | 36%
swing dancing | salsa dancing | 36%
strumming guitar | playing guitar | 35%
shooting basketball | playing basketball | 32%
cooking sausages | cooking chicken | 29%
sweeping floor | mopping floor | 27%
triple jump | long jump | 26%
doing aerobics | zumba | 26%
petting animal (not cat) | feeding goats | 25%
shaving legs | waxing legs | 25%
snowboarding | skiing (not slalom or crosscountry) | 22%

Table 5: Top-12 class confusions in Kinetics, using the two-stream model.
Class | Flow/RGB accuracy ratio
rock scissors paper | 5.3
sword fighting | 3.1
robot dancing | 3.1
air drumming | 2.8
exercising arm | 2.5
making a cake | 0.1
cooking sausages | 0.1
sniffing | 0.1
eating cake | 0.0
making a sandwich | 0.0

Table 6: Classes with largest and smallest ratios of recognition accuracy when using flow and RGB. The highest ratios correspond to when flow does better, the smallest to when RGB does better. We also evaluated the ratios of rgb+flow to rgb accuracies and the ordering was quite similar.
Recurrent batch normalization. arXiv preprint arXiv:1603.09025, 2016. 7
[7] J. Donahue, L. A. Hendricks, S. Guadarrama, M. Rohrbach, S. Venugopalan, K. Saenko, and T. Darrell. Long-term recurrent convolutional networks for visual recognition and description. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2625–2634, 2015. 7
[8] M. Everingham, S. A. Eslami, L. Van Gool, C. K. Williams, J. Winn, and A. Zisserman. The pascal visual object classes International Journal of Com- challenge: A retrospective. puter Vision, 111(1):98â136, 2015. 6
[9] C. Feichtenhofer, A. Pinz, and A. Zisserman. Convolutional two-stream network fusion for video action recognition. In IEEE International Conference on Computer Vision and Pat- tern Recognition CVPR, 2016. 7
[11] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learn- ing for image recognition. In Computer Vision and Pattern Recognition (CVPR), 2016 IEEE Conference on, 2016. 7 [12] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015. 6
[13] S. Ji, W. Xu, M. Yang, and K. Yu. 3d convolutional neural networks for human action recognition. IEEE transactions on pattern analysis and machine intelligence, 35(1):221â 231, 2013. 7, 8
[14] A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and L. Fei-Fei. Large-scale video classiï¬cation with convo- lutional neural networks. In Proceedings of the IEEE con- ference on Computer Vision and Pattern Recognition, pages 1725â1732, 2014. 7, 8
[15] H. Kuehne, H. Jhuang, E. Garrote, T. Poggio, and T. Serre. HMDB: a large video database for human motion recog- In Proceedings of the International Conference on nition. Computer Vision (ICCV), 2011. 1, 2, 4
[16] I. Laptev, M. Marszalek, C. Schmid, and B. Rozenfeld. Learning realistic human actions from movies. In Computer Vision and Pattern Recognition, 2008. CVPR 2008. IEEE Conference on, pages 1â8. IEEE, 2008. 7
[17] J. C. Niebles, H. Wang, and L. Fei-Fei. Unsupervised learn- ing of human action categories using spatial-temporal words. International journal of computer vision, 79(3):299â318, 2008. 7
[18] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, S. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. Berg, and F. Li. Imagenet large scale visual recognition challenge. IJCV, 2015. 1, 2
[19] K. Simonyan and A. Zisserman. Two-stream convolutional In Advances networks for action recognition in videos. in Neural Information Processing Systems, pages 568â576, 2014. 6, 7
[20] K. Soomro, A. R. Zamir, and M. Shah. Ucf101: A dataset of 101 human actions classes from videos in the wild. arXiv preprint arXiv:1212.0402, 2012. 1, 2, 4
[10] G. Grifï¬n, A. Holub, and P. Perona. Caltech-256 object cat- egory dataset. 2007. 6
[21] G. W. Taylor, R. Fergus, Y. LeCun, and C. Bregler. Convolu- tional learning of spatio-temporal features. In European con-
ference on computer vision, pages 140â153. Springer, 2010. 7, 8
[22] A. Torralba and A. A. Efros. Unbiased look at dataset bias. In Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, pages 1521â1528. IEEE, 2011. 6 [23] D. Tran, L. Bourdev, R. Fergus, L. Torresani, and M. Paluri. Learning spatiotemporal features with 3d convolutional net- works. In 2015 IEEE International Conference on Computer Vision (ICCV), pages 4489â4497. IEEE, 2015. 7, 8
[24] H. Wang and C. Schmid. Action recognition with improved In International Conference on Computer Vi- trajectories. sion, 2013. 7
[25] X. Wang, A. Farhadi, and A. Gupta. Actions Ë transforma- tions. In CVPR, 2016. 4
[26] J. Yue-Hei Ng, M. Hausknecht, S. Vijayanarasimhan, O. Vinyals, R. Monga, and G. Toderici. Beyond short snip- In Proceed- pets: Deep networks for video classiï¬cation. ings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4694â4702, 2015. 7
# A. List of Kinetics Human Action Classes
This is the list of classes included in the human action video dataset. The number of clips for each action class is given by the number in brackets following each class name.
1. abseiling (1146)
2. air drumming (1132)
3. answering questions (478)
4. applauding (411)
5. applying cream (478)
6. archery (1147)
7. arm wrestling (1123)
8. arranging ï¬owers (583)
9. assembling computer (542)
10. auctioning (478)
11. baby waking up (611)
12. baking cookies (927)
13. balloon blowing (826)
14. bandaging (569)
15. barbequing (1070)
16. bartending (601)
17. beatboxing (943)
18. bee keeping (430)
19. belly dancing (1115)
20. bench pressing (1106)
21. bending back (635)
22. bending metal (410)
23. biking through snow (1052)
24. blasting sand (713)
25. blowing glass (1145)
26. blowing leaves (405)
27. blowing nose (597)
28. blowing out candles (1150)
29. bobsledding (605)
30. bookbinding (914)
31. bouncing on trampoline (690)
32. bowling (1079)
33. braiding hair (780)
34. breading or breadcrumbing (454)
35. breakdancing (948)
36. brush painting (532)
37. brushing hair (934)
38. brushing teeth (1149)
39. building cabinet (431)
40. building shed (427)
41. bungee jumping (1056)
42. busking (851)
43. canoeing or kayaking (1146)
44. capoeira (1092)
45. carrying baby (558)
46. cartwheeling (616)
47. carving pumpkin (711)
48. catching ï¬sh (671)
49. catching or throwing baseball (756)
50. catching or throwing frisbee (1060)
51. catching or throwing softball (842)
52. celebrating (751)
53. changing oil (714)
54. changing wheel (459)
55. checking tires (555)
56. cheerleading (1145)
57. chopping wood (916)
58. clapping (491)
59. clay pottery making (513)
60. clean and jerk (902)
61. cleaning ï¬oor (874)
62. cleaning gutters (598)
63. cleaning pool (447)
64. cleaning shoes (706)
65. cleaning toilet (576)
66. cleaning windows (695)
67. climbing a rope (413)
68. climbing ladder (662)
69. climbing tree (1120)
70. contact juggling (1135)
71. cooking chicken (1000)
72. cooking egg (618)
73. cooking on campï¬re (403)
74. cooking sausages (467)
75. counting money (674)
76. country line dancing (1015)
77. cracking neck (449)
78. crawling baby (1150)
79. crossing river (951)
80. crying (1037)
81. curling hair (855)
82. cutting nails (560)
83. cutting pineapple (712)
84. cutting watermelon (767)
85. dancing ballet (1144)
86. dancing charleston (721)
87. dancing gangnam style (836)
88. dancing macarena (958)
89. deadlifting (805)
90. decorating the christmas tree (612)
91. digging (404)
92. dining (671)
93. disc golï¬ng (565)
94. diving cliff (1075)
95. dodgeball (595)
96. doing aerobics (461)
97. doing laundry (461)
98. doing nails (949)
99. drawing (445)
100. dribbling basketball (923)
101. drinking (599)
102. drinking beer (575)
103. drinking shots (403)
104. driving car (1118)
105. driving tractor (922)
106. drop kicking (716)
107. drumming ï¬ngers (409)
108. dunking basketball (1105)
109. dying hair (1072)
110. eating burger (864)
111. eating cake (494)
112. eating carrots (516)
113. eating chips (749)
114. eating doughnuts (528)
115. eating hotdog (570)
116. eating ice cream (927)
117. eating spaghetti (1145)
118. eating watermelon (550)
119. egg hunting (500)
120. exercising arm (416)
121. exercising with an exercise ball (438)
122. extinguishing ï¬re (602)
123. faceplanting (441)
124. feeding birds (1150)
125. feeding ï¬sh (973)
126. feeding goats (1027)
127. ï¬lling eyebrows (1085)
128. ï¬nger snapping (825)
129. ï¬xing hair (676)
130. ï¬ipping pancake (720)
131. ï¬ying kite (1063)
132. folding clothes (695)
133. folding napkins (874)
134. folding paper (940)
135. front raises (962)
136. frying vegetables (608)
137. garbage collecting (441)
138. gargling (430)
139. getting a haircut (658)
140. getting a tattoo (737)
141. giving or receiving award (953)
142. golf chipping (699)
143. golf driving (836)
144. golf putting (1081)
145. grinding meat (415)
146. grooming dog (613)
147. grooming horse (645)
148. gymnastics tumbling (1143)
149. hammer throw (1148)
150. headbanging (1090)
151. headbutting (640)
152. high jump (954)
153. high kick (825)
154. hitting baseball (1071)
155. hockey stop (468)
156. holding snake (430)
157. hopscotch (726)
158. hoverboarding (564)
159. hugging (517)
160. hula hooping (1129)
161. hurdling (622)
162. hurling (sport) (836)
163. ice climbing (845)
164. ice fishing (555)
165. ice skating (1140)
166. ironing (535)
167. javelin throw (912)
168. jetskiing (1140)
169. jogging (417)
170. juggling balls (923)
171. juggling fire (668)
172. juggling soccer ball (484)
173. jumping into pool (1133)
174. jumpstyle dancing (662)
175. kicking field goal (833)
176. kicking soccer ball (544)
177. kissing (733)
178. kitesurfing (794)
179. knitting (691)
180. krumping (657)
181. laughing (926)
182. laying bricks (432)
183. long jump (831)
184. lunge (759)
185. making a cake (463)
186. making a sandwich (440)
187. making bed (679)
188. making jewelry (658)
189. making pizza (1147)
190. making snowman (756)
191. making sushi (434)
192. making tea (426)
193. marching (1146)
194. massaging back (1113)
195. massaging feet (478)
196. massaging legs (592)
198. milking cow (980)
199. mopping floor (606)
200. motorcycling (1142)
201. moving furniture (426)
202. mowing lawn (1147)
203. news anchoring (420)
204. opening bottle (732)
205. opening present (866)
206. paragliding (800)
207. parasailing (762)
208. parkour (504)
209. passing American football (in game) (863)
210. passing American football (not in game) (1045)
211. peeling apples (592)
212. peeling potatoes (457)
213. petting animal (not cat) (757)
214. petting cat (756)
215. picking fruit (793)
216. planting trees (557)
217. plastering (428)
218. playing accordion (925)
219. playing badminton (944)
220. playing bagpipes (838)
221. playing basketball (1144)
222. playing bass guitar (1135)
223. playing cards (737)
224. playing cello (1081)
225. playing chess (850)
226. playing clarinet (1022)
227. playing controller (524)
228. playing cricket (949)
229. playing cymbals (636)
230. playing didgeridoo (787)
231. playing drums (908)
232. playing flute (475)
233. playing guitar (1135)
234. playing harmonica (1006)
235. playing harp (1149)
236. playing ice hockey (917)
238. playing kickball (468)
239. playing monopoly (731)
240. playing organ (672)
241. playing paintball (1140)
242. playing piano (691)
243. playing poker (1134)
244. playing recorder (1148)
245. playing saxophone (916)
246. playing squash or racquetball (980)
247. playing tennis (1144)
248. playing trombone (1149)
249. playing trumpet (989)
250. playing ukulele (1146)
251. playing violin (1142)
252. playing volleyball (804)
253. playing xylophone (746)
254. pole vault (984)
255. presenting weather forecast (1050)
256. pull ups (1121)
257. pumping fist (1009)
258. pumping gas (544)
259. punching bag (1150)
260. punching person (boxing) (483)
261. push up (614)
262. pushing car (1069)
263. pushing cart (1150)
264. pushing wheelchair (465)
265. reading book (1148)
266. reading newspaper (424)
267. recording music (415)
268. riding a bike (476)
269. riding camel (716)
270. riding elephant (1104)
271. riding mechanical bull (698)
272. riding mountain bike (495)
273. riding mule (476)
274. riding or walking with horse (1131)
275. riding scooter (674)
276. riding unicycle (864)
277. ripping paper (605)
278. robot dancing (893)
279. rock climbing (1144)
280. rock scissors paper (424)
281. roller skating (960)
282. running on treadmill (428)
283. sailing (867)
284. salsa dancing (1148)
285. sanding floor (574)
286. scrambling eggs (816)
287. scuba diving (968)
288. setting table (478)
289. shaking hands (640)
290. shaking head (885)
291. sharpening knives (424)
292. sharpening pencil (752)
293. shaving head (971)
294. shaving legs (509)
295. shearing sheep (988)
296. shining shoes (615)
297. shooting basketball (595)
298. shooting goal (soccer) (444)
299. shot put (987)
300. shoveling snow (879)
301. shredding paper (403)
302. shuffling cards (828)
303. side kick (991)
304. sign language interpreting (446)
305. singing (1147)
306. situp (817)
307. skateboarding (1139)
308. ski jumping (1051)
309. skiing (not slalom or crosscountry) (1140)
310. skiing crosscountry (477)
311. skiing slalom (539)
312. skipping rope (488)
313. skydiving (505)
314. slacklining (790)
315. slapping (465)
316. sled dog racing (775)
317. smoking (1105)
318. smoking hookah (857)
319. snatch weight lifting (943)
320. sneezing (505)
321. sniffing (399)
322. snorkeling (1012)
323. snowboarding (937)
324. snowkiting (1145)
325. snowmobiling (601)
326. somersaulting (993)
327. spinning poi (1134)
328. spray painting (908)
329. spraying (470)
330. springboard diving (406)
331. squat (1148)
332. sticking tongue out (770)
333. stomping grapes (444)
334. stretching arm (718)
335. stretching leg (829)
336. strumming guitar (472)
337. surfing crowd (876)
338. surfing water (751)
339. sweeping floor (604)
340. swimming backstroke (1077)
341. swimming breast stroke (833)
342. swimming butterfly stroke (678)
343. swing dancing (512)
344. swinging legs (409)
345. swinging on something (482)
346. sword fighting (473)
347. tai chi (1070)
348. taking a shower (378)
349. tango dancing (1114)
350. tap dancing (947)
351. tapping guitar (815)
352. tapping pen (703)
353. tasting beer (588)
354. tasting food (613)
355. testifying (497)
356. texting (704)
357. throwing axe (816)
358. throwing ball (634)
359. throwing discus (1104)
360. tickling (610)
361. tobogganing (1147)
362. tossing coin (461)
363. tossing salad (463)
364. training dog (481)
365. trapezing (786)
366. trimming or shaving beard (981)
367. trimming trees (665)
368. triple jump (784)
369. tying bow tie (387)
370. tying knot (not on a tie) (844)
371. tying tie (673)
372. unboxing (858)
373. unloading truck (406)
374. using computer (937)
375. using remote controller (not gaming) (549)
376. using segway (387)
377. vault (562)
378. waiting in line (430)
379. walking the dog (1145)
380. washing dishes (1048)
381. washing feet (862)
382. washing hair (423)
383. washing hands (916)
384. water skiing (763)
385. water sliding (420)
386. watering plants (680)
387. waxing back (537)
388. waxing chest (760)
389. waxing eyebrows (720)
390. waxing legs (948)
391. weaving basket (743)
392. welding (759)
393. whistling (416)
394. windsurfing (1114)
395. wrapping present (861)
396. wrestling (488)
397. writing (735)
398. yawning (398)
399. yoga (1140)
400. zumba (1093)
# B. List of Parent-Child Groupings
These lists are not exclusive and are not intended to be comprehensive. Rather, they are a guide for related human action classes.
arts and crafts (12) arranging flowers blowing glass brush painting carving pumpkin clay pottery making decorating the christmas tree drawing getting a tattoo knitting making jewelry spray painting weaving basket
athletics – jumping (6) high jump hurdling long jump parkour pole vault triple jump
athletics – throwing + launching (9) archery catching or throwing frisbee disc golfing hammer throw javelin throw shot put throwing axe throwing ball throwing discus
auto maintenance (4) changing oil changing wheel checking tires pumping gas
ball sports (25) bowling catching or throwing baseball
catching or throwing softball dodgeball dribbling basketball dunking basketball golf chipping golf driving golf putting hitting baseball hurling (sport) juggling soccer ball kicking field goal kicking soccer ball passing American football (in game) passing American football (not in game) playing basketball playing cricket playing kickball playing squash or racquetball playing tennis playing volleyball shooting basketball shooting goal (soccer) shot put
body motions (16) air drumming applauding baby waking up bending back clapping cracking neck drumming fingers finger snapping headbanging headbutting pumping fist shaking head stretching arm stretching leg swinging legs
cleaning (13) cleaning floor cleaning gutters cleaning pool cleaning shoes cleaning toilet cleaning windows doing laundry making bed mopping floor setting table shining shoes
sweeping floor washing dishes
cloths (8) bandaging doing laundry folding clothes folding napkins ironing making bed tying bow tie tying knot (not on a tie) tying tie
communication (11) answering questions auctioning bartending celebrating crying giving or receiving award laughing news anchoring presenting weather forecast sign language interpreting testifying
cooking (22) baking cookies barbequing breading or breadcrumbing cooking chicken cooking egg cooking on campfire cooking sausages cutting pineapple cutting watermelon flipping pancake frying vegetables grinding meat making a cake making a sandwich making pizza making sushi making tea peeling apples peeling potatoes picking fruit scrambling eggs tossing salad
dancing (18) belly dancing
breakdancing capoeira cheerleading country line dancing dancing ballet dancing charleston dancing gangnam style dancing macarena jumpstyle dancing krumping marching robot dancing salsa dancing swing dancing tango dancing tap dancing zumba
eating + drinking (17) bartending dining drinking drinking beer drinking shots eating burger eating cake eating carrots eating chips eating doughnuts eating hotdog eating ice cream eating spaghetti eating watermelon opening bottle tasting beer tasting food
electronics (5) assembling computer playing controller texting using computer using remote controller (not gaming)
garden + plants (10) blowing leaves carving pumpkin chopping wood climbing tree decorating the christmas tree egg hunting mowing lawn planting trees
trimming trees watering plants
golf (3) golf chipping golf driving golf putting
gymnastics (5) bouncing on trampoline cartwheeling gymnastics tumbling somersaulting vault
hair (14) braiding hair brushing hair curling hair dying hair fixing hair getting a haircut shaving head shaving legs trimming or shaving beard washing hair waxing back waxing chest waxing eyebrows waxing legs
hands (9) air drumming applauding clapping cutting nails doing nails drumming fingers finger snapping pumping fist washing hands
head + mouth (17) balloon blowing beatboxing blowing nose blowing out candles brushing teeth gargling headbanging headbutting shaking head singing
smoking smoking hookah sneezing snifï¬ng sticking tongue out whistling yawning
heights (15) abseiling bungee jumping climbing a rope climbing ladder climbing tree diving cliff ice climbing jumping into pool paragliding rock climbing skydiving slacklining springboard diving swinging on something trapezing
interacting with animals (19) bee keeping catching fish feeding birds feeding fish feeding goats grooming dog grooming horse holding snake ice fishing milking cow petting animal (not cat) petting cat riding camel riding elephant riding mule riding or walking with horse shearing sheep training dog walking the dog
juggling (6) contact juggling hula hooping juggling balls juggling fire juggling soccer ball spinning poi
makeup (5) applying cream doing nails dying hair filling eyebrows getting a tattoo
martial arts (10) arm wrestling capoeira drop kicking high kick punching bag punching person side kick sword fighting tai chi wrestling
miscellaneous (9) digging extinguishing fire garbage collecting laying bricks moving furniture spraying stomping grapes tapping pen unloading truck
mobility – land (20) crawling baby driving car driving tractor faceplanting hoverboarding jogging motorcycling parkour pushing car pushing cart pushing wheelchair riding a bike riding mountain bike riding scooter riding unicycle roller skating running on treadmill skateboarding surfing crowd using segway waiting in line
mobility – water (10) crossing river diving cliff jumping into pool scuba diving snorkeling springboard diving swimming backstroke swimming breast stroke swimming butterfly stroke water sliding
music (29) beatboxing busking playing accordion playing bagpipes playing bass guitar playing cello playing clarinet playing cymbals playing didgeridoo playing drums playing flute playing guitar playing harmonica playing harp playing keyboard playing organ playing piano playing recorder playing saxophone playing trombone playing trumpet playing ukulele playing violin playing xylophone recording music singing strumming guitar tapping guitar whistling
paper (12) bookbinding counting money folding napkins folding paper opening present reading book reading newspaper ripping paper
shredding paper unboxing wrapping present writing
personal hygiene (6) brushing teeth taking a shower trimming or shaving beard washing feet washing hair washing hands
playing games (13) egg hunting flying kite hopscotch playing cards playing chess playing monopoly playing paintball playing poker riding mechanical bull rock scissors paper shuffling cards skipping rope tossing coin
racquet + bat sports (8) catching or throwing baseball catching or throwing softball hitting baseball hurling (sport) playing badminton playing cricket playing squash or racquetball playing tennis
snow + ice (18) biking through snow bobsledding hockey stop ice climbing ice fishing ice skating making snowman playing ice hockey shoveling snow ski jumping skiing (not slalom or crosscountry) skiing crosscountry skiing slalom sled dog racing
snowboarding snowkiting snowmobiling tobogganing
swimming (3) swimming backstroke swimming breast stroke swimming butterï¬y stroke
touching person (11) carrying baby hugging kissing massaging back massaging feet massaging legs massaging person's head shaking hands slapping tickling
using tools (13) bending metal blasting sand building cabinet building shed changing oil changing wheel checking tires plastering pumping gas sanding floor sharpening knives sharpening pencil welding
water sports (8) canoeing or kayaking jetskiing kitesurfing parasailing sailing surfing water water skiing windsurfing
waxing (4) waxing back waxing chest waxing eyebrows waxing legs | {
"id": "1603.04467"
} |
1705.06476 | ParlAI: A Dialog Research Software Platform | We introduce ParlAI (pronounced "par-lay"), an open-source software platform
for dialog research implemented in Python, available at http://parl.ai. Its
goal is to provide a unified framework for sharing, training and testing of
dialog models, integration of Amazon Mechanical Turk for data collection, human
evaluation, and online/reinforcement learning; and a repository of machine
learning models for comparing with others' models, and improving upon existing
architectures. Over 20 tasks are supported in the first release, including
popular datasets such as SQuAD, bAbI tasks, MCTest, WikiQA, QACNN, QADailyMail,
CBT, bAbI Dialog, Ubuntu, OpenSubtitles and VQA. Several models are integrated,
including neural models such as memory networks, seq2seq and attentive LSTMs. | http://arxiv.org/pdf/1705.06476 | Alexander H. Miller, Will Feng, Adam Fisch, Jiasen Lu, Dhruv Batra, Antoine Bordes, Devi Parikh, Jason Weston | cs.CL | null | null | cs.CL | 20170518 | 20180308 |
# ParlAI: A Dialog Research Software Platform
# Alexander H. Miller, Will Feng, Adam Fisch, Jiasen Lu, Dhruv Batra, Antoine Bordes, Devi Parikh and Jason Weston Facebook AI Research
# Abstract
We introduce ParlAI (pronounced "par-lay"), an open-source software platform for dialog research implemented in Python, available at http://parl.ai. Its goal is to provide a unified framework for sharing, training and testing dialog models; integration of Amazon Mechanical Turk for data collection, human evaluation, and online/reinforcement learning; and a repository of machine learning models for comparing with others' models, and improving upon existing architectures. Over 20 tasks are supported in the first release, including popular datasets such as SQuAD, bAbI tasks, MCTest, WikiQA, QACNN, QADailyMail, CBT, bAbI Dialog, Ubuntu, OpenSubtitles and VQA. Several models are integrated, including neural models such as memory networks, seq2seq and attentive LSTMs.
# Introduction
Figure 1: The tasks in the first release of ParlAI. (The figure lists the supported datasets under five headings: QA datasets, sentence completion, goal-oriented dialog, chit-chat, and visual QA/dialog.)
The purpose of language is to accomplish communication goals, which typically involve a dialog between two or more communicators (Crystal, 2004). Hence, trying to solve dialog is a fundamental goal for researchers in the NLP community. From a machine learning perspective, building a learning agent capable of dialog is also fundamental for various reasons, chiefly that the solution involves achieving most of the subgoals of the field, and in many cases those subtasks are directly impactful to the task.
Figure 2: MTurk Live Chat for collecting QA datasets in ParlAI.
about sports or the news, or answering factual or perceptually-grounded questions all fall under dialog. Hence, methods that perform task transfer appear useful for the end-goal. Memory, logical and commonsense reasoning, planning, learning from interaction, learning compositionality and other AI subgoals also have clear roles in dialog.
On the one hand dialog can be seen as a single task (learning how to talk) and on the other hand as thousands of related tasks that require different skills, all using the same input and output format. The task of booking a restaurant, chatting
However, to pursue these research goals, software tools should unify the different dialog subtasks and the agents that can learn from them. Working on individual datasets can lead to siloed
research, where the overfitting to specific qualities of a dataset does not generalize to solving other tasks. For example, methods that do not generalize beyond WebQuestions (Berant et al., 2013) because they specialize on knowledge bases only, SQuAD (Rajpurkar et al., 2016) because they predict start and end context indices (see Sec. 7), or bAbI (Weston et al., 2015) because they use supporting facts or make use of its simulated nature.
In this paper we present a software platform, ParlAI (pronounced "par-lay"), that provides researchers a unified framework for training and testing dialog models, especially multitask training or evaluation over many tasks at once, as well as seamless integration with Amazon Mechanical Turk. Over 20 tasks are supported in the first release, including many popular datasets, see Fig. 1. Included are examples of training neural models with PyTorch and Lua Torch1. Using Theano2 or Tensorflow3 instead is also straightforward.
The overarching goal of ParlAI is to build a community-based platform for easy access to both tasks and learning algorithms that perform well on them, in order to push the field forward. This paper describes our goals in detail, and gives a technical overview of the platform.
# 2 Goals
The goals of ParlAI are as follows:
A unified framework for development of dialog models. ParlAI aims to unify dialog dataset input formats fed to machine learning agents to a single format, and to standardize evaluation frameworks and metrics as much as possible. Researchers can submit their new tasks and their agent training code to the repository to share with others in order to aid reproducibility, and to better enable follow-on research.
General dialog involving many different skills. ParlAI contains a seamless combination of real and simulated language datasets, and encourages multitask model development & evaluation by making multitask models as easy to build as single task ones. This should reduce overfitting of model design to specific datasets and encourage models that perform task transfer, an important prerequisite for a general dialog agent.
1 http://pytorch.org/ and http://torch.ch/
2 http://deeplearning.net/software/theano/
3 https://www.tensorflow.org/
Real dialog with people. ParlAI allows collecting, training and evaluating on live dialog with humans via Amazon Mechanical Turk by making it easy to connect Turkers with a dialog agent, see Fig. 2. This also enables comparison of Turk experiments across different research groups, which has been historically difficult.

Towards a common general dialog model. Our aim is to motivate the building of new tasks and agents that move the field towards a working dialog model. Hence, each new task that goes into the repository should build towards that common goal, rather than being seen solely as a piece of independent research.
# 3 General Properties of ParlAI
ParlAI consists of a number of tasks and agents that can be used to solve them. All the tasks in ParlAI have a single format (API) which makes applying any agent to any task, or multiple tasks at once, simple. The tasks include both fixed supervised/imitation learning datasets (i.e. conversation logs) and interactive (online or reinforcement learning) tasks, as well as both real language and simulated tasks, which can all be seamlessly trained on. ParlAI also supports other media, e.g. images as well as text for visual question answering (Antol et al., 2015) or visually grounded dialog (Das et al., 2017). ParlAI automatically downloads tasks and datasets the first time they are used. One or more Mechanical Turkers can be embedded inside an environment (task) to collect data, train or evaluate learning agents.
Examples are included in the first release of training with PyTorch and Lua Torch. ParlAI uses ZeroMQ to talk to languages other than Python (such as Lua Torch). Both batch training and hogwild training of models are supported and built into the code. An example main for training an agent is given in Fig. 3.
# 4 Worlds, Agents and Teachers
The main concepts (classes) in ParlAI are worlds, agents and teachers:

• world: the environment. This can vary from being very simple, e.g. just two agents conversing, to much more complex, e.g. multiple agents in an interactive environment.
• agent: an agent that can act (especially, speak) in the world. An agent is either a learner (i.e. a machine learned system), a hard-coded bot such as one designed to interact with learners, or a human (e.g. a Turker).
• teacher: a type of agent that talks to the learner in order to teach it, e.g. implements one of the tasks in Fig. 1.

teacher = SquadTeacher(opt)
agent = MyAgent(opt)
world = World(opt, [teacher, agent])
for i in range(num_exs):
    world.parley()
    print(world.display())

def parley(self):
    for agent in self.agents:
        act = agent.act()
        for other_agent in self.agents:
            if other_agent != agent:
                other_agent.observe(act)

Figure 3: ParlAI main for displaying data (top) and the code for the world.parley call (bottom).

After defining a world and the agents in it, a main loop can be run for training, testing or displaying, which calls the function world.parley() to run one time step of the world. Example code to display data is given in Fig. 3, and the output of running it is in Fig. 4.
# 5 Actions and Observations
All agents (including teachers) speak to each other in a single common format, the observation/action object (a python dict), see Fig. 5. It is used to pass text, labels and rewards between agents. The same object type is used for both talking (acting) and listening (observing), but with different values in the fields. Hence, the object is returned from agent.act() and passed in to agent.observe(), see Fig. 3.
The fields of the message are as follows:
• text: a speech act.
• id: the speaker's identity.
• reward: a real-valued reward assigned to the receiver of the message.
• episode_done: indicating the end of a dialog.
For supervised datasets, there are some additional fields that can be used:
• label: a set of answers the speaker is expecting to receive in reply, e.g. for QA datasets the right answers to a question.
• label_candidates: a set of possible ways to respond supplied by a teacher, e.g. for multiple choice datasets or ranking tasks.
• text_candidates: ranked candidate predictions from a learner. Used to evaluate ranking metrics, rather than just evaluate the single response in the text field.
• metrics: a teacher can communicate to a learning agent metrics on its performance.
Finally, other media can also be supported with additional fields:
• image: an image, e.g. for Visual Question Answering or Visual Dialog datasets.

python examples/display_data.py -t babi
[babi:Task1k:4]: The office is north of the kitchen.
The bathroom is north of the office.
What is north of the kitchen?
[cands: office|garden|hallway|bedroom|kitchen|bathroom]
[RepeatLabelAgent]: office
- - - - - - - - - - - - - - - - - - - - -
[babi:Task1k:2]: Daniel went to the kitchen.
Daniel grabbed the football there.
Mary took the milk there.
Mary journeyed to the office.
Where is the milk?
[cands: office|garden|hallway|bedroom|kitchen|bathroom]
[RepeatLabelAgent]: office

Figure 4: Example output to display data of a given task (see Fig. 3 for corresponding code).
As the dict is extensible, we can add more fields over time, e.g. for audio and other sensory data, as well as actions other than speech acts.

Each of these fields is technically optional, depending on the dataset, though the text field will most likely be used in nearly all exchanges. A typical exchange from a ParlAI training set is shown in Fig. 6.
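To make the optional fields concrete, the hypothetical messages below show one supervised teacher action and a learner reply; the texts, candidates and reward value are invented for the example, and only the field names follow the list above.

teacher_action = {
    'id': 'ubuntu',                      # speaker identity
    'text': 'My wifi driver crashes on boot. Any ideas?',
    'labels': ['Try reinstalling the driver from the restricted repo.'],
    'label_candidates': [
        'Try reinstalling the driver from the restricted repo.',
        'Reboot into safe mode.',
        'Use a different desktop environment.',
    ],
    'reward': 0.0,                       # could carry an RL signal instead
    'episode_done': False,
}

learner_reply = {
    'id': 'MyAgent',
    'text': 'Try reinstalling the driver from the restricted repo.',
    # Ranked guesses allow the teacher to compute ranking metrics.
    'text_candidates': [
        'Try reinstalling the driver from the restricted repo.',
        'Reboot into safe mode.',
    ],
    'episode_done': False,
}

# A ranking metric can be computed directly from the two dicts.
rank = learner_reply['text_candidates'].index(teacher_action['labels'][0]) + 1
print('rank of correct answer:', rank)  # -> 1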
# 6 Code Structure
The ParlAI codebase has five main directories:
• core: the primary code for the platform.
• agents: contains agents which can interact with the worlds/tasks (e.g. learning models).
• examples: contains examples of different mains (display data, training and evaluation).
• tasks: contains code for the different tasks available from within ParlAI.
• mturk: contains code for setting up Mechanical Turk and sample MTurk tasks.
# 6.1 Core
The core library contains the following files:
• agents.py: defines the Agent base class for all agents, which implements the observe() and act() methods, the Teacher class which also reports metrics, and MultiTaskTeacher for multitask training.
Observation/action dict: passed back and forth between agents & environment. Contains:
  .text             text of speaker(s)
  .id               id of speaker(s)
  .reward           for reinforcement learning signals
  .episode_done     end of episode
For supervised dialog datasets:
  .label
  .label_candidates   multiple choice options
  .text_candidates    ranked candidate responses
  .metrics            evaluation metrics
Other media:
  .image            for VQA or Visual Dialog

Figure 5: The observation/action dict is the central message passing object in ParlAI: agents send this message to speak, and receive a message of this form to observe other speakers and the environment.

• dialog_teacher.py: the base teacher class for doing dialog with fixed chat logs.
• worlds.py: defines the base World class, DialogPartnerWorld for two speakers, MultiAgentDialogWorld for more than two, and two containers that can wrap a chosen environment: BatchWorld for batch training, and HogwildWorld for training across multiple threads.
• dict.py: code for building language dictionaries.
• metrics.py: computes exact match, F1 and ranking metrics for evaluation.
• params.py: uses argparse to interpret command line arguments for ParlAI.
# 6.2 Agents
The agents directory contains machine learning agents. Currently available within this directory:
• drqa: an attentive LSTM model DrQA (Chen et al., 2017) implemented in PyTorch that has competitive results on SQuAD (Rajpurkar et al., 2016) amongst other datasets.
• memnn: code for an end-to-end memory network (Sukhbaatar et al., 2015) in Lua Torch.
• remote_agent: basic class for any agent connecting over ZeroMQ.
• seq2seq: basic GRU sequence to sequence model (Sutskever et al., 2014).
• ir_baseline: information retrieval (IR) baseline that scores responses with TFIDF-weighted matching (Ritter et al., 2011).
• repeat_label: basic class for merely repeating all data sent to it (e.g. for debugging).

Teacher: {
    'text': 'Sam went to the kitchen.\nPat gave Sam the milk.\nWhere is the milk?',
    'labels': ['kitchen'],
    'label_candidates': ['hallway', 'kitchen', 'bathroom'],
    'episode_done': False
}
Student: { 'text': 'hallway' }
Teacher: {
    'text': 'Sam went to the hallway.\nPat went to the bathroom.\nWhere is the milk?',
    'labels': ['hallway'],
    'label_candidates': ['hallway', 'kitchen', 'bathroom'],
    'done': True
}
Student: { 'text': 'hallway' }
...

Figure 6: A typical exchange from a ParlAI training set involves messages passed using the observation/action dict (the test set would not include labels). Shown here is the bAbI dataset.
# 6.3 Examples
This directory contains examples of different mains:.
⢠display data: display data from a particu- lar task provided on the command-line.
⢠display model: show the predictions of a provided model.
⢠eval model: compute evaluation metrics for a given model on a given task.
⢠train model: execute a standard training procedure with a given task and model, in- cluding logging and possibly alternating be- tween training and validation.
For example, one can display 10 random exam-
ples from the bAbI tasks (Weston et al., 2015): python display data.py -t babi -n 10
Display multitasking bAbI and SQuAD (Ra- jpurkar et al., 2016) at the same time:
python display data.py -t babi,squad
Evaluate an IR baseline model on the Movies Sub- reddit:
python eval model.py -m ir baseline -t â#moviedd-redditâ -dt valid
Train an attentive LSTM model on the SQuAD dataset with a batch size of 32 examples:
python train model.py -m drqa -t squad -b 32
# 6.4 Tasks
Over 20 tasks are supported in the first release, including popular datasets such as SQuAD (Rajpurkar et al., 2016), bAbI tasks (Weston et al., 2015), QACNN and QADailyMail (Hermann et al., 2015), CBT (Hill et al., 2015), bAbI Dialog tasks (Bordes and Weston, 2016), Ubuntu (Lowe et al., 2015) and VQA (Antol et al., 2015). All the datasets in the first release are shown in Fig. 1. The tasks are separated into five categories:
• Question answering (QA): one of the simplest forms of dialog, with only 1 turn per speaker. Any intelligent dialog agent should be capable of answering questions, and there are many kinds of questions (and hence datasets) that one can build, providing a set of very important tests. Question answering is particularly useful in that the evaluation is simpler than other forms of dialog if the dataset is labeled with QA pairs and the questions are mostly unambiguous.
• Sentence completion (Cloze): the agent has to fill in a missing word in the next utterance in a dialog. Again, this is a specialized dialog task, but it has the advantage that the datasets are cheap to make and evaluation is simple, which is why the community has built several such datasets.
• Goal-Oriented Dialog: a more realistic class of tasks is where there is a goal to be achieved by the end of the dialog. For example, a customer and a travel agent discussing a flight, one speaker recommending another a movie to watch, and so on.
• Chit-Chat: dialog tasks where there may not be an explicit goal, but more of a discussion; for example two speakers discussing sports, movies or a mutual interest.
• Visual Dialog: dialog is often grounded in physical objects in the world, so we also include dialog tasks with images as well as text.

Choosing a task in ParlAI is as easy as specifying it on the command line, as shown in the dataset display utility, Fig. 4. If the dataset has not been used before, ParlAI will automatically download it. As all datasets are treated in the same way in ParlAI (with a single dialog API, see Sec. 5), a dialog agent can switch training and testing between any of them. Importantly, one can specify many
4 All dataset descriptions and references are at http://parl.ai in the README.md and task_list.py.
tasks at once (multitasking) by simply providing a comma-separated list, e.g. the command line arguments -t babi,squad, to use those two datasets, or even all the QA datasets at once (-t #qa) or indeed every task in ParlAI at once (-t #all). The aim is to make it easy to build and evaluate very rich dialog models.
Each task is contained in a folder with the following standardized files:
• build.py: file for setting up data for the task, including downloading the data the first time it is requested.
• agents.py: contains agents that live in the world of the task.
• worlds.py: optionally added for tasks that need to define new/complex environments.

To add a new task, one must implement build.py to download any required data, and agents.py for the teacher. If the data consist of fixed logs/dialog scripts such as in many supervised datasets (SQuAD, Ubuntu, etc.) there is very little code to write. For more complex setups where an environment with interaction has to be defined, new worlds and/or teachers can be implemented. A minimal sketch of such a teacher is given below.
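The sketch below shows the kind of teacher a new task's agents.py might provide for a fixed-log QA dataset. Real ParlAI tasks would subclass the DialogTeacher base class from core/dialog_teacher.py; here the data loading and message emission are written out by hand so the example stays self-contained, and the file name and tab-separated format are hypothetical.

class MyNewTaskTeacher:
    """Serves fixed QA logs, one question per act() call (illustrative only)."""

    def __init__(self, datafile='my_task_train.txt'):
        self.id = 'my_new_task'
        self.examples = self._load(datafile)
        self.cursor = 0

    def _load(self, path):
        # Hypothetical format: "context<TAB>question<TAB>answer" per line.
        examples = []
        try:
            with open(path) as f:
                for line in f:
                    context, question, answer = line.rstrip('\n').split('\t')
                    examples.append((context, question, answer))
        except FileNotFoundError:
            # Fall back to a toy example so the sketch runs as-is.
            examples = [('Sam has the milk.', 'Who has the milk?', 'Sam')]
        return examples

    def act(self):
        context, question, answer = self.examples[self.cursor % len(self.examples)]
        self.cursor += 1
        return {'id': self.id,
                'text': context + '\n' + question,
                'labels': [answer],
                'episode_done': True}

    def observe(self, observation):
        # A real teacher would compare the reply to the labels and log metrics here.
        return observation

print(MyNewTaskTeacher().act())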
# 6.5 Mechanical Turk
An important part of ParlAI is seamless integration with Mechanical Turk for data collection, training or evaluation. Human Turkers are also viewed as agents in ParlAI and hence human-human, human-bot, or multiple humans and bots in group chat can all converse within the standard framework, switching out the roles as desired with no code changes to the agents. This is because Turkers also receive and send via the same interface: using the fields of the observation/action dict. We provide two examples in the first release:
(i) qa_collector: an agent that talks to Turkers to collect question-answer pairs given a context paragraph to build a QA dataset, see Fig. 2.
(ii) model_evaluator: an agent which collects ratings from Turkers on the performance of a bot on a given task.
Running a new MTurk task involves implementing and running a main file (like run.py) and defining several task specific parameters for the world and agent(s) you wish humans to talk to. For data collection tasks the agent should pose the problem and ask the Turker for e.g. the answers to questions, see Fig. 2. Other parameters include the task description, the role of the Turker in the
task, keywords to describe the task, the number of hits and the rewards for the Turkers. One can run in a sandbox mode before launching the real task where Turkers are paid.
For online training or evaluation, the Turker can talk to your machine learning agent, e.g. LSTM, memory network or other implemented technique. New tasks can be checked into the repository so researchers can share data collection and data eval- uation procedures and reproduce experiments.
# 7 Demonstrative Experiment
To demonstrate ParlAI in action, we give results in Table 1 of DrQA, an attentive LSTM architecture with single task and multitask training on the SQuAD and bAbI tasks, a combination not shown before with any method, to our knowledge.
This experiment simultaneously shows the power of ParlAI (how easy it is to set up this experiment) and the limitations of current methods. Almost all methods working well on SQuAD have been designed to predict a phrase from the given context (they are given labeled start and end indices in training). Hence, those models cannot be applied to all dialog datasets, e.g. some of the bAbI tasks include yes/no questions, where yes and no do not appear in the context. This highlights that researchers should not focus models on a single dataset. ParlAI does not provide start and end label indices as its API is dialog only, see Fig. 5. This is a deliberate choice that discourages such dataset overfitting/specialization. However, this also results in a slight drop in performance because less information is given5 (66.4 EM vs. 69.5 EM, see Chen et al., 2017), which is still in the range of many existing well-performing methods, see https://stanford-qa.com.
Overall, while DrQA can solve some of the bAbI tasks and performs well on SQuAD, it does not match the best performing methods on bAbI (Seo et al., 2016; Henaff et al., 2016), and multitasking does not help. Hence, ParlAI lays out the challenge to the community to find learning algorithms that are generally applicable and that benefit from training over many dialog datasets.
5 As we now do not know the location of the true answer, we randomly pick the start and end indices of any context phrase matching the given training set answer; in some cases this is unique.
bAbI 10k Task                     Single   Multitask
1: Single Supporting Fact          100       100
2: Two Supporting Facts             98.1      54.3
3: Three Supporting Facts           45.4      58.1
4: Two Arg. Relations              100       100
5: Three Arg. Relations             98.9      98.2
11: Basic Coreference              100       100
12: Conjunction                    100       100
13: Compound Coref.                100       100
14: Time Reasoning                  99.8      99.9
16: Basic Induction                 47.7      48.2
SQuAD (Dev. Set)                    66.4      63.4

Table 1: Test Accuracy of DrQA on bAbI 10k and SQuAD (Exact Match metric) using ParlAI. The subset of bAbI tasks for which the answer is exactly contained in the text is used.
# 8 Related Software
There are many existing independent dialog datasets, and training code for individual models that work on some of them. Many are framed in slightly different ways (different formats, with different types of supervision), and ParlAI attempts to unify this fragmented landscape.

There are some existing software platforms that are related in their scope, but not in their specialization. OpenAI's Gym and Universe6 are toolkits for developing and comparing reinforcement learning (RL) algorithms. Gym is for games like Pong or Go, and Universe is for online games and websites. Neither focuses on dialog or covers the case of supervised datasets as we do.

CommAI7 is a framework that uses textual communication for the goal of developing artificial general intelligence through incremental tasks that test increasingly more complex skills, as described in (Mikolov et al., 2015). CommAI is in a RL setting, and contains only synthetic datasets, rather than real natural language datasets as we do here. In that regard it has a different focus to ParlAI, which emphasizes the more immediate task of real dialog, rather than directly on evaluation of machine intelligence.
# 9 Conclusion and Outlook
ParlAI is a framework allowing the research community to share existing and new tasks for dialog as well as agents that learn on them, and to collect and evaluate conversations between agents and humans via Mechanical Turk. We hope this tool enables systematic development and evaluation of dialog agents, helps push the state of the art in dialog further, and benefits the field as a whole.

6 https://gym.openai.com/ and https://universe.openai.com/
7 https://github.com/facebookresearch/CommAI-env
# Acknowledgments
We thank Mike Lewis, Denis Yarats, Douwe Kiela, Michael Auli, Y-Lan Boureau, Arthur Szlam, Marc'Aurelio Ranzato, Yuandong Tian, Maximilian Nickel, Martin Raison, Myle Ott, Marco Baroni, Leon Bottou and other members of the FAIR team for discussions helpful to building ParlAI.
# References
Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, and Devi Parikh. 2015. VQA: Visual Question Answering. In Proceedings of the IEEE International Conference on Computer Vision, pages 2425-2433.

Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In EMNLP, volume 2, page 6.

Antoine Bordes and Jason Weston. 2016. Learning end-to-end goal-oriented dialog. arXiv preprint arXiv:1605.07683.

Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer open-domain questions. arXiv preprint arXiv:1704.00051.

David Crystal. 2004. The Cambridge encyclopedia of the English language. Ernst Klett Sprachen.

Abhishek Das, Satwik Kottur, José M. F. Moura, Stefan Lee, and Dhruv Batra. 2017. Learning cooperative visual dialog agents with deep reinforcement learning. arXiv preprint arXiv:1703.06585.

Mikael Henaff, Jason Weston, Arthur Szlam, Antoine Bordes, and Yann LeCun. 2016. Tracking the world state with recurrent entity networks. arXiv preprint arXiv:1612.03969.

Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems, pages 1693-1701.

Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. 2015. The Goldilocks principle: Reading children's books with explicit memory representations. arXiv preprint arXiv:1511.02301.

Ryan Lowe, Nissan Pow, Iulian Serban, and Joelle Pineau. 2015. The Ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dialogue systems. arXiv preprint arXiv:1506.08909.

Tomas Mikolov, Armand Joulin, and Marco Baroni. 2015. A roadmap towards machine intelligence. arXiv preprint arXiv:1511.08130.

Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250.

Alan Ritter, Colin Cherry, and William B. Dolan. 2011. Data-driven response generation in social media. In EMNLP, pages 583-593. Association for Computational Linguistics.

Minjoon Seo, Sewon Min, Ali Farhadi, and Hannaneh Hajishirzi. 2016. Query-reduction networks for question answering. arXiv preprint arXiv:1606.04582.

Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. 2015. End-to-end memory networks. In Advances in Neural Information Processing Systems, pages 2440-2448.

Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104-3112.

Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin, and Tomas Mikolov. 2015. Towards AI-complete question answering: A set of prerequisite toy tasks. arXiv preprint arXiv:1502.05698.
"id": "1612.03969"
} |
1705.04304 | A Deep Reinforced Model for Abstractive Summarization | Attentional, RNN-based encoder-decoder models for abstractive summarization
have achieved good performance on short input and output sequences. For longer
documents and summaries however these models often include repetitive and
incoherent phrases. We introduce a neural network model with a novel
intra-attention that attends over the input and continuously generated output
separately, and a new training method that combines standard supervised word
prediction and reinforcement learning (RL). Models trained only with supervised
learning often exhibit "exposure bias" - they assume ground truth is provided
at each step during training. However, when standard word prediction is
combined with the global sequence prediction training of RL the resulting
summaries become more readable. We evaluate this model on the CNN/Daily Mail
and New York Times datasets. Our model obtains a 41.16 ROUGE-1 score on the
CNN/Daily Mail dataset, an improvement over previous state-of-the-art models.
Human evaluation also shows that our model produces higher quality summaries. | http://arxiv.org/pdf/1705.04304 | Romain Paulus, Caiming Xiong, Richard Socher | cs.CL | null | null | cs.CL | 20170511 | 20171113 |
# A DEEP REINFORCED MODEL FOR ABSTRACTIVE SUMMARIZATION
Romain Paulus, Caiming Xiong & Richard Socher Salesforce Research 172 University Avenue Palo Alto, CA 94301, USA {rpaulus,cxiong,rsocher}@salesforce.com
# ABSTRACT
Attentional, RNN-based encoder-decoder models for abstractive summarization have achieved good performance on short input and output sequences. For longer documents and summaries however these models often include repetitive and incoherent phrases. We introduce a neural network model with a novel intra-attention that attends over the input and continuously generated output separately, and a new training method that combines standard supervised word prediction and reinforcement learning (RL). Models trained only with supervised learning often exhibit "exposure bias": they assume ground truth is provided at each step during training. However, when standard word prediction is combined with the global sequence prediction training of RL the resulting summaries become more readable. We evaluate this model on the CNN/Daily Mail and New York Times datasets. Our model obtains a 41.16 ROUGE-1 score on the CNN/Daily Mail dataset, an improvement over previous state-of-the-art models. Human evaluation also shows that our model produces higher quality summaries.
# 1 INTRODUCTION
Text summarization is the process of automatically generating natural language summaries from an input document while retaining the important points. By condensing large quantities of information into short, informative summaries, summarization can aid many downstream applications such as creating news digests, search, and report generation.
There are two prominent types of summarization algorithms. First, extractive summarization systems form summaries by copying parts of the input (Dorr et al., 2003; Nallapati et al., 2017). Second, abstractive summarization systems generate new phrases, possibly rephrasing or using words that were not in the original text (Chopra et al., 2016; Nallapati et al., 2016).
Neural network models (Nallapati et al., 2016) based on the attentional encoder-decoder model for machine translation (Bahdanau et al., 2014) were able to generate abstractive summaries with high ROUGE scores. However, these systems have typically been used for summarizing short input sequences (one or two sentences) to generate even shorter summaries. For example, the summaries on the DUC-2004 dataset generated by the state-of-the-art system by Zeng et al. (2016) are limited to 75 characters.
Nallapati et al. (2016) also applied their abstractive summarization model on the CNN/Daily Mail dataset (Hermann et al., 2015), which contains input sequences of up to 800 tokens and multi-sentence summaries of up to 100 tokens. But their analysis illustrates a key problem with attentional encoder-decoder models: they often generate unnatural summaries consisting of repeated phrases.

We present a new abstractive summarization model that achieves state-of-the-art results on the CNN/Daily Mail and similarly good results on the New York Times dataset (NYT) (Sandhaus, 2008). To our knowledge, this is the first end-to-end model for abstractive summarization on the NYT dataset. We introduce a key attention mechanism and a new learning objective to address the repeating phrase problem: (i) we use an intra-temporal attention in the encoder that records previous attention weights for each of the input tokens while a sequential intra-attention model in the decoder
Figure 1: Illustration of the encoder and decoder attention functions combined. The two context vectors (marked "C") are computed from attending over the encoder hidden states and decoder hidden states. Using these two contexts and the current decoder hidden state ("H"), a new word is generated and added to the output sequence.
takes into account which words have already been generated by the decoder. (ii) we propose a new objective function by combining the maximum-likelihood cross-entropy loss used in prior work with rewards from policy gradient reinforcement learning to reduce exposure bias.
Our model achieves 41.16 ROUGE-1 on the CNN/Daily Mail dataset. Moreover, we show, through human evaluation of generated outputs, that our model generates more readable summaries compared to other abstractive approaches.
# 2 NEURAL INTRA-ATTENTION MODEL
In this section, we present our intra-attention model based on the encoder-decoder network (Sutskever et al., 2014). In all our equations, x = {x_1, x_2, ..., x_n} represents the sequence of input (article) tokens, y = {y_1, y_2, ..., y_{n'}} the sequence of output (summary) tokens, and || denotes the vector concatenation operator.

Our model reads the input sequence with a bi-directional LSTM encoder {RNN^e_fwd, RNN^e_bwd}, computing hidden states h^e_i = [h^e_{fwd,i} || h^e_{bwd,i}] from the embedding vectors of x_i. We use a single LSTM decoder RNN^d, computing hidden states h^d_t from the embedding vectors of y_t. Both input and output embeddings are taken from the same matrix W_emb. We initialize the decoder hidden state with h^d_0 = h^e_n.
# 2.1 INTRA-TEMPORAL ATTENTION ON INPUT SEQUENCE
At each decoding step t, we use an intra-temporal attention function to attend over specific parts of the encoded input sequence in addition to the decoder's own hidden state and the previously-generated word (Sankaran et al., 2016). This kind of attention prevents the model from attending over the same parts of the input on different decoding steps. Nallapati et al. (2016) have shown that such an intra-temporal attention can reduce the amount of repetitions when attending over long documents. We define e_{ti} as the attention score of the hidden input state h^e_i at decoding step t:
e_{ti} = f(h^d_t, h^e_i),    (1)

where f can be any function returning a scalar e_{ti} from the h^d_t and h^e_i vectors. While some attention models use functions as simple as the dot-product between the two vectors, we choose to use a bilinear function:

f(h^d_t, h^e_i) = h^d_t{}^T W^e_attn h^e_i.    (2)
We normalize the attention weights with the following temporal attention function, penalizing input tokens that have obtained high attention scores in past decoding steps. We define new temporal scores e'_{ti}:

e'_{ti} = exp(e_{ti})                                  if t = 1
e'_{ti} = exp(e_{ti}) / Σ_{j=1}^{t-1} exp(e_{ji})      otherwise.    (3)
Finally, we compute the normalized attention scores α^e_{ti} across the inputs and use these weights to obtain the input context vector c^e_t:

α^e_{ti} = e'_{ti} / Σ_{j=1}^{n} e'_{tj}    (4)

c^e_t = Σ_{i=1}^{n} α^e_{ti} h^e_i.    (5)
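As an illustration only (not the authors' released code), the unbatched PyTorch sketch below computes one decoding step of this intra-temporal attention following Equations (1)-(5); tensor sizes and variable names are assumptions made for the example.

import torch

def intra_temporal_attention(h_d_t, enc_states, W_e_attn, past_exp_scores):
    # h_d_t: (d_dec,) decoder state; enc_states: (n, d_enc) encoder states h^e_i;
    # W_e_attn: (d_dec, d_enc); past_exp_scores: running sum of exp(e_{ji}) over
    # earlier decoding steps, or None at t = 1.
    scores = enc_states @ (W_e_attn.t() @ h_d_t)        # Eqs. (1)-(2), shape (n,)
    exp_scores = torch.exp(scores)
    if past_exp_scores is None:                          # Eq. (3), case t = 1
        temporal, new_past = exp_scores, exp_scores.clone()
    else:                                                # Eq. (3), otherwise
        temporal = exp_scores / past_exp_scores
        new_past = past_exp_scores + exp_scores
    alpha = temporal / temporal.sum()                    # Eq. (4)
    context = alpha @ enc_states                         # Eq. (5), shape (d_enc,)
    return context, alpha, new_past

# Toy usage with made-up sizes.
n, d = 6, 8
enc, W, past = torch.randn(n, d), torch.randn(d, d), None
for t in range(3):
    ctx, alpha, past = intra_temporal_attention(torch.randn(d), enc, W, past)
print(ctx.shape, float(alpha.sum()))   # torch.Size([8]) 1.0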
# 2.2 INTRA-DECODER ATTENTION
While this intra-temporal attention function ensures that different parts of the encoded input sequence are used, our decoder can still generate repeated phrases based on its own hidden states, especially when generating long sequences. To prevent that, we can incorporate more information about the previously decoded sequence into the decoder. Looking back at previous decoding steps will allow our model to make more structured predictions and avoid repeating the same information, even if that information was generated many steps away. To achieve this, we introduce an intra-decoder attention mechanism. This mechanism is not present in existing encoder-decoder models for abstractive summarization. For each decoding step t, our model computes a new decoder context vector c^d_t. We set c^d_1 to a vector of zeros since the generated sequence is empty on the first decoding step. For t > 1, we use the following equations:
e^d_{tt'} = h^d_t{}^T W^d_attn h^d_{t'}    (6)

α^d_{tt'} = exp(e^d_{tt'}) / Σ_{j=1}^{t-1} exp(e^d_{tj})    (7)

c^d_t = Σ_{j=1}^{t-1} α^d_{tj} h^d_j    (8)
Figure 1 illustrates the intra-attention context vector computation c^d_t, in addition to the encoder temporal attention, and their use in the decoder.
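A similarly simplified, unbatched sketch of the intra-decoder attention of Equations (6)-(8), purely for illustration:

import torch

def intra_decoder_attention(h_d_t, prev_dec_states, W_d_attn):
    # h_d_t: (d,) current decoder state; prev_dec_states: (t-1, d) states h^d_1..h^d_{t-1}.
    scores = prev_dec_states @ (W_d_attn.t() @ h_d_t)   # Eq. (6), shape (t-1,)
    alpha = torch.softmax(scores, dim=0)                 # Eq. (7)
    return alpha @ prev_dec_states                       # Eq. (8), context c^d_t

d = 8
c_d_t = intra_decoder_attention(torch.randn(d), torch.randn(4, d), torch.randn(d, d))
print(c_d_t.shape)   # torch.Size([8])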
A closely-related intra-RNN attention function has been introduced by Cheng et al. (2016) but their implementation works by modifying the underlying LSTM function, and they do not apply it to long sequence generation problems. This is a major difference with our method, which makes no assumptions about the type of decoder RNN, and is thus simpler and more widely applicable to other types of recurrent networks.
# 2.3 TOKEN GENERATION AND POINTER
To generate a token, our decoder uses either a token-generation softmax layer or a pointer mechanism to copy rare or unseen words from the input sequence. We use a switch function that decides at each decoding step whether to use the token generation or the pointer (Gulcehre et al., 2016; Nallapati et al., 2016). We define u_t as a binary value, equal to 1 if the pointer mechanism is used to output y_t, and 0 otherwise. In the following equations, all probabilities are conditioned on y_1, . . . , y_{t-1}, x, even when not explicitly stated.
Our token-generation layer generates the following probability distribution:
p(y_t | u_t = 0) = softmax(W_out [h^d_t || c^e_t || c^d_t] + b_out)    (9)
On the other hand, the pointer mechanism uses the temporal attention weights α^e_{ti} as the probability distribution to copy the input token x_i.
p(y_t = x_i | u_t = 1) = α^e_{ti}    (10)
We also compute the probability of using the copy mechanism for the decoding step t:

p(u_t = 1) = σ(W_u [h^d_t || c^e_t || c^d_t] + b_u),    (11)
where σ is the sigmoid activation function.
Putting Equations 9, 10 and 11 together, we obtain our final probability distribution for the output token y_t:
p(y_t) = p(u_t = 1) p(y_t | u_t = 1) + p(u_t = 0) p(y_t | u_t = 0).    (12)
The ground-truth value for u_t and the corresponding i index of the target input token when u_t = 1 are provided at every decoding step during training. We set u_t = 1 either when y_t is an out-of-vocabulary token or when it is a pre-defined named entity (see Section 5).
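The mixture of Equations (9)-(12) can be sketched as below. This is an unbatched illustration with random placeholder weights, not the trained model; note that Eq. (12) itself is a soft mixture, while the supervised u_t described above is only used to build the training signal.

import torch

def output_distribution(h_d_t, c_e_t, c_d_t, alpha_e, src_token_ids,
                        W_out, b_out, W_u, b_u, vocab_size):
    state = torch.cat([h_d_t, c_e_t, c_d_t])                 # [h^d_t || c^e_t || c^d_t]
    p_gen = torch.softmax(W_out @ state + b_out, dim=0)      # Eq. (9)
    p_use_copy = torch.sigmoid(W_u @ state + b_u)            # Eq. (11), scalar
    p_copy = torch.zeros(vocab_size)                         # Eq. (10): scatter the
    p_copy.index_add_(0, src_token_ids, alpha_e)             # attention onto source ids
    return p_use_copy * p_copy + (1 - p_use_copy) * p_gen    # Eq. (12)

# Toy usage: 5 source tokens, a 20-word vocabulary, hidden size 8.
V, d = 20, 8
p = output_distribution(
    torch.randn(d), torch.randn(d), torch.randn(d),
    torch.softmax(torch.randn(5), dim=0), torch.randint(0, V, (5,)),
    torch.randn(V, 3 * d), torch.randn(V), torch.randn(3 * d), torch.randn(()), V)
print(float(p.sum()))   # ~1.0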
# 2.4 SHARING DECODER WEIGHTS
In addition to using the same embedding matrix W_emb for the encoder and the decoder sequences, we introduce some weight-sharing between this embedding matrix and the W_out matrix of the token-generation layer, similarly to Inan et al. (2017) and Press & Wolf (2016). This allows the token-generation function to use syntactic and semantic information contained in the embedding matrix.
W_out = tanh(W_emb W_proj)    (13)
# 2.5 REPETITION AVOIDANCE AT TEST TIME
Another way to avoid repetitions comes from our observation that in both the CNN/Daily Mail and NYT datasets, ground-truth summaries almost never contain the same trigram twice. Based on this observation, we force our decoder to never output the same trigram more than once during testing. We do this by setting p(yt) = 0 during beam search, when outputting yt would create a trigram that already exists in the previously decoded sequence of the current beam.
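A minimal sketch of this trigram rule, assuming word-level tokens and a hypothetical beam-search loop around it, is:

def blocked_token_ids(prefix_tokens, candidate_ids, id2token):
    # Return candidate ids whose addition would repeat an existing trigram,
    # so the beam search can set p(y_t) = 0 for them (illustrative helper only).
    if len(prefix_tokens) < 2:
        return set()
    seen = {tuple(prefix_tokens[i:i + 3]) for i in range(len(prefix_tokens) - 2)}
    last_two = tuple(prefix_tokens[-2:])
    return {cid for cid in candidate_ids
            if last_two + (id2token[cid],) in seen}

# Toy usage with a made-up partial hypothesis.
prefix = ['the', 'cat', 'sat', 'on', 'the', 'cat']
vocab = {0: 'sat', 1: 'mat', 2: 'dog'}
print(blocked_token_ids(prefix, vocab.keys(), vocab))   # {0}: 'the cat sat' would repeat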
# 3 HYBRID LEARNING OBJECTIVE
In this section, we explore different ways of training our encoder-decoder model. In particular, we propose reinforcement learning-based algorithms and their application to our summarization task.
# 3.1 SUPERVISED LEARNING WITH TEACHER FORCING
The most widely used method to train a decoder RNN for sequence generation, called the "teacher forcing" algorithm (Williams & Zipser, 1989), minimizes a maximum-likelihood loss at each decoding step. We define y^* = {y^*_1, y^*_2, . . . , y^*_{n'}} as the ground-truth output sequence for a given input sequence x. The maximum-likelihood training objective is the minimization of the following loss:
L_ml = - Σ_{t=1}^{n'} log p(y^*_t | y^*_1, . . . , y^*_{t-1}, x)    (14)
However, minimizing L_ml does not always produce the best results on discrete evaluation metrics such as ROUGE (Lin, 2004). This phenomenon has been observed with similar sequence generation tasks like image captioning with CIDEr (Rennie et al., 2016) and machine translation with BLEU (Wu et al., 2016; Norouzi et al., 2016). There are two main reasons for this discrepancy. The first one, called exposure bias (Ranzato et al., 2015), comes from the fact that the network has knowledge of the ground truth sequence up to the next token during training but does not have such supervision when testing, hence accumulating errors as it predicts the sequence. The second reason is due to the large number of potentially valid summaries, since there are more ways to arrange tokens to produce paraphrases or different sentence orders. The ROUGE metrics take some of this flexibility into account, but the maximum-likelihood objective does not.
3.2 POLICY LEARNING
One way to remedy this is to learn a policy that maximizes a specific discrete metric instead of minimizing the maximum-likelihood loss, which is made possible with reinforcement learning. In our model, we use the self-critical policy gradient training algorithm (Rennie et al., 2016).
For this training algorithm, we produce two separate output sequences at each training iteration: y^s, which is obtained by sampling from the p(y^s_t | y^s_1, . . . , y^s_{t-1}, x) probability distribution at each decoding time step, and ŷ, the baseline output, obtained by maximizing the output probability distribution at each time step, essentially performing a greedy search. We define r(y) as the reward function for an output sequence y, comparing it with the ground truth sequence y^* with the evaluation metric of our choice.
L_rl = (r(ŷ) - r(y^s)) Σ_{t=1}^{n'} log p(y^s_t | y^s_1, . . . , y^s_{t-1}, x)    (15)
We can see that minimizing L_rl is equivalent to maximizing the conditional likelihood of the sampled sequence y^s if it obtains a higher reward than the baseline ŷ, thus increasing the reward expectation of our model.
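A hedged sketch of how this self-critical loss could be computed in PyTorch, assuming the sampled per-step log-probabilities and the two sequence-level rewards have already been obtained; tensor shapes and the function name are our assumptions.

```python
import torch

def self_critical_loss(sample_logprobs, sample_reward, greedy_reward):
    """Self-critical policy-gradient loss (Equation 15), sketch.

    sample_logprobs: (batch, n) log p(y^s_t | ...) of the sampled sequence
    sample_reward:   (batch,) r(y^s), e.g. ROUGE-L against the reference
    greedy_reward:   (batch,) r(y_hat) of the greedy baseline sequence
    """
    advantage = sample_reward - greedy_reward        # baseline-subtracted reward
    seq_logprob = sample_logprobs.sum(dim=1)         # sum over decoding steps
    # minimizing this increases the likelihood of samples that beat the baseline
    return (advantage.detach() * -seq_logprob).mean()
```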
# 3.3 MIXED TRAINING OBJECTIVE FUNCTION
One potential issue of this reinforcement training objective is that optimizing for a specific discrete metric like ROUGE does not guarantee an increase in quality and readability of the output. It is possible to game such discrete metrics and increase their score without an actual increase in readability or relevance (Liu et al., 2016). While ROUGE measures the n-gram overlap between our generated summary and a reference sequence, human-readability is better captured by a language model, which is usually measured by perplexity.
Since our maximum-likelihood training objective (Equation 14) is essentially a conditional language model, calculating the probability of a token y_t based on the previously predicted sequence {y_1, . . . , y_{t−1}} and the input sequence x, we hypothesize that it can assist our policy learning algorithm to generate more natural summaries. This motivates us to define a mixed learning objective function that combines equations 14 and 15:
L_{mixed} = \gamma L_{rl} + (1 - \gamma) L_{ml} \qquad (16)
where γ is a scaling factor accounting for the difference in magnitude between L_rl and L_ml. A similar mixed-objective learning function has been used by Wu et al. (2016) for machine translation on short sequences, but this is its first use in combination with self-critical policy learning for long summarization to explicitly improve readability in addition to evaluation metrics.
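A one-line sketch of the mixed objective; the default γ value below is the one reported in Appendix B, and the function name is ours.

```python
def mixed_loss(loss_rl, loss_ml, gamma=0.9984):
    """Mixed training objective (Equation 16); gamma is close to 1 because the
    magnitudes of L_rl and L_ml differ (value taken from Appendix B)."""
    return gamma * loss_rl + (1.0 - gamma) * loss_ml
```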
# 4 RELATED WORK
# 4.1 NEURAL ENCODER-DECODER SEQUENCE MODELS
Neural encoder-decoder models are widely used in NLP applications such as machine translation (Sutskever et al., 2014), summarization (Chopra et al., 2016; Nallapati et al., 2016), and question answering (Hermann et al., 2015). These models use recurrent neural networks (RNN), such as long-short term memory networks (LSTM) (Hochreiter & Schmidhuber, 1997) to encode an input sentence into a fixed vector, and create a new output sequence from that vector using another RNN. To apply this sequence-to-sequence approach to natural language, word embeddings (Mikolov et al., 2013; Pennington et al., 2014) are used to convert language tokens to vectors that can be used as inputs for these networks. Attention mechanisms (Bahdanau et al., 2014) make these models more performant and scalable, allowing them to look back at parts of the encoded input sequence while the output is generated. These models often use a fixed input and output vocabulary, which prevents them from learning representations for new words. One way to fix this is to allow the decoder network to point back to some specific words or sub-sequences of the input and copy them onto the output sequence (Vinyals et al., 2015). Gulcehre et al. (2016) and Merity et al. (2017) combine this pointer mechanism with the original word generation layer in the decoder to allow the model to use either method at each decoding step.
# 4.2 REINFORCEMENT LEARNING FOR SEQUENCE GENERATION
Reinforcement learning (RL) is a way of training an agent to interact with a given environment in order to maximize a reward. RL has been used to solve a wide variety of problems, usually when
an agent has to perform discrete actions before obtaining a reward, or when the metric to optimize is not differentiable and traditional supervised learning methods cannot be used. This is applicable to sequence generation tasks, because many of the metrics used to evaluate these tasks (like BLEU, ROUGE or METEOR) are not differentiable.
In order to optimize that metric directly, Ranzato et al. (2015) have applied the REINFORCE algorithm (Williams, 1992) to train various RNN-based models for sequence generation tasks, leading to significant improvements compared to previous supervised learning methods. While their method requires an additional neural network, called a critic model, to predict the expected reward and stabilize the objective function gradients, Rennie et al. (2016) designed a self-critical sequence training method that does not require this critic model and leads to further improvements on image captioning tasks.
4.3 TEXT SUMMARIZATION
Most summarization models studied in the past are extractive in nature (Dorr et al., 2003; Nallapati et al., 2017; Durrett et al., 2016), which usually work by identifying the most important phrases of an input document and re-arranging them into a new summary sequence. The more recent abstractive summarization models have more degrees of freedom and can create more novel sequences. Many abstractive models such as Rush et al. (2015), Chopra et al. (2016) and Nallapati et al. (2016) are all based on the neural encoder-decoder architecture (Section 4.1).
A well-studied set of summarization tasks is the Document Understanding Conference (DUC)¹. These summarization tasks are varied, including short summaries of a single document and long summaries of multiple documents categorized by subject. Most abstractive summarization models have been evaluated on the DUC-2004 dataset, and outperform extractive models on that task (Dorr et al., 2003). However, models trained on the DUC-2004 task can only generate very short summaries up to 75 characters, and are usually used with one or two input sentences. Chen et al. (2016) applied different kinds of attention mechanisms for summarization on the CNN dataset, and Nallapati et al. (2016) used different attention and pointer functions on the CNN and Daily Mail datasets combined. In parallel with our work, See et al. (2017) also developed an abstractive summarization model on this dataset with an extra loss term to increase temporal coverage of the encoder attention function.
# 5 DATASETS
# 5.1 CNN/DAILY MAIL
We evaluate our model on a modified version of the CNN/Daily Mail dataset (Hermann et al., 2015), following the same pre-processing steps described in Nallapati et al. (2016). We refer the reader to that paper for a detailed description. Our final dataset contains 287,113 training examples, 13,368 validation examples and 11,490 testing examples. After limiting the input length to 800 tokens and output length to 100 tokens, the average input and output lengths are respectively 632 and 53 tokens.
5.2 NEW YORK TIMES
The New York Times (NYT) dataset (Sandhaus, 2008) is a large collection of articles published between 1996 and 2007. Even though this dataset has been used to train extractive summarization systems (Durrett et al., 2016; Hong & Nenkova, 2014; Li et al., 2016) or closely-related models for predicting the importance of a phrase in an article (Yang & Nenkova, 2014; Nye & Nenkova, 2015; Hong et al., 2015), we are the first group to run an end-to-end abstractive summarization model on the article-abstract pairs of this dataset. While CNN/Daily Mail summaries have a similar wording to their corresponding articles, NYT abstracts are more varied, are shorter and can use a higher level of abstraction and paraphrase. Because of these differences, these two formats are a good complement to each other for abstractive summarization models. We describe the dataset preprocessing and pointer supervision in Section A of the Appendix.
1http://duc.nist.gov/
| Model | ROUGE-1 | ROUGE-2 | ROUGE-L |
|---|---|---|---|
| Lead-3 (Nallapati et al., 2017) | 39.2 | 15.7 | 35.5 |
| SummaRuNNer (Nallapati et al., 2017) | 39.6 | 16.2 | 35.3 |
| words-lvt2k-temp-att (Nallapati et al., 2016) | 35.46 | 13.30 | 32.65 |
| ML, no intra-attention | 37.86 | 14.69 | 34.99 |
| ML, with intra-attention | 38.30 | 14.81 | 35.49 |
| RL, with intra-attention | 41.16 | 15.75 | 39.08 |
| ML+RL, with intra-attention | 39.87 | 15.82 | 36.90 |
Table 1: Quantitative results for various models on the CNN/Daily Mail test dataset
| Model | ROUGE-1 | ROUGE-2 | ROUGE-L |
|---|---|---|---|
| ML, no intra-attention | 44.26 | 27.43 | 40.41 |
| ML, with intra-attention | 43.86 | 27.10 | 40.11 |
| RL, no intra-attention | 47.22 | 30.51 | 43.27 |
| ML+RL, no intra-attention | 47.03 | 30.72 | 43.10 |
Table 2: Quantitative results for various models on the New York Times test dataset
Source document: Jenson Button was denied his 100th race for McLaren after an ERS prevented him from making it to the start-line. It capped a miserable weekend for the Briton; his time in Bahrain plagued by reliability issues. Button spent much of the race on Twitter delivering his verdict as the action unfolded. "Kimi is the man to watch," and "loving the sparks", were among his pearls of wisdom, but the tweet which courted the most attention was a rather mischievous one: "Ooh is Lewis backing his team mate into Vettel?" he quizzed after Rosberg accused Hamilton of pulling off such a manoeuvre in China. Jenson Button waves to the crowd ahead of the Bahrain Grand Prix which he failed to start. Perhaps a career in the media beckons... Lewis Hamilton has out-qualified and finished ahead of Nico Rosberg at every race this season. Indeed Rosberg has now beaten his Mercedes team-mate only once in the 11 races since the pair infamously collided in Belgium last year. Hamilton secured the 36th win of his career in Bahrain and his 21st from pole position. Only Michael Schumacher (40), Ayrton Senna (29) and Sebastian Vettel (27) have more. (...)

Ground truth summary: Button denied 100th race start for McLaren after ERS failure. Button then spent much of the Bahrain Grand Prix on Twitter delivering his verdict on the action as it unfolded. Lewis Hamilton has out-qualified and finished ahead of Mercedes team-mate Nico Rosberg at every race this season. Bernie Ecclestone confirms F1 will make its bow in Azerbaijan next season.

ML, with intra-attention (ROUGE-1 41.58): Button was denied his 100th race for McLaren. ERS prevented him from making it to the start-line. The Briton. He quizzed after Nico Rosberg accused Lewis Hamilton of pulling off such a manoeuvre in China. Button has been in Azerbaijan for the first time since 2013.

RL, with intra-attention (ROUGE-1 50.00): Button was denied his 100th race for McLaren after an ERS prevented him from making it to the start-line. It capped a miserable weekend for the Briton. Button has out-qualified. Finished ahead of Nico Rosberg at Bahrain. Lewis Hamilton has. In 11 races. . The race. To lead 2,000 laps. . In. . . And. .

ML+RL, with intra-attention (ROUGE-1 44.00): Button was denied his 100th race for McLaren. The ERS prevented him from making it to the start-line. Button was his team mate in the 11 races in Bahrain. He quizzed after Nico Rosberg accused Lewis Hamilton of pulling off such a manoeuvre in China.
Table 3: Example from the CNN/Daily Mail test dataset showing the outputs of our three best models after de-tokenization, re-capitalization, replacing anonymized entities, and replacing numbers. The ROUGE score corresponds to the specific example.
# 6 RESULTS
6.1 EXPERIMENTS
Setup: We evaluate the intra-decoder attention mechanism and the mixed-objective learning by running the following experiments on both datasets. We first run maximum-likelihood (ML) training with and without intra-decoder attention (removing c^d_t from Equations 9 and 11 to disable intra-
| Model | R-1 | R-2 |
|---|---|---|
| First sentences | 28.6 | 17.3 |
| First k words | 35.7 | 21.6 |
| Full (Durrett et al., 2016) | 42.2 | 24.9 |
| ML+RL, with intra-attn | 42.94 | 26.02 |
Table 4: Comparison of ROUGE recall scores for lead baselines, the extractive model of Durrett et al. (2016) and our model on their NYT dataset splits.
attention) and select the best performing architecture. Next, we initialize our model with the best ML parameters and we compare reinforcement learning (RL) with our mixed-objective learning (ML+RL), following our objective functions in Equation 15 and 16. The hyperparameters and other implementation details are described in the Appendix.
ROUGE metrics and options: We report the full-length F-1 score of the ROUGE-1, ROUGE-2 and ROUGE-L metrics with the Porter stemmer option. For RL and ML+RL training, we use the ROUGE-L score as a reinforcement reward. We also tried ROUGE-2 but we found that it created summaries that almost always reached the maximum length, often ending sentences abruptly.
# 6.2 QUANTITATIVE ANALYSIS
Our results for the CNN/Daily Mail dataset are shown in Table 1, and for the NYT dataset in Table 2. We observe that the intra-decoder attention function helps our model achieve better ROUGE scores on the CNN/Daily Mail but not on the NYT dataset.
Further analysis on the CNN/Daily Mail test set shows that intra-attention increases the ROUGE-1 score of examples with a long ground truth summary, while decreasing the score of shorter summaries, as illustrated in Figure 2. This confirms our assumption that intra-attention improves performance on longer output sequences, and explains why intra-attention doesn't improve performance on the NYT dataset, which has shorter summaries on average.
Figure 2: Cumulated ROUGE-1 relative improvement obtained by adding intra-attention to the ML model on the CNN/Daily Mail dataset.
In addition, we can see that on all datasets, both the RL and ML+RL models obtain much higher scores than the ML model. In particular, these methods clearly surpass the state-of-the-art model from Nallapati et al. (2016) on the CNN/Daily Mail dataset, as well as the lead-3 extractive baseline (taking the ï¬rst 3 sentences of the article as the summary) and the SummaRuNNer extractive model (Nallapati et al., 2017).
See et al. (2017) also reported results for a closely-related abstractive model on the CNN/Daily Mail dataset, but used a different dataset preprocessing pipeline, which makes direct comparison with our numbers difficult. However, their best model has lower ROUGE scores than their lead-3 baseline, while our ML+RL model beats the lead-3 baseline as shown in Table 1. Thus, we conclude that our mixed-objective model obtains a higher ROUGE performance than theirs.
We also compare our model against extractive baselines (either lead sentences or lead words) and the extractive summarization model built by Durrett et al. (2016), which was trained using a smaller version of the NYT dataset that is 6 times smaller than ours but contains longer summaries. We trained our ML+RL model on their dataset and show the results on Table 4. Similarly to Durrett et al. (2016), we report the limited-length ROUGE recall scores instead of full-length F-scores. For each example, we limit the generated summary length or the baseline length to the ground truth summary length. Our results show that our mixed-objective model has higher ROUGE scores than their extractive model and the extractive baselines.
| Model | Readability | Relevance |
|---|---|---|
| ML | 6.76 | |
| RL | 4.18 | |
| ML+RL | 7.04 | |
Table 5: Comparison of human readability scores on a random subset of the CNN/Daily Mail test dataset. All models are with intra-decoder attention.
6.3 QUALITATIVE ANALYSIS
We perform human evaluation to ensure that our increase in ROUGE scores is also followed by an increase in human readability and quality. In particular, we want to know whether the ML+RL training objective did improve readability compared to RL.
Evaluation setup: To perform this evaluation, we randomly select 100 test examples from the CNN/Daily Mail dataset. For each example, we show the original article, the ground truth summary as well as summaries generated by different models side by side to a human evaluator. The human evaluator does not know which summaries come from which model or which one is the ground truth. Two scores from 1 to 10 are then assigned to each summary, one for relevance (how well does the summary capture the important parts of the article) and one for readability (how well-written the summary is). Each summary is rated by 5 different human evaluators on Amazon Mechanical Turk and the results are averaged across all examples and evaluators.
Results: Our human evaluation results are shown in Table 5. We can see that even though RL has the highest ROUGE-1 and ROUGE-L scores, it produces the least readable summaries among our experiments. The most common readability issue observed in our RL results, as shown in the example of Table 3, is the presence of short and truncated sentences towards the end of sequences. This confirms that optimizing for a single discrete evaluation metric such as ROUGE with RL can be detrimental to the model quality.
On the other hand, our RL+ML summaries obtain the highest readability and relevance scores among our models, hence solving the readability issues of the RL model while also having a higher ROUGE score than ML. This demonstrates the usefulness and value of our RL+ML training method for abstractive summarization.
# 7 CONCLUSION
We presented a new model and training procedure that obtains state-of-the-art results in text summarization for the CNN/Daily Mail, improves the readability of the generated summaries and is better suited to long output sequences. We also run our abstractive model on the NYT dataset for the first time. We saw that despite their common use for evaluation, ROUGE scores have their shortcomings and should not be the only metric used to optimize summarization models for long sequences. Our intra-attention decoder and combined training objective could be applied to other sequence-to-sequence tasks with long inputs and outputs, which is an interesting direction for further research.
# REFERENCES
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.
Qian Chen, Xiaodan Zhu, Zhenhua Ling, Si Wei, and Hui Jiang. Distraction-based neural networks for modeling documents. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence (IJCAI-16), pp. 2754-2760, 2016.
Jianpeng Cheng, Li Dong, and Mirella Lapata. Long short-term memory-networks for machine reading. arXiv preprint arXiv:1601.06733, 2016.
Sumit Chopra, Michael Auli, Alexander M Rush, and SEAS Harvard. Abstractive sentence sum- marization with attentive recurrent neural networks. Proceedings of NAACL-HLT16, pp. 93â98, 2016.
9
Bonnie Dorr, David Zajic, and Richard Schwartz. Hedge trimmer: A parse-and-trim approach to headline generation. In Proceedings of the HLT-NAACL 03 Text Summarization Workshop - Volume 5, pp. 1-8. Association for Computational Linguistics, 2003.
Greg Durrett, Taylor Berg-Kirkpatrick, and Dan Klein. Learning-based single-document summa- rization with compression and anaphoricity constraints. arXiv preprint arXiv:1603.08887, 2016.
Caglar Gulcehre, Sungjin Ahn, Ramesh Nallapati, Bowen Zhou, and Yoshua Bengio. Pointing the unknown words. arXiv preprint arXiv:1603.08148, 2016.
Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems, pp. 1693-1701, 2015.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8): 1735-1780, 1997.
Kai Hong and Ani Nenkova. Improving the estimation of word importance for news multi-document summarization-extended technical report. 2014.
Kai Hong, Mitchell Marcus, and Ani Nenkova. System combination for multi-document summa- rization. In EMNLP, pp. 107â117, 2015.
Hakan Inan, Khashayar Khosravi, and Richard Socher. Tying word vectors and word classifiers: A loss framework for language modeling. Proceedings of the International Conference on Learning Representations, 2017.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Junyi Jessy Li, Kapil Thadani, and Amanda Stent. The role of discourse units in near-extractive summarization. In 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pp. 137, 2016.
Chin-Yew Lin. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out: Proceedings of the ACL-04 workshop, volume 8. Barcelona, Spain, 2004.
Chia-Wei Liu, Ryan Lowe, Iulian V Serban, Michael Noseworthy, Laurent Charlin, and Joelle Pineau. How not to evaluate your dialogue system: An empirical study of unsupervised eval- uation metrics for dialogue response generation. arXiv preprint arXiv:1603.08023, 2016.
Christopher D Manning, Mihai Surdeanu, John Bauer, Jenny Rose Finkel, Steven Bethard, and David McClosky. The Stanford CoreNLP natural language processing toolkit. In ACL (System Demonstrations), pp. 55-60, 2014.
Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. Pointer sentinel mixture models. Proceedings of the International Conference on Learning Representations, 2017.
Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed represen- tations of words and phrases and their compositionality. In Advances in neural information pro- cessing systems, pp. 3111â3119, 2013.
Ramesh Nallapati, Bowen Zhou, Çağlar Gülçehre, Bing Xiang, et al. Abstractive text summarization using sequence-to-sequence rnns and beyond. arXiv preprint arXiv:1602.06023, 2016.
Ramesh Nallapati, Feifei Zhai, and Bowen Zhou. Summarunner: A recurrent neural network based sequence model for extractive summarization of documents. Proceedings of the 31st AAAI con- ference, 2017.
Mohammad Norouzi, Samy Bengio, Navdeep Jaitly, Mike Schuster, Yonghui Wu, Dale Schuurmans, et al. Reward augmented maximum likelihood for neural structured prediction. In Advances In Neural Information Processing Systems, pp. 1723â1731, 2016.
Benjamin Nye and Ani Nenkova. Identification and characterization of newsworthy verbs in world news. In HLT-NAACL, pp. 1440-1445, 2015.
Jeffrey Pennington, Richard Socher, and Christopher D Manning. Glove: Global vectors for word representation. In EMNLP, volume 14, pp. 1532â1543, 2014.
Ofir Press and Lior Wolf. Using the output embedding to improve language models. arXiv preprint arXiv:1608.05859, 2016.

Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. Sequence level training with recurrent neural networks. arXiv preprint arXiv:1511.06732, 2015.
Steven J Rennie, Etienne Marcheret, Youssef Mroueh, Jarret Ross, and Vaibhava Goel. Self-critical sequence training for image captioning. arXiv preprint arXiv:1612.00563, 2016.
Alexander M Rush, Sumit Chopra, and Jason Weston. A neural attention model for abstractive sentence summarization. arXiv preprint arXiv:1509.00685, 2015.
Evan Sandhaus. The new york times annotated corpus. Linguistic Data Consortium, Philadelphia, 6(12):e26752, 2008.
Baskaran Sankaran, Haitao Mi, Yaser Al-Onaizan, and Abe Ittycheriah. Temporal attention model for neural machine translation. arXiv preprint arXiv:1608.02927, 2016.
Abigail See, Peter J. Liu, and Christopher D. Manning. Get to the point: Summarization with pointer-generator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1073â1083, July 2017.
Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pp. 3104â3112, 2014.
Arun Venkatraman, Martial Hebert, and J Andrew Bagnell. Improving multi-step prediction of learned time series models. In AAAI, pp. 3024-3030, 2015.
Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. Pointer networks. In Advances in Neural Information Processing Systems, pp. 2692â2700, 2015.
Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229â256, 1992.
Ronald J Williams and David Zipser. A learning algorithm for continually running fully recurrent neural networks. Neural computation, 1(2):270â280, 1989.
Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144, 2016.
Yinfei Yang and Ani Nenkova. Detecting information-dense texts in multiple news domains. In AAAI, pp. 1650â1656, 2014.
Wenyuan Zeng, Wenjie Luo, Sanja Fidler, and Raquel Urtasun. Efficient summarization with read-again and copy mechanism. arXiv preprint arXiv:1611.03382, 2016.
# A NYT DATASET
A.1 PREPROCESSING
We remove all documents that do not have a full article text, abstract or headline. We concatenate the headline, byline and full article text, separated by special tokens, to produce a single input sequence for each example. We tokenize the input and abstract pairs with the Stanford tokenizer (Manning et al., 2014). We convert all tokens to lower-case and replace all numbers with "0", remove "(s)" and "(m)" marks in the abstracts and all occurrences of the following words, singular or plural, if they are surrounded by semicolons or at the end of the abstract: "photo", "graph", "chart", "map", "table"
and "drawing". Since the NYT abstracts almost never contain periods, we consider them multi-sentence summaries if we split sentences based on semicolons. This allows us to make the summary format and evaluation procedure similar to the CNN/Daily Mail dataset. These pre-processing steps give us an average of 549 input tokens and 40 output tokens per example, after limiting the input and output lengths to 800 and 100 tokens.
A.2 DATASET SPLITS
We created our own training, validation, and testing splits for this dataset. Instead of producing random splits, we sorted the documents by their publication date in chronological order and used the ï¬rst 90% (589,284 examples) for training, the next 5% (32,736) for validation, and the remaining 5% (32,739) for testing. This makes our dataset splits easily reproducible and follows the intuition that if used in a production environment, such a summarization model would be used on recent articles rather than random ones.
A.3 POINTER SUPERVISION
We run each input and abstract sequence through the Stanford named entity recognizer (NER) (Manning et al., 2014). For all named entity tokens in the abstract of type "PERSON", "LOCATION", "ORGANIZATION" or "MISC", we find their first occurrence in the input sequence. We use this information to supervise p(u_t) (Equation 11) and α^e_{ti} (Equation 4) during training. Note that the NER tagger is only used to create the dataset and is no longer needed during testing, thus we're not adding any dependencies to our model. We also add pointer supervision for out-of-vocabulary output tokens if they are present in the input.
# B HYPERPARAMETERS AND IMPLEMENTATION DETAILS
For ML training, we use the teacher forcing algorithm with the only difference that at each decoding step, we choose with a 25% probability the previously generated token instead of the ground-truth token as the decoder input token ytâ1, which reduces exposure bias (Venkatraman et al., 2015). We use a γ = 0.9984 for the ML+RL loss function.
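A small sketch of this 25% sampling of the model's own previous prediction (a form of scheduled sampling to reduce exposure bias); the function name and tensor layout are assumptions.

```python
import torch

def next_decoder_input(ground_truth_prev, generated_prev, p_sample=0.25):
    """With probability p_sample (25%), feed the model's own previously
    generated token instead of the ground-truth token as y_{t-1} during
    ML training. Both inputs are (batch,) tensors of token ids."""
    use_generated = torch.rand(ground_truth_prev.shape[0],
                               device=ground_truth_prev.device) < p_sample
    return torch.where(use_generated, generated_prev, ground_truth_prev)
```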
We use two 200-dimensional LSTMs for the bidirectional encoder and one 400-dimensional LSTM for the decoder. We limit the input vocabulary size to 150,000 tokens, and the output vocabulary to 50,000 tokens by selecting the most frequent tokens in the training set. Input word embeddings are 100-dimensional and are initialized with GloVe (Pennington et al., 2014). We train all our models with Adam (Kingma & Ba, 2014) with a batch size of 50 and a learning rate α of 0.001 for ML training and 0.0001 for RL and ML+RL training. At test time, we use beam search of width 5 on all our models to generate our final predictions.
# Convolutional Sequence to Sequence Learning
Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, Yann N. Dauphin

Facebook AI Research
# Abstract
The prevalent approach to sequence to sequence learning maps an input sequence to a variable length output sequence via recurrent neural networks. We introduce an architecture based entirely on convolutional neural networks.¹ Compared to recurrent models, computations over all elements can be fully parallelized during training to better exploit the GPU hardware and optimization is easier since the number of non-linearities is fixed and independent of the input length. Our use of gated linear units eases gradient propagation and we equip each decoder layer with a separate attention module. We outperform the accuracy of the deep LSTM setup of Wu et al. (2016) on both WMT'14 English-German and WMT'14 English-French translation at an order of magnitude faster speed, both on GPU and CPU.
# 1. Introduction
Sequence to sequence learning has been successful in many tasks such as machine translation, speech recognition (Sutskever et al., 2014; Chorowski et al., 2015) and text summarization (Rush et al., 2015; Nallapati et al., 2016; Shen et al., 2016) amongst others. The dominant approach to date encodes the input sequence with a series of bi-directional recurrent neural networks (RNN) and generates a variable length output with another set of decoder RNNs, both of which interface via a soft-attention mechanism (Bahdanau et al., 2014; Luong et al., 2015). In machine translation, this architecture has been demonstrated to outperform traditional phrase-based models by large margins (Sennrich et al., 2016b; Zhou et al., 2016; Wu et al., 2016; §2).
¹The source code and models are available at https://github.com/facebookresearch/fairseq.
Convolutional neural networks are less common for sequence modeling, despite several advantages (Waibel et al., 1989; LeCun & Bengio, 1995). Compared to recurrent layers, convolutions create representations for fixed size contexts, however, the effective context size of the network can easily be made larger by stacking several layers on top of each other. This allows us to precisely control the maximum length of dependencies to be modeled. Convolutional networks do not depend on the computations of the previous time step and therefore allow parallelization over every element in a sequence. This contrasts with RNNs which maintain a hidden state of the entire past that prevents parallel computation within a sequence.
Multi-layer convolutional neural networks create hierarchical representations over the input sequence in which nearby input elements interact at lower layers while distant elements interact at higher layers. Hierarchical structure provides a shorter path to capture long-range dependencies compared to the chain structure modeled by recurrent networks, e.g. we can obtain a feature representation capturing relationships within a window of n words by applying only O(n/k) convolutional operations for kernels of width k, compared to a linear number O(n) for recurrent neural networks. Inputs to a convolutional network are fed through a constant number of kernels and non-linearities, whereas recurrent networks apply up to n operations and non-linearities to the first word and only a single set of operations to the last word. Fixing the number of non-linearities applied to the inputs also eases learning.
Recent work has applied convolutional neural networks to sequence modeling such as Bradbury et al. (2016) who introduce recurrent pooling between a succession of convolutional layers or Kalchbrenner et al. (2016) who tackle neural translation without attention. However, none of these approaches has demonstrated improvements over state-of-the-art results on large benchmark datasets. Gated convolutions have been previously explored for machine translation by Meng et al. (2015) but their evaluation was restricted to a small dataset and the model was used in tandem with a traditional count-based model.
Architectures which are partially convolutional have shown strong performance on larger tasks but their decoder is still recurrent (Gehring et al., 2016).
In this paper we propose an architecture for sequence to se- quence modeling that is entirely convolutional. Our model is equipped with gated linear units (Dauphin et al., 2016) and residual connections (He et al., 2015a). We also use attention in every decoder layer and demonstrate that each attention layer only adds a negligible amount of overhead. The combination of these choices enables us to tackle large scale problems (§3).
We evaluate our approach on several large datasets for machine translation as well as summarization and compare to the current best architectures reported in the literature. On WMT'16 English-Romanian translation we achieve a new state of the art, outperforming the previous best result by 1.9 BLEU. On WMT'14 English-German we outperform the strong LSTM setup of Wu et al. (2016) by 0.5 BLEU and on WMT'14 English-French we outperform the likelihood trained system of Wu et al. (2016) by 1.6 BLEU. Furthermore, our model can translate unseen sentences at an order of magnitude faster speed than Wu et al. (2016) on GPU and CPU hardware (§4, §5).
Popular choices for recurrent networks in encoder-decoder models are long short term memory networks (LSTM; Hochreiter & Schmidhuber, 1997) and gated recurrent units (GRU; Cho et al., 2014). Both extend Elman RNNs (Elman, 1990) with a gating mechanism that allows the memorization of information from previous time steps in order to model long-term dependencies. Most recent approaches also rely on bi-directional encoders to build representations of both past and future contexts (Bahdanau et al., 2014; Zhou et al., 2016; Wu et al., 2016). Models with many layers often rely on shortcut or residual connections (He et al., 2015a; Zhou et al., 2016; Wu et al., 2016).
# 3. A Convolutional Architecture
Next we introduce a fully convolutional architecture for se- quence to sequence modeling. Instead of relying on RNNs to compute intermediate encoder states z and decoder states h we use convolutional neural networks (CNN).
# 3.1. Position Embeddings
# 2. Recurrent Sequence to Sequence Learning
Sequence to sequence modeling has been synonymous with recurrent neural network based encoder-decoder architectures (Sutskever et al., 2014; Bahdanau et al., 2014). The encoder RNN processes an input sequence x = (x_1, ..., x_m) of m elements and returns state representations z = (z_1, ..., z_m). The decoder RNN takes z and generates the output sequence y = (y_1, ..., y_n) left to right, one element at a time. To generate output y_{i+1}, the decoder computes a new hidden state h_{i+1} based on the previous state h_i, an embedding g_i of the previous target language word y_i, as well as a conditional input c_i derived from the encoder output z. Based on this generic formulation, various encoder-decoder architectures have been proposed, which differ mainly in the conditional input and the type of RNN.
First, we embed input elements x = (x_1, ..., x_m) in distributional space as w = (w_1, ..., w_m), where w_j ∈ R^f is a column in an embedding matrix D ∈ R^{V×f}. We also equip our model with a sense of order by embedding the absolute position of input elements p = (p_1, ..., p_m) where p_j ∈ R^f. Both are combined to obtain input element representations e = (w_1 + p_1, ..., w_m + p_m). We proceed similarly for output elements that were already generated by the decoder network to yield output element representations that are being fed back into the decoder network g = (g_1, ..., g_n). Position embeddings are useful in our architecture since they give our model a sense of which portion of the sequence in the input or output it is currently dealing with (§5.4).
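A possible PyTorch sketch of these combined word and position embeddings; the class and argument names are ours, and details such as padding handling are omitted.

```python
import torch
import torch.nn as nn

class InputRepresentation(nn.Module):
    """Word embedding plus absolute position embedding, e_j = w_j + p_j."""
    def __init__(self, vocab_size: int, max_positions: int, dim: int):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, dim)
        self.pos_emb = nn.Embedding(max_positions, dim)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, seq_len) of token ids
        positions = torch.arange(tokens.size(1), device=tokens.device)
        return self.word_emb(tokens) + self.pos_emb(positions)  # (batch, seq_len, dim)
```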
# 3.2. Convolutional Block Structure
Models without attention consider only the final encoder state z_m by setting c_i = z_m for all i (Cho et al., 2014), or simply initialize the first decoder state with z_m (Sutskever et al., 2014), in which case c_i is not used. Architectures with attention (Bahdanau et al., 2014; Luong et al., 2015) compute c_i as a weighted sum of (z_1, ..., z_m) at each time step. The weights of the sum are referred to as attention scores and allow the network to focus on different parts of the input sequence as it generates the output sequences. Attention scores are computed by essentially comparing each encoder state z_j to a combination of the previous decoder state h_i and the last prediction y_i; the result is normalized to be a distribution over input elements.
Both encoder and decoder networks share a simple block structure that computes intermediate states based on a fixed number of input elements. We denote the output of the l-th block as h^l = (h^l_1, ..., h^l_n) for the decoder network, and z^l = (z^l_1, ..., z^l_m) for the encoder network; we refer to blocks and layers interchangeably. Each block contains a one dimensional convolution followed by a non-linearity. For a decoder network with a single block and kernel width k, each resulting state h^1_i contains information over k input elements. Stacking several blocks on top of each other increases the number of input elements represented in a state. For instance, stacking 6 blocks with k = 5 results in an input field of 25 elements, i.e. each output depends on 25
inputs. Non-linearities allow the networks to exploit the full input field, or to focus on fewer elements if needed.
Each convolution kernel is parameterized as W ∈ R^{2d×kd}, b_w ∈ R^{2d} and takes as input X ∈ R^{k×d} which is a concatenation of k input elements embedded in d dimensions and maps them to a single output element Y ∈ R^{2d} that has twice the dimensionality of the input elements; subsequent layers operate over the k output elements of the previous layer. We choose gated linear units (GLU; Dauphin et al., 2016) as non-linearity which implement a simple gating mechanism over the output of the convolution Y = [A B] ∈ R^{2d}:
v([A\ B]) = A \otimes \sigma(B)
where A, B ∈ R^d are the inputs to the non-linearity, ⊗ is the point-wise multiplication and the output v([A B]) ∈ R^d is half the size of Y. The gates σ(B) control which inputs A of the current context are relevant. A similar non-linearity has been introduced in Oord et al. (2016b) who apply tanh to A but Dauphin et al. (2016) shows that GLUs perform better in the context of language modelling.
To enable deep convolutional networks, we add residual connections from the input of each convolution to the out- put of the block (He et al., 2015a).
h_i^l = v\left(W^l \left[h_{i-k/2}^{l-1}, \ldots, h_{i+k/2}^{l-1}\right] + b_w^l\right) + h_i^{l-1}
For encoder networks we ensure that the output of the convolutional layers matches the input length by padding the input at each layer. However, for decoder networks we have to take care that no future information is available to the decoder (Oord et al., 2016a). Specifically, we pad the input by k − 1 elements on both the left and right side by zero vectors, and then remove k elements from the end of the convolution output.
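The following sketch illustrates one decoder block (convolution, GLU, residual connection) in PyTorch; it uses left-only causal padding, which is equivalent to the pad-both-sides-then-trim scheme described above, and the √0.5 residual scaling anticipates the normalization strategy of Section 3.4. Names and the exact module layout are our assumptions, not the reference implementation.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecoderConvBlock(nn.Module):
    """One decoder block: causal 1-d convolution followed by a GLU and a
    residual connection scaled by sqrt(0.5) (see Section 3.4)."""
    def __init__(self, dim: int, kernel_size: int):
        super().__init__()
        self.kernel_size = kernel_size
        # output has 2*dim channels: half for the values A, half for the gates B
        self.conv = nn.Conv1d(dim, 2 * dim, kernel_size)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, dim)
        residual = x
        x = x.transpose(1, 2)                       # (batch, dim, seq_len)
        x = F.pad(x, (self.kernel_size - 1, 0))     # causal left padding
        x = self.conv(x)                            # (batch, 2*dim, seq_len)
        x = F.glu(x, dim=1)                         # A * sigmoid(B)
        x = x.transpose(1, 2)                       # back to (batch, seq_len, dim)
        return (x + residual) * math.sqrt(0.5)      # scaled residual sum
```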
Figure 1. Illustration of batching during training. The English source sentence is encoded (top) and we compute all attention values for the four German target words (center) simultaneously. Our attentions are just dot products between decoder context representations (bottom left) and encoder representations. We add the conditional inputs computed by the attention (center right) to the decoder states which then predict the target words (bottom right). The sigmoid and multiplicative boxes illustrate Gated Linear Units.
We also add linear mappings to project between the embedding size f and the convolution outputs that are of size 2d. We apply such a transform to w when feeding embeddings to the encoder network, to the encoder output z_j^u, to the final layer of the decoder just before the softmax h^L, and to all decoder layers h^l before computing attention scores (1).

Finally, we compute a distribution over the T possible next target elements y_{i+1} by transforming the top decoder output h_i^L via a linear layer with weights W_o and bias b_o:

p(y_{i+1} \mid y_1, \ldots, y_i, x) = \mathrm{softmax}(W_o h_i^L + b_o) \in \mathbb{R}^T

# 3.3. Multi-step Attention

We introduce a separate attention mechanism for each decoder layer. To compute the attention, we combine the current decoder state h_i^l with an embedding of the previous target element g_i:

d_i^l = W_d^l h_i^l + b_d^l + g_i \qquad (1)

For decoder layer l the attention a_{ij}^l of state i and source element j is computed as a dot-product between the decoder state summary d_i^l and each output z_j^u of the last encoder block u:

a_{ij}^l = \frac{\exp\left(d_i^l \cdot z_j^u\right)}{\sum_{t=1}^{m} \exp\left(d_i^l \cdot z_t^u\right)}

The conditional input c_i^l to the current decoder layer is a weighted sum of the encoder outputs as well as the input element embeddings e_j (Figure 1, center right):

c_i^l = \sum_{j=1}^{m} a_{ij}^l \left(z_j^u + e_j\right) \qquad (2)

This is slightly different to recurrent approaches which compute both the attention and the weighted sum over z_j^u only.
We found adding e_j to be beneficial and it resembles key-value memory networks where the keys are the z_j^u and the values are the z_j^u + e_j (Miller et al., 2016). Encoder outputs z_j^u represent potentially large input contexts and e_j provides point information about a specific input element that is useful when making a prediction. Once c_i^l has been computed, it is simply added to the output of the corresponding decoder layer h_i^l.
This can be seen as attention with multiple "hops" (Sukhbaatar et al., 2015) compared to single step attention (Bahdanau et al., 2014; Luong et al., 2015; Zhou et al., 2016; Wu et al., 2016). In particular, the attention of the first layer determines a useful source context which is then fed to the second layer that takes this information into account when computing attention etc. The decoder also has immediate access to the attention history of the k − 1 previous time steps because the conditional inputs c_{i−k}^{l−1}, ..., c_i^{l−1} are part of h_{i−k}^{l−1}, ..., h_i^{l−1} which are input to h_i^l. This makes it easier for the model to take into account which previous inputs have been attended to already compared to recurrent nets where this information is in the recurrent state and needs to survive several non-linearities. Overall, our attention mechanism considers which words we previously attended to (Yang et al., 2016) and performs multiple attention "hops" per time step. In Appendix §C, we plot attention scores for a deep decoder and show that at different layers, different portions of the source are attended to.
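A sketch of one attention hop (Equations 1 and 2) for a single decoder layer, written with batched matrix products in PyTorch; argument names and shapes are assumptions, and the surrounding projections between embedding and hidden sizes are omitted.

```python
import torch
import torch.nn.functional as F

def decoder_attention(h, g, z_u, e, w_d, b_d):
    """One attention 'hop' for a single decoder layer, sketch.

    h:   (batch, tgt_len, dim)  decoder states h^l
    g:   (batch, tgt_len, dim)  embeddings of the previous target elements
    z_u: (batch, src_len, dim)  outputs of the last encoder block
    e:   (batch, src_len, dim)  input element embeddings (w + p)
    w_d, b_d: parameters of the linear map producing d^l
    """
    d = h @ w_d + b_d + g                          # Eq. 1: decoder state summary
    scores = torch.bmm(d, z_u.transpose(1, 2))     # dot products d_i . z_j
    a = F.softmax(scores, dim=-1)                  # attention over source positions
    c = torch.bmm(a, z_u + e)                      # Eq. 2: conditional input c^l
    return c                                       # added to the decoder layer output
```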
# 3.5. Initialization
Normalizing activations when adding the output of different layers, e.g. residual connections, requires careful weight initialization. The motivation for our initialization is the same as for the normalization: maintain the variance of activations throughout the forward and backward passes. All embeddings are initialized from a normal distribution with mean 0 and standard deviation 0.1. For layers whose output is not directly fed to a gated linear unit, we initialize weights from N(0, √(1/n_l)) where n_l is the number of input connections to each neuron. This ensures that the variance of a normally distributed input is retained.
For layers which are followed by a GLU activation, we propose a weight initialization scheme by adapting the derivations in (He et al., 2015b; Glorot & Bengio, 2010; Appendix A). If the GLU inputs are distributed with mean 0 and have sufficiently small variance, then we can approximate the output variance with 1/4 of the input variance (Appendix A.1). Hence, we initialize the weights so that the input to the GLU activations have 4 times the variance of the layer input. This is achieved by drawing their initial values from N(0, √(4/n_l)). Biases are uniformly set to zero when the network is constructed.
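A sketch of this initialization rule for a convolutional layer, folding in the dropout-dependent variant described later in this section; the helper name is ours, and computing n_l as in_channels times kernel width is our reading of "number of input connections".

```python
import math
import torch.nn as nn

def init_conv_weights(conv: nn.Conv1d, followed_by_glu: bool, keep_prob: float = 1.0):
    """Weights ~ N(0, sqrt(c * keep_prob / n_l)), where n_l is the number of
    input connections per neuron, c = 4 for layers feeding a GLU (to compensate
    for the ~1/4 variance reduction of the GLU) and c = 1 otherwise; keep_prob
    is the dropout retain probability applied to the layer's input."""
    n_l = conv.in_channels * conv.kernel_size[0]       # input connections per output unit
    c = 4.0 if followed_by_glu else 1.0
    std = math.sqrt(c * keep_prob / n_l)
    nn.init.normal_(conv.weight, mean=0.0, std=std)
    if conv.bias is not None:
        nn.init.zeros_(conv.bias)
```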
Our convolutional architecture also allows us to batch the attention computation across all elements of a sequence compared to RNNs (Figure 1, middle). We batch the computations of each decoder layer individually.
# 3.4. Normalization Strategy
We stabilize learning through careful weight initialization (§3.5) and by scaling parts of the network to ensure that the variance throughout the network does not change dramatically. In particular, we scale the output of residual blocks as well as the attention to preserve the variance of activations. We multiply the sum of the input and output of a residual block by √0.5 to halve the variance of the sum. This assumes that both summands have the same variance which is not always true but effective in practice.
We apply dropout to the input of some layers so that inputs are retained with a probability of p. This can be seen as multiplication with a Bernoulli random variable taking value 1/p with probability p and 0 otherwise (Srivastava et al., 2014). The application of dropout will then cause the variance to be scaled by 1/p. We aim to restore the incoming variance by initializing the respective layers with larger weights. Specifically, we use N(0, √(4p/n_l)) for layers whose output is subject to a GLU and N(0, √(p/n_l)) otherwise (Appendix A.3).
# 4. Experimental Setup
# 4.1. Datasets
We consider three major WMT translation tasks as well as a text summarization task.
The conditional input c_i^l generated by the attention is a weighted sum of m vectors (2) and we counteract a change in variance through scaling by m√(1/m); we multiply by m to scale up the inputs to their original size, assuming the attention scores are uniformly distributed. This is generally not the case but we found it to work well in practice.
For convolutional decoders with multiple attention, we scale the gradients for the encoder layers by the number of attention mechanisms we use; we exclude source word embeddings. We found this to stabilize learning since the encoder received too much gradient otherwise.
WMT'16 English-Romanian. We use the same data and pre-processing as Sennrich et al. (2016b) but remove sentences with more than 175 words. This results in 2.8M sentence pairs for training and we evaluate on newstest2016.²
²We followed the pre-processing of https://github.com/rsennrich/wmt16-scripts/blob/80e21le5/sample/preprocess.sh and added the back-translated data from http://data.statmt.org/rsennrich/wmt16_backtranslations/en-ro.
We experiment with word-based models using a source vo- cabulary of 200K types and a target vocabulary of 80K types. We also consider a joint source and target byte-pair encoding (BPE) with 40K types (Sennrich et al., 2016a;b).
WMT'14 English-German. We use the same setup as Luong et al. (2015) which comprises 4.5M sentence pairs for training and we test on newstest2014.³ As vocabulary we use 40K sub-word types based on BPE.
WMT'14 English-French. We use the full training set of 36M sentence pairs, and remove sentences longer than 175 words as well as pairs with a source/target length ratio exceeding 1.5. This results in 35.5M sentence-pairs for training. Results are reported on newstest2014. We use a source and target vocabulary with 40K BPE types.
still fit in GPU memory. If the threshold is exceeded, we simply split the batch until the threshold is met and process the parts separately. Gradients are normalized by the number of non-padding tokens per mini-batch. We also use weight normalization for all layers except for lookup tables (Salimans & Kingma, 2016).
Besides dropout on the embeddings and the decoder output, we also apply dropout to the input of the convolutional blocks (Srivastava et al., 2014). All models are implemented in Torch (Collobert et al., 2011) and trained on a single Nvidia M40 GPU except for WMT'14 English-French for which we use a multi-GPU setup on a single machine. We train on up to eight GPUs synchronously by maintaining copies of the model on each card and split the batch so that each worker computes 1/8-th of the gradients; at the end we sum the gradients via Nvidia NCCL.
In all setups a small subset of the training data serves as val- idation set (about 0.5-1% for each dataset) for early stop- ping and learning rate annealing.
# 4.3. Evaluation
Abstractive summarization. We train on the Gigaword corpus (Graff et al., 2003) and pre-process it identically to Rush et al. (2015) resulting in 3.8M training examples and 190K for validation. We evaluate on the DUC-2004 test data comprising 500 article-title pairs (Over et al., 2007) and report three variants of recall-based ROUGE (Lin, 2004), namely, ROUGE-1 (unigrams), ROUGE-2 (bi- grams), and ROUGE-L (longest-common substring). We also evaluate on a Gigaword test set of 2000 pairs which is identical to the one used by Rush et al. (2015) and we report Fl ROUGE similar to prior work. Similar to Shen et al. (2016) we use a source and target vocabulary of 30K words and require outputs to be at least 14 words long.
We report average results over three runs of each model, where each differs only in the initial random seed. Translations are generated by a beam search and we normalize log-likelihood scores by sentence length. We use a beam of width 5. We divide the log-likelihoods of the final hypothesis in beam search by their length |y|. For WMT'14 English-German we tune a length normalization constant on a separate development set (newstest2015) and we normalize log-likelihoods by |y|^α (Wu et al., 2016). On other datasets we did not find any benefit with length normalization.
# 4.2. Model Parameters and Optimization
We use 512 hidden units for both encoders and decoders, unless otherwise stated. All embeddings, including the out- put produced by the decoder before the final linear layer, have dimensionality 512; we use the same dimensionalities for linear layers mapping between the hidden and embed- ding sizes (§3.2).
We train our convolutional models with Nesterov's accelerated gradient method (Sutskever et al., 2013) using a momentum value of 0.99 and renormalize gradients if their norm exceeds 0.1 (Pascanu et al., 2013). We use a learning rate of 0.25 and once the validation perplexity stops improving, we reduce the learning rate by an order of magnitude after each epoch until it falls below 10^{-4}.
For word-based models, we perform unknown word replacement based on attention scores after generation (Jean et al., 2015). Unknown words are replaced by looking up the source word with the maximum attention score in a pre-computed dictionary. If the dictionary contains no translation, then we simply copy the source word. Dictionaries were extracted from the word aligned training data that we obtained with fast_align (Dyer et al., 2013). Each source word is mapped to the target word it is most frequently aligned to. In our multi-step attention (§3.3) we simply average the attention scores over all layers. Finally, we compute case-sensitive tokenized BLEU, except for WMT'16 English-Romanian where we use detokenized BLEU to be comparable with Sennrich et al. (2016b).⁴
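A simple sketch of this post-hoc unknown-word replacement; data structures and names are assumptions, and the attention scores are presumed to be already averaged over decoder layers.

```python
def replace_unknown_words(output_tokens, attention_scores, source_tokens,
                          dictionary, unk_token="<unk>"):
    """For each generated <unk>, look up the source word with the highest
    (layer-averaged) attention score and substitute its dictionary
    translation, falling back to copying the source word itself.

    attention_scores: per-output-step score vectors over source positions.
    """
    result = []
    for t, token in enumerate(output_tokens):
        if token != unk_token:
            result.append(token)
            continue
        src_pos = max(range(len(source_tokens)),
                      key=lambda j: attention_scores[t][j])
        src_word = source_tokens[src_pos]
        result.append(dictionary.get(src_word, src_word))  # copy if no translation
    return result
```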
⁴https://github.com/moses-smt/mosesdecoder/blob/617e8c8/scripts/generic/{multi-bleu.perl,mteval-v13a.pl}
Unless otherwise stated, we use mini-batches of 64 sen- tences. We restrict the maximum number of words in a mini-batch to make sure that batches with long sentences
³http://nlp.stanford.edu/projects/nmt
# 5. Results
# 5.1. Recurrent vs. Convolutional Models
We first evaluate our convolutional model on three translation tasks. On WMT'16 English-Romanian translation we compare to Sennrich et al. (2016b) which is the winning entry on this language pair at WMT'16 (Bojar et al., 2016). Their model implements the attention-based sequence to sequence architecture of Bahdanau et al. (2014) and uses GRU cells both in the encoder and decoder. We test both word-based and BPE vocabularies (§4).
Table 1 shows that our fully convolutional sequence to sequence model (ConvS2S) outperforms the WMT'16 winning entry for English-Romanian by 1.9 BLEU with a BPE encoding and by 1.3 BLEU with a word factored vocabulary. This instance of our architecture has 20 layers in the encoder and 20 layers in the decoder, both using kernels of width 3 and hidden size 512 throughout. Training took between 6 and 7.5 days on a single GPU.
On WMT'14 English to German translation we compare to the following prior work: Luong et al. (2015) is based on a four layer LSTM attention model, ByteNet (Kalchbrenner et al., 2016) propose a convolutional model based on characters without attention, with 30 layers in the encoder and 30 layers in the decoder, GNMT (Wu et al., 2016) represents the state of the art on this dataset and they use eight encoder LSTMs as well as eight decoder LSTMs, we quote their result for a word-based model, such as ours, as well as a word-piece model (Schuster & Nakajima, 2012).⁵
| WMT'16 English-Romanian | BLEU |
|---|---|
| Sennrich et al. (2016b) GRU (BPE 90K) | 28.1 |
| ConvS2S (Word 80K) | 29.45 |
| ConvS2S (BPE 40K) | 30.02 |

| WMT'14 English-German | BLEU |
|---|---|
| Luong et al. (2015) LSTM (Word 50K) | 20.9 |
| Kalchbrenner et al. (2016) ByteNet (Char) | 23.75 |
| Wu et al. (2016) GNMT (Word 80K) | 23.12 |
| Wu et al. (2016) GNMT (Word pieces) | 24.61 |
| ConvS2S (BPE 40K) | 25.16 |

| WMT'14 English-French | BLEU |
|---|---|
| Wu et al. (2016) GNMT (Word 80K) | 37.90 |
| Wu et al. (2016) GNMT (Word pieces) | 38.95 |
| Wu et al. (2016) GNMT (Word pieces) +RL | 39.92 |
| ConvS2S (BPE 40K) | 40.51 |
Table 1. Accuracy on WMT tasks compared to previous work. ConvS2S and GNMT results are averaged over several runs.
BLEU. Reinforcement learning is equally applicable to our architecture and we believe that it would further improve our results.
The results (Table 1) show that our convolutional model outperforms GNMT by 0.5 BLEU. Our encoder has 15 layers and the decoder has 15 layers, both with 512 hidden units in the first ten layers and 768 units in the subsequent three layers, all using kernel width 3. The final two layers have 2048 units which are just linear mappings with a single input. We trained this model on a single GPU over a period of 18.5 days with a batch size of 48. LSTM sparse mixtures have shown strong accuracy at 26.03 BLEU for a single run (Shazeer et al., 2016) which compares to 25.39 BLEU for our best run. This mixture sums the output of four experts, not unlike an ensemble which sums the output of multiple networks. ConvS2S also benefits from ensembling (§5.2), therefore mixtures are a promising direction.
The ConvS2S model for this experiment uses 15 layers in the encoder and 15 layers in the decoder, both with 512 hidden units in the first five layers, 768 units in the subsequent four layers, 1024 units in the next 3 layers, all using kernel width 3; the final two layers have 2048 units and 4096 units each but they are linear mappings with kernel width 1. This model has an effective context size of only 25 words, beyond which it cannot access any information on the target size. Our results are based on training with 8 GPUs for about 37 days and batch size 32 on each worker.⁶ The same configuration as for WMT'14 English-German achieves 39.41 BLEU in two weeks on this dataset in an eight GPU setup.
Finally, we train on the much larger WMT'14 English-French task where we compare to the state of the art result of GNMT (Wu et al., 2016). Our model is trained with a simple token-level likelihood objective and we improve over GNMT in the same setting by 1.6 BLEU on average. We also outperform their reinforcement (RL) models by 0.5
Zhou et al. (2016) report a non-averaged result of 39.2 BLEU. More recently, Ha et al. (2016) showed that one can generate weights with one LSTM for another LSTM. This approach achieves 40.03 BLEU but the result is not averaged. Shazeer et al. (2016) compares at 40.56 BLEU to our best single run of 40.70 BLEU.
⁵We did not use the exact same vocabulary size because word pieces and BPE estimate the vocabulary differently.
⁶This is half of the GPU time consumed by a basic model of Wu et al. (2016) who use 96 GPUs for 6 days. We expect the time to train our model to decrease substantially in a multi-machine setup.
WMT'14 English-German          BLEU
Wu et al. (2016) GNMT          26.20
Wu et al. (2016) GNMT+RL       26.30
ConvS2S                        26.43
WMT'14 English-French          BLEU
Zhou et al. (2016)             40.4
Wu et al. (2016) GNMT          40.35
Wu et al. (2016) GNMT+RL       41.16
ConvS2S                        41.44
ConvS2S (10 models)            41.62
                                      BLEU    Time (s)
GNMT GPU (K80)                        31.20   3,028
GNMT CPU 88 cores                     31.20   1,322
GNMT TPU                              31.21   384
ConvS2S GPU (K40) b = 1               33.45   327
ConvS2S GPU (M40) b = 1               33.45   221
ConvS2S GPU (GTX-1080ti) b = 1        33.45   142
ConvS2S CPU 48 cores b = 1            33.45   142
ConvS2S GPU (K40) b = 5               34.10   587
ConvS2S CPU 48 cores b = 5            34.10   482
ConvS2S GPU (M40) b = 5               34.10   406
ConvS2S GPU (GTX-1080ti) b = 5        34.10   256
Table 2. Accuracy of ensembles with eight models. We show both likelihood and Reinforce (RL) results for GNMT; Zhou et al. (2016) and ConvS2S use simple likelihood training.
Table 3. CPU and GPU generation speed in seconds on the development set of WMT'14 English-French. We show results for different beam sizes b. GNMT figures are taken from Wu et al. (2016). CPU speeds are not directly comparable because Wu et al. (2016) use an 88 core machine versus our 48 core setup.
The translations produced by our models often match the length of the references, particularly for the large WMT'14 English-French task, or are very close for small to medium data sets such as WMT'14 English-German or WMT'16 English-Romanian.
# 5.2. Ensemble Results
Next, we ensemble eight likelihood-trained models for both WMT'14 English-German and WMT'14 English-French and compare to previous work which also reported ensemble results. For the latter, we also show the result when ensembling 10 models. Table 2 shows that we outperform the best current ensembles on both datasets.
# 5.3. Generation Speed
Next, we evaluate the inference speed of our architecture on the development set of the WMT'14 English-French task which is the concatenation of newstest2012 and newstest2013; it comprises 6003 sentences. We measure generation speed both on GPU and CPU hardware. Specifically, we measure GPU speed on three generations of Nvidia cards: a GTX-1080ti, an M40 as well as an older K40 card. CPU timings are measured on one host with 48 hyper-threaded cores (Intel Xeon E5-2680 @ 2.50GHz) with 40 workers. In all settings, we batch up to 128 sentences, composing batches with sentences of equal length. Note that the majority of batches is smaller because of the small size of the development set. We experiment with beams of size 5 as well as greedy search, i.e. beam of size 1. To make generation fast, we do not recompute convolution states that have not changed compared to the previous time step but rather copy (shift) these activations.
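The shift-and-copy trick above can be made concrete with a small rolling buffer per decoder layer. The following numpy sketch is illustrative only; the function and variable names are ours and not taken from the ConvS2S implementation.

```python
import numpy as np

# A decoder convolution with kernel width k only needs its last k input states,
# so at each generation step we shift a small buffer and append the newest state
# instead of recomputing activations for the whole prefix.

def init_cache(kernel_width, hidden_size):
    # One row per kernel position; zeros act as left padding at the start.
    return np.zeros((kernel_width, hidden_size))

def step(cache, new_state, conv_weights):
    # conv_weights has shape (kernel_width, hidden_size, out_size).
    cache = np.roll(cache, shift=-1, axis=0)   # drop the oldest state
    cache[-1] = new_state                      # append the newest state
    out = np.einsum('kh,kho->o', cache, conv_weights)
    return cache, out

# Toy usage: hidden size 4, kernel width 3, three decoding steps.
rng = np.random.default_rng(0)
weights = rng.normal(size=(3, 4, 4))
cache = init_cache(3, 4)
for _ in range(3):
    cache, out = step(cache, rng.normal(size=4), weights)
```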
We compare to results reported in Wu et al. (2016) who use Nvidia K80 GPUs which are essentially two K40s. We did not have such a GPU available and therefore run experiments on an older K40 card which is inferior to a K80, in addition to the newer M40 and GTX-1080ti cards. The results (Table 3) show that our model can generate translations on a K40 GPU at 9.3 times the speed and 2.25 higher BLEU; on an M40 the speed-up is up to 13.7 times and on a GTX-1080ti card the speed is 21.3 times faster. A larger beam of size 5 decreases speed but gives better BLEU.

On CPU, our model is up to 9.3 times faster, however, the GNMT CPU results were obtained with an 88 core machine whereas our results were obtained with just over half the number of cores. On a per CPU core basis, our model is 17 times faster at a better BLEU. Finally, our CPU speed is 2.7 times higher than GNMT on a custom TPU chip which shows that high speed can be achieved on commodity hardware. We do not report TPU figures as we do not have access to this hardware.

# 5.4. Position Embeddings

In the following sections, we analyze the design choices in our architecture. The remaining results in this paper are based on the WMT'14 English-German task with 13 encoder layers at kernel size 3 and 5 decoder layers at kernel size 5. We use a target vocabulary of 160K words as well as vocabulary selection (Mi et al., 2016; L'Hostis et al., 2016) to decrease the size of the output layer which speeds up training and testing. The average vocabulary size for each training batch is about 20K target words. All figures are averaged over three runs (§4) and BLEU is reported on newstest2014 before unknown word replacement.

We start with an experiment that removes the position embeddings from the encoder and decoder (§3.1).
                            PPL     BLEU
ConvS2S                     6.64    21.7
-source position            6.69    21.3
-target position            6.63    21.5
-source & target position   6.68    21.2
Table 4. Effect of removing position embeddings from our model in terms of validation perplexity (valid PPL) and BLEU.
Attn Layers    PPL     BLEU
1,2,3,4,5      6.65    21.63
1,2,3,4        6.70    21.54
1,2,3          6.95    21.36
1,2            6.92    21.47
1,3,5          6.97    21.10
1              7.15    21.26
2              7.09    21.30
3              7.41    21.19
4              7.19    21.31
5              7.66    20.24
These embeddings allow our model to identify which portion of the source and target sequence it is dealing with but also impose a restriction on the maximum sentence length. Table 4 shows that position embeddings are helpful but that our model still performs well without them. Removing the source position embeddings results in a larger accuracy decrease than target position embeddings. However, removing both source and target positions decreases accuracy only by 0.5 BLEU. We had assumed that the model would not be able to calibrate the length of the output sequences very well without explicit position information, however, the output lengths of models without position embeddings closely match models with position information. This indicates that the models can learn relative position information within the contexts visible to the encoder and decoder networks which can observe up to 27 and 25 words respectively.
Table 5. Multi-step attention in all five decoder layers or fewer layers in terms of validation perplexity (PPL) and test BLEU.
Recurrent models typically do not use explicit position embeddings since they can learn where they are in the sequence through the recurrent hidden state computation. In our setting, the use of position embeddings requires only a simple addition to the input word embeddings which is a negligible overhead.
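As a concrete illustration of that addition, the sketch below combines toy word and position embedding tables; the table sizes and names are our own illustrative assumptions, not the configuration used in the paper.

```python
import numpy as np

# Input to the first encoder/decoder layer: word embedding + position embedding.
vocab_size, max_len, dim = 1000, 50, 8
rng = np.random.default_rng(0)
word_emb = rng.normal(size=(vocab_size, dim))
pos_emb = rng.normal(size=(max_len, dim))

def embed(token_ids):
    positions = np.arange(len(token_ids))
    return word_emb[token_ids] + pos_emb[positions]

x = embed(np.array([5, 42, 7, 99]))   # shape (4, dim)
```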
# 5.5. Multi-step Attention
The multiple attention mechanism (§3.3) computes a separate source context vector for each decoder layer. The computation also takes into account contexts computed for preceding decoder layers of the current time step as well as previous time steps that are within the receptive field of the decoder. How does multiple attention compare to attention in fewer layers or even only in a single layer as is usual? Table 5 shows that attention in all decoder layers achieves the best validation perplexity (PPL). Furthermore, removing more and more attention layers decreases accuracy, both in terms of BLEU as well as PPL.
The computational overhead for attention is very small compared to the rest of the network. Training with attention in all five decoder layers processes 3624 target words per second on average on a single GPU, compared to 3772 words per second for attention in a single layer. This is only
Figure 2. Encoder and decoder with different number of layers.
a 4% slow down when adding 4 attention modules. Most neural machine translation systems only use a single module. This demonstrates that attention is not the bottleneck in neural machine translation, even though it is quadratic in the sequence length (cf. Kalchbrenner et al., 2016). Part of the reason for the low impact on speed is that we batch the computation of an attention module over all target words, similar to Kalchbrenner et al. (2016). However, for RNNs batching of the attention may be less effective because of the dependence on the previous time step.
# 5.6. Kernel size and Depth
Figure 2 shows accuracy when we change the number of layers in the encoder or decoder. The kernel width for layers in the encoder is 3 and for the decoder it is 5. Deeper architectures are particularly beneficial for the encoder but less so for the decoder. Decoder setups with two layers already perform well whereas for the encoder accuracy keeps increasing steadily with more layers until up to 9 layers when accuracy starts to plateau.
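For reference, the effective context size (receptive field) of a stack of stride-1, undilated convolutions can be computed with the simple formula below; this is a general property we assume here, illustrated with the 13-layer, kernel-3 encoder used in the following sections.

```python
# Each additional stride-1, undilated convolution layer adds (kernel - 1)
# positions to the receptive field.
def receptive_field(num_layers, kernel_width):
    return num_layers * (kernel_width - 1) + 1

print(receptive_field(13, 3))  # 27 source words for the 13-layer, kernel-3 encoder
```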
                                  DUC-2004                        Gigaword
                                  RG-1(R)  RG-2(R)  RG-L(R)       RG-1(F)  RG-2(F)  RG-L(F)
RNN MLE (Shen et al., 2016)       24.92    8.60     22.25         32.67    15.23    30.56
RNN MRT (Shen et al., 2016)       30.41    10.87    26.79         36.54    16.59    33.44
WFE (Suzuki & Nagata, 2017)       32.28    10.54    27.80         36.30    17.31    33.88
ConvS2S                           30.44    10.84    26.90         35.88    17.48    33.29
Table 6. Accuracy on two summarization tasks in terms of Rouge-1 (RG-1), Rouge-2 (RG-2), and Rouge-L (RG-L).
Kernel width    Encoder layers
                5        9        13
3               20.61    21.17    21.63
5               20.80    21.02    21.42
7               20.81    21.30    21.09
Table 7. Encoder with different kernel width in terms of BLEU.
Kernel width    Decoder layers
                3        5        7
3               21.10    21.71    21.62
5               21.09    21.63    21.24
7               21.40    21.31    21.33
Table 8. Decoder with different kernel width in terms of BLEU.

Aside from increasing the depth of the networks, we can also change the kernel width. Table 7 shows that encoders with narrow kernels and many layers perform better than wider kernels. These networks can also be faster since the amount of work to compute a kernel operating over 3 input elements is less than half compared to kernels over 7 elements. We see a similar picture for decoder networks with large kernel sizes (Table 8). Dauphin et al. (2016) shows that context sizes of 20 words are often sufficient to achieve very good accuracy on language modeling for English.

# 5.7. Summarization

Finally, we evaluate our model on abstractive sentence summarization which takes a long sentence as input and outputs a shortened version. The current best models on this task are recurrent neural networks which either optimize the evaluation metric (Shen et al., 2016) or address specific problems of summarization such as avoiding repeated generations (Suzuki & Nagata, 2017). We use standard likelihood training for our model and a simple model with six layers in the encoder and decoder each, hidden size 256, batch size 128, and we trained on a single GPU in one night. Table 6 shows that our likelihood trained model outperforms the likelihood trained model (RNN MLE) of Shen et al. (2016) and is not far behind the best models on this task which benefit from task-specific optimization and model structure. We expect our model to benefit from these improvements as well.

# 6. Conclusion and Future Work

We introduce the first fully convolutional model for sequence to sequence learning that outperforms strong recurrent models on very large benchmark datasets at an order of magnitude faster speed. Compared to recurrent networks, our convolutional approach allows to discover compositional structure in the sequences more easily since representations are built hierarchically. Our model relies on gating and performs multiple attention steps.

We achieve a new state of the art on several public translation benchmark data sets. On the WMT'16 English-Romanian task we outperform the previous best result by 1.9 BLEU, on WMT'14 English-French translation we improve over the LSTM model of Wu et al. (2016) by 1.6 BLEU in a comparable setting, and on WMT'14 English-German translation we outperform the same model by 0.5 BLEU. In future work, we would like to apply convolutional architectures to other sequence to sequence learning problems which may benefit from learning hierarchical representations as well.

# Acknowledgements

We thank Benjamin Graham for providing a fast 1-D convolution, and Ronan Collobert as well as Yann LeCun for helpful discussions related to this work.
# References
Ba, Jimmy Lei, Kiros, Jamie Ryan, and Hinton, Geoffrey E. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
Bahdanau, Dzmitry, Cho, Kyunghyun, and Bengio, Yoshua. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv: 1409.0473, 2014.
Bojar, Ondřej, Chatterjee, Rajen, Federmann, Christian, Graham, Yvette, Haddow, Barry, Huck, Matthias, Jimeno-Yepes, Antonio, Koehn, Philipp, Logacheva, Varvara, Monz, Christof, Negri, Matteo, Névéol, Aurélie, Neves, Mariana L., Popel, Martin, Post, Matt, Rubino, Raphaël, Scarton, Carolina, Specia, Lucia, Turchi, Marco, Verspoor, Karin M., and Zampieri, Marcos. Findings of the 2016 conference on machine translation. In Proc. of WMT, 2016.
Bradbury, James, Merity, Stephen, Xiong, Caiming, and Socher, Richard. Quasi-Recurrent Neural Networks. arXiv preprint arXiv:1611.01576, 2016.
Cho, Kyunghyun, van Merriënboer, Bart, Gulcehre, Caglar, Bahdanau, Dzmitry, Bougares, Fethi, Schwenk, Holger, and Bengio, Yoshua. Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation. In Proc. of EMNLP, 2014.
Chorowski, Jan K, Bahdanau, Dzmitry, Serdyuk, Dmitriy, Cho, Kyunghyun, and Bengio, Yoshua. Attention-based models for speech recognition. In Advances in Neural Information Processing Systems, pp. 577-585, 2015.
Collobert, Ronan, Kavukcuoglu, Koray, and Farabet, Clément. Torch7: A Matlab-like Environment for Machine Learning. In BigLearn, NIPS Workshop, 2011. URL http://torch.ch.

He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1026-1034, 2015b.

Hochreiter, Sepp and Schmidhuber, Jürgen. Long short-term memory. Neural computation, 9(8):1735-1780, 1997.

Ioffe, Sergey and Szegedy, Christian. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of The 32nd International Conference on Machine Learning, pp. 448-456, 2015.
Jean, Sébastien, Firat, Orhan, Cho, Kyunghyun, Memi- sevic, Roland, and Bengio, Yoshua. Montreal Neural Machine Translation systems for WMT15. In Proc. of WMT, pp. 134-140, 2015.
Kalchbrenner, Nal, Espeholt, Lasse, Simonyan, Karen, van den Oord, Aaron, Graves, Alex, and Kavukcuoglu, Koray. Neural Machine Translation in Linear Time. arXiv, 2016.
LeCun, Yann and Bengio, Yoshua. Convolutional networks for images, speech, and time series. The handbook of brain theory and neural networks, 3361(10):1995, 1995.
Dauphin, Yann N., Fan, Angela, Auli, Michael, and Grang- ier, David. Language modeling with gated linear units. arXiv preprint arXiv:1612.08083, 2016.
L'Hostis, Gurvan, Grangier, David, and Auli, Michael. Vocabulary Selection Strategies for Neural Machine Translation. arXiv preprint arXiv:1610.00072, 2016.
Dyer, Chris, Chahuneau, Victor, and Smith, Noah A. A Simple, Fast, and Effective Reparameterization of IBM Model 2. In Proc. of ACL, 2013.
Lin, Chin-Yew. Rouge: A package for automatic evalu- ation of summaries. In Text Summarization Branches Out: Proceedings of the ACL-04 Workshop, pp. 74-81, 2004.
Elman, Jeffrey L. Finding Structure in Time. Cognitive Science, 14:179-211, 1990.
Gehring, Jonas, Auli, Michael, Grangier, David, and Dauphin, Yann N. A Convolutional Encoder Model for Neural Machine Translation. arXiv preprint arXiv: 1611.02344, 2016.
Glorot, Xavier and Bengio, Yoshua. Understanding the difficulty of training deep feedforward neural networks. The handbook of brain theory and neural networks, 2010.
Luong, Minh-Thang, Pham, Hieu, and Manning, Christo- pher D. Effective approaches to attention-based neural machine translation. In Proc. of EMNLP, 2015.
Meng, Fandong, Lu, Zhengdong, Wang, Mingxuan, Li, Hang, Jiang, Wenbin, and Liu, Qun. Encoding Source Language with Convolutional Neural Network for Ma- chine Translation. In Proc. of ACL, 2015.
Mi, Haitao, Wang, Zhiguo, and Ittycheriah, Abe. Vocab- ulary Manipulation for Neural Machine Translation. In Proc. of ACL, 2016.
Graff, David, Kong, Junbo, Chen, Ke, and Maeda, Kazuaki. English gigaword. Linguistic Data Consor- tium, Philadelphia, 2003.
Ha, David, Dai, Andrew, and Le, Quoc V. Hypernetworks. arXiv preprint arXiv:1609.09106, 2016.
He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Deep Residual Learning for Image Recognition. In Proc. of CVPR, 2015a.
Miller, Alexander H., Fisch, Adam, Dodge, Jesse, Karimi, Amir-Hossein, Bordes, Antoine, and Weston, Jason. Key-value memory networks for directly reading docu- ments. In Proc. of EMNLP, 2016.
Nallapati, Ramesh, Zhou, Bowen, Gulcehre, Caglar, Xi- ang, Bing, et al. Abstractive text summarization us- ing sequence-to-sequence rnns and beyond. In Proc. of EMNLP, 2016.
Oord, Aaron van den, Kalchbrenner, Nal, and Kavukcuoglu, Koray. Pixel recurrent neural networks. arXiv preprint arXiv: 1601.06759, 2016a.
Sutskever, Ilya, Martens, James, Dahl, George E., and Hinton, Geoffrey E. On the importance of initialization and momentum in deep learning. In ICML, 2013.
Oord, Aaron van den, Kalchbrenner, Nal, Vinyals, Oriol, Espeholt, Lasse, Graves, Alex, and Kavukcuoglu, Koray. Conditional image generation with pixelcnn decoders. arXiv preprint arXiv: 1606.05328, 2016b.
Sutskever, Ilya, Vinyals, Oriol, and Le, Quoc V. Sequence to Sequence Learning with Neural Networks. In Proc. of NIPS, pp. 3104-3112, 2014.
Over, Paul, Dang, Hoa, and Harman, Donna. Duc in context. Information Processing & Management, 43(6):1506-1520, 2007.
Suzuki, Jun and Nagata, Masaaki. Cutting-off redundant repeating generations for neural abstractive summariza- tion. arXiv preprint arXiv: 1701.00138, 2017.
Pascanu, Razvan, Mikolov, Tomas, and Bengio, Yoshua. On the difficulty of training recurrent neural networks. In Proceedings of The 30th International Conference on Machine Learning, pp. 1310-1318, 2013.
Waibel, Alex, Hanazawa, Toshiyuki, Hinton, Geoffrey, Shikano, Kiyohiro, and Lang, Kevin J. Phoneme Recognition using Time-delay Neural Networks. IEEE transactions on acoustics, speech, and signal processing, 37(3):328-339, 1989.
Rush, Alexander M, Chopra, Sumit, and Weston, Jason. A neural attention model for abstractive sentence summa- rization. In Proc. of EMNLP, 2015.
Salimans, Tim and Kingma, Diederik P. Weight nor- malization: A simple reparameterization to acceler- ate training of deep neural networks. arXiv preprint arXiv: 1602.07868, 2016.
Schuster, Mike and Nakajima, Kaisuke. Japanese and Korean voice search. In Acoustics, Speech and Signal Processing (ICASSP), 2012 IEEE International Conference on, pp. 5149-5152. IEEE, 2012.

Sennrich, Rico, Haddow, Barry, and Birch, Alexandra. Neural Machine Translation of Rare Words with Subword Units. In Proc. of ACL, 2016a.

Wu, Yonghui, Schuster, Mike, Chen, Zhifeng, Le, Quoc V, Norouzi, Mohammad, Macherey, Wolfgang, Krikun, Maxim, Cao, Yuan, Gao, Qin, Macherey, Klaus, et al. Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation. arXiv preprint arXiv:1609.08144, 2016.
Yang, Zichao, Hu, Zhiting, Deng, Yuntian, Dyer, Chris, and Smola, Alex. Neural Machine Translation with Recurrent Attention Modeling. arXiv preprint arXiv: 1607.05108, 2016.
Zhou, Jie, Cao, Ying, Wang, Xuguang, Li, Peng, and Xu, Wei. Deep Recurrent Models with Fast-Forward Con- nections for Neural Machine Translation. arXiv preprint arXiv: 1606.04199, 2016.
Sennrich, Rico, Haddow, Barry, and Birch, Alexandra. Ed- inburgh Neural Machine Translation Systems for WMT 16. In Proc. of WMT, 2016b.
Shazeer, Noam, Mirhoseini, Azalia, Maziarz, Krzysztof, Davis, Andy, Le, Quoc, Hinton, Geoffrey, and Dean, Jeff. Outrageously large neural networks: The sparsely- gated mixture-of-experts layer. ArXiv e-prints, January 2016.
Shen, Shiqi, Zhao, Yu, Liu, Zhiyuan, Sun, Maosong, et al. Neural headline generation with sentence-wise optimization. arXiv preprint arXiv:1604.01904, 2016.
Srivastava, Nitish, Hinton, Geoffrey E., Krizhevsky, Alex, Sutskever, Ilya, and Salakhutdinov, Ruslan. Dropout: a simple way to prevent Neural Networks from overfitting. JMLR, 15:1929-1958, 2014.
Sukhbaatar, Sainbayar, Weston, Jason, Fergus, Rob, and Szlam, Arthur. End-to-end Memory Networks. In Proc. of NIPS, pp. 2440-2448, 2015.
# A. Weight Initialization

We derive a weight initialization scheme tailored to the GLU activation function similar to Glorot & Bengio (2010); He et al. (2015b) by focusing on the variance of activations within the network for both forward and backward passes. We also detail how we modify the weight initialization for dropout.

# A.1. Forward Pass

Assuming that the inputs x_l of a convolutional layer l and its weights W_l are independent and identically distributed (i.i.d.), the variance of its output, computed as y_l = W_l x_l + b_l, is

Var[y_l] = n_l Var[w_l x_l]    (3)

where n_l is the number of inputs to the layer. For one-dimensional convolutional layers with kernel width k and input dimension c, this is kc. We adopt the notation in (He et al., 2015b), i.e. y_l, w_l and x_l represent the random variables in y_l, W_l and x_l. With w_l and x_l independent from each other and normally distributed with zero mean, this amounts to

Var[y_l] = n_l Var[w_l] Var[x_l].    (4)

x_l is the result of the GLU activation function y^a_{l-1} σ(y^b_{l-1}) with y_{l-1} = (y^a_{l-1}, y^b_{l-1}) and y^a_{l-1}, y^b_{l-1} i.i.d. Next, we formulate upper and lower bounds in order to approximate Var[x_l]. If y_{l-1} follows a symmetric distribution with mean 0, then

Var[x_l] = Var[y^a_{l-1} σ(y^b_{l-1})]    (5)
         = E[(y^a_{l-1} σ(y^b_{l-1}))^2] - E^2[y^a_{l-1} σ(y^b_{l-1})]    (6)
         = Var[y^a_{l-1}] E[σ(y^b_{l-1})^2].    (7)

A lower bound is given by (1/4) Var[y^a_{l-1}] when expanding (6) with E^2[σ(y^b_{l-1})] = 1/4:

Var[x_l] = Var[y^a_{l-1} σ(y^b_{l-1})]    (8)
         = Var[y^a_{l-1}] E^2[σ(y^b_{l-1})] + Var[y^a_{l-1}] Var[σ(y^b_{l-1})]    (9)
         = (1/4) Var[y^a_{l-1}] + Var[y^a_{l-1}] Var[σ(y^b_{l-1})]    (10)

and Var[y^a_{l-1}] Var[σ(y^b_{l-1})] ≥ 0. We utilize the relation σ(x)^2 ≤ (1/16) x^2 - 1/4 + σ(x) (Appendix B) to provide an upper bound on E[σ(x)^2]:

E[σ(x)^2] ≤ E[(1/16) x^2 - 1/4 + σ(x)]    (11)
          = (1/16) E[x^2] - 1/4 + E[σ(x)].    (12)

With x ~ N(0, std(x)), this yields

E[σ(x)^2] ≤ (1/16) E[x^2] - 1/4 + 1/2    (13)
          = (1/16) Var[x] + 1/4.    (14)

With (7) and Var[y^a_{l-1}] = Var[y^b_{l-1}] = Var[y_{l-1}], this results in

Var[x_l] ≤ (1/16) Var[y_{l-1}]^2 + (1/4) Var[y_{l-1}].    (15)

We initialize the embedding matrices in our network with small variances (around 0.01), which allows us to dismiss the quadratic term and approximate the GLU output variance with

Var[x_l] ≈ (1/4) Var[y_{l-1}].    (16)

If L network layers of equal size and with GLU activations are combined, the variance of the final output y_L is given by

Var[y_L] ≈ Var[y_1] ∏_{l=2}^{L} (1/4) n_l Var[w_l].    (17)

Following (He et al., 2015b), we aim to satisfy the condition

(1/4) n_l Var[w_l] = 1, ∀l    (18)

so that the activations in a network are neither exponentially magnified nor reduced. This is achieved by initializing W_l from N(0, sqrt(4/n_l)).

# A.2. Backward Pass

The gradient of a convolutional layer is computed via back-propagation as Δx_l = W̃_l Δy_l. Considering separate gradients Δy^a_l and Δy^b_l for GLU, the gradient of x is given by

Δx_l = W̃^a_l Δy^a_l + W̃^b_l Δy^b_l.    (19)

W̃ corresponds to W with re-arranged weights to enable back-propagation. Analogously to the forward pass, Δx_l, w̃_l and Δy_l represent the random variables for the values in Δx_l, W̃_l and Δy_l, respectively. Note that W and W̃ contain the same values, i.e. w̃ = w. Similar to (3), the variance of Δx_l is

Var[Δx_l] = ñ_l (Var[w̃^a_l] Var[Δy^a_l] + Var[w̃^b_l] Var[Δy^b_l]).    (20)

Here, ñ_l is the number of inputs to layer l+1. The gradients for the GLU inputs are:

Δy^a_l = Δx_{l+1} σ(y^b_l)   and    (21)
Δy^b_l = Δx_{l+1} y^a_l σ'(y^b_l).    (22)

The approximation for the forward pass can be used for Var[Δy^a_l], and for estimating Var[Δy^b_l] we assume an upper bound on E[σ'(y^b_l)^2] of 1/16 since σ'(y^b_l) ∈ [0, 1/4]. Hence,

Var[Δy^a_l] - (1/4) Var[Δx_{l+1}] ≤ (1/16) Var[Δx_{l+1}] Var[y^b_l]    (23)
Var[Δy^b_l] ≤ (1/16) Var[Δx_{l+1}] Var[y^a_l].    (24)

We observe relatively small gradients in our network, typically around 0.001 at the start of training. Therefore, we approximate by discarding the quadratic terms above, i.e.

Var[Δy^a_l] ≈ (1/4) Var[Δx_{l+1}]    (25)
Var[Δy^b_l] ≈ 0    (26)
Var[Δx_l] ≈ (1/4) ñ_l Var[w̃^a_l] Var[Δx_{l+1}].    (27)

As for the forward pass, the above result can be generalized to backpropagation through many successive layers, resulting in

Var[Δx_1] ≈ Var[Δx_{L+1}] ∏_{l=2}^{L} (1/4) ñ_l Var[w̃^a_l]    (28)

and a similar condition, i.e. (1/4) ñ_l Var[w̃^a_l] = 1. In the networks we consider, successions of convolutional layers usually operate on the same number of inputs so that in most cases n_l = ñ_l. Note that W̃^b_l is discarded in the approximation; however, for the sake of consistency we use the same initialization for W̃^a_l and W̃^b_l.

For arbitrarily large variances of network inputs and activations, our approximations are invalid; in that case, the initial values for W^a_l and W^b_l would have to be balanced for the input distribution to be retained. Alternatively, methods that explicitly control the variance in the network, e.g. batch normalization (Ioffe & Szegedy, 2015) or layer normalization (Ba et al., 2016) could be employed.

# A.3. Dropout

Dropout retains activations in a neural network with a probability p and sets them to zero otherwise (Srivastava et al., 2014). It is common practice to scale the retained activations by 1/p during training so that the weights of the network do not have to be modified at test time when p is set to 1. In this case, dropout amounts to multiplying activations x by a Bernoulli random variable r where Pr[r = 1/p] = p and Pr[r = 0] = 1 - p (Srivastava et al., 2014). It holds that E[r] = 1 and Var[r] = (1-p)/p. If x is independent of r and E[x] = 0, the variance after dropout is

Var[xr] = E[r]^2 Var[x] + Var[r] Var[x]    (29)
        = (1 + (1-p)/p) Var[x]    (30)
        = (1/p) Var[x].    (31)

Assuming that the input of a convolutional layer has been subject to dropout with a retain probability p, the variations of the forward and backward activations from §A.1 and §A.2 can now be approximated with

Var[x_{l+1}] ≈ (1/(4p)) n_l Var[w_l] Var[x_l]   and    (32)
Var[Δx_l] ≈ (1/(4p)) ñ_l Var[w̃^a_l] Var[Δx_{l+1}].    (33)

This amounts to a modified initialization of W_l from a normal distribution with zero mean and a standard deviation of sqrt(4p/n_l). For layers without a succeeding GLU activation function, we initialize weights from N(0, sqrt(p/n_l)) to calibrate for any immediately preceding dropout application.

# B. Upper Bound on Squared Sigmoid

The sigmoid function σ(x) can be expressed as a hyperbolic tangent by using the identity tanh(x) = 2σ(2x) - 1. The derivative of tanh is tanh'(x) = 1 - tanh^2(x), and with tanh(x) ∈ [0, 1], x ≥ 0 it holds that

tanh'(x) ≤ 1,  x ≥ 0    (34)
∫_0^x tanh'(t) dt ≤ ∫_0^x 1 dt    (35)
tanh(x) ≤ x,  x ≥ 0.    (36)

We can express this relation with σ(x) as follows:

2σ(x) - 1 ≤ x/2,  x ≥ 0.    (37)

Both terms of this inequality have rotational symmetry w.r.t. 0, and thus

(2σ(x) - 1)^2 ≤ (x/2)^2,  ∀x    (38)
⟺ σ(x)^2 ≤ (1/16) x^2 - 1/4 + σ(x).    (39)
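Putting the derivations of Appendix A together, a minimal sketch of the resulting initialization rules might look as follows; the helper names and shapes are our own illustrative choices, not code from the paper.

```python
import numpy as np

def init_glu_layer(n_inputs, n_outputs, dropout_keep=1.0, rng=None):
    # Layer followed by a GLU: std = sqrt(4p / n), where p is the dropout retain
    # probability of the layer input (p = 1 means no dropout).
    rng = rng or np.random.default_rng()
    std = np.sqrt(4.0 * dropout_keep / n_inputs)
    return rng.normal(0.0, std, size=(n_inputs, n_outputs))

def init_linear_layer(n_inputs, n_outputs, dropout_keep=1.0, rng=None):
    # Layer without a succeeding GLU: std = sqrt(p / n).
    rng = rng or np.random.default_rng()
    std = np.sqrt(dropout_keep / n_inputs)
    return rng.normal(0.0, std, size=(n_inputs, n_outputs))

# Example: kernel width 3, 512 input channels, 1024 outputs (512 after GLU),
# with dropout retain probability 0.9 applied to the layer input.
W = init_glu_layer(3 * 512, 2 * 512, dropout_keep=0.9)
```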
# C. Attention Visualization
Figure 3 shows attention scores for a generated sentence from the WMTâ 14 English-German task. The model used for this plot has 8 decoder layers and a 80K BPE vocabu- lary. The attention passes in different decoder layers cap- ture different portions of the source sentence. Layer 1, 3
and 6 exhibit a linear alignment. The first layer shows the clearest alignment, although it is slightly off and frequently attends to the corresponding source word of the previously generated target word. Layers 2 and 8 lack a clear structure and are presumably collecting information about the whole source sentence. The fourth layer shows high alignment scores on nouns such as "festival", "way" and "work" for both the generated target nouns as well as their preceding words. Note that in German, those preceding words depend on gender and object relationship of the respective noun. Finally, the attention scores in layer 5 and 7 focus on "built", which is reordered in the German translation and is moved from the beginning to the very end of the sentence. One interpretation for this is that as generation progresses, the model repeatedly tries to perform the re-ordering. "aufgebaut" can be generated after a noun or pronoun only, which is reflected in the higher scores at positions 2, 5, 8, 11 and 13.
Figure 3. Attention scores for different decoder layers for a sentence translated from English (y-axis) to German (x-axis). This model uses 8 decoder layers and a 80k BPE vocabulary.
| {
"id": "1611.01576"
} |
1705.00652 | Efficient Natural Language Response Suggestion for Smart Reply | This paper presents a computationally efficient machine-learned method for
natural language response suggestion. Feed-forward neural networks using n-gram
embedding features encode messages into vectors which are optimized to give
message-response pairs a high dot-product value. An optimized search finds
response suggestions. The method is evaluated in a large-scale commercial
e-mail application, Inbox by Gmail. Compared to a sequence-to-sequence
approach, the new system achieves the same quality at a small fraction of the
computational requirements and latency. | http://arxiv.org/pdf/1705.00652 | Matthew Henderson, Rami Al-Rfou, Brian Strope, Yun-hsuan Sung, Laszlo Lukacs, Ruiqi Guo, Sanjiv Kumar, Balint Miklos, Ray Kurzweil | cs.CL | null | null | cs.CL | 20170501 | 20170501 | 7 1 0 2
y a M 1
# ] L C . s c [
1 v 2 5 6 0 0 . 5 0 7 1 : v i X r a
# Efficient Natural Language Response Suggestion for Smart Reply
MATTHEW HENDERSON, RAMI AL-RFOU, BRIAN STROPE, YUN-HSUAN SUNG, LASZLO LUKACS, RUIQI GUO, SANJIV KUMAR, BALINT MIKLOS, and RAY KURZWEIL, Google
This paper presents a computationally efficient machine-learned method for natural language response suggestion. Feed-forward neural networks using n-gram embedding features encode messages into vectors which are optimized to give message-response pairs a high dot-product value. An optimized search finds response suggestions. The method is evaluated in a large-scale commercial e-mail application, Inbox by Gmail. Compared to a sequence-to-sequence approach, the new system achieves the same quality at a small fraction of the computational requirements and latency.
Additional Key Words and Phrases: Natural Language Understanding; Deep Learning; Semantics; Email
# 1 INTRODUCTION
Applications of natural language understanding (NLU) are becoming increasingly interesting with scalable machine learning, web-scale training datasets, and applications that enable fast and nuanced quality evaluations with large numbers of user interactions.
Early NLU systems parsed natural language with hand-crafted rules to explicit semantic repre- sentations, and used manually written state machines to generate specific responses from the output of parsing [18]. Such systems are generally limited to the situations imagined by the designer, and much of the development work involves writing more rules to improve the robustness of semantic parsing and the coverage of the state machines. These systems are brittle, and progress is slow [31]. Eventually adding more parsing rules and response strategies becomes too complicated for a single designer to manage, and dependencies between the rules become challenging to coordinate across larger teams. Often the best solution is to keep the domains decidedly narrow.
Statistical systems can offer a more forgiving path by learning implicit trade-offs, generalizations, and robust behaviors from data. For example, neural network models have been used to learn more robust parsers [14, 24, 29]. In recent work, the components of task-oriented dialog systems have been implemented as neural networks, enabling joint learning of robust models [7, 26, 27]. However these methods all rely on either an explicit semantic representation or an explicit representation of the task, always hand-crafted.
End-to-end systems avoid using hand-crafted explicit representations, by learning to map to and from natural language via implicit internal vector representations [19, 25]. Such systems avoid the unnecessary constraints and bottlenecks inevitably imposed by the system designer. In that context, natural language understanding might be evaluated less in terms of an explicit semantic representation, and more by the utility of the system itself. The system shows evidence of understanding when it offers useful responses.
Such end-to-end tasks are difficult: systems not only need to learn language but also must learn to do something useful with it. This paper addresses the task of suggesting responses in human-to- human conversations. There are further challenges that arise when building an end-to-end dialog
Corresponding authors: {matthen, rmyeid, bps}@google.com.
© 2017 Copyright held by the owner/author(s). Publication rights licensed to ACM.
system, i.e. a computer agent that interacts directly with a human user. Dialog systems must learn effective and robust interaction strategies, and goal-oriented systems may need to interact with discrete external databases. Dialog systems must also learn to be consistent throughout the course of a dialog, maintaining some kind of memory from turn to turn.
Machine learning requires huge amounts of data, and lots of helpful users to guide development through live interactions, but we also need to make some architectural decisions, in particular how to represent natural language text.
Neural natural language understanding models typically represent words, and possibly phrases, sentences, and documents as implicit vectors. Vector representations of words, or word embeddings, have been widely adopted, particularly since the introduction of efficient computational learning algorithms that can derive meaningful embeddings from unlabeled text [15, 17, 20].
Though a simple representation of a sequence of words can be obtained by summing the individual word embeddings, this discards information about the word ordering. The sequence-to-sequence (Seq2Seq) framework uses recurrent neural networks (RNNs), typically long short-term memory (LSTM) networks, to encode sequences of word embeddings into representations that depend on the order, and uses a decoder RNN to generate output sequences word by word. This framework provides a direct path for end-to-end learning [23]. With attention mechanisms and more layers, these systems are revolutionizing the field of machine translation [28]. A similar system was initially used to deploy Google's Smart Reply system for Inbox by Gmail [11].
While Seq2Seq models provide a generalized solution, it is not obvious that they are maximally efficient, and training these systems can be slow and complicated. Also they are derived as a generative model, and so using them to rank a fixed set of responses (as in the context of Smart Reply) requires extra normalization to bias the system away from common responses.
In a broader context, Kurzweil's work outlines a path to create a simulation of the human neocortex (the outer layer of the brain where we do much of our thinking) by building a hierarchy of similarly structured components that encode increasingly abstract ideas as sequences [12]. Kurzweil provides evidence that the neocortex is a self-organizing hierarchy of modules, each of which can learn, remember, recognize and/or generate a sequence, in which each sequence consists of a sequential pattern from lower-level modules. Longer relationships (between elements that are far away in time or spatial distance) are modeled by the hierarchy itself. In this work we adopt such a hierarchical structure, representing each sequential model as a feed-forward vector computation (with underlying sequences implicitly represented using n-grams). Whereas a long short-term memory (LSTM) network could also model such sequences, we don't need an LSTM's ability to directly encode long-term relationships (since the hierarchy does that) and LSTMs are much slower than feed-forward networks for training and inference since the computation scales with the length of the sequence.
Similarly, the work on paragraph vectors shows that word embeddings can be back-propagated to arbitrary levels in a contextual hierarchy [13]. Machines can optimize sentence vectors, paragraph vectors, chapter vectors, book vectors, author vectors, and so on, with simple back-propagation and computationally efficient feed-forward networks.
Putting a few of these ideas together, we wondered if we could predict a sentence using only the sum of its n-gram embeddings. Without the ordering of the words, can we use the limited sequence information from the n-grams, and the redundancy of language, to recreate the original word sequence? With a simple RNN as a decoder, our preliminary experiments showed perplexities of around 1.2 over a vocabulary of hundreds of thousands of words. A lot of the sequence information remains in this simple n-gram sentence representation. As a corollary, a hierarchy built on top of n-gram representations could indeed adequately represent the increasingly abstract sequences underlying natural language. Networks built on n-gram embeddings such as those presented in this
paper (see section 4) are computationally inexpensive relative to RNN and convolutional network [6, 30] encoders.

To make sure there is enough data and the necessary live feedback from users, we train on the anonymized Gmail data that was used in Kannan et al. [11], and use our models to give Smart Reply response suggestions to users of Inbox by Gmail (see figure 1). Smart Reply provides a real world application in which we can measure the quality of our response suggestion models.

Fig. 1. Our natural language understanding models are trained on email data, and evaluated in the context of the Smart Reply feature of Inbox by Gmail, pictured here.

Just as in Kannan et al. [11], we consider natural language response suggestion from a fixed set of candidates. For efficiency, we frame this as a search problem. Inputs are combined with potential responses using final dot products to enable precomputation of the "response side" of the system. Adding deep layers and delaying combination between input and responses encourages the network to derive implicit semantic representations of the input and responses, if we assume that the best way to predict their relationships is to understand them. We precompute a minimal hierarchy of deep feed-forward networks for all potential responses, and at runtime propagate only the input through the hierarchical network. We use an efficient nearest-neighbor search of the hierarchical embeddings of the responses to find the best suggestions.

# 2 PROBLEM DEFINITION

The Smart Reply system gives short response suggestions to help users respond quickly to emails. Emails are processed by the system according to the pipeline detailed in figure 2. The decision of whether to give suggestions is made by a deep neural network classifier, called the triggering model. This model takes various features of the received email, including a word n-gram representation, and is trained to estimate the probability that the user would type a short reply to the input email, see Kannan et al. [11]. If the output of the triggering model is above a threshold, then Smart Reply will give m (typically 3) short response suggestions for the email. Otherwise no suggestions are given. As a result, suggestions are not shown for emails where a response is not likely (e.g. spam, newsletters, and promotional emails), reducing clutter in the user interface and saving unnecessary computation.

The system is restricted to a fixed set of response suggestions, R, selected from millions of common messages. The response selection step involves searching for the top N (typically around 100) scoring responses in R according to a response selection model P(y | x). The output of response selection is a list of suggestions (y_1, y_2, ..., y_N) with y_i ∈ R ordered by their probability. Kannan et al. [11] used a sequence-to-sequence model for P(y | x) and used a beam search over the
Fig. 2. The Smart Reply pipeline. A received email is run through the triggering model that decides whether suggestions should be given. Response selection searches the response set for good suggestions. Finally, diversification ensures diversity in the final set shown to the user. This paper focuses on the response selection step.
prefixes in R (see section 3). This paper presents a feed-forward neural network model for P(y | x), including a factorized dot-product model where selection can be performed using a highly efficient and accurate approximate search over a precomputed set of vectors, see section 4.
Finally the diversification stage ensures diversity in the final m response suggestions. A clus- tering algorithm is used to omit redundant suggestions, and a labeling of R is used to ensure a negative suggestion is given if the other two are affirmative and vice-versa. Full details are given in Kannan et al. [11].
# 3 BASELINE SEQUENCE-TO-SEQUENCE SCORING
The response selection model presented in Kannan et al. [11] is a long short-term memory (LSTM) recurrent neural network [8] â an application of the sequence-to-sequence learning framework (Seq2Seq) [23].
The input email x is tokenized into a word sequence (x_1, ..., x_m) and the LSTM computes the conditional probability over a response sequence y = (y_1, ..., y_n) as:

P(y | x) = P(y_1, ..., y_n | x_1, ..., x_m) = ∏_{i=1}^{n} P_LSTM(y_i | x_1, ..., x_m, y_1, ..., y_{i-1})

where P_LSTM is the output of the word-level LSTM. The LSTM is trained to maximize the log-probability according to P(y | x) of the training data (a large collection of emails and responses, see section 5.1). At inference time, likely responses from the candidate set R are found using a beam search that is restricted to the prefix trie of R. The time complexity of this search is O(|x| + b|y|) where b is the beam width and should be scaled appropriately with |R|. This search dominates the computation of the original Smart Reply system.
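A small sketch of how such a search can be restricted to R is to walk a prefix trie of the tokenized responses and only expand tokens that keep a hypothesis inside the trie. The code below is an illustrative toy, not the production implementation.

```python
# Build a prefix trie over the tokenized response set; during beam search,
# each partial hypothesis may only be extended with tokens returned by
# allowed_next_tokens, so every completed hypothesis is a member of R.

def build_trie(responses):
    root = {}
    for tokens in responses:
        node = root
        for tok in tokens + ["</s>"]:
            node = node.setdefault(tok, {})
    return root

def allowed_next_tokens(trie, prefix):
    node = trie
    for tok in prefix:
        node = node[tok]
    return list(node.keys())

R = [["yes", ",", "i", "did"], ["no", ",", "i", "didn't"], ["yes", "it's", "done"]]
trie = build_trie(R)
print(allowed_next_tokens(trie, []))        # ['yes', 'no']
print(allowed_next_tokens(trie, ["yes"]))   # [',', "it's"]
```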
# 4 FEEDFORWARD APPROACH
Rather than learning a generative model, we investigate learning a feedforward network to score potential responses.
Recall the goal of response selection is to model P(y | x), which is used to rank possible responses y given an input email x. This probability distribution can be written as:

P(y | x) = P(x, y) / Σ_k P(x, y_k)    (1)

where the sum ranges over all possible responses y_k. The joint probability P(x, y) is estimated using a learned neural network scoring function S such that:

P(x, y) ∝ e^{S(x, y)}    (2)

Note that the calculation of equation 1 requires summing over the neural network outputs for all possible responses y_k. (This is only an issue for training, and not inference, since the denominator is a constant for any given x and so does not affect the arg max over y.) This is prohibitively expensive to calculate, so we will approximate P(x) by sampling K responses including y uniformly from our corpus during training:

P_approx(y | x) = P(x, y) / Σ_{k=1}^{K} P(x, y_k)    (3)

Combining equations 2 and 3 gives the approximate probability of the training data used to train the neural networks:

P_approx(y | x) = e^{S(x, y)} / Σ_{k=1}^{K} e^{S(x, y_k)}    (4)
The following subsections show several scoring models; how to extend the models to multiple features; how to overcome bias introduced by the sampling procedure; and an efficient search algorithm for response selection.
# 4.1 N-gram Representation
To represent input emails x and responses y as fixed-dimensional input features, we extract n-gram features from each. During training, we learn a d-dimensional embedding for each n-gram jointly with the other neural network parameters. To represent sequences of words, we combine n-gram embeddings by summing their values. We will denote this bag of n-grams representation as Ψ(x) ∈ R^d. This representation is quick to compute and captures basic semantic and word ordering information.
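A minimal sketch of this representation is shown below. The paper learns a dedicated embedding per n-gram; the hashing trick used here to index n-grams, and the table sizes, are our own illustrative assumptions to keep the example self-contained.

```python
import numpy as np

# Bag-of-n-grams representation Psi(x): sum the embeddings of the n-grams in x.
dim, buckets = 16, 10_000
rng = np.random.default_rng(0)
ngram_emb = rng.normal(size=(buckets, dim))   # learned jointly with the network

def ngrams(tokens, max_n=2):
    for n in range(1, max_n + 1):
        for i in range(len(tokens) - n + 1):
            yield " ".join(tokens[i:i + n])

def psi(text):
    tokens = text.lower().split()
    vec = np.zeros(dim)
    for g in ngrams(tokens):
        vec += ngram_emb[hash(g) % buckets]   # hashing is an illustrative stand-in
    return vec

h = psi("Did you manage to print the document?")
```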
# 4.2 Joint Scoring Model
Figure 3a shows the joint scoring neural network model that takes the bag of n-gram representations of the input email x and the response y, and produces a scalar score S(x,y). This deep neural network can model complex joint interactions between input and responses in its computation of the score.
# 4.3 Dot-Product Scoring Model
Figure 3b shows the structure of the dot-product scoring model, where S(x, y) is factorized as a dot-product between a vector h_x that depends only on x and a vector h_y that depends only on y. This is similar to Deep Structured Semantic Models, which use feedforward networks to project queries and documents into a common space where the relevance of a document given a query is computed as the cosine distance between them [9].
While the interaction between features is not as direct as the joint scoring model (see section 4.2), this factorization allows us to calculate the representation of the input x and possible responses y
(a) A neural network that calculates a score between emails and their responses. Rectified Linear Unit (ReLU) layers are used to reduce the (2d)-dimensional concatenation of the bag of n-gram representations to a scalar S(x, y).

(b) Dot-product architecture, where a tower of tanh activation hidden layers encodes x to h_x and a separate tower encodes y to h_y, such that the score S(x, y) is the dot-product h_x^T h_y.
Fig. 3. Feedforward scoring models that take the n-gram representation of an email body and a response, and compute a score.
independently. In particular, the representations of the response set R can be precomputed. Then searching for response suggestions reduces to encoding a new email x in a simple feed-forward step to the vector h_x, and then searching for high dot-product scoring responses in the precomputed set (see section 4.7).
It is also efficient to compute the scores S(x_i, y_j) for all pairs of inputs and responses in a training batch of n examples, as that requires only an additional matrix multiplication after computing the h_x and h_y vectors. This leads to vastly more efficient training with multiple negatives (see section 4.4) than is possible with the joint scoring model.
# 4.4 Multiple Negatives
Recall from section 4 that a set of K possible responses is used to approximate P(y | x): one correct response and K − 1 random negatives. For efficiency and simplicity we use the responses of the other examples in a training batch of stochastic gradient descent as negative responses. For a batch of size K, there will be K input emails x = (x_1, ..., x_K) and their corresponding responses y = (y_1, ..., y_K). Every reply y_j is effectively treated as a negative candidate for x_i if i ≠ j. The K − 1 negative examples for each x are different at each pass through the data due to shuffling in stochastic gradient descent.

The goal of training is to minimize the approximated mean negative log probability of the data. For a single batch this is:

J(x, y, θ) = −(1/K) Σ_{i=1}^{K} log P_approx(y_i | x_i)
           = −(1/K) Σ_{i=1}^{K} [ S(x_i, y_i) − log Σ_{j=1}^{K} e^{S(x_i, y_j)} ]    (5)

using equation 4, where θ represents the word embeddings and neural network parameters used to calculate S. Note that this loss function is invariant to adding any function f(x) to S(x, y), so
(a) Joint scoring model using multiple features of the input email x^i. A subnetwork scores the response using each feature alone, before the top-level hidden representations h^i are concatenated (⊕_i h^i) and then used to compute the final score. This is an application of the multi-loss architecture from Al-Rfou et al. [2].

(b) Dot-product scoring model with multiple input features x^i. This is a novel setup of the multi-loss architecture, whereby the feature-level scores S(x^i, y) and the final score S(x, y) are computed as a dot-product between the parallel input and response sides.
Fig. 4. Scoring models that use multiple features of the input email.
S(x,y) is learned up to an additive term that does not affect the arg max over y performed in the inference time search.
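The batched training objective is easy to sketch: one matrix multiplication produces all pairwise scores, and the diagonal entries are the correct pairs. The example below uses random stand-ins for the encoder outputs and is illustrative only.

```python
import numpy as np

# In-batch negatives: score every (x_i, y_j) pair with one matrix multiplication
# and treat the diagonal as the correct pairs (equation 5).
K, d = 4, 8
rng = np.random.default_rng(0)
h_x = rng.normal(size=(K, d))          # encoded input emails (tower outputs)
h_y = rng.normal(size=(K, d))          # encoded responses (tower outputs)

scores = h_x @ h_y.T                   # S(x_i, y_j) for all pairs, shape (K, K)

# Mean negative log probability of the correct responses.
log_norm = np.log(np.exp(scores).sum(axis=1))
loss = -(np.diag(scores) - log_norm).mean()
```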
# 4.5 Incorporating Multiple Features
There is structure in emails that can be used to improve the accuracy of scoring models. We follow the multi-loss architecture of Al-Rfou et al. [2] to incorporate additional features beyond the message body, for example the subject line. Figure 4 shows the multi-loss architecture applied to both the joint and dot-product scoring models.
The multi-loss networks have a sub-network for each feature of the email, which are trained to independently score candidate responses using that feature alone. The highest level hidden layer of the sub-network is used in a final sub-network that is trained to combine the information from all the features and give a final score. This hierarchical structure results in models that learn how to use each feature faster than a network that sees all the features at once, and also allows for learning deeper networks than is otherwise possible [2].
Formally, denote the M features of an input email x as x^1, ..., x^M. Then for each i, a sub-network produces a hidden vector representation h^i, and a score of the response y using only x^i, S(x^i, y). Denoting (x^i_1, ..., x^i_K) as x^i, a loss function J(x^i, y, θ) encourages S(x^i, y) to be high
Message: Did you manage to print the document?

With response bias      Without response bias
- Yes, I did.           - It's printed.
- Yes, it's done.       - I have printed it.
- No, I didn't.         - Yes, all done.
Table 1. Examples of Smart Reply suggestions with and without the response bias. Without biasing, the model prefers responses that are very closely related to the input email, but are less likely to be chosen than the more generic yes/no responses.
for corresponding pairs in the training batch, and low for the random pairs. The second stage of the network produces a final score S(x, y) that is a function of all of the h^i vectors. The network is trained end-to-end with a single loss:

J(x, y, θ) + Σ_{i=1}^{M} J(x^i, y, θ)

Note that the final score produced by the multi-loss dot-product model (figure 4b) is a dot-product of a vector h_x that depends only on the input x, and a vector h_y that depends only on the response y, as in the single-feature case. As a result, it can still be used for the fast vector search algorithm described in section 4.7, and training with multiple negatives remains efficient.

For the multi-loss joint scoring model, the input feature vector for the final sub-network is the concatenation of the h^i vectors and therefore scales with the number of features, leading to a computational bottleneck. For the dot-product scoring model, the hidden layer representations are learned such that they are meaningful vectors when compared using a dot product. This motivates combining the representations for the final sub-network using vector arithmetic. The feature representations extracted from the input email, h^i, are averaged (1/M Σ_{i=1}^{M} h^i), as are the response representations learned from the different sub-networks, before being passed to the final neural network layers. While this choice may constrain the representations learned by each sub-network, and may limit the ability of the final sub-network to differentiate information from different features, it also encourages them to exist in the same semantic space.
# 4.6 Response Biasing
The discriminative objective function introduced in section 4.4 leads to a biased estimation of the denominator in equation (1). Since our negative examples are sampled from the training data distribution, common responses with high prior likelihood appear more often as negative examples. In practice, we observed that this bias leads to models that favor specific and long responses instead of short and generic ones. To encourage more generic responses, we bias the responses in R using a score derived from the log likelihood of the response as estimated using a language model. Our final score S(x,y) of any input email response pair is calculated as:
S(x, y) = S_m(x, y) + α log P_LM(y)    (6)

where S_m is the score calculated by our trained scoring model, P_LM(y) is the probability of y according to the language model, and α is tuned with online experiments. Note that the additional term depends only on y, and so can be precomputed for every response in R prior to inference time.
Table 1 demonstrates the effect of including the response bias using an example email.
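Because the bias term depends only on the response, it can be folded into the precomputed response vectors, as suggested by the footnote on including the bias in the dot product. The sketch below uses made-up values for α and the language-model scores, purely for illustration.

```python
import numpy as np

# Append alpha to h_x and log P_LM(y) to each h_y so that a single dot product
# yields S_m(x, y) + alpha * log P_LM(y).
alpha = 0.5
rng = np.random.default_rng(0)
h_y = rng.normal(size=(3, 8))                    # precomputed response vectors
log_p_lm = np.array([-2.1, -5.7, -9.3])          # made-up log P_LM(y) per response

h_y_biased = np.concatenate([h_y, log_p_lm[:, None]], axis=1)

def score_all(h_x):
    h_x_ext = np.concatenate([h_x, [alpha]])     # extend the input vector with alpha
    return h_y_biased @ h_x_ext                  # biased scores for every response

scores = score_all(rng.normal(size=8))
```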
# 4.7 Hierarchical Quantization for Efficient Search
At inference time, given an input email x, we use the dot-product scoring model to find response suggestions y ∈ R with the highest scores S(x, y), where the scoring function is the dot-product S(x, y) = h_x^T h_y.¹ The problem of finding datapoints with the largest dot-product values is sometimes called Maximum Inner Product Search (MIPS). This is a research topic of its own and is also useful for inference in neural networks with a large number of output classes.

Maximum Inner Product Search is related to nearest neighbor search (NNS) in Euclidean space, but comes with its own challenges because the dot-product "distance" is non-metric and many classical approaches such as KD-trees cannot be applied directly. For more background, we refer readers to the relevant works of [3, 5, 21, 22]. In the Smart Reply system, we need to keep very high retrieval recall (for example > 99% in top-30 retrieval). However, many of the existing methods are not designed to work well in the high recall regime without slowing down the search considerably. To achieve such high recall, hashing methods often require a large number of hash bits and tree methods often need to search a large number of leaves.

In this work, we use a hierarchical quantization approach to solve the search problem. For our use case, the responses y come from a fixed set R and thus the h_y vectors are computed ahead of inference time. Unlike the previous work in [5], we propose a hierarchical combination of vector quantization, orthogonal transformation and product quantization of the transformed vector quantization residuals. Our hierarchical model is based on the intuition that data in DNN hidden layers often resemble a low dimensional signal with high dimensional residuals. Vector quantization is good at capturing low dimensional signals. Product quantization works by decomposing the high-dimensional vectors into low-dimensional subspaces and then quantizing them separately [4]. We use a learned rotation before product quantization as it has been shown to improve quantization error [16].
Specifically, h_y is approximated by a hierarchical quantization HQ(h_y), which is the sum of the vector quantization component VQ(h_y) and the residuals. A learned orthogonal transformation R is applied to the residual, followed by product quantization.
h_y ≈ HQ(h_y) = VQ(h_y) + R^T PQ(r_y),   where r_y = R(h_y − VQ(h_y))
Here, given a vector quantization codebook C_VQ, product quantization codebooks {C_PQ^(k)} for each of the subspaces k, and the learned orthogonal matrix R ∈ R^{d×d}, the vector quantization of h_y is VQ(h_y) = argmin_{c ∈ C_VQ} ||h_y − c||^2. The product quantization of the rotated residual r_y is computed by first dividing r_y into K subvectors r_y^(k), k = 1, 2, ..., K, and then quantizing the subvectors independently by vector quantizers C_PQ^(k):
PQ^(k)(r_y^(k)) = argmin_{s ∈ C_PQ^(k)} ||s − r_y^(k)||^2.
Finally the full product quantization PQ(r_y) is given by the concatenation of the quantization in each subspace:
PQ(r_y) = [PQ^(1)(r_y^(1)); PQ^(2)(r_y^(2)); ... ; PQ^(K)(r_y^(K))],   r_y = [r_y^(1); r_y^(2); ... ; r_y^(K)]
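The encoding step can be sketched as follows, assuming the codebooks and rotation have already been learned; names and shapes are assumptions, and d must be divisible by K for the split.

```python
import numpy as np

def hq_encode(h_y, C_vq, R, C_pq_list):
    """Hierarchical quantization of one response vector h_y (illustrative sketch).
    C_vq: (num_vq, d) vector-quantization codebook.
    R: (d, d) learned orthogonal rotation.
    C_pq_list: list of K product-quantization codebooks, each (num_pq, d // K)."""
    # Vector quantization: index of the nearest VQ codeword.
    vq_idx = int(np.argmin(np.sum((C_vq - h_y) ** 2, axis=1)))
    # Rotate the residual, then product-quantize each subvector independently.
    r = R @ (h_y - C_vq[vq_idx])
    sub = np.split(r, len(C_pq_list))
    pq_idx = [int(np.argmin(np.sum((C - s) ** 2, axis=1)))
              for C, s in zip(C_pq_list, sub)]
    return vq_idx, pq_idx
```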
¹ The bias term, α log P_LM(y), can be included in the dot product e.g. by extending the h_x vector with {α} and the h_y vector with {log P_LM(y)}.
At training time, the codebook for vector quantization, C_VQ, the codebooks for product quantization {C_PQ^(k)}, and the rotation matrix R are jointly learned by minimizing the reconstruction error of h_y − HQ(h_y) with stochastic gradient descent (SGD). At inference time, prediction is made by taking the candidates with the highest quantized dot product, i.e.
h_x^T VQ(h_y) + (R h_x)^T PQ(r_y)
The distance computation can be performed very efficiently without reconstructing HQ(h_y), instead utilizing a lookup table for asymmetric distance computation [10]. Furthermore, the lookup operation is carried out in registers using SIMD (single instruction, multiple data) instructions in our implementation, providing a further speed improvement.
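A plain-Python sketch of the asymmetric scoring is given below (without the SIMD register tricks): per query, a small set of lookup tables is built once, and each stored response is then scored with a handful of table lookups. The function names and code layout match the `hq_encode` sketch above and are assumptions, not the production code.

```python
import numpy as np

def build_query_tables(h_x, C_vq, R, C_pq_list):
    """Per-query lookup tables for the quantized dot product
    h_x . VQ(h_y) + (R h_x) . PQ(r_y)."""
    vq_table = C_vq @ h_x                         # h_x . c for every VQ codeword c
    q_sub = np.split(R @ h_x, len(C_pq_list))     # rotate the query once, then split
    pq_tables = [C @ s for C, s in zip(C_pq_list, q_sub)]
    return vq_table, pq_tables

def quantized_score(codes, vq_table, pq_tables):
    """Approximate h_x . h_y from stored codes without reconstructing HQ(h_y)."""
    vq_idx, pq_idx = codes
    return float(vq_table[vq_idx] + sum(t[i] for t, i in zip(pq_tables, pq_idx)))
```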
Fig. 5. Evaluation of retrieval speed vs. recall of top-30 neighbors with maximum dot product. The curve is produced by varying the number of approximate neighbors retrieved by our hierarchical quantization method and by Asymmetric LSH [22], and varying the number of leaves searched by the clustering algorithm of [3].
We summarize the speed-recall evaluation of using different approximate MIPS algorithms in figure 5. The y-axis shows the recall of the top-30 retrieved responses, where the ground truth is computed by exhaustive search. The x-axis shows the speed up factor with respect to exhaustive search. Therefore, exhaustive search achieves a recall of 100% with a speed up factor of 1. Our algorithm achieves 99.89% recall with a speed-up factor over 10, outperforming the baselines of [3, 22].
# 5 EVALUATION
# 5.1 Experimental Setup
Data. Pairs of emails and their responses are sampled from user data to create datasets for training and testing the feedforward response scoring models. In total around 300M pairs are collected. The data is split uniformly at random into two disjoint sets of 95% and 5%, which constitute the training and test sets respectively.
All email data (raw data, preprocessed data and training/evaluation data) is encrypted. Engineers can only inspect aggregated statistics on anonymized sentences that occurred across many users and do not identify any user.
Language identification is run on the emails, and only English language emails are kept. The subject lines and message bodies are tokenized into word sequences, from which n-gram features are extracted. Infrequent words, URLs, email addresses, phone numbers etc. are replaced with special
tokens. Quoted text arising from replying and forwarding is also removed. We used hundreds of thousands of the most frequent n-grams as features to represent the text.
Training. Each of our DNN sub-networks consists of 3 hidden layers of sizes 500, 300, 100 in the case of the joint scoring models and 300, 300, 500 for the dot-product models . The embedding dimensionality d of our n-grams is 320. We train each model for at least 10 epochs. We set the learning rate to 0.01 during the first 40 million batches, after which it is reduced to 0.001. The models are trained on CPUs across 50 machines using a distributed implementation of TensorFlow [1].
# 5.2 Offline Evaluation
Our models are evaluated offline on their ability to identify the true response to an email in the test data against a set of randomly selected competing responses. In this paper, we score a set of 100 responses that includes the correct response and 99 randomly selected incorrect competitors. We rank responses according to their scores, and report precision at 1 (P@1) as a metric of evaluation. We found that P@1 correlates with the quality of our models as measured in online experiments with users (see section 5.3).
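For concreteness, P@1 over this evaluation layout can be computed as below; the convention that column 0 holds the true response is an assumption made for the sketch.

```python
import numpy as np

def precision_at_1(model_scores):
    """model_scores: (num_examples, 100) array of scores, where column 0 is the
    correct response and columns 1..99 are the random competitors (assumed layout)."""
    best = np.argmax(model_scores, axis=1)
    return float(np.mean(best == 0))
```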
Batch Size | Scoring Model | P@1
25 | Joint | 49%
25 | Dot-product | 48%
50 | Dot-product | 52%
Table 2. P@1 results on the test set for the joint and dot-product multi-loss scoring models. The training objective discriminates against more random negative examples for larger batch sizes.
Table 2 presents the results of the offline evaluation for joint and dot-product scoring models. The joint scoring model outperforms the dot-product model trained on the same batch size. This model learns complex cross-features between the input email and the response, leading to better scoring. However, the joint scoring model does not scale well to larger batches, since each possible pairing of input email and response requires a full forward pass through the network. The number of forward passes through the joint scoring model grows quadratically with the batch size. Recall the dot-product scoring model is a lot faster to train with multiple negatives than the joint scoring models since it requires a linear number of forward passes followed by a single K by K matrix multiply to score all possible pairings, where K is the batch size. As a result, the multi-loss dot-product models can be trained on larger batches to produce more accurate models.
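The sketch below illustrates why batch negatives are cheap for the dot-product model: all K×K pairings are scored with one matrix multiply and the true pairs sit on the diagonal. The array shapes and loss form are assumptions consistent with the training objective of section 4.4, not the paper's code.

```python
import numpy as np

def batch_negatives_loss(H_x, H_y):
    """H_x, H_y: (K, d) arrays of input and response representations for one batch.
    Row i of the K x K logits matrix scores input i against every response in the
    batch; the diagonal holds the true (x, y) pairs."""
    logits = H_x @ H_y.T
    logits = logits - logits.max(axis=1, keepdims=True)        # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))                 # softmax cross-entropy
```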
Note that the models in table 2 are trained with the multiple negatives training loss of section 4.4. It is also possible to train the models as a classifier with a sigmoid loss. We find that multiple negative training results in a consistent 20% reduction in error rate for P@1 relative to training as a classifier across all of our conversational datasets. For example, in a different version of the Smart Reply data it improved the P@1 of a dot-product model from 47% to 58%.
# 5.3 Online Evaluation
Though the offline ranking metric gives a useful signal during development, the ultimate proof of a response selection model is how it affects the quality of the suggestions that users see in the end-to-end Smart Reply system. Suggestion quality or usefulness is approximated here by the observed conversion rate, i.e. the percentage of times users click on one of the suggestions when they are shown.
(a) The exhaustive search setup scores all the examples in the response set using a joint scoring model. The top N scoring responses are then found using a heap.
(b) The two pass setup first uses a fast dot product scoring model to produce an M-best list of response suggestions from the response set. The M-best list is then exhaustively searched to find the top N scoring responses according to a more accurate joint scoring model.
(c) The single pass setup uses a dot product scoring model and no joint scoring model.
Fig. 6. Online system architectures.
This section describes the evolution of our system and shows the effect of each iteration on latency and quality relative to the baseline Seq2Seq system. An outline of this series of online experiments is presented in table 3.
5.3.1 Exhaustive Search. Our initial system scored the input email against every response in the response set R using the joint scoring model and the Email body feature alone (see figure 6a). Given that the joint scoring model requires a forward pass for each response in R, this approach is too computationally expensive for an online experiment, see row 1 of table 3.
5.3.2 Two pass. The computational expense of the initial exhaustive search system motivated the design of a two-stage approach where the first stage is fast and the second is more accurate, as shown in figure 6b.
The first stage consists of a dot-product scoring model utilizing the text of the email body alone. As a pre-processing step, all of the responses in the response set R = {y_1, ..., y_n} are encoded to their vector representations to give a matrix R = [h_{y_1}, ..., h_{y_n}] (see figure 3b). At inference time, a new input email is encoded to its representation h_x, and the vector of all scores is calculated as the dot product with the precomputed matrix: R h_x. A heap is then used to find the M highest scoring responses. The second stage uses the joint scoring model to score the candidates from the first stage. Row 2 of table 3 shows the 50x speedup improvement from using this two pass system.
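A compact sketch of this two-pass retrieval is given below; `R_matrix`, `joint_score`, and the default values of M and N are illustrative placeholders rather than the deployed system's names or settings.

```python
import heapq
import numpy as np

def two_pass_suggest(x, h_x, R_matrix, responses, joint_score, M=100, N=3):
    """First pass: dot products of h_x against the precomputed response matrix
    R_matrix (num_responses x d), keeping the M best with a heap.
    Second pass: rescore those M candidates with the slower joint scoring model."""
    scores = R_matrix @ h_x
    m_best = heapq.nlargest(M, range(len(responses)), key=lambda i: scores[i])
    rescored = [(joint_score(x, responses[i]), responses[i]) for i in m_best]
    rescored.sort(key=lambda pair: pair[0], reverse=True)
    return [response for _, response in rescored[:N]]
```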
The system tended to suggest overly specific and often long responses because of the biased negative sampling procedure, see section 4.6. Therefore, we added an extra score to boost the scores of more likely responses using a language model. This change significantly improved the quality of
System | Experiment | Conversion rate relative to Seq2Seq | Latency relative to Seq2Seq
Exhaustive search | (1) Use a joint scoring model to score all responses in R. | — | 500%
Two pass | (2) Two passes: dot-product then joint scoring. | 61% | 10%
Two pass | (3) Include response bias. | 88% | 10%
Two pass | (4) Improve sampling of dataset, and use multi-loss structure. | 104% | 10%
Single pass | (5) Remove second pass. | 104% | 2%
Single pass | (6) Use hierarchical quantization for search. | 104% | 1%
Table 3. Details of several successive iterations of the Smart Reply system, showing the conversion rate and latency relative to the baseline Seq2Seq system of Kannan et al. [11].
the suggestions, see row 3 of table 3, moving the systems toward shorter and more generic responses that users were more likely to find appropriate and click.
Improving our dataset sampling and using the multi-loss structure brought the conversion rate of the system above that of the Seq2Seq system (see row 4 of table 3).
5.3.3 Single pass. To improve the latency, we removed the second pass step and relied solely on the responses found by the first pass dot-product step (see figure 6c). However, to maintain the quality, we had to improve the quality of the dot-product model.
Since the dot-product scoring model scales better with more negatives during training, we doubled the number of negatives for training the first pass system. We also applied the multi-loss architecture to the first pass dot-product model, using additional input features (see figure 4b). Together these changes made the dot-product model slightly more accurate than the joint model (see table 2). As a result, the system quality stayed the same while the speed increased 5 times, as shown in row 5 of table 3.
So far, we have been computing the dot-product between the new email representation and all the precomputed representations of the responses in the response set, and searching the entire list to find high scoring responses. Switching from this exhaustive search to the hierarchical quantization search described in section 4.7 doubles the speed of the system without compromising quality (see row 6 of table 3).
As a result, our final system produces better quality suggestions than the baseline Seq2Seq system with a small percentage of the computation and latency.
# 6 CONCLUSIONS
This paper presents a feed-forward approach for scoring the consistency between input messages and potential responses. A hierarchy of deep networks on top of simple n-gram representations is shown to outperform competitive sequence-to-sequence models in this context.
The deep networks use different components for reading inputs and precomputing the representation of possible responses. That architecture enables a highly efficient runtime search.
We evaluate the models with the Smart Reply application. Live experiments with production traffic enabled a series of improvements that resulted in a system of higher quality than the original sequence-to-sequence system at a small fraction of the computation and latency.
Without addressing the generation of novel responses, this paper suggests a minimal, efficient, and scalable implementation that enables many ranking-based applications.
# ACKNOWLEDGMENTS
Thanks to Fernando Pereira, Corinna Cortes, Anjuli Kannan, Dilek Hakkani-Tür and Larry Heck for their valuable input to this paper. We would also like to acknowledge the many engineers at Google whose work on the tools and infrastructure made these experiments possible. Thanks especially to the users of Smart Reply.
# REFERENCES
[1] M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, et al. TensorFlow: A system for large-scale machine learning. In USENIX Symposium on Operating Systems Design and Implementation (OSDI), 2016.
[2] R. Al-Rfou, M. Pickett, J. Snaider, Y. Sung, B. Strope, and R. Kurzweil. Conversational contextual cues: The case of personalization and history for response ranking. arXiv preprint arXiv:1606.00372, 2016.
[3] A. Auvolat, S. Chandar, P. Vincent, H. Larochelle, and Y. Bengio. Clustering is efficient for approximate maximum inner product search. arXiv preprint arXiv:1507.05910, 2015.
[4] R. M. Gray. Vector quantization. ASSP Magazine, IEEE, 1(2):4-29, 1984.
[5] R. Guo, S. Kumar, K. Choromanski, and D. Simcha. Quantization based fast inner product search. In International Conference on Artificial Intelligence and Statistics, 2016.
[6] H. He, K. Gimpel, and J. J. Lin. Multi-perspective sentence similarity modeling with convolutional neural networks. In Empirical Methods on Natural Language Processing (EMNLP), 2015.
[7] M. Henderson, B. Thomson, and S. Young. Word-based dialog state tracking with recurrent neural networks. In Special Interest Group on Discourse and Dialogue (SIGDIAL), 2014.
[8] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8), Nov. 1997.
[9] P.-S. Huang, X. He, J. Gao, L. Deng, A. Acero, and L. Heck. Learning deep structured semantic models for web search using clickthrough data. 2013.
[10] H. Jegou, M. Douze, and C. Schmid. Product quantization for nearest neighbor search. Pattern Analysis and Machine Intelligence, 33(1), 2011.
[11] A. Kannan, K. Kurach, S. Ravi, T. Kaufman, B. Miklos, G. Corrado, A. Tomkins, L. Lukacs, M. Ganea, P. Young, and V. Ramavajjala. Smart Reply: Automated response suggestion for email. In Conference on Knowledge Discovery and Data Mining (KDD). ACM, 2016.
[12] R. Kurzweil. How to Create a Mind: The Secret of Human Thought Revealed. Penguin Books, New York, NY, USA, 2013.
[13] Q. V. Le and T. Mikolov. Distributed representations of sentences and documents. In International Conference on Machine Learning (ICML), 2014.
[14] G. Mesnil, Y. Dauphin, K. Yao, Y. Bengio, L. Deng, X. He, L. Heck, G. Tur, D. Hakkani-Tür, D. Yu, and G. Zweig. Using recurrent neural networks for slot filling in spoken language understanding. 2015.
[15] T. Mikolov, K. Chen, G. Corrado, and J. Dean. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013.
[16] M. Norouzi and D. J. Fleet. Cartesian k-means. In Conference on Computer Vision and Pattern Recognition, pages 3017-3024. IEEE, 2013.
[17] J. Pennington, R. Socher, and C. D. Manning. GloVe: Global vectors for word representation. In Empirical Methods on Natural Language Processing (EMNLP), 2014.
[18] P. J. Price. Evaluation of spoken language systems: The ATIS domain. In Workshop on Speech and Natural Language, HLT '90. Association for Computational Linguistics, 1990.
[19] I. V. Serban, A. Sordoni, Y. Bengio, A. Courville, and J. Pineau. Building end-to-end dialogue systems using generative hierarchical neural network models. In Conference on Artificial Intelligence. AAAI, 2016.
[20] N. Shazeer, R. Doherty, C. Evans, and C. Waterson. Swivel: Improving embeddings by noticing what's missing. arXiv preprint arXiv:1602.02215, 2016.
[21] F. Shen, W. Liu, S. Zhang, Y. Yang, and H. Tao Shen. Learning binary codes for maximum inner product search. In International Conference on Computer Vision. IEEE, 2015.
[22] A. Shrivastava and P. Li. Asymmetric LSH (ALSH) for sublinear time maximum inner product search (MIPS). In Advances in Neural Information Processing Systems (NIPS), 2014.
[23] I. Sutskever, O. Vinyals, and Q. V. Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems (NIPS), 2014.
[24] O. Vinyals, L. Kaiser, T. Koo, S. Petrov, I. Sutskever, and G. Hinton. Grammar as a foreign language. In Advances in Neural Information Processing Systems (NIPS), 2015.
[25] O. Vinyals and Q. V. Le. A neural conversational model. In International Conference on Machine Learning (ICML), 2015.
[26] T.-H. Wen, D. Vandyke, N. Mrksic, M. Gasic, L. M. Rojas-Barahona, P.-H. Su, S. Ultes, and S. Young. A network-based end-to-end trainable task-oriented dialogue system. arXiv preprint arXiv:1604.04562, 2016.
[27] J. D. Williams and G. Zweig. End-to-end LSTM-based dialog control optimized with supervised and reinforcement learning. arXiv preprint arXiv:1606.01269, 2016.
[28] Y. Wu, M. Schuster, Z. Chen, Q. V. Le, M. Norouzi, W. Macherey, M. Krikun, Y. Cao, Q. Gao, K. Macherey, J. Klingner, A. Shah, M. Johnson, X. Liu, L. Kaiser, S. Gouws, Y. Kato, T. Kudo, H. Kazawa, K. Stevens, G. Kurian, N. Patil, W. Wang, C. Young, J. Smith, J. Riesa, A. Rudnick, O. Vinyals, G. Corrado, M. Hughes, and J. Dean. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144, 2016.
[29] K. Yao, G. Zweig, M.-Y. Hwang, Y. Shi, and D. Yu. Recurrent neural networks for language understanding. In Interspeech, 2013.
[30] W. Yin, H. Schütze, B. Xiang, and B. Zhou. ABCNN: Attention-based convolutional neural network for modeling sentence pairs. Transactions of the Association for Computational Linguistics, 4, 2016.
[31] S. Young. Talking to machines (statistically speaking). In Interspeech, 2002.
"id": "1606.01269"
} |
1704.07813 | Unsupervised Learning of Depth and Ego-Motion from Video | We present an unsupervised learning framework for the task of monocular depth
and camera motion estimation from unstructured video sequences. We achieve this
by simultaneously training depth and camera pose estimation networks using the
task of view synthesis as the supervisory signal. The networks are thus coupled
via the view synthesis objective during training, but can be applied
independently at test time. Empirical evaluation on the KITTI dataset
demonstrates the effectiveness of our approach: 1) monocular depth performing
comparably with supervised methods that use either ground-truth pose or depth
for training, and 2) pose estimation performing favorably with established SLAM
systems under comparable input settings. | http://arxiv.org/pdf/1704.07813 | Tinghui Zhou, Matthew Brown, Noah Snavely, David G. Lowe | cs.CV | Accepted to CVPR 2017. Project webpage:
https://people.eecs.berkeley.edu/~tinghuiz/projects/SfMLearner/ | null | cs.CV | 20170425 | 20170801 | arXiv:1704.07813v2 [cs.CV] 1 Aug 2017
# Unsupervised Learning of Depth and Ego-Motion from Video
# Tinghui Zhou∗ UC Berkeley
# Matthew Brown Google
# Noah Snavely Google
# David G. Lowe Google
# Abstract
We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. In common with recent work [10, 14, 16], we use an end-to-end learning approach with view synthesis as the supervisory signal. In contrast to the previous work, our method is completely unsupervised, requiring only monocular video sequences for training. Our method uses single-view depth and multi-view pose networks, with a loss based on warping nearby views to the target using the computed depth and pose. The networks are thus coupled by the loss during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performs comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performs favorably compared to established SLAM systems under comparable input settings.
(a) Training: unlabeled video clips. (b) Testing: single-view depth and multi-view pose estimation.
Figure 1. The training data to our system consists solely of un- labeled image sequences capturing scene appearance from differ- ent viewpoints, where the poses of the images are not provided. Our training procedure produces two models that operate inde- pendently, one for single-view depth prediction, and one for multi- view camera pose estimation.
# 1. Introduction
Humans are remarkably capable of inferring ego-motion and the 3D structure of a scene even over short timescales. For instance, in navigating along a street, we can easily locate obstacles and react quickly to avoid them. Years of research in geometric computer vision has failed to recreate similar modeling capabilities for real-world scenes (e.g., where non-rigidity, occlusion and lack of texture are present). So why do humans excel at this task? One hypoth- esis is that we develop a rich, structural understanding of the world through our past visual experience that has largely consisted of moving around and observing vast numbers of scenes and developing consistent modeling of our observa- tions. From millions of such observations, we have learned about the regularities of the worldâroads are ï¬at, buildings are straight, cars are supported by roads etc., and we can apply this knowledge when perceiving a new scene, even from a single monocular image.
âThe majority of the work was done while interning at Google.
In this work, we mimic this approach by training a model that observes sequences of images and aims to explain its observations by predicting likely camera motion and the scene structure (as shown in Fig. 1). We take an end-to- end approach in allowing the model to map directly from input pixels to an estimate of ego-motion (parameterized as 6-DoF transformation matrices) and the underlying scene structure (parameterized as per-pixel depth maps under a reference view). We are particularly inspired by prior work that has suggested view synthesis as a metric [44] and recent work that tackles the calibrated, multi-view 3D case in an end-to-end framework [10]. Our method is unsupervised, and can be trained simply using sequences of images with no manual labeling or even camera motion information.
Our approach builds upon the insight that a geomet- ric view synthesis system only performs consistently well when its intermediate predictions of the scene geometry and the camera poses correspond to the physical ground-
1
truth. While imperfect geometry and/or pose estimation can cheat with reasonable synthesized views for certain types of scenes (e.g., textureless), the same model would fail miserably when presented with another set of scenes with more diverse layout and appearance structures. Thus, our goal is to formulate the entire view synthesis pipeline as the inference procedure of a convolutional neural net- work, so that by training the network on large-scale video data for the âmetaâ-task of view synthesis the network is forced to learn about intermediate tasks of depth and cam- era pose estimation in order to come up with a consistent explanation of the visual world. Empirical evaluation on the KITTI [15] benchmark demonstrates the effectiveness of our approach on both single-view depth and camera pose estimation. Our code will be made available at https: //github.com/tinghuiz/SfMLearner.
# 2. Related work
Structure from motion The simultaneous estimation of structure and motion is a well studied problem with an estab- lished toolchain of techniques [12, 50, 38]. Whilst the traditional toolchain is effective and efï¬cient in many cases, its reliance on ac- curate image correspondence can cause problems in areas of low texture, complex geometry/photometry, thin structures, and occlu- sions. To address these issues, several of the pipeline stages have been recently tackled using deep learning, e.g., feature match- ing [18], pose estimation [26], and stereo [10, 27, 53]. These learning-based techniques are attractive in that they are able to leverage external supervision during training, and potentially over- come the above issues when applied to test data.
Warping-based view synthesis One important application of geometric scene understanding is the task of novel view syn- thesis, where the goal is to synthesize the appearance of the scene seen from novel camera viewpoints. A classic paradigm for view synthesis is to ï¬rst either estimate the underlying 3D geometry explicitly or establish pixel correspondence among input views, and then synthesize the novel views by compositing image patches from the input views (e.g., [4, 55, 43, 6, 9]). Recently, end-to- end learning has been applied to reconstruct novel views by trans- forming the input based on depth or ï¬ow, e.g., DeepStereo [10], Deep3D [51] and Appearance Flows [54]. In these methods, the underlying geometry is represented by quantized depth planes (DeepStereo), probabilistic disparity maps (Deep3D) and view- dependent ï¬ow ï¬elds (Appearance Flows), respectively. Unlike methods that directly map from input views to the target view (e.g., [45]), warping-based methods are forced to learn intermedi- ate predictions of geometry and/or correspondence. In this work, we aim to distill such geometric reasoning capability from CNNs trained to perform warping-based view synthesis.
Learning single-view 3D from registered 2D views Our work is closely related to a line of recent research on learning single-view 3D inference from registered 2D observations. Garg et al. [14] propose to learn a single-view depth estimation CNN us- ing projection errors to a calibrated stereo twin for supervision. Concurrently, Deep3D [51] predicts a second stereo viewpoint
from an input image using stereoscopic ï¬lm footage as training data. A similar approach was taken by Godard et al. [16], with the addition of a left-right consistency constraint, and a better ar- chitecture design that led to impressive performance. Like our approach, these techniques only learn from image observations of the world, unlike methods that require explicit depth for training, e.g., [20, 42, 7, 27, 30].
These techniques bear some resemblance to direct methods for structure and motion estimation [22], where the camera parame- ters and scene depth are adjusted to minimize a pixel-based error function. However, rather than directly minimizing the error to obtain the estimation, the CNN-based methods only take a gradi- ent step for each batch of input instances, which allows the net- work to learn an implicit prior from a large corpus of related im- agery. Several authors have explored building differentiable ren- dering operations into their models that are trained in this way, e.g., [19, 29, 34].
While most of the above techniques (including ours) are mainly focused on inferring depth maps as the scene geometry output, re- cent work (e.g., [13, 41, 46, 52]) has also shown success in learn- ing 3D volumetric representations from 2D observations based on similar principles of projective geometry. Fouhey et al. [11] fur- ther show that it is even possible to learn 3D inference without 3D labels (or registered 2D views) by utilizing scene regularity.
Unsupervised/Self-supervised learning from video An- other line of related work to ours is visual representation learning from video, where the general goal is to design pretext tasks for learning generic visual features from video data that can later be re-purposed for other vision tasks such as object detection and se- mantic segmentation. Such pretext tasks include ego-motion esti- mation [2, 24], tracking [49], temporal coherence [17], temporal order veriï¬cation [36], and object motion mask prediction [39]. While we focus on inferring the explicit scene geometry and ego-motion in this work, intuitively, the internal representation learned by the deep network (especially the single-view depth CNN) should capture some level of semantics that could gener- alize to other tasks as well.
Concurrent to our work, Vijayanarasimhan et al. [48] indepen- dently propose a framework for joint training of depth, camera motion and scene motion from videos. While both methods are conceptually similar, ours is focused on the unsupervised aspect, whereas their framework adds the capability to incorporate super- vision (e.g., depth, camera motion or scene motion). There are signiï¬cant differences in how scene dynamics are modeled during training, in which they explicitly solve for object motion whereas our explainability mask discounts regions undergoing motion, oc- clusion and other factors.
# 3. Approach
Here we propose a framework for jointly training a single-view depth CNN and a camera pose estimation CNN from unlabeled video sequences. Despite being jointly trained, the depth model and the pose estimation model can be used independently during test-time inference. Training examples to our model consist of short image sequences of scenes captured by a moving camera. While our training procedure is robust to some degree of scene
Figure 2. Overview of the supervision pipeline based on view syn- thesis. The depth network takes only the target view as input, and outputs a per-pixel depth map ËDt. The pose network takes both the target view (It) and the nearby/source views (e.g., Itâ1 and It+1) as input, and outputs the relative camera poses ( ËTtâtâ1, ËTtât+1). The outputs of both networks are then used to inverse warp the source views (see Sec. 3.2) to reconstruct the target view, and the photometric reconstruction loss is used for training the CNNs. By utilizing view synthesis as supervision, we are able to train the entire framework in an unsupervised manner from videos.
motion, we assume that the scenes we are interested in are mostly rigid, i.e., the scene appearance change across different frames is dominated by the camera motion.
# 3.1. View synthesis as supervision
The key supervision signal for our depth and pose prediction CNNs comes from the task of novel view synthesis: given one input view of a scene, synthesize a new image of the scene seen from a different camera pose. We can synthesize a target view given a per-pixel depth in that image, plus the pose and visibility in a nearby view. As we will show next, this synthesis process can be implemented in a fully differentiable manner with CNNs as the geometry and pose estimation modules. Visibility can be handled, along with non-rigidity and other non-modeled factors, using an âexplanabilityâ mask, which we discuss later (Sec. 3.3).
Let us denote ⟨I_1, ..., I_N⟩ as a training image sequence with one of the frames I_t being the target view and the rest being the source views I_s (1 ≤ s ≤ N, s ≠ t). The view synthesis objective can be formulated as
L_vs = Σ_s Σ_p |I_t(p) − Î_s(p)|,    (1)
where p indexes over pixel coordinates, and Î_s is the source view I_s warped to the target coordinate frame based on a depth image-based rendering module [8] (described in Sec. 3.2), taking the predicted depth D̂_t, the predicted 4×4 camera transformation matrix¹ T̂_{t→s} and the source view I_s as input.
Note that the idea of view synthesis as supervision has also been recently explored for learning single-view depth estima- tion [14, 16] and multi-view stereo [10]. However, to the best of our knowledge, all previous work requires posed image sets dur- ing training (and testing too in the case of DeepStereo), while our
1In practice, the CNN estimates the Euler angles and the 3D translation vector, which are then converted to the transformation matrix.
Figure 3. Illustration of the differentiable image warping process. For each point pt in the target view, we ï¬rst project it onto the source view based on the predicted depth and camera pose, and then use bilinear interpolation to obtain the value of the warped image ËIs at location pt.
framework can be applied to standard videos without pose infor- mation. Furthermore, it predicts the poses as part of the learning framework. See Figure 2 for an illustration of our learning pipeline for depth and pose estimation.
# 3.2. Differentiable depth image-based rendering
As indicated in Eq. 1, a key component of our learning framework is a differentiable depth image-based renderer that reconstructs the target view I_t by sampling pixels from a source view I_s based on the predicted depth map D̂_t and the relative pose T̂_{t→s}. Let p_t denote the homogeneous coordinates of a pixel in the target view, and K denote the camera intrinsics matrix. We can obtain p_t's projected coordinates onto the source view p_s by²
p_s ∼ K T̂_{t→s} D̂_t(p_t) K^{−1} p_t    (2)
Notice that the projected coordinates p_s are continuous values. To obtain I_s(p_s) for populating the value of Î_s(p_t) (see Figure 3), we then use the differentiable bilinear sampling mechanism proposed in the spatial transformer networks [23] that linearly interpolates the values of the 4-pixel neighbors (top-left, top-right, bottom-left, and bottom-right) of p_s to approximate I_s(p_s), i.e. Î_s(p_t) = I_s(p_s) = Σ_{i∈{t,b}, j∈{l,r}} w^{ij} I_s(p_s^{ij}), where w^{ij} is linearly proportional to the spatial proximity between p_s and p_s^{ij}, and Σ_{i,j} w^{ij} = 1. A similar strategy is used in [54] for learning to directly warp between different views, while here the coordinates for pixel warping are obtained through projective geometry that enables the factorization of depth and camera pose.
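The sketch below illustrates the projection of Eq. 2 followed by bilinear sampling for a single pixel; it is an illustration under assumed conventions (row/column image indexing, no bounds checks, not batched or differentiable), not the actual TensorFlow implementation.

```python
import numpy as np

def project_and_sample(I_s, D_t, T_t2s, K_mat, pt):
    """Warp one target pixel pt = (u, v): back-project with the predicted depth,
    transform by the predicted relative pose, reproject into the source view,
    and bilinearly sample I_s over its four nearest pixels."""
    u, v = pt
    depth = D_t[v, u]
    cam = np.linalg.inv(K_mat) @ np.array([u, v, 1.0]) * depth    # 3D point, target frame
    p_src = K_mat @ (T_t2s[:3, :3] @ cam + T_t2s[:3, 3])          # project into source view
    us, vs = p_src[0] / p_src[2], p_src[1] / p_src[2]             # perspective divide
    u0, v0 = int(np.floor(us)), int(np.floor(vs))                 # top-left neighbor
    wu, wv = us - u0, vs - v0                                     # bilinear weights
    return ((1 - wu) * (1 - wv) * I_s[v0, u0] + wu * (1 - wv) * I_s[v0, u0 + 1]
            + (1 - wu) * wv * I_s[v0 + 1, u0] + wu * wv * I_s[v0 + 1, u0 + 1])
```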
# 3.3. Modeling the model limitation
Note that when applied to monocular videos the above view synthesis formulation implicitly assumes 1) the scene is static without moving objects; 2) there is no occlusion/disocclusion between the target view and the source views; 3) the surface is Lambertian so that the photo-consistency error is meaningful. If any of these assumptions are violated in a training sequence, the gradients could be corrupted and potentially inhibit training. To improve the robustness of our learning pipeline to these factors, we additionally train an explainability prediction network (jointly and simultaneously with the depth and pose networks) that outputs a per-pixel soft mask Ê_s for each target-source pair, indicating the
2For notation simplicity, we omit showing the necessary conversion to homogeneous coordinates along the steps of matrix multiplication.
[Legend: Input, Conv, Deconv, Concat, Upsample + Concat, Prediction] (a) Single-view depth network. (b) Pose/explainability network.
Figure 4. Network architecture for our depth/pose/explainability prediction modules. The width and height of each rectangular block indi- cates the output channels and the spatial dimension of the feature map at the corresponding layer respectively, and each reduction/increase in size indicates a change by the factor of 2. (a) For single-view depth, we adopt the DispNet [35] architecture with multi-scale side pre- dictions. The kernel size is 3 for all the layers except for the ï¬rst 4 conv layers with 7, 7, 5, 5, respectively. The number of output channels for the ï¬rst conv layer is 32. (b) The pose and explainabilty networks share the ï¬rst few conv layers, and then branch out to predict 6-DoF relative pose and multi-scale explainability masks, respectively. The number of output channels for the ï¬rst conv layer is 16, and the kernel size is 3 for all the layers except for the ï¬rst two conv and the last two deconv/prediction layers where we use 7, 5, 5, 7, respectively. See Section 3.5 for more details.
network's belief in where direct view synthesis will be successfully modeled for each target pixel. Based on the predicted Ê_s, the view synthesis objective is weighted correspondingly by
L_vs = Σ_{⟨I_1,...,I_N⟩ ∈ S} Σ_p Ê_s(p) |I_t(p) − Î_s(p)|    (3)
Since we do not have direct supervision for Ê_s, training with the above loss would result in a trivial solution of the network always predicting Ê_s to be zero, which perfectly minimizes the loss. To resolve this, we add a regularization term L_reg(Ê_s) that encourages nonzero predictions by minimizing the cross-entropy loss with constant label 1 at each pixel location. In other words, the network is encouraged to minimize the view synthesis objective, but allowed a certain amount of slack for discounting the factors not considered by the model.
Our final objective becomes
L_final = Σ_l ( L^l_vs + λ_s L^l_smooth + λ_e Σ_s L_reg(Ê^l_s) ),    (4)
where l indexes over different image scales, s indexes over source images, and λs and λe are the weighting for the depth smoothness loss and the explainability regularization, respectively.
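The sketch below assembles Eq. 4 from precomputed per-scale terms; the container layout, helper names, and default weights are assumptions for illustration (the paper additionally scales λ_s per pyramid level).

```python
import numpy as np

def total_loss(vs_losses, smooth_losses, exp_masks, lambda_s=0.5, lambda_e=0.2):
    """vs_losses[l][s]: explainability-weighted view-synthesis loss, scale l, source s.
    smooth_losses[l]: depth smoothness loss at scale l.
    exp_masks[l][s]: predicted explainability mask (values in (0, 1)) at scale l."""
    def cross_entropy_with_ones(E):
        # Regularizer L_reg: cross-entropy against constant label 1, which
        # discourages the trivial all-zero mask.
        return float(np.mean(-np.log(np.clip(E, 1e-8, 1.0))))
    loss = 0.0
    for l in range(len(vs_losses)):
        loss += sum(vs_losses[l]) + lambda_s * smooth_losses[l]
        loss += lambda_e * sum(cross_entropy_with_ones(E) for E in exp_masks[l])
    return loss
```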
# 3.4. Overcoming the gradient locality
One remaining issue with the above learning pipeline is that the gradients are mainly derived from the pixel intensity difference between I_t(p_t) and the four neighbors of I_s(p_s), which would inhibit training if the correct p_s (projected using the ground-truth depth and pose) is located in a low-texture region or far from the current estimation. This is a well known issue in motion estimation [3]. Empirically, we found two strategies to be effective for overcoming this issue: 1) using a convolutional encoder-decoder architecture with a small bottleneck for the depth network that implicitly constrains the output to be globally smooth and facilitates gradients to propagate from meaningful regions to nearby regions; 2) explicit multi-scale and smoothness loss (e.g., as in [14, 16]) that allows gradients to be derived from larger spatial regions directly. We adopt the second strategy in this work as it is less sensitive to architectural choices. For smoothness, we minimize the L1 norm of the second-order gradients for the predicted depth maps (similar to [48]).

# 3.5. Network architecture
Single-view depth For single-view depth prediction, we adopt the DispNet architecture proposed in [35] that is mainly based on an encoder-decoder design with skip connections and multi-scale side predictions (see Figure 4). All conv layers are followed by ReLU activation except for the prediction layers, where we use 1/(α · sigmoid(x) + β) with α = 10 and β = 0.01 to constrain the predicted depth to be always positive within a reasonable range. We also experimented with using multiple views as input to the depth network, but did not find this to improve the results. This is in line with the observations in [47], where optical flow constraints need to be enforced to utilize multiple views effectively.
Pose The input to the pose estimation network is the target view concatenated with all the source views (along the color channels), and the outputs are the relative poses between the target view and each of the source views. The network consists of 7 stride-2 convolutions followed by a 1×1 convolution with 6·(N−1) output channels (corresponding to 3 Euler angles and 3-D translation for each source view). Finally, global average pooling is applied to aggregate predictions at all spatial locations. All conv layers are followed by ReLU except for the last layer where no nonlinear activation is applied.
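The footnote in Sec. 3.1 notes that the 6-DoF output is converted to a 4×4 transformation matrix; a minimal sketch of that conversion is below. The rotation convention (R = Rz · Ry · Rx) is an assumption, not necessarily the one used in the released code.

```python
import numpy as np

def pose_vec_to_mat(euler_xyz, translation):
    """Convert 3 Euler angles and a 3-D translation into a 4x4 transformation matrix."""
    rx, ry, rz = euler_xyz
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx        # assumed rotation order
    T[:3, 3] = translation
    return T
```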
Explainability mask The explainability prediction network shares the first five feature encoding layers with the pose network, followed by 5 deconvolution layers with multi-scale side predictions. All conv/deconv layers are followed by ReLU except for the prediction layers with no nonlinear activation. The number of output channels for each prediction layer is 2·(N−1), with every two channels normalized by softmax to obtain the explainability prediction for the corresponding source-target pair (the second channel after normalization is Ê_s and used in computing the loss in Eq. 3).
# 4. Experiments
Here we evaluate the performance of our system, and compare with prior approaches on single-view depth as well as ego-motion estimation. We mainly use the KITTI dataset [15] for benchmark- ing, but also use the Make3D dataset [42] for evaluating cross- dataset generalization ability.
Training Details We implemented the system using the publicly available TensorFlow [1] framework. For all the experiments, we set λ_s = 0.5/l (l is the downscaling factor for the corresponding scale) and λ_e = 0.2. During training, we used batch normalization [21] for all the layers except for the output layers, and the Adam [28] optimizer with β_1 = 0.9, β_2 = 0.999, learning rate of 0.0002 and mini-batch size of 4. The training typically converges after about 150K iterations. All the experiments are performed with image sequences captured with a monocular camera. We resize the images to 128 × 416 during training, but both the depth and pose networks can be run fully-convolutionally for images of arbitrary size at test time.
# 4.1. Single-view depth estimation
We train our system on the split provided by [7], and exclude all the frames from the testing scenes as well as static sequences with mean optical flow magnitude less than 1 pixel for training. We fix the length of image sequences to be 3 frames, and treat the central frame as the target view and the ±1 frames as the source views. We use images captured by both color cameras, but treated them independently when forming training sequences. This results in a total of 44,540 sequences, out of which we use 40,109 for training and 4,431 for validation.
To the best of our knowledge, no previous systems exist that learn single-view depth estimation in an unsupervised manner from monocular videos. Nonetheless, here we provide comparison with prior methods with depth supervision [7] and recent methods that use calibrated stereo images (i.e. with pose supervision) for
Figure 5. Our sample predictions on the Cityscapes dataset using the model trained on Cityscapes only.
training [14, 16]. Since the depth predicted by our method is defined up to a scale factor, for evaluation we multiply the predicted depth maps by a scalar ŝ that matches the median with the ground-truth, i.e. ŝ = median(D_gt)/median(D_pred).
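This median scaling is a one-liner in practice; the sketch below assumes dense, positive depth arrays of matching shape.

```python
import numpy as np

def median_scaled(pred_depth, gt_depth):
    """Resolve the scale ambiguity of monocular predictions before evaluation:
    multiply by s_hat = median(D_gt) / median(D_pred)."""
    s_hat = np.median(gt_depth) / np.median(pred_depth)
    return pred_depth * s_hat
```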
Similar to [16], we also experimented with ï¬rst pre-training the system on the larger Cityscapes dataset [5] (sample predictions are shown in Figure 5), and then ï¬ne-tune on KITTI, which results in slight performance improvement.
KITTI Here we evaluate the single-view depth performance on the 697 images from the test split of [7]. As shown in Table 1, our unsupervised method performs comparably with several su- pervised methods (e.g. Eigen et al. [7] and Garg et al. [14]), but falls short of concurrent work by Godard et al. [16] that uses cal- ibrated stereo images (i.e. with pose supervision) with left-right cycle consistency loss for training. For future work, it would be in- teresting to see if incorporating the similar cycle consistency loss into our framework could further improve the results. Figure 6 provides examples of visual comparison between our results and some supervised baselines over a variety of examples. One can see that although trained in an unsupervised manner, our results are comparable to that of the supervised baselines, and sometimes preserve the depth boundaries and thin structures such as trees and street lights better.
We show sample predictions made by our initial Cityscapes model and the ï¬nal model (pre-trained on Cityscapes and then ï¬ne-tuned on KITTI) in Figure 7. Due to the domain gap between the two datasets, our Cityscapes model sometimes has difï¬culty in recovering the complete shape of the car/bushes, and mistakes them with distant objects.
We also performed an ablation study of the explainability mod- eling (see Table 1), which turns out only offering a modest per- formance boost. This is likely because 1) most of the KITTI scenes are static without signiï¬cant scene motions, and 2) the oc- clusion/visibility effects only occur in small regions in sequences
Figure 6. Comparison of single-view depth estimation between Eigen et al. [7] (with ground-truth depth supervision), Garg et al. [14] (with ground-truth pose supervision), and ours (unsupervised). The ground-truth depth map is interpolated from sparse measurements for visualization purpose. The last two rows show typical failure cases of our model, which sometimes struggles in vast open scenes and objects close to the front of the camera.
across a short time span (3-frames), which make the explainabil- ity modeling less essential to the success of training. Nonetheless, our explainability prediction network does seem to capture the fac- tors like scene motion and visibility well (see Sec. 4.3), and could potentially be more important for other more challenging datasets.
Make3D To evaluate the generalization ability of our single- view depth model, we directly apply our model trained on Cityscapes + KITTI to the Make3D dataset unseen during train- ing. While there still remains a signiï¬cant performance gap be- tween our method and others supervised using Make3D ground- truth depth (see Table 2), our predictions are able to capture the global scene layout reasonably well without any training on the Make3D images (see Figure 8).
# 4.2. Pose estimation
To evaluate the performance of our pose estimation network, we applied our system to the official KITTI odometry split (containing 11 driving sequences with ground truth odometry obtained through the IMU/GPS readings, which we use for evaluation purpose only), and used sequences 00-08 for training and 09-10 for testing. In this experiment, we fix the length of input image sequences to our system to 5 frames. We compare our ego-motion estimation with two variants of monocular ORB-SLAM [37] (a well-established SLAM system): 1) ORB-SLAM (full), which recovers odometry using all frames of the driving sequence (i.e. allowing loop closure and re-localization), and 2) ORB-SLAM (short), which runs on 5-frame snippets (same as our input setting). Another baseline we compare with is the dataset mean of car motion (using ground-truth odometry) for 5-frame snippets.
Method | Dataset | Supervision | Abs Rel | Sq Rel | RMSE | RMSE log | δ<1.25 | δ<1.25² | δ<1.25³
Train set mean | K | Depth | 0.403 | 5.530 | 8.709 | 0.403 | 0.593 | 0.776 | 0.878
Eigen et al. [7] Coarse | K | Depth | 0.214 | 1.605 | 6.563 | 0.292 | 0.673 | 0.884 | 0.957
Eigen et al. [7] Fine | K | Depth | 0.203 | 1.548 | 6.307 | 0.282 | 0.702 | 0.890 | 0.958
Liu et al. [32] | K | Depth | 0.202 | 1.614 | 6.523 | 0.275 | 0.678 | 0.895 | 0.965
Godard et al. [16] | K | Pose | 0.148 | 1.344 | 5.927 | 0.247 | 0.803 | 0.922 | 0.964
Godard et al. [16] | CS+K | Pose | 0.124 | 1.076 | 5.311 | 0.219 | 0.847 | 0.942 | 0.973
Ours (w/o explainability) | K | — | 0.221 | 2.226 | 7.527 | 0.294 | 0.676 | 0.885 | 0.954
Ours | K | — | 0.208 | 1.768 | 6.856 | 0.283 | 0.678 | 0.885 | 0.957
Ours | CS | — | 0.267 | 2.686 | 7.580 | 0.334 | 0.577 | 0.840 | 0.937
Ours | CS+K | — | 0.198 | 1.836 | 6.565 | 0.275 | 0.718 | 0.901 | 0.960
Garg et al. [14] cap 50m | K | Pose | 0.169 | 1.080 | 5.104 | 0.273 | 0.740 | 0.904 | 0.962
Ours (w/o explainability) cap 50m | K | — | 0.208 | 1.551 | 5.452 | 0.273 | 0.695 | 0.900 | 0.964
Ours cap 50m | K | — | 0.201 | 1.391 | 5.181 | 0.264 | 0.696 | 0.900 | 0.966
Ours cap 50m | CS | — | 0.260 | 2.232 | 6.148 | 0.321 | 0.590 | 0.852 | 0.945
Ours cap 50m | CS+K | — | 0.190 | 1.436 | 4.975 | 0.258 | 0.735 | 0.915 | 0.968
Table 1. Single-view depth results on the KITTI dataset [15] using the split of Eigen et al. [7] (Baseline numbers taken from [16]). For training, K = KITTI, and CS = Cityscapes [5]. All methods we compare with use some form of supervision (either ground-truth depth or calibrated camera pose) during training. Note: results from Garg et al. [14] are capped at 50m depth, so we break these out separately in the lower part of the table.
Figure 7. Comparison of single-view depth predictions on the KITTI dataset by our initial Cityscapes model and the ï¬nal model (pre-trained on Cityscapes and then ï¬ne-tuned on KITTI). The Cityscapes model sometimes makes structural mistakes (e.g. holes on car body) likely due to the domain gap between the two datasets.
Method | Supervision | Abs Rel | Sq Rel | RMSE | RMSE log
Train set mean | Depth | 0.876 | 13.98 | 12.27 | 0.307
Karsch et al. [25] | Depth | 0.428 | 5.079 | 8.389 | 0.149
Liu et al. [33] | Depth | 0.475 | 6.562 | 10.05 | 0.165
Laina et al. [31] | Depth | 0.204 | 1.840 | 5.683 | 0.084
Godard et al. [16] | Pose | 0.544 | 10.94 | 11.76 | 0.193
Ours | — | 0.383 | 5.321 | 10.47 | 0.478
Table 2. Results on the Make3D dataset [42]. Similar to ours, Go- dard et al. [16] do not utilize any of the Make3D data during train- ing, and directly apply the model trained on KITTI+Cityscapes to the test set. Following the evaluation protocol of [16], the errors are only computed where depth is less than 70 meters in a central image crop.
Figure 8. Our sample predictions on the Make3D dataset. Note that our model is trained on KITTI + Cityscapes only, and directly tested on Make3D.
To resolve scale ambiguity during evaluation, we first optimize the scaling factor for the predictions made by each method to best align with the ground truth, and then measure the Absolute Trajectory Error (ATE) [37] as the metric. ATE is computed on 5-frame snippets and averaged over the full sequence.³ As shown in Table 3 and Fig. 9, our method outperforms both baselines (mean odometry and ORB-SLAM (short)) that share the same input setting as ours, but falls short of ORB-SLAM (full), which leverages whole sequences (1591 for seq. 09 and 1201 for seq. 10) for loop closure and re-localization.

For better understanding of our pose estimation results, we show in Figure 9 the ATE curve with varying amount of side-rotation by the car between the beginning and the end of a sequence.
3For evaluating ORB-SLAM (full) we break down the trajectory of the full sequence into 5-frame snippets with the reference coordinate frame adjusted to the central frame of each snippet.
Method | Seq. 09 | Seq. 10
ORB-SLAM (full) | 0.014 ± 0.008 | 0.012 ± 0.011
ORB-SLAM (short) | 0.064 ± 0.141 | 0.064 ± 0.130
Mean Odom. | 0.032 ± 0.026 | 0.028 ± 0.023
Ours | 0.021 ± 0.017 | 0.020 ± 0.015
Table 3. Absolute Trajectory Error (ATE) on the KITTI odome- try split averaged over all 5-frame snippets (lower is better). Our method outperforms baselines with the same input setting, but falls short of ORB-SLAM (full) that uses strictly more data.
[Figure 9: Absolute Translation Error (m) vs. left/right turning magnitude (m), comparing Mean Odom., ORB-SLAM (full), ORB-SLAM (short), and Ours.]
Figure 9. Absolute Trajectory Error (ATE) at different left/right turning magnitude (coordinate difference in the side-direction be- tween the start and ending frame of a testing sequence). Our method performs signiï¬cantly better than ORB-SLAM (short) when side rotation is small, and is comparable with ORB-SLAM (full) across the entire spectrum.
Figure 9 suggests that our method is significantly better than ORB-SLAM (short) when the side-rotation is small (i.e. the car is mostly driving forward), and comparable to ORB-SLAM (full) across the entire spectrum. The large performance gap between ours and ORB-SLAM (short) suggests that our learned ego-motion could potentially be used as an alternative to the local estimation modules in monocular SLAM systems.
# 4.3. Visualizing the explainability prediction
We visualize example explainability masks predicted by our network in Figure 10. The ï¬rst three rows suggest that the network has learned to identify dynamic objects in the scene as unexplain- able by our model, and similarly, rows 4â5 are examples of ob- jects that disappear from the frame in subsequent views. The last two rows demonstrate the potential downside of explainability- weighted loss: the depth CNN has low conï¬dence in predicting thin structures well, and tends to mask them as unexplainable.
# 5. Discussion
We have presented an end-to-end learning pipeline that utilizes the task of view synthesis for supervision of single-view depth and camera pose estimation. The system is trained on unlabeled videos, and yet performs comparably with approaches that require ground-truth depth or pose for training. Despite good performance on the benchmark evaluation, our method is by no means close to solving the general problem of unsupervised learning of 3D scene structure inference. A number of major challenges are yet to be
Figure 10. Sample visualizations of the explainability masks. Highlighted pixels are predicted to be unexplainable by the net- work due to motion (rows 1â3), occlusion/visibility (rows 4â5), or other factors (rows 7â8).
addressed: 1) our current framework does not explicitly estimate scene dynamics and occlusions (although they are implicitly taken into account by the explainability masks), both of which are crit- ical factors in 3D scene understanding. Direct modeling of scene dynamics through motion segmentation (e.g. [48, 40]) could be a potential solution; 2) our framework assumes the camera intrinsics are given, which forbids the use of random Internet videos with un- known camera types/calibration â we plan to address this in future work; 3) depth maps are a simpliï¬ed representation of the under- lying 3D scene. It would be interesting to extend our framework to learn full 3D volumetric representations (e.g. [46]).
Another interesting area for future work would be to investi- gate in more detail the representation learned by our system. In particular, the pose network likely uses some form of image cor- respondence in estimating the camera motion, whereas the depth estimation network likely recognizes common structural features of scenes and objects. It would be interesting to probe these, and investigate the extent to which our network already performs, or could be re-purposed to perform, tasks such as object detection and semantic segmentation.
Acknowledgments: We thank our colleagues, Sudheendra Vijaya- narasimhan, Susanna Ricco, Cordelia Schmid, Rahul Sukthankar, and Ka- terina Fragkiadaki for their help. We also thank the anonymous reviewers for their valuable comments. TZ would like to thank Shubham Tulsiani for helpful discussions, and Clement Godard for sharing the evaluation code. This work is also partially funded by Intel/NSF VEC award IIS-1539099.
# References
[1] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, et al. TensorFlow: Large-scale machine learning on heteroge- neous distributed systems. arXiv preprint arXiv:1603.04467, 2016. 5
[2] P. Agrawal, J. Carreira, and J. Malik. Learning to see by moving. In Int. Conf. Computer Vision, 2015. 2
[3] J. Bergen, P. Anandan, K. Hanna, and R. Hingorani. Hier- In Computer Vi- archical model-based motion estimation. sionECCVâ92, pages 237â252. Springer, 1992. 4
[4] S. E. Chen and L. Williams. View interpolation for image synthesis. In Proceedings of the 20th annual conference on Computer graphics and interactive techniques, pages 279â 288. ACM, 1993. 2
[5] M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele. The Cityscapes dataset for semantic urban scene understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3213â3223, 2016. 5, 7 [6] P. E. Debevec, C. J. Taylor, and J. Malik. Modeling and ren- dering architecture from photographs: A hybrid geometry- and image-based approach. In Proceedings of the 23rd an- nual conference on Computer graphics and interactive tech- niques, pages 11â20. ACM, 1996. 2
[7] D. Eigen, C. Puhrsch, and R. Fergus. Depth map prediction In from a single image using a multi-scale deep network. Advances in Neural Information Processing Systems, 2014. 2, 5, 6, 7
[8] C. Fehn. Depth-image-based rendering (dibr), compression, and transmission for a new approach on 3d-tv. In Electronic Imaging 2004, pages 93â104. International Society for Op- tics and Photonics, 2004. 3
[9] A. Fitzgibbon, Y. Wexler, and A. Zisserman. Image-based Int. Journal of Com- rendering using image-based priors. puter Vision, 63(2):141â151, 2005. 2
[10] J. Flynn, I. Neulander, J. Philbin, and N. Snavely. Deep- Stereo: Learning to predict new views from the worldâs im- agery. In Computer Vision and Pattern Recognition, 2016. 1, 2, 3
[11] D. F. Fouhey, W. Hussain, A. Gupta, and M. Hebert. Single image 3D without a single 3D image. In Proceedings of the IEEE International Conference on Computer Vision, pages 1053â1061, 2015. 2
[12] Y. Furukawa, B. Curless, S. M. Seitz, and R. Szeliski. To- wards internet-scale multi-view stereo. In Computer Vision and Pattern Recognition, pages 1434â1441. IEEE, 2010. 2
[13] M. Gadelha, S. Maji, and R. Wang. 3d shape induction from 2d views of multiple objects. arXiv preprint arXiv:1612.05872, 2016. 2
[14] R. Garg, V. K. BG, G. Carneiro, and I. Reid. Unsupervised CNN for single view depth estimation: Geometry to the res- cue. In European Conf. Computer Vision, 2016. 1, 2, 3, 4, 5, 6, 7
[15] A. Geiger, P. Lenz, and R. Urtasun. Are we ready for autonomous driving? The KITTI vision benchmark suite.
In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pages 3354â3361. IEEE, 2012. 2, 5, 7 [16] C. Godard, O. Mac Aodha, and G. J. Brostow. Unsupervised monocular depth estimation with left-right consistency. In Computer Vision and Pattern Recognition, 2017. 1, 2, 3, 4, 5, 7
[17] R. Goroshin, J. Bruna, J. Tompson, D. Eigen, and Y. Le- Cun. Unsupervised learning of spatiotemporally coherent metrics. In Proceedings of the IEEE International Confer- ence on Computer Vision, pages 4086â4093, 2015. 2 [18] X. Han, T. Leung, Y. Jia, R. Sukthankar, and A. C. Berg. MatchNet: Unifying feature and metric learning for patch- based matching. In Computer Vision and Pattern Recogni- tion, pages 3279â3286, 2015. 2
[19] A. Handa, M. Bloesch, V. Patraucean, S. Stent, J. McCor- mac, and A. Davison. gvnn: Neural network library for ge- ometric computer vision. arXiv preprint arXiv:1607.07405, 2016. 2
[20] D. Hoiem, A. A. Efros, and M. Hebert . Automatic photo pop-up. In Proc. SIGGRAPH, 2005. 2
[21] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015. 5
In In- ternational Workshop on Vision Algorithms, pages 267â277. Springer, 1999. 2
[23] M. Jaderberg, K. Simonyan, A. Zisserman, et al. Spatial In Advances in Neural Information transformer networks. Processing Systems, pages 2017â2025, 2015. 3
[24] D. Jayaraman and K. Grauman. Learning image representa- tions tied to egomotion. In Int. Conf. Computer Vision, 2015. 2
[25] K. Karsch, C. Liu, and S. B. Kang. Depth transfer: Depth extraction from video using non-parametric sampling. IEEE transactions on pattern analysis and machine intelligence, 36(11):2144â2158, 2014. 7
[26] A. Kendall, M. Grimes, and R. Cipolla. PoseNet: A convo- lutional network for real-time 6-DOF camera relocalization. In Int. Conf. Computer Vision, pages 2938â2946, 2015. 2
[27] A. Kendall, H. Martirosyan, S. Dasgupta, P. Henry, R. Kennedy, A. Bachrach, and A. Bry. End-to-end learning of geometry and context for deep stereo regression. arXiv preprint arXiv:1703.04309, 2017. 2
[28] D. Kingma and J. Ba. Adam: A method for stochastic opti- mization. arXiv preprint arXiv:1412.6980, 2014. 5
[29] T. D. Kulkarni, W. F. Whitney, P. Kohli, and J. Tenenbaum. Deep convolutional inverse graphics network. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems, pages 2539â2547. Curran Associates, Inc., 2015. 2
[30] Y. Kuznietsov, J. St¨uckler, and B. Leibe. Semi-supervised deep learning for monocular depth map prediction. arXiv preprint arXiv:1702.02706, 2017. 2
[31] I. Laina, C. Rupprecht, V. Belagiannis, F. Tombari, and N. Navab. Deeper depth prediction with fully convolutional residual networks. In 3D Vision (3DV), 2016 Fourth Interna- tional Conference on, pages 239â248. IEEE, 2016. 7
[32] F. Liu, C. Shen, G. Lin, and I. Reid. Learning depth from sin- gle monocular images using deep convolutional neural ï¬elds. IEEE transactions on pattern analysis and machine intelli- gence, 38(10):2024â2039, 2016. 7
[33] M. Liu, M. Salzmann, and X. He. Discrete-continuous depth estimation from a single image. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 716â723, 2014. 7
[34] M. M. Loper and M. J. Black. OpenDR: An approximate differentiable renderer. In European Conf. Computer Vision, pages 154â169. Springer, 2014. 2
[35] N. Mayer, E. Ilg, P. Hausser, P. Fischer, D. Cremers, A. Dosovitskiy, and T. Brox. A large dataset to train con- volutional networks for disparity, optical ï¬ow, and scene In Proceedings of the IEEE Conference ï¬ow estimation. on Computer Vision and Pattern Recognition, pages 4040â 4048, 2016. 4
[36] I. Misra, C. L. Zitnick, and M. Hebert. Shufï¬e and learn: unsupervised learning using temporal order veriï¬cation. In European Conference on Computer Vision, pages 527â544. Springer, 2016. 2
[37] R. Mur-Artal, J. M. M. Montiel, and J. D. Tardos. ORB- SLAM: a versatile and accurate monocular SLAM system. IEEE Transactions on Robotics, 31(5), 2015. 6, 7
[38] R. A. Newcombe, S. J. Lovegrove, and A. J. Davison. In Int. DTAM: Dense tracking and mapping in real-time. Conf. Computer Vision, pages 2320â2327. IEEE, 2011. 2
[39] D. Pathak, R. Girshick, P. Doll´ar, T. Darrell, and B. Hariha- ran. Learning features by watching objects move. In CVPR, 2017. 2
[40] R. Ranftl, V. Vineet, Q. Chen, and V. Koltun. Dense monoc- ular depth estimation in complex dynamic scenes. In Pro- ceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4058â4066, 2016. 8
[41] D. J. Rezende, S. A. Eslami, S. Mohamed, P. Battaglia, M. Jaderberg, and N. Heess. Unsupervised learning of 3d structure from images. In Advances In Neural Information Processing Systems, pages 4997â5005, 2016. 2
[42] A. Saxena, M. Sun, and A. Y. Ng. Make3D: Learning 3D scene structure from a single still image. Pattern Analysis and Machine Intelligence, 31(5):824â840, May 2009. 2, 5, 7 [43] S. M. Seitz and C. R. Dyer. View morphing. In Proceedings of the 23rd annual conference on Computer graphics and interactive techniques, pages 21â30. ACM, 1996. 2
[44] R. Szeliski. Prediction error as a quality metric for motion and stereo. In Int. Conf. Computer Vision, volume 2, pages 781â788. IEEE, 1999. 1
[45] M. Tatarchenko, A. Dosovitskiy, and T. Brox. Multi-view 3d models from single images with a convolutional network. In European Conference on Computer Vision, pages 322â337. Springer, 2016. 2
[46] S. Tulsiani, T. Zhou, A. A. Efros, and J. Malik. Multi-view supervision for single-view reconstruction via differentiable ray consistency. In Computer Vision and Pattern Recogni- tion, 2017. 2, 8
[47] B. Ummenhofer, H. Zhou, J. Uhrig, N. Mayer, E. Ilg, A. Dosovitskiy, and T. Brox. DeMoN: Depth and mo-
tion network for learning monocular stereo. arXiv preprint arXiv:1612.02401, 2016. 4
[48] S. Vijayanarasimhan, S. Ricco, C. Schmid, R. Sukthankar, and K. Fragkiadaki. SfM-Net: Learning of structure and mo- tion from video. arXiv preprint, 2017. 2, 4, 8
[49] X. Wang and A. Gupta. Unsupervised learning of visual rep- resentations using videos. In Proceedings of the IEEE Inter- national Conference on Computer Vision, pages 2794â2802, 2015. 2
[50] C. Wu. VisualSFM: A visual structure from motion system. http://ccwu.me/vsfm, 2011. 2
[51] J. Xie, R. B. Girshick, and A. Farhadi. Deep3D: Fully au- tomatic 2D-to-3D video conversion with deep convolutional neural networks. In European Conf. Computer Vision, 2016. 2
[52] X. Yan, J. Yang, E. Yumer, Y. Guo, and H. Lee. Perspective transformer nets: Learning single-view 3d object reconstruc- tion without 3d supervision. In Advances in Neural Informa- tion Processing Systems, pages 1696â1704, 2016. 2
[53] J. Zbontar and Y. LeCun. Stereo matching by training a con- volutional neural network to compare image patches. Jour- nal of Machine Learning Research, 17(1-32):2, 2016. 2 [54] T. Zhou, S. Tulsiani, W. Sun, J. Malik, and A. A. Efros. View synthesis by appearance ï¬ow. In European Conference on Computer Vision, pages 286â301. Springer, 2016. 2, 3 [55] C. L. Zitnick, S. B. Kang, M. Uyttendaele, S. Winder, and R. Szeliski. High-quality video view interpolation using a In ACM Transactions on Graphics layered representation. (TOG), volume 23, pages 600â608. ACM, 2004. 2 | {
"id": "1502.03167"
} |
1704.07138 | Lexically Constrained Decoding for Sequence Generation Using Grid Beam Search | We present Grid Beam Search (GBS), an algorithm which extends beam search to
allow the inclusion of pre-specified lexical constraints. The algorithm can be
used with any model that generates a sequence $ \mathbf{\hat{y}} =
\{y_{0}\ldots y_{T}\} $, by maximizing $ p(\mathbf{y} | \mathbf{x}) =
\prod\limits_{t}p(y_{t} | \mathbf{x}; \{y_{0} \ldots y_{t-1}\}) $. Lexical
constraints take the form of phrases or words that must be present in the
output sequence. This is a very general way to incorporate additional knowledge
into a model's output without requiring any modification of the model
parameters or training data. We demonstrate the feasibility and flexibility of
Lexically Constrained Decoding by conducting experiments on Neural
Interactive-Predictive Translation, as well as Domain Adaptation for Neural
Machine Translation. Experiments show that GBS can provide large improvements
in translation quality in interactive scenarios, and that, even without any
user input, GBS can be used to achieve significant gains in performance in
domain adaptation scenarios. | http://arxiv.org/pdf/1704.07138 | Chris Hokamp, Qun Liu | cs.CL | Accepted as a long paper at ACL 2017 | null | cs.CL | 20170424 | 20170502 |
# Lexically Constrained Decoding for Sequence Generation Using Grid Beam Search
# Chris Hokamp ADAPT Centre Dublin City University chris.hokamp@computing.dcu.ie
# Qun Liu ADAPT Centre Dublin City University qun.liu@dcu.ie
# Abstract
We present Grid Beam Search (GBS), an algorithm which extends beam search to allow the inclusion of pre-specified lexical constraints. The algorithm can be used with any model that generates a sequence $\hat{\mathbf{y}} = \{y_0 \ldots y_T\}$, by maximizing $p(\mathbf{y}|\mathbf{x}) = \prod_t p(y_t|\mathbf{x}; \{y_0 \ldots y_{t-1}\})$. Lexical constraints take the form of phrases or words that must be present in the output sequence. This is a very general way to incorporate additional knowledge into a model's output without requiring any modification of the model parameters or training data. We demonstrate the feasibility and flexibility of Lexically Constrained Decoding by conducting experiments on Neural Interactive-Predictive Translation, as well as Domain Adaptation for Neural Machine Translation. Experiments show that GBS can provide large improvements in translation quality in interactive scenarios, and that, even without any user input, GBS can be used to achieve significant gains in performance in domain adaptation scenarios.
time. Humans can provide corrections after view- ing a systemâs initial output, or separate classiï¬- cation models may be able to predict parts of the output with high conï¬dence. When the domain of the input is known, a domain terminology may be employed to ensure speciï¬c phrases are present in a systemâs predictions. Our goal in this work is to ï¬nd a way to force the output of a model to contain such lexical constraints, while still taking advan- tage of the distribution learned from training data. For Machine Translation (MT) usecases in par- ticular, ï¬nal translations are often produced by combining automatically translated output with user Examples include Post-Editing (PE) (Koehn, 2009; Specia, 2011) and Interactive- Predictive MT (Foster, 2002; Barrachina et al., 2009; Green, 2014). These interactive scenarios can be uniï¬ed by considering user inputs to be lex- ical constraints which guide the search for the op- timal output sequence.
In this paper, we formalize the notion of lexi- cal constraints, and propose a decoding algorithm which allows the speciï¬cation of subsequences that are required to be present in a modelâs out- put. Individual constraints may be single tokens or multi-word phrases, and any number of constraints may be speciï¬ed simultaneously.
# Introduction
The output of many natural language processing models is a sequence of text. Examples include automatic summarization (Rush et al., 2015), ma- chine translation (Koehn, 2010; Bahdanau et al., 2014), caption generation (Xu et al., 2015), and di- alog generation (Serban et al., 2016), among oth- ers.
Although we focus upon interactive applica- tions for MT in our experiments, lexically con- to any scenario strained decoding is relevant where a model is asked to generate a sequence Ëy = {y0 . . . yT } given both an input x, and a set {c0...cn}, where each ci is a sub-sequence {ci0 . . . cij}, that must appear somewhere in Ëy. This makes our work applicable to a wide range of text generation scenarios, including image de- scription, dialog generation, abstractive summa- rization, and question answering.
In some real-world scenarios, additional infor- mation that could inform the search for the opti- mal output sequence may be available at inference
The rest of this paper is organized as follows: Section 2 gives the necessary background for our
[Figure 1 graphic: the decoding grid for the input "Rights protection should begin before their departure .", showing Start/Continue/Generate steps covering two German lexical constraints.]
Figure 1: A visualization of the decoding process for an actual example from our English-German MT experiments. The output token at each timestep appears at the top of the ï¬gure, with lexical constraints enclosed in boxes. Generation is shown in blue, Starting new constraints in green, and Continuing constraints in red. The function used to create the hypothesis at each timestep is written at the bottom. Each box in the grid represents a beam; a colored strip inside a beam represents an individual hypothesis in the beamâs k-best stack. Hypotheses with circles inside them are closed, all other hypotheses are open. (Best viewed in colour).
discussion of GBS, Section 3 discusses the lex- ically constrained decoding algorithm in detail, Section 4 presents our experiments, and Section 5 gives an overview of closely related work.
and the already-generated symbols $\{y_0 \ldots y_{t-1}\}$. However, greedy selection of the most probable output at each timestep, i.e.:
# 2 Background: Beam Search for Sequence Generation
$\hat{y}_t = \underset{y_i \in \{v\}}{\operatorname{argmax}}\; p(y_i|\mathbf{x}; \{y_0 \ldots y_{t-1}\}), \qquad (3)$
Under a model parameterized by θ, let the best output sequence Ëy given input x be Eq. 1.
$\hat{\mathbf{y}} = \underset{\mathbf{y} \in \{\mathbf{y}^{[T]}\}}{\operatorname{argmax}}\; p_\theta(\mathbf{y}|\mathbf{x}), \qquad (1)$
where we use {y[T]} to denote the set of all se- quences of length T . Because the number of pos- sible sequences for such a model is |v|T , where |v| is the number of output symbols, the search for Ëy can be made more tractable by factorizing pθ(y|x) into Eq. 2:
$p_\theta(\mathbf{y}|\mathbf{x}) = \prod_{t=0}^{T} p_\theta(y_t|\mathbf{x}; \{y_0 \ldots y_{t-1}\}). \qquad (2)$
The standard approach is thus to generate the output sequence from beginning to end, condition- ing the output at each timestep upon the input x,
risks making locally optimal decisions which are actually globally sub-optimal. On the other hand, an exhaustive exploration of the output space would require scoring |v|T sequences, which is intractable for most real-world models. Thus, a search or decoding algorithm is often used as a compromise between these two extremes. A com- mon solution is to use a heuristic search to at- tempt to ï¬nd the best output efï¬ciently (Pearl, 1984; Koehn, 2010; Rush et al., 2013). The key idea is to discard bad options early, while trying to avoid discarding candidates that may be locally risky, but could eventually result in the best overall output.
Beam search (Och and Ney, 2004) is probably the most popular search algorithm for decoding se- quences. Beam search is simple to implement, and is ï¬exible in the sense that the semantics of the
Figure 2: Different structures for beam search. Boxes repre- sent beams which hold k-best lists of hypotheses. (A) Chart Parsing using SCFG rules to cover spans in the input. (B) Source coverage as used in PB-SMT. (C) Sequence timesteps (as used in Neural Sequence Models), GBS is an extension of (C). In (A) and (B), hypotheses are ï¬nished once they reach the ï¬nal beam. In (C), a hypothesis is only complete if it has generated an end-of-sequence (EOS) symbol.
graph of beams can be adapted to take advantage of additional structure that may be available for speciï¬c tasks. For example, in Phrase-Based Sta- tistical MT (PB-SMT) (Koehn, 2010), beams are organized by the number of source words that are covered by the hypotheses in the beam â a hypoth- esis is âï¬nishedâ when it has covered all source words. In chart-based decoding algorithms such as CYK, beams are also tied to coverage of the input, but are organized as cells in a chart, which facili- tates search for the optimal latent structure of the output (Chiang, 2007). Figure 2 visualizes three common ways to structure search. (A) and (B) de- pend upon explicit structural information between the input and output, (C) only assumes that the output is a sequence where later symbols depend upon earlier ones. Note also that (C) corresponds exactly to the bottom rows of Figures 1 and 3.
With the recent success of neural models for text generation, beam search has become the de-facto choice for decoding optimal output se- quences (Sutskever et al., 2014). However, with neural sequence models, we cannot organize beams by their explicit coverage of the input. A simpler alternative is to organize beams by output timesteps from t0 · · · tN , where N is a hyperpa- rameter that can be set heuristically, for example by multiplying a factor with the length of the in- put to make an educated guess about the maximum length of the output (Sutskever et al., 2014). Out- put sequences are generally considered complete once a special âend-of-sentenceâ(EOS) token has been generated. Beam size in these models is also typically kept small, and recent work has shown
Figure 3: Visualizing the lexically constrained decoderâs complete search graph. Each rectangle represents a beam containing k hypotheses. Dashed (diagonal) edges indicate starting or continuing constraints. Horizontal edges repre- sent generating from the modelâs distribution. The horizontal axis covers the timesteps in the output sequence, and the ver- tical axis covers the constraint tokens (one row for each token in each constraint). Beams on the top level of the grid contain hypotheses which cover all constraints.
that the performance of some architectures can ac- tually degrade with larger beam size (Tu et al., 2016).
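To make the time-structured search of panel (C) concrete, a minimal sketch of standard beam search over output timesteps is given below. This is our own illustration rather than any particular toolkit's implementation; `score_next` is an assumed stand-in for the model, returning log-probabilities for the next token given a prefix.

```python
def beam_search(score_next, bos, eos, beam_size=10, max_len=50):
    """Standard time-step beam search for a left-to-right sequence model.

    score_next(prefix) is assumed to return a dict {token: log_prob} over
    the vocabulary given the prefix (a tuple of tokens).
    """
    beam = [(0.0, (bos,))]              # each hypothesis is (log_prob, tokens)
    finished = []
    for _ in range(max_len):
        candidates = []
        for logp, prefix in beam:
            if prefix[-1] == eos:       # hypothesis already ended; set it aside
                finished.append((logp, prefix))
                continue
            for tok, tok_logp in score_next(prefix).items():
                candidates.append((logp + tok_logp, prefix + (tok,)))
        if not candidates:
            break
        # Pruning step: keep only the beam_size best partial hypotheses.
        beam = sorted(candidates, key=lambda c: c[0], reverse=True)[:beam_size]
    finished.extend(h for h in beam if h[1][-1] == eos)
    best = finished if finished else beam
    return max(best, key=lambda c: c[0])    # highest-scoring hypothesis
```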
# 3 Grid Beam Search
Our goal is to organize decoding in such a way that we can constrain the search space to outputs which contain one or more pre-speciï¬ed sub-sequences. We thus wish to use a modelâs distribution both to âplaceâ lexical constraints correctly, and to gener- ate the parts of the output which are not covered by the constraints.
Algorithm 1 presents the pseudo-code for lex- ically constrained decoding, see Figures 1 and 3 for visualizations of the search process. Beams in the grid are indexed by t and c. The t vari- able tracks the timestep of the search, while the c variable indicates how many constraint tokens are covered by the hypotheses in the current beam. Note that each step of c covers a single constraint token. In other words, constraints is an array of sequences, where individual tokens can be indexed as constraintsij, i.e. tokenj in constrainti. The numC parameter in Algorithm 1 represents the to- tal number of tokens in all constraints.
The hypotheses in a beam can be separated into two types (see lines 9-11 and 15-19 of Algo- rithm 1):
1. open hypotheses can either generate from the modelâs distribution, or start available con- straints,
2. closed hypotheses can only generate the next
Algorithm 1 Pseudo-code for Grid Beam Search, note that t and c indices are 0-based

 1: procedure CONSTRAINEDSEARCH(model, input, constraints, maxLen, numC, k)
 2:   startHyp ← model.getStartHyp(input, constraints)
 3:   Grid ← initGrid(maxLen, numC, k)                          ▷ initialize beams in grid
 4:   Grid[0][0] = startHyp
 5:   for t = 1, t++, t < maxLen do
 6:     for c = max(0, (numC + t) − maxLen), c++, c ≤ min(t, numC) do
 7:       n, s, g = ∅
 8:       for each hyp ∈ Grid[t−1][c] do
 9:         if hyp.isOpen() then
10:           g ← g ∪ model.generate(hyp, input, constraints)    ▷ generate new open hyps
11:         end if
12:       end for
13:       if c > 0 then
14:         for each hyp ∈ Grid[t−1][c−1] do
15:           if hyp.isOpen() then
16:             n ← n ∪ model.start(hyp, input, constraints)     ▷ start new constrained hyps
17:           else
18:             s ← s ∪ model.continue(hyp, input, constraints)  ▷ continue unfinished
19:           end if
20:         end for
21:       end if
22:       Grid[t][c] = k-argmax_{h ∈ n ∪ s ∪ g} model.score(h)   ▷ k-best scoring hypotheses stay on the beam
23:     end for
24:   end for
25:   topLevelHyps = Grid[:][numC]                               ▷ get hyps in top-level beams
26:   finishedHyps = hasEOS(topLevelHyps)                        ▷ finished hyps have generated the EOS token
27:   bestHyp = argmax_{h ∈ finishedHyps} model.score(h)
28:   return bestHyp
29: end procedure
token for a currently unfinished constraint.

At each step of the search, the beam at Grid[t][c] is filled with candidates which may be created in three ways:
1. the open hypotheses in the beam to the left (Grid[t−1][c]) may generate continuations from the model's distribution $p_\theta(y_i|\mathbf{x}, \{y_0 \ldots y_{i-1}\})$,

2. the open hypotheses in the beam to the left and below (Grid[t−1][c−1]) may start new constraints,

3. the closed hypotheses in the beam to the left and below (Grid[t−1][c−1]) may continue constraints.
Therefore, the model in Algorithm 1 implements an interface with three functions: generate, start, and continue, which build new hypotheses in each of the three ways. Note that the scoring function of the model does not need to be aware of the existence of constraints, but it may be, for example via a feature which indicates if a hypothesis is part of a constraint or not.

The beams at the top level of the grid (beams where c = numConstraints) contain hypotheses which cover all of the constraints. Once a hypothesis on the top level generates the EOS token, it can be added to the set of finished hypotheses. The highest scoring hypothesis in the set of finished hypotheses is the best sequence which covers all constraints.1
1Our implementation of GBS is available at https: //github.com/chrishokamp/constrained_ decoding
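To complement Algorithm 1, the following is a minimal Python sketch of the grid-filling procedure (our own illustration, not the released implementation linked in the footnote). The model methods `get_start_hyp`, `generate`, `start`, `continue_` and `score`, and the hypothesis methods `is_open` and `has_eos`, are assumed names mirroring the interface described above; `continue_` is renamed only because `continue` is a Python keyword.

```python
def grid_beam_search(model, inp, constraints, max_len, k):
    """Fill a (timestep x constraint-coverage) grid of beams as in Algorithm 1."""
    num_c = sum(len(c) for c in constraints)          # total number of constraint tokens
    grid = {(0, 0): [model.get_start_hyp(inp, constraints)]}

    for t in range(1, max_len):
        for c in range(max(0, (num_c + t) - max_len), min(t, num_c) + 1):
            new_hyps = []
            # (1) open hypotheses to the left generate from the model's distribution
            for hyp in grid.get((t - 1, c), []):
                if hyp.is_open():
                    new_hyps += model.generate(hyp, inp, constraints)
            # (2)+(3) hypotheses to the left-and-below start or continue constraints
            if c > 0:
                for hyp in grid.get((t - 1, c - 1), []):
                    if hyp.is_open():
                        new_hyps += model.start(hyp, inp, constraints)
                    else:
                        new_hyps += model.continue_(hyp, inp, constraints)
            # keep only the k best-scoring hypotheses in this beam
            grid[(t, c)] = sorted(new_hyps, key=model.score, reverse=True)[:k]

    # finished hypotheses live in the top row (all constraints covered) and end in EOS
    top = [h for t in range(max_len) for h in grid.get((t, num_c), []) if h.has_eos()]
    return max(top, key=model.score) if top else None
```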
# 3.1 Multi-token Constraints
By distinguishing between open and closed hy- potheses, we can allow for arbitrary multi-token phrases in the search. Thus, the set of constraints for a particular output may include both individ- ual tokens and phrases. Each hypothesis main- tains a coverage vector to ensure that constraints cannot be repeated in a search path â hypotheses which have already covered constrainti can only generate, or start constraints that have not yet been covered.
Note also that discontinuous lexical constraints, such as phrasal verbs in English or German, are easy to incorporate into GBS, by adding filters to the search, which require that one or more con- ditions must be met before a constraint can be used. For example, adding the phrasal verb âask (someone) outâ as a constraint would mean using âaskâ as constraint and âoutâ as constraint, with two filters: one requiring that constraint, cannot be used before constraintg, and another requiring that there must be at least one generated token between the constraints.
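As an illustration of the bookkeeping described in this subsection, one possible (hypothetical) representation of the per-hypothesis constraint state, with a coverage vector and the open/closed distinction, is sketched below; the field and method names are our own, chosen for illustration.

```python
from dataclasses import dataclass

@dataclass
class ConstraintState:
    """Bookkeeping for the lexical constraints attached to one hypothesis."""
    constraints: list            # e.g. [["gegen", "das", "Rauchen"], ["nahm"]]
    coverage: list = None        # per constraint: number of tokens already emitted
    active: int = None           # index of the constraint being continued, or None

    def __post_init__(self):
        if self.coverage is None:
            self.coverage = [0] * len(self.constraints)

    def is_open(self):
        # Open: not in the middle of emitting a multi-token constraint.
        return self.active is None

    def available(self):
        # Constraints that may still be started (they cannot repeat in one path).
        return [i for i, c in enumerate(self.constraints) if self.coverage[i] == 0]

    def start(self, i):
        new = ConstraintState(self.constraints, list(self.coverage), None)
        new.coverage[i] = 1
        # Stay closed until the whole phrase has been emitted.
        new.active = i if new.coverage[i] < len(new.constraints[i]) else None
        return new

    def continue_(self):
        i = self.active
        new = ConstraintState(self.constraints, list(self.coverage), None)
        new.coverage[i] += 1
        new.active = i if new.coverage[i] < len(new.constraints[i]) else None
        return new
```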
# 3.2 Subword Units
Both the computation of the score for a hypoth- esis, and the granularity of the tokens (character, subword, word, etc...) are left to the underlying model. Because our decoder can handle arbitrary constraints, there is a risk that constraints will con- tain tokens that were never observed in the training data, and thus are unknown by the model. Espe- cially in domain adaptation scenarios, some user- speciï¬ed constraints are very likely to contain un- seen tokens. Subword representations provide an elegant way to circumvent this problem, by break- ing unknown or rare tokens into character n-grams which are part of the modelâs vocabulary (Sen- nrich et al., 2016; Wu et al., 2016). In the ex- periments in Section 4, we use this technique to ensure that no input tokens are unknown, even if a constraint contains words which never appeared in the training data.2
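As a toy illustration of the idea (not the BPE procedure of Sennrich et al. (2016) that is actually used in Section 4), an unseen constraint token can be broken into in-vocabulary units, for example by greedy longest-match segmentation:

```python
def segment_with_subwords(token, vocab, sep="@@"):
    """Greedy longest-match segmentation of a (possibly unseen) token into
    subword units from `vocab`. Only an illustration of the fallback idea;
    unseen single characters remain unknown, as noted in the footnote.
    """
    pieces, rest = [], token
    while rest:
        for end in range(len(rest), 0, -1):
            piece = rest[:end] + (sep if end < len(rest) else "")
            if piece in vocab or end == 1:
                pieces.append(piece)
                rest = rest[end:]
                break
    return pieces

# With a suitable vocabulary this might yield e.g. ["Kosten@@", "vor@@", "anschlag"].
```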
# 3.3 Efficiency
Because the number of beams is multiplied by the number of constraints, the runtime complexity of a naive implementation of GBS is O(ktc). Stan- dard time-based beam search is O(kt); therefore,
2If a character that was not observed in training data is observed at prediction time, it will be unknown. However, we did not observe this in any of our experiments.
some consideration must be given to the efï¬ciency of this algorithm. Note that the beams in each col- umn c of Figure 3 are independent, meaning that GBS can be parallelized to allow all beams at each timestep to be ï¬lled simultaneously. Also, we ï¬nd that the most time is spent computing the states for the hypothesis candidates, so by keeping the beam size small, we can make GBS signiï¬cantly faster.
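Because all beams at timestep t depend only on beams at timestep t−1, the inner loop over c can be mapped in parallel. A sketch of this is shown below, where `fill_beam` is a hypothetical helper standing in for the beam-filling body of the earlier grid_beam_search sketch; in practice one would batch the expensive state computations on the GPU rather than rely on Python threads.

```python
from concurrent.futures import ThreadPoolExecutor

def fill_timestep(grid, t, c_values, fill_beam):
    """Fill all beams at timestep t concurrently.

    Beams at timestep t only read beams from timestep t-1, so the beams for
    different coverage values c are independent of each other. fill_beam(grid, t, c)
    is assumed to return the k-best hypotheses for beam (t, c).
    """
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda c: (c, fill_beam(grid, t, c)), c_values))
    for c, beam in results:
        grid[(t, c)] = beam
```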
# 3.4 Models
The models used for our experiments are state- of-the-art Neural Machine Translation (NMT) sys- tems using our own implementation of NMT with attention over the source sequence (Bahdanau et al., 2014). We used Blocks and Fuel to im- plement our NMT models (van Merrinboer et al., 2015). To conduct the experiments in the fol- lowing section, we trained baseline translation models for EnglishâGerman (EN-DE), Englishâ French (EN-FR), and EnglishâPortuguese (EN- PT). We created a shared subword representation for each language pair by extracting a vocabulary of 80000 symbols from the concatenated source and target data. See the Appendix for more de- tails on our training data and hyperparameter con- ï¬guration for each language pair. The beamSize parameter is set to 10 for all experiments.
Because our experiments use NMT models, we can now be more explicit about the implementations of the generate, start, and continue functions for this GBS instantiation. For an NMT model at timestep t, generate(hyp_{t−1}) first computes a vector of output probabilities o_t = softmax(g(y_{t−1}, s_i, c_i))3 using the state information available from hyp_{t−1}, and returns the best k continuations, i.e. Eq. 4:
$g_t = \operatorname{k\text{-}argmax}_i\; o_{t_i}. \qquad (4)$

The start and continue functions simply index into the softmax output of the model, selecting specific tokens instead of doing a k-argmax over the entire target language vocabulary. For example, to start constraint $c_i$, we find the score of token $c_{i_0}$, i.e. $o_{t_{c_{i_0}}}$.
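A sketch of how the three functions might read the same softmax vector differently is given below (our own illustration using NumPy; `next_log_probs`, `hyp.extend`, `hyp.score` and the constraint-state fields are assumed names, following the ConstraintState sketch in Section 3.1). Here the constraints are assumed to be given as sequences of vocabulary indices.

```python
import numpy as np

def generate(hyp, next_log_probs, k):
    """Top-k continuations from the full output distribution (Eq. 4)."""
    log_p = next_log_probs(hyp)                       # shape: [vocab_size]
    best = np.argsort(log_p)[::-1][:k]                # k-argmax over the vocabulary
    return [(hyp.extend(tok), hyp.score + log_p[tok]) for tok in best]

def start(hyp, next_log_probs, constraints):
    """Score only the first token of each not-yet-covered constraint."""
    log_p = next_log_probs(hyp)
    out = []
    for i in hyp.state.available():
        tok = constraints[i][0]                       # index directly into the softmax output
        out.append((hyp.extend(tok, start_constraint=i), hyp.score + log_p[tok]))
    return out

def continue_(hyp, next_log_probs, constraints):
    """Score exactly the next token of the constraint currently being emitted."""
    log_p = next_log_probs(hyp)
    i = hyp.state.active
    tok = constraints[i][hyp.state.coverage[i]]       # next unconsumed constraint token
    return [(hyp.extend(tok, continue_constraint=True), hyp.score + log_p[tok])]
```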
# 4 Experiments
# 4.1 Pick-Revise for Interactive Post Editing
Pick-Revise is an interaction cycle for MT Post- Editing proposed by Cheng et al. (2016). Starting
3we use the notation for the g function from Bahdanau et al. (2014)
ITERATION | 0 | 1 | 2 | 3
Strict Constraints
EN-DE | 18.44 | 27.64 (+9.20) | 36.66 (+9.01) | 43.92 (+7.26)
EN-FR | 28.07 | 36.71 (+8.64) | 44.84 (+8.13) | 45.48 (+0.63)
EN-PT* | 15.41 | 23.54 (+8.25) | 31.14 (+7.60) | 35.89 (+4.75)
Relaxed Constraints
EN-DE | 18.44 | 26.43 (+7.98) | 34.48 (+8.04) | 41.82 (+7.34)
EN-FR | 28.07 | 33.8 (+5.72) | 40.33 (+6.53) | 47.0 (+6.67)
EN-PT* | 15.41 | 23.22 (+7.80) | 33.82 (+10.6) | 40.75 (+6.93)
Table 1: Results for four simulated editing cycles using WMT test data. EN-DE uses newstest2013, EN-FR uses newstest2014, and EN-PT uses the Autodesk corpus discussed in Section 4.2. Improvement in BLEU score over the previous cycle is shown in parentheses. * indicates use of our test corpus created from Autodesk post-editing data.
with the original translation hypothesis, a (sim- ulated) user ï¬rst picks a part of the hypothesis which is incorrect, and then provides the correct translation for that portion of the output. The user- provided correction is then used as a constraint for the next decoding cycle. The Pick-Revise process can be repeated as many times as necessary, with a new constraint being added at each cycle.
data that contains the same placeholders which oc- cur in the test data (Crego et al., 2016). The MT system also loses any possibility to model the to- kens in the terminology, since they are represented by abstract tokens such as â(TERM_1)â. An at- tractive alternative is to simply provide term map- pings as constraints, allowing any existing system to adapt to the terminology used in a new test do- main.
We modify the experiments of Cheng et al. (2016) slightly, and assume that the user only pro- vides sequences of up to three words which are missing from the hypothesis.4 To simulate user interaction, at each iteration we chose a phrase of up to three tokens from the reference transla- tion which does not appear in the current MT hy- potheses. In the strict setting, the complete phrase must be missing from the hypothesis. In the re- laxed setting, only the ï¬rst word must be missing. Table 1 shows results for a simulated editing ses- sion with four cycles. When a three-token phrase cannot be found, we backoff to two-token phrases, then to single tokens as constraints. If a hypoth- esis already matches the reference, no constraints are added. By specifying a new constraint of up to three words at each cycle, an increase of over 20 BLEU points is achieved in all language pairs.
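The simulated user interaction can be expressed in a few lines. The sketch below is our reading of the procedure (strict vs. relaxed matching with backoff from three-token phrases to single tokens), not the exact script used to produce Table 1.

```python
def contains_ngram(tokens, phrase):
    n = len(phrase)
    return any(tokens[i:i + n] == phrase for i in range(len(tokens) - n + 1))

def pick_constraint(reference, hypothesis, max_n=3, strict=True):
    """Pick one reference n-gram (n <= max_n) that is missing from the hypothesis.

    strict=True : the complete phrase must be absent from the hypothesis.
    strict=False: only the first word of the phrase must be absent.
    Returns the phrase as a list of tokens, or None if nothing is missing.
    """
    ref, hyp = reference.split(), hypothesis.split()
    for n in range(max_n, 0, -1):       # back off: 3-grams, then 2-grams, then single tokens
        for i in range(len(ref) - n + 1):
            phrase = ref[i:i + n]
            missing = (not contains_ngram(hyp, phrase)) if strict else (phrase[0] not in hyp)
            if missing:
                return phrase
    return None

# The picked phrase is added as an additional constraint for the next GBS cycle.
print(pick_constraint("der kleine Hund schläft tief", "die Katze schläft tief"))
```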
For the target domain data, we use the Autodesk Post-Editing corpus (Zhechev, 2012), which is a dataset collected from actual MT post-editing ses- sions. The corpus is focused upon software local- ization, a domain which is likely to be very dif- ferent from the WMT data used to train our gen- eral domain models. We divide the corpus into ap- proximately 100,000 training sentences, and 1000 test segments, and automatically generate a termi- nology by computing the Pointwise Mutual Infor- mation (PMI) (Church and Hanks, 1990) between source and target n-grams in the training set. We extract all n-grams from length 2-5 as terminology candidates.
$\text{pmi}(x;y) = \log\frac{p(x,y)}{p(x)\,p(y)} \qquad (5)$
# 4.2 Domain Adaptation via Terminology
$\text{npmi}(x;y) = \frac{\text{pmi}(x;y)}{h(x,y)} \qquad (6)$
The requirement for use of domain-speciï¬c termi- nologies is common in real-world applications of MT (Crego et al., 2016). Existing approaches in- corporate placeholder tokens into NMT systems, which requires modifying the pre- and post- pro- cessing of the data, and training the system with
4NMT models do not use explicit alignment between source and target, so we cannot use alignment information to map target phrases to source phrases
Equations 5 and 6 show how we compute the normalized PMI for a terminology candidate pair. The PMI score is normalized to the range [−1, +1] by dividing by the entropy h of the joint probability p(x, y). We then filter the candidates to only include pairs whose PMI is ≥ 0.9, and where both the source and target phrases occur at least five times in the corpus. When source phrases that match the terminology are observed in the test
data, the corresponding target phrase is added to the constraints for that segment. Results are shown in Table 2.
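Equations 5 and 6 can be computed directly from co-occurrence counts over the training segments. The sketch below is our own illustration (using simple per-segment co-occurrence counting, which may differ in detail from the counting used for the experiments).

```python
import math
from collections import Counter
from itertools import product

def ngrams(tokens, n_min=2, n_max=5):
    return {" ".join(tokens[i:i + n])
            for n in range(n_min, n_max + 1)
            for i in range(len(tokens) - n + 1)}

def extract_terminology(parallel_segments, threshold=0.9, min_count=5):
    """Score source/target n-gram pairs with normalized PMI (Eqs. 5-6)."""
    src_cnt, tgt_cnt, joint_cnt = Counter(), Counter(), Counter()
    total = len(parallel_segments)
    for src, tgt in parallel_segments:
        s_grams, t_grams = ngrams(src.split()), ngrams(tgt.split())
        src_cnt.update(s_grams)
        tgt_cnt.update(t_grams)
        joint_cnt.update(product(s_grams, t_grams))   # co-occurrence within a segment

    terminology = {}
    for (s, t), c_joint in joint_cnt.items():
        if src_cnt[s] < min_count or tgt_cnt[t] < min_count:
            continue
        if c_joint >= total:                          # degenerate pair: h(x, y) would be 0
            continue
        p_joint = c_joint / total
        pmi = math.log(p_joint / ((src_cnt[s] / total) * (tgt_cnt[t] / total)))
        npmi = pmi / -math.log(p_joint)               # h(x, y) = -log p(x, y)
        if npmi >= threshold:
            terminology[s] = t
    return terminology
```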
As a sanity check that improvements in BLEU are not merely due to the presence of the terms somewhere in the output, i.e. that the placement of the terms by GBS is reasonable, we also evaluate the results of randomly inserting terms into the baseline output, and of prepending terms to the baseline output.
This simple method of domain adaptation leads to a signiï¬cant improvement in the BLEU score without any human intervention. Surprisingly, even an automatically created terminology com- bined with GBS yields performance improve- ments of approximately +2 BLEU points for En- De and En-Fr, and a gain of almost 14 points for En-Pt. The large improvement for En-Pt is probably due to the training data for this sys- tem being very different from the IT domain (see Appendix). Given the performance improve- ments from our automatically extracted terminol- ogy, manually created domain terminologies with good coverage of the test domain are likely to lead to even greater gains. Using a terminology with GBS is likely to be beneï¬cial in any setting where the test domain is signiï¬cantly different from the domain of the modelâs original training data.
System | BLEU
EN-DE
Baseline | 26.17
Random | 25.18 (-0.99)
Beginning | 26.44 (+0.26)
GBS | 27.99 (+1.82)
EN-FR
Baseline | 32.45
Random | 31.48 (-0.97)
Beginning | 34.51 (+2.05)
GBS | 35.05 (+2.59)
EN-PT
Baseline | 15.41
Random | 18.26 (+2.85)
Beginning | 20.43 (+5.02)
GBS | 29.15 (+13.73)
Table 2: BLEU Results for EN-DE, EN-FR, and EN-PT ter- minology experiments using the Autodesk Post-Editing Cor- pus. âRandomâ indicates inserting terminology constraints at random positions in the baseline translation. âBeginningâ indicates prepending constraints to baseline translations.
# 4.3 Analysis
Subjective analysis of decoder output shows that phrases added as constraints are not only placed correctly within the output sequence, but also have global effects upon translation quality. This is a desirable effect for user interaction, since it im- plies that users can bootstrap quality by adding the most critical constraints (i.e. those that are most essential to the output), ï¬rst. Table 3 shows several examples from the experiments in Table 1, where the addition of lexical constraints was able to guide our NMT systems away from initially quite low-scoring hypotheses to outputs which perfectly match the reference translations.
# 5 Related Work
Most related work to date has presented modiï¬ca- tions of SMT systems for speciï¬c usecases which constrain MT output via auxilliary inputs. The largest body of work considers Interactive Ma- chine Translation (IMT): an MT system searches for the optimal target-language sufï¬x given a com- plete source sentence and a desired preï¬x for the target output (Foster, 2002; Barrachina et al., 2009; Green, 2014). IMT can be viewed as sub- case of constrained decoding, where there is only one constraint which is guaranteed to be placed at the beginning of the output sequence. Wuebker et al. (2016) introduce preï¬x-decoding, which modiï¬es the SMT beam search to ï¬rst ensure that the target preï¬x is covered, and only then contin- ues to build hypotheses for the sufï¬x using beams organized by coverage of the remaining phrases in the source segment. Wuebker et al. (2016) and Knowles and Koehn (2016) also present a simple modiï¬cation of NMT models for IMT, enabling models to predict sufï¬xes for user-supplied pre- ï¬xes.
Recently, some attention has also been given to SMT decoding with multiple lexical constraints. The Pick-Revise (PRIMT) (Cheng et al., 2016) framework for Interactive Post Editing introduces the concept of edit cycles. Translators specify con- straints by editing a part of the MT output that is incorrect, and then asking the system for a new hypothesis, which must contain the user-provided correction. This process is repeated, maintain- ing constraints from previous iterations and adding new ones as needed. Importantly, their approach relies upon the phrase segmentation provided by the SMT system. The decoding algorithm can
EN-DE
Source: He was also an anti- smoking activist and took part in several campaigns .
Original Hypothesis: Es war auch ein Anti- Rauch- Aktiv- ist und nahmen an mehreren Kampagnen teil .
Reference: Ebenso setzte er sich gegen das Rauchen ein und nahm an mehreren Kampagnen teil .
Constrained Hypothesis: Ebenso setzte er sich gegen das Rauchen ein und nahm an mehreren Kampagnen teil .
Constraints: (1) Ebenso setzte er (2) gegen das Rauchen (3) nahm

EN-FR
Source: At that point I was no longer afraid of him and I was able to love him .
Original Hypothesis: Je n'avais plus peur de lui et j'étais capable de l'aimer .
Reference: Là je n'ai plus eu peur de lui et j'ai pu l'aimer .
Constrained Hypothesis: Là je n'ai plus eu peur de lui et j'ai pu l'aimer .
Constraints: (1) Là je n'ai (2) j'ai pu (3) eu

EN-PT
Source: Mo- dif- y drain- age features by selecting them individually .
Original Hypothesis: - Já temos as características de extracção de idade , com eles individualmente .
Reference: Modi- fique os recursos de drenagem ao selec- ion- á-los individualmente .
Constrained Hypothesis: Modi- fique os recursos de drenagem ao selec- ion- á-los individualmente .
Constraints: (1) drenagem ao selec- (2) Modi- fique os (3) recursos
Table 3: Manual analysis of examples from lexically constrained decoding experiments. â-â followed by whitespace indicates the internal segmentation of the translation model (see Section 3.2)
only make use of constraints that match phrase boundaries, because constraints are implemented as ârulesâ enforcing that source phrases must be translated as the aligned target phrases that have been selected as constraints. In contrast, our ap- proach decodes at the token level, and is not de- pendent upon any explicit structure in the underly- ing model.
Domingo et al. (2016) also consider an interac- tive scenario where users ï¬rst choose portions of an MT hypothesis to keep, then query for an up- dated translation which preserves these portions. The MT system decodes the source phrases which are not aligned to the user-selected phrases un- til the source sentence is fully covered. This ap- proach is similar to the system of Cheng et al., and uses the âXML inputâ feature in Moses (Koehn et al., 2007).
organized by coverage of the input.
# 6 Conclusion
Lexically constrained decoding is a ï¬exible way to incorporate arbitrary subsequences into the out- put of any model that generates output sequences token-by-token. A wide spectrum of popular text generation models have this characteristic, and GBS should be straightforward to use with any model that already uses beam search.
In translation interfaces where translators can provide corrections to an existing hypothesis, these user inputs can be used as constraints, gener- ating a new output each time a user ï¬xes an error. By simulating this scenario, we have shown that such a workï¬ow can provide a large improvement in translation quality at each iteration.
Some recent work considers the inclusion of soft lexical constraints directly into deep models for dialog generation, and special cases, such as recipe generation from a list of ingredients (Wen et al., 2015; Kiddon et al., 2016). Such constraint- aware models are complementary to our work, and could be used with GBS decoding without any change to the underlying models.
To the best of our knowledge, ours is the ï¬rst work which considers general lexically con- strained decoding for any model which outputs sequences, without relying upon alignments be- tween input and output, and without using a search
By using a domain-speciï¬c terminology to gen- erate target-side constraints, we have shown that a general domain model can be adapted to a new domain without any retraining. Surprisingly, this simple method can lead to signiï¬cant performance gains, even when the terminology is created auto- matically.
In future work, we hope to evaluate GBS with models outside of MT, such as automatic sum- marization, image captioning or dialog genera- tion. We also hope to introduce new constraint- aware models, for example via secondary attention mechanisms over lexical constraints.
# Acknowledgments
This project has received funding from Science Foundation Ireland in the ADAPT Centre for Dig- ital Content Technology (www.adaptcentre.ie) at Dublin City University funded under the SFI Re- search Centres Programme (Grant 13/RC/2106) co-funded under the European Regional Develop- ment Fund and the European Union Horizon 2020 research and innovation programme under grant agreement 645452 (QT21). We thank the anony- mous reviewers, as well as Iacer Calixto, Peyman Passban, and Henry Elder for helpful feedback on early versions of this work.
# References
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2014. Neural machine translation by jointly arXiv preprint learning to align and translate. arXiv:1409.0473 .
Sergio Barrachina, Oliver Bender, Francisco Casacu- berta, Jorge Civera, Elsa Cubel, Shahram Khadivi, Antonio Lagarda, Hermann Ney, Jes´us Tom´as, En- rique Vidal, and Juan-Miguel Vilar. 2009. Sta- tistical approaches to computer-assisted transla- Computational Linguistics 35(1):3â28. tion. https://doi.org/10.1162/coli.2008.07-055-R2-06-29.
OndËrej Bojar, Rajen Chatterjee, Christian Federmann, Barry Haddow, Matthias Huck, Chris Hokamp, Philipp Koehn, Varvara Logacheva, Christof Monz, Matteo Negri, Matt Post, Carolina Scarton, Lucia Specia, and Marco Turchi. 2015. Findings of the 2015 workshop on statistical machine translation. In Proceedings of the Tenth Workshop on Statisti- cal Machine Translation. Association for Compu- tational Linguistics, Lisbon, Portugal, pages 1â46. http://aclweb.org/anthology/W15-3001.
Shanbo Cheng, Shujian Huang, Huadong Chen, Xinyu Dai, and Jiajun Chen. 2016. PRIMT: A pick- revise framework for interactive machine trans- In NAACL HLT 2016, The 2016 Con- lation. ference of the the North American Chapter of Association for Computational Linguistics: Hu- man Language Technologies, San Diego Califor- nia, USA, June 12-17, 2016. pages 1240â1249. http://aclweb.org/anthology/N/N16/N16-1148.pdf.
David Chiang. 2007. Hierarchical phrase-based Comput. Linguist. 33(2):201â228. translation. https://doi.org/10.1162/coli.2007.33.2.201.
Kyunghyun Cho, Bart van Merri¨enboer, C¸ alar G¨ulc¸ehre, Dzmitry Bahdanau, Fethi Bougares, Hol- ger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoderâdecoder for statistical machine translation. In Proceedings of
the 2014 Conference on Empirical Methods in Nat- ural Language Processing (EMNLP). Association for Computational Linguistics, Doha, Qatar, pages 1724â1734. http://www.aclweb.org/anthology/D14- 1179.
Kenneth Ward Church and Patrick Hanks. 1990. Word association norms, mutual information, and Comput. Linguist. 16(1):22â29. lexicography. http://dl.acm.org/citation.cfm?id=89086.89095.
Josep Maria Crego, Jungi Kim, Guillaume Klein, An- abel Rebollo, Kathy Yang, Jean Senellart, Egor Akhanov, Patrice Brunelle, Aurelien Coquard, Yongchao Deng, Satoshi Enoue, Chiyo Geiss, Joshua Johanson, Ardas Khalsa, Raoum Khiari, Byeongil Ko, Catherine Kobus, Jean Lorieux, Leid- iana Martins, Dang-Chuan Nguyen, Alexandra Pri- ori, Thomas Riccardi, Natalia Segal, Christophe Ser- van, Cyril Tiquet, Bo Wang, Jin Yang, Dakun Zhang, Systranâs Jing Zhou, and Peter Zoldan. 2016. pure neural machine translation systems. CoRR abs/1610.05540. http://arxiv.org/abs/1610.05540.
Miguel Domingo, Alvaro Peris, and Francisco Casacu- berta. 2016. Interactive-predictive translation based on multiple word-segments. Baltic J. Modern Com- puting 4(2):282â291.
George F. Foster. 2002. Text Prediction for Transla- tors. Ph.D. thesis, Montreal, P.Q., Canada, Canada. AAINQ72434.
Spence Green. 2014. Mixed-Initiative Natural Lan- Ph.D. thesis, Stanford, CA, guage Translation. United States.
Chlo´e Kiddon, Luke Zettlemoyer, and Yejin Choi. 2016. text generation with In Proceedings of the neural checklist models. 2016 Conference on Empirical Methods in Natu- ral Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016. pages 329â339. http://aclweb.org/anthology/D/D16/D16-1032.pdf.
Rebecca Knowles and Philipp Koehn. 2016. Neural interactive translation prediction. AMTA 2016, Vol. page 107.
Philipp Koehn. 2009. A process study of computer- aided translation. Machine Translation 23(4):241â 263. https://doi.org/10.1007/s10590-010-9076-3.
Philipp Koehn. 2010. Statistical Machine Translation. Cambridge University Press, New York, NY, USA, 1st edition.
Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, OndËrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Ses- sions. Association for Computational Linguistics,
Stroudsburg, PA, USA, ACL â07, pages 177â180. http://dl.acm.org/citation.cfm?id=1557769.1557821.
The alignment template approach to statistical machine Comput. Linguist. 30(4):417â449. translation. https://doi.org/10.1162/0891201042544884.
Intelligent Search Strategies for Computer Problem Solving. Addison- Wesley Longman Publishing Co., Inc., Boston, MA, USA.
and Michael Collins. 2013. Optimal beam search for machine In Proceedings of the 2013 Confer- translation. ence on Empirical Methods in Natural Language Processing. Association for Computational Linguis- tics, Seattle, Washington, USA, pages 210â221. http://www.aclweb.org/anthology/D13-1022.
Alexander M. Rush, Sumit Chopra, and Jason We- ston. 2015. A neural attention model for abstrac- tive sentence summarization. In Llus Mrquez, Chris Callison-Burch, Jian Su, Daniele Pighin, and Yuval Marton, editors, EMNLP. The Association for Com- putational Linguistics, pages 379â389.
Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words the with subword units. 54th Annual Meeting of the Association for Com- putational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers. http://aclweb.org/anthology/P/P16/P16-1162.pdf.
Iulian V. Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. 2016. Building systems using generative end-to-end dialogue hierarchical neural network models. In Proceedings of the Thirtieth AAAI Conference on Artiï¬cial Intel- ligence. AAAI Press, AAAIâ16, pages 3776â3783. http://dl.acm.org/citation.cfm?id=3016387.3016435.
Jason R. Smith, Herve Saint-amand, Chris Callison- burch, Magdalena Plamada, and Adam Lopez. 2013. Dirt cheap web-scale parallel text from the common In In Proceedings of the Conference of the crawl. Association for Computational Linguistics (ACL.
Lucia Specia. 2011. Exploiting objective annotations for measuring translation post-editing effort. In Pro- ceedings of the European Association for Machine Translation. May.
Ralf Steinberger, Bruno Pouliquen, Anna Widiger, Camelia Ignat, Toma Erjavec, and Dan Tuï¬. 2006. The jrc-acquis: A multilingual aligned parallel cor- pus with 20+ languages. In In Proceedings of the 5th International Conference on Language Resources and Evaluation (LRECâ2006. pages 2142â2147.
Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with the 27th In Proceedings of
International Conference on Neural Informa- tion Processing Systems. MIT Press, Cam- bridge, MA, USA, NIPSâ14, pages 3104â3112. http://dl.acm.org/citation.cfm?id=2969033.2969173.
Zhaopeng Tu, Yang Liu, Lifeng Shang, Xiaohua Liu, and Hang Li. 2016. Neural machine translation with reconstruction. arXiv preprint arXiv:1611.01874 .
Bart van Merrinboer, Dzmitry Bahdanau, Vincent Du- moulin, Dmitriy Serdyuk, David Warde-Farley, Jan Chorowski, and Yoshua Bengio. 2015. Blocks and fuel: Frameworks for deep learning. CoRR abs/1506.00619.
Tsung-Hsien Wen, Milica GaËsi´c, Nikola MrkËsi´c, Pei- Hao Su, David Vandyke, and Steve Young. 2015. Semantically conditioned lstm-based natural lan- guage generation for spoken dialogue systems. In Proceedings of the 2015 Conference on Em- pirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguis- tics.
Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, ukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Googleâs neural machine translation system: Bridging the gap between human and machine translation. CoRR abs/1609.08144. http://arxiv.org/abs/1609.08144.
Joern Wuebker, Spence Green, John DeNero, Sasa Hasan, and Minh-Thang Luong. 2016. Models and inference for preï¬x-constrained machine trans- In Proceedings of the 54th Annual Meet- lation. ing of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Compu- tational Linguistics, Berlin, Germany, pages 66â75. http://www.aclweb.org/anthology/P16-1007.
Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual atten- tion. In David Blei and Francis Bach, editors, Proceedings of the 32nd International Conference on Machine Learning (ICML-15). JMLR Workshop and Conference Proceedings, pages 2048â2057. http://jmlr.org/proceedings/papers/v37/xuc15.pdf.
Matthew D. Zeiler. 2012. ADADELTA: an adap- tive learning rate method. CoRR abs/1212.5701. http://arxiv.org/abs/1212.5701.
Ventsislav Zhechev. 2012. Machine Translation Infras- tructure and Post-editing Performance at Autodesk.
In AMTA 2012 Workshop on Post-Editing Technol- ogy and Practice (WPTP 2012). Association for Ma- chine Translation in the Americas (AMTA), San Diego, USA, pages 87â96.
# A NMT System Conï¬gurations
We train all systems for 500000 iterations, with validation every 5000 steps. The best single model from validation is used in all of the experiments for a language pair. We use ℓ2 regularization on all parameters with α = 1e−5. Dropout is used on the output layers with p(drop) = 0.5. We sort mini-batches by source sentence length, and reshuffle training data after each epoch.
All systems use a bidirectional GRUs (Cho et al., 2014) to create the source representation and GRUs for the decoder transition. We use AdaDelta (Zeiler, 2012) to update gradients, and clip large gradients to 1.0.
Training Configurations

EN-DE: Embedding Size 300 | Recurrent Layers Size 1000 | Source Vocab Size 80000 | Target Vocab Size 90000 | Batch Size 50
EN-FR: Embedding Size 300 | Recurrent Layers Size 1000 | Source Vocab Size 66000 | Target Vocab Size 74000 | Batch Size 40
EN-PT: Embedding Size 200 | Recurrent Layers Size 800 | Source Vocab Size 60000 | Target Vocab Size 74000 | Batch Size 40
# A.1 English-German
Our English-German training corpus consists of 4.4 Million segments from the Europarl (Bojar et al., 2015) and CommonCrawl (Smith et al., 2013) corpora.
# A.2 English-French
Our English-French training corpus consists of 4.9 Million segments from the Europarl and Com- monCrawl corpora.
# A.3 English-Portuguese
Our English-Portuguese training corpus consists of 28.5 Million segments from the Europarl, JRC-
Aquis (Steinberger et al., 2006) and OpenSubti- tles5 corpora.
5http://www.opensubtitles.org/ | {
"id": "1611.01874"
} |
1704.06369 | NormFace: L2 Hypersphere Embedding for Face Verification | Thanks to the recent developments of Convolutional Neural Networks, the
performance of face verification methods has increased rapidly. In a typical
face verification method, feature normalization is a critical step for boosting
performance. This motivates us to introduce and study the effect of
normalization during training. But we find this is non-trivial, despite
normalization being differentiable. We identify and study four issues related
to normalization through mathematical analysis, which yields understanding and
helps with parameter settings. Based on this analysis we propose two strategies
for training using normalized features. The first is a modification of softmax
loss, which optimizes cosine similarity instead of inner-product. The second is
a reformulation of metric learning by introducing an agent vector for each
class. We show that both strategies, and small variants, consistently improve
performance by between 0.2% to 0.4% on the LFW dataset based on two models.
This is significant because the performance of the two models on LFW dataset is
close to saturation at over 98%. Codes and models are released on
https://github.com/happynear/NormFace | http://arxiv.org/pdf/1704.06369 | Feng Wang, Xiang Xiang, Jian Cheng, Alan L. Yuille | cs.CV | camera-ready version | null | cs.CV | 20170421 | 20170726 |
# NormFace: L2 Hypersphere Embedding for Face Verification
Feng Wangâ University of Electronic Science and Technology of China 2006 Xiyuan Ave. Chengdu, Sichuan 611731 feng.wff@gmail.com
Xiang Xiang Johns Hopkins University 3400 N. Charles St. Baltimore, Maryland 21218 xxiang@cs.jhu.edu
Jian Cheng University of Electronic Science and Technology of China 2006 Xiyuan Ave. Chengdu, Sichuan 611731 chengjian@uestc.edu.cn
Alan L. Yuille Johns Hopkins University 3400 N. Charles St. Baltimore, Maryland 21218 alan.yuille@jhu.edu
ABSTRACT Thanks to the recent developments of Convolutional Neural Net- works, the performance of face verification methods has increased rapidly. In a typical face verification method, feature normalization is a critical step for boosting performance. This motivates us to introduce and study the effect of normalization during training. But we find this is non-trivial, despite normalization being differen- tiable. We identify and study four issues related to normalization through mathematical analysis, which yields understanding and helps with parameter settings. Based on this analysis we propose two strategies for training using normalized features. The first is a modification of softmax loss, which optimizes cosine similarity instead of inner-product. The second is a reformulation of metric learning by introducing an agent vector for each class. We show that both strategies, and small variants, consistently improve per- formance by between 0.2% to 0.4% on the LFW dataset based on two models. This is significant because the performance of the two models on LFW dataset is close to saturation at over 98%.
[19] and so on. In the field of face verification, CNNs have already surpassed humansâ abilities on several benchmarks[20, 33].
The most common pipeline for a face verification application involves face detection, facial landmark detection, face alignment, feature extraction, and finally feature comparison. In the feature comparison step, the cosine similarity or equivalently the L2-normalized Euclidean distance is used to measure the similarities between features. The cosine similarity $\frac{\langle x_1, x_2\rangle}{\lVert x_1\rVert\,\lVert x_2\rVert}$ is a similarity measure which is independent of magnitude. It can be seen as the normalized version of the inner-product of two vectors. But in practice the inner product without normalization is the most widely-used similarity measure when training CNN classification models [12, 29, 32]. In other words, the similarity or distance metric used during training is different from that used in the testing phase. To our knowledge, no researcher in the face verification community has clearly explained why the features should be normalized to calculate the similarity in the testing phase. Feature normalization is treated only as a trick to promote the performance during testing.
CCS CONCEPTS • Computing methodologies → Object identification; Supervised learning by classification; Neural networks; Regularization;
To illustrate this, we performed an experiment which compared the face features without normalization, i.e. using the unnormalized inner-product or Euclidean distance as the similarity measurement. The features were extracted from an online available model [36]1. We followed the standard protocol of unrestricted with labeled out- side data[9] and test the model on the Labeled Faces in the Wild (LFW) dataset[10]. The results are listed in Table 1.
# KEYWORDS Face Verification, Metric Learning, Feature Normalization
# Table 1: Effect of Feature Normalization
1 INTRODUCTION In recent years, Convolutional Neural Networks (CNNs) have achieved state-of-the-art performance for various computer vision tasks, such as object recognition [12, 29, 32], detection [5], segmentation
| Similarity | Inner-Product | Euclidean |
|---|---|---|
| Before Normalization | 98.27% | 98.35% |
| After Normalization | 98.98% | 98.95% |
*Alan L. Yuille's visiting student.
As shown in the table, feature normalization promoted the performance by about 0.6% ∼ 0.7%, which is a significant improvement since the accuracies are already above 98%. Feature normalization seems to be a crucial step to get good performance during testing. Noting that the normalization operation is differentiable, there is no reason to stop us from importing this operation into the CNN model to perform end-to-end training.
# 1https://github.com/ydwen/caffe-face
Figure 1: Pipeline of face verification model training and testing using a classification loss function. Previous works did not use the normalization after feature extraction dur- ing training. But in the testing phase, all methods used a normalized similarity, e.g. cosine, to compare two features.
Some previous works[23, 28] successfully trained CNN models with the features being normalized in an end-to-end fashion. How- ever, both of them used the triplet loss, which needs to sample triplets of face images during training. It is difficult to train be- cause we usually need to implement hard mining algorithms to find non-trivial triplets[28]. Another route is to train a classification network using softmax loss[31, 38] and regularizations to limit the intra-class variance[16, 36]. Furthermore, some works combine the classification and metric learning loss functions together to train CNN models[31, 41]. All these methods that used classification loss functions, e.g. softmax loss, did not apply feature normalization, even though they all used normalized similarity measure, e.g. co- sine similarity, to get the confidence of judging two samples being of the same identity at testing phase(Figure 1).
We did an experiment by normalizing both the features and the weights of the last inner-product layer to build a cosine layer in an ordinary CNN model. After sufficient iterations, the network still did not converge. After observing this phenomenon, we dug deeply into this problem. In this paper, we will find out the reason and propose methods to enable us to train the normalized features.
To sum up, in this work, we analyze and answer the questions mentioned above about the feature normalization and the model training: (1) Why is feature normalization so efficient when comparing the CNN features trained by classification loss, especially for soft- max loss?
(2) Why does directly optimizing the cosine similarity using soft- max loss cause the network to fail to converge?
(3) How to optimize a cosine similarity when using softmax loss? (4) Since models with softmax loss fail to converge after normaliza- tion, are there any other loss functions suitable for normalized features?
For the first question, we explain it through a property of softmax loss in Section 3.1. For the second and third questions, we provide a bound to describe the difficulty of using softmax loss to optimize a cosine similarity and propose using the scaled cosine similarity in Section 3.3. For the fourth question, we reformulate a set of loss functions in metric learning, such as contrastive loss and triplet loss, to perform the classification task by introducing an 'agent'
strategy (Section 4). Utilizing the 'agent' strategy, there is no need to sample pairs and triplets of samples nor to implement the hard mining algorithm.
We also propose two tricks to improve performance for both static and video face verification. The first is to merge features ex- tracted from both original image and mirror image by summation, while previous works usually merge the features by concatenation[31, 36]. The second is to use histogram of face similarities between video pairs instead of the mean[23, 36] or max[39] similarity when making classification.
Finally, by experiments, we show that normalization during training can promote the accuracies of two publicly available state-of-the-art models by 0.2% ∼ 0.4% on LFW[10] and about 0.6% on YTF[37].
2 RELATED WORKS Normalization in Neural Network. Normalization is a common operation in modern neural network models. Local Response Nor- malization and Local Contrast Normalization are studied in the AlexNet model[12], even though these techniques are no longer common in modern models. Batch normalization[11] is widely used to accelerate the speed of neural network convergence by reducing the internal covariate shift of intermediate features. Weight normal- ization [27] was proposed to normalize the weights of convolution layers and inner-product layers, and also lead to faster convergence speed. Layer normalization [1] tried to solve the batch size depen- dent problem of batch normalization, and works well on Recurrent Neural Networks. Face Verification. Face verification is to decide whether two im- ages containing faces represent the same person or two different people, and thus is important for access control or re-identification tasks. Face verification using deep learning techniques achieved a series of breakthroughs in recent years [20, 23, 28, 33, 36]. There are mainly two types of methods according to their loss functions. One type uses metric learning loss functions, such as contrastive loss[4, 40] and triplet loss[23, 28, 34]. The other type uses soft- max loss and treats the problem as a classification task, but also constrains the intra-class variance to get better generalization for comparing face features [16, 36]. Some works also combine both kinds of loss functions[40, 41]. Metric Learning. Metric learning[4, 25, 34] tries to learn semantic distance measures and embeddings such that similar samples are nearer and different samples are further apart from each other on a manifold. With the help of neural networksâ enormous ability of representation learning, deep metric learning[3, 19] can do even better than the traditional methods. Recently, more complicated loss functions were proposed to get better local embedding structures[8, 22, 30]. Recent Works on Normalization. Recently, cosine similarity [17] was used instead of the inner-product for training a CNN for person recognition, which is quite similar with face verification. The Cosine Loss proposed in [17] is quite similar with the one described in Section 3.3, normalizing both the features and weights. L2-softmax[24] shares a similar analysis about the convergence problem described in Section 3.3. In [24], the authors also propose to add a scale parameter after normalization, but they only normal- ize the features. SphereFace[35] improves the performance of Large
Figure 2: Left: The optimized 2-dimensional feature distribu- tion using softmax loss on MNIST[14] dataset. Note that the Euclidean distance between f1 and f2 is much smaller than the distance between f2 and f3, even though f2 and f3 are from the same class. Right: The softmax probability for class 0 on the 2-dimension plane. Best viewed in color.
Margin Softmax[16] by normalizing the weights of the last inner- product layer only. Von Mises-Fisher Mixture Model(vMFMM)[21] interprets the hypersphere embedding as a mixture of von Mises- Fisher distributions. To sum up, the Cosine Loss[17], vMFMM[21] and our proposed loss functions optimize both features and weights, while the L2-softmax[24] normalizes the features only and the SphereFace[35] normalizes the weights only.
3 L2 NORMALIZATION LAYER In this section, we answer the question why we should normalize the features when the loss function is softmax loss and why the network does not converge if we directly put a softmax loss on the normalized features.
3.1 Necessity of Normalization In order to give an intuitive feeling about the softmax loss, we did a toy experiment of training a deeper LeNet[13] model on the MNIST dataset[14]. We reduced the number of the feature dimension to 2 and plot 10,000 2-dimensional features from the training set on a plane in Figure 2. From the figure, we find that f2 can be much closer to f1 than to f3 if we use Euclidean distance as the metric. Hence directly using the features for comparison may lead to bad performance. At the same time, we find that the angles between feature vectors seem to be a good metric compared with Euclidean distance or inner-product operations. Actually, most previous work takes the cosine of the angle between feature vectors as the similarity [31, 36, 38], even though they all use softmax loss to train the network. Since the most common similarity metric for softmax loss is the inner-product with unnormalized features, there is a gap between the metrics used in the training and testing phases. The reason why the softmax loss tends to create a 'radial' feature distribution (Figure 2) is that the softmax loss actually acts as the soft version of the max operator. Scaling a feature vector's magnitude does not affect the assignment of its class. Formally speaking, we recall the definition of the softmax loss,

L_S = -\frac{1}{m} \sum_{i=1}^{m} \log \frac{e^{W_{y_i}^T f_i + b_{y_i}}}{\sum_{j=1}^{n} e^{W_j^T f_i + b_j}},    (1)
where m is the number of training samples, n is the number of classes, fi is the feature of the i-th sample, yi is the corresponding
Figure 3: Two selected scatter diagrams when bias term is added after inner-product operation. Please note that there are one or two clusters that are located near the zero point. If we normalize the features of the center clusters, they would spread everywhere on the unit circle, which would cause misclassification. Best viewed in color.
label in range [1, n], W and b are the weight matrix and the bias vector of the last inner-product layer before the softmax loss, Wj is the j-th column of W , which is corresponding to the j-th class. In the testing phase, we classify a sample by
Class(f) = i = \arg\max_i (W_i^T f + b_i).    (2)
In this case, we can infer that (W_i^T f + b_i) - (W_j^T f + b_j) ≥ 0, ∀j ∈ [1, n]. Using this inequality, we obtain the following proposition.
Proposition 1. For the softmax loss with no-bias inner-product similarity as its metric, let P_i(f) = \frac{e^{W_i^T f}}{\sum_{j=1}^{n} e^{W_j^T f}} denote the probability of f being classified as class i. For any given scale s > 1, if i = \arg\max_j (W_j^T f), then P_i(sf) ≥ P_i(f) always holds.
The proof is given in Appendix 8.1. This proposition implies that softmax loss always encourages well-separated features to have bigger magnitudes. This is the reason why the feature distribution of softmax is 'radial'. However, we may not need this property, as shown in Figure 2. By normalization, we can eliminate its effect. Thus, we usually use the cosine of two feature vectors to measure the similarity of two samples.
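As a quick sanity check of Proposition 1, the following sketch (NumPy; the weight matrix and feature are random, hypothetical values, and no bias term is used) scales a feature and watches the softmax probability of the already-winning class.

```python
# Numeric check of Proposition 1: the softmax probability of the class that
# wins the inner-product comparison is non-decreasing as the feature is scaled.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(5, 8))          # 5 classes, 8-dim features (made-up values)
f = rng.normal(size=8)

def probs(x):
    logits = W @ x
    e = np.exp(logits - logits.max())
    return e / e.sum()

i = np.argmax(W @ f)                 # the class with the largest inner product
for s in [1.0, 2.0, 5.0, 10.0]:
    print(s, probs(s * f)[i])        # this probability never decreases with s
```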
However, Proposition 1 does not hold if a bias term is added after the inner-product operation. In fact, the weight vector of the two classes could be the same and the model still could make a decision via the biases. We found this kind of case during the MNIST experiments and the scatters are shown in Figure 3. It can be discovered from the figure that the points of some classes all locate around the zero point, and after normalization the points from each of these classes may be spread out on the unit circle, overlapping with other classes. In these cases, feature normalization may destroy the discrimination ability of the specific classes. To avoid this kind of risk, we do not add the bias term before the softmax loss in this work, even though it is commonly used for classification tasks.
# 3.2 Layer Definition
In this paper, we define \|x\|_2 = \sqrt{\sum_i x_i^2 + ε}, where ε is a small positive value to prevent division by zero. For an input vector x ∈ R^d,
However, after normalization, the network fails to converge. The loss only decreases a little and then converges to a very big value within a few thousands of iterations. After that the loss does not decrease no matter how many iterations we train and how small the learning rate is.
This is mainly because the range of d(f, W_i) is only [-1, 1] after normalization, while it is usually between (-20, 20) and (-80, 80) when we use an inner-product layer and softmax loss. This low range problem may prevent the probability P_{y_i}(f; W) = \frac{e^{W_{y_i}^T f}}{\sum_j e^{W_j^T f}}, where y_i is f's label, from getting close to 1 even when the samples are well-separated. In the extreme case, \frac{e}{e + (n-1)e^{-1}} is very small (0.45 when n = 10; 0.007 when n = 1000), even though in this condition the samples of all other classes are on the other side of the unit hypersphere. Since the gradient of softmax loss w.r.t. the ground truth label is 1 - P_{y_i}, the model will always try to give large gradients to the well separated samples, while the harder samples may not get sufficient gradients.
Figure 4: Left: The normalization operation and its gradient in 2-dimensional space. Please note that \|x + α \frac{\partial L}{\partial x}\|_2 is always bigger than \|x\|_2 for all α > 0 because of the Pythagorean theorem. Right: An example of the gradients w.r.t. the weight vector. All the gradients are in the tangent space of the unit sphere (denoted as the blue plane). The red, yellow and green points are normalized features from 3 different classes. The blue point is the normalized weight corresponding to the red class. Here we assume that the model tries to make features get close to their corresponding classes and away from other classes. Even though we illustrate the gradients applied on the normalized weight only, please note that opposite gradients are also applied on the normalized features (red, yellow, green points). Finally, all the gradients are accumulated together to decide which direction the weight should be updated. Best viewed in color, zoomed in.
To better understand this problem, we give a bound to clarify
how small the softmax loss can be in the best case.
Proposition 2. (Softmax Loss Bound After Normalization) Assume that every class has the same number of samples, and all the samples are well-separated, i.e. each sample's feature is exactly the same as its corresponding class's weight. If we normalize both the features and every column of the weights to have a norm of ℓ, the softmax loss will have a lower bound, \log\left(1 + (n-1)e^{-\frac{n}{n-1}ℓ^2}\right), where n is the class number.
an L2 normalization layer outputs the normalized vector,

\tilde{x} = \frac{x}{\|x\|_2} = \frac{x}{\sqrt{\sum_i x_i^2 + ε}}.    (3)
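A minimal PyTorch sketch of this layer is given below; the module name and the value of ε are illustrative choices, not taken from the authors' released code. Automatic differentiation reproduces the gradient in Equation (4) without any manual implementation.

```python
# Sketch of the L2 normalization layer of Equation (3).
import torch
import torch.nn as nn

class L2NormLayer(nn.Module):
    def __init__(self, eps=1e-10):
        super().__init__()
        self.eps = eps

    def forward(self, x):
        # x: (batch, d); divide each row by sqrt(sum_i x_i^2 + eps)
        norm = torch.sqrt((x * x).sum(dim=1, keepdim=True) + self.eps)
        return x / norm
```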
The proof is given in Appendix 8.2. Even though reading the proof needs patience, we still encourage readers to go through it, because it gives a better understanding of the hypersphere manifold.
Here x can be either the feature vector f or one column of the weight matrix Wi . In backward propagation, the gradient w.r.t. x can be obtained by the chain-rule,
This bound implies that if we just normalize the features and weights to 1, the softmax loss will be trapped at a very high value on the training set, even if no regularization is applied. For a real example, if we train the model on the CASIA-Webface dataset (n = 10575), the loss will decrease from about 9.27 to about 8.50. The bound for this condition is 8.27, which is very close to the real value. This suggests that our bound is very tight. To give an intuition for the bound, we also plot the curve of the bound as a function of the norm ℓ in Figure 5.
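The bound is easy to evaluate directly. The sketch below (plain Python) computes it for the CASIA-Webface class count used above; with unit norm it reproduces the value of about 8.27 quoted in the text.

```python
# Lower bound of Proposition 2 as a function of the squared norm.
import math

def softmax_loss_bound(n, ell_sq):
    return math.log(1.0 + (n - 1) * math.exp(-n / (n - 1) * ell_sq))

n = 10575
for ell_sq in [1, 4, 9, 16, 25]:
    print(ell_sq, round(softmax_loss_bound(n, ell_sq), 4))
# ell_sq = 1 gives about 8.27; the bound drops quickly as the norm grows.
```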
\frac{\partial L}{\partial x_i} = \frac{\partial L}{\partial \tilde{x}_i}\frac{\partial \tilde{x}_i}{\partial x_i} + \sum_j \frac{\partial L}{\partial \tilde{x}_j}\frac{\partial \tilde{x}_j}{\partial \|x\|_2}\frac{\partial \|x\|_2}{\partial x_i} = \frac{1}{\|x\|_2}\left(\frac{\partial L}{\partial \tilde{x}_i} - \tilde{x}_i \sum_j \frac{\partial L}{\partial \tilde{x}_j}\tilde{x}_j\right)    (4)
It is noteworthy that the vector x and \frac{\partial L}{\partial x} are orthogonal to each other, i.e. \langle x, \frac{\partial L}{\partial x} \rangle = 0. From a geometric perspective, the gradient \frac{\partial L}{\partial x} is the projection of \frac{\partial L}{\partial \tilde{x}} onto the tangent space of the unit hypersphere at normal vector \tilde{x} (see Figure 4). From Figure 4 left, it can be inferred that after an update, \|x\|_2 always increases. In order to prevent \|x\|_2 from growing infinitely, weight decay is necessary on vector x.
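The orthogonality property can be checked numerically with automatic differentiation; in the sketch below (PyTorch, with an arbitrary made-up loss) the inner product between x and its gradient comes out as zero up to floating-point error.

```python
# Check that <x, dL/dx> = 0 when the loss only sees the normalized vector.
import torch

x = torch.randn(8, requires_grad=True)
w = torch.randn(8)                      # arbitrary fixed vector
x_tilde = x / x.norm()                  # Equation (3) with eps = 0
loss = (x_tilde * w).sum()
loss.backward()
print(torch.dot(x, x.grad))             # ~0 up to floating-point error
```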
After we obtain the bound, the solution to the convergence problem is clear. By normalizing the features and the columns of the weight matrix to a bigger value ℓ instead of 1, the softmax loss can continue to decrease. In practice, we may implement this by directly appending a scale layer after the cosine layer. The scale layer has only one learnable parameter s = ℓ^2. We may also fix it to a value that is large enough, referring to Figure 5, say 20 or 30 for different class numbers. However, we prefer to make the parameter automatically learned by back-propagation instead of introducing a new hyper-parameter, for elegance. Finally, the softmax loss with cosine distance is defined in Equation (6) below.
3.3 Reformulating Softmax Loss Using the normalization layer, we can directly optimize the cosine similarity,
d(f, W_i) = \frac{\langle f, W_i \rangle}{\|f\|_2 \|W_i\|_2},    (5)

and the softmax loss with the scaled cosine distance is

\mathcal{L}_{S'} = -\frac{1}{m}\sum_{i=1}^{m} \log \frac{e^{s\, d(f_i, W_{y_i})}}{\sum_{j=1}^{n} e^{s\, d(f_i, W_j)}},    (6)
where f is the feature and Wi represents the i-th column of the weight matrix of the inner-product layer before softmax loss layer.
where \tilde{x} is the normalized x.
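A PyTorch sketch of the resulting loss layer is shown below: both the features and the columns of the weight matrix are L2-normalized, the cosine logits are multiplied by a single learnable scale s, and a standard cross-entropy follows. The module name, the initialization, and the initial scale value are illustrative assumptions of this sketch, not the authors' released implementation.

```python
# Sketch of the reformulated softmax loss of Equations (5)-(6).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaledCosineSoftmax(nn.Module):
    def __init__(self, feat_dim, num_classes, init_scale=20.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, feat_dim) * 0.01)
        self.scale = nn.Parameter(torch.tensor(init_scale))   # s in Eq. (6), learned

    def forward(self, features, labels):
        f = F.normalize(features, dim=1)           # normalized features
        w = F.normalize(self.weight, dim=1)        # normalized class weights (one row per class)
        logits = self.scale * f @ w.t()            # s * d(f, W_j)
        return F.cross_entropy(logits, labels)
```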
Figure 5: The softmax loss' lower bound as a function of the features' and weights' norm. Note that the x axis is the squared norm ℓ^2, because we add the scale parameter directly on the cosine distance in practice.
4 REFORMULATING METRIC LEARNING Metric Learning, or specifically deep metric learning in this work, usually takes pairs or triplets of samples as input, and outputs the distance between them. In deep metric models, it is a common strategy to normalize the final features[22, 23, 28]. It seems that normalization does not cause any problems for metric learning loss functions. However, metric learning is more difficult to train than classification because the possible input combinations in metric learning models are very numerous, namely O(N^2) combinations for pairs and O(N^3) combinations for triplets, where N is the number of training samples. It is almost impossible to deal with all possible combinations during training, so sampling and hard mining algorithms are usually necessary[28], which are tricky and time-consuming. By contrast, in a classification task, we usually feed the data iteratively into the model, namely the input data is in the order of O(N). In this section, we attempt to reformulate some metric learning loss functions to do the classification task, while keeping their compatibility with the normalized features.
The most widely used metric learning methods in the face verification community are the contrastive loss[31, 40],

\mathcal{L}_C = \begin{cases} \|f_i - f_j\|_2^2, & c_i = c_j \\ \max(0, m - \|f_i - f_j\|_2^2), & c_i \neq c_j \end{cases}    (7)
and the triplet loss[23, 28],
\mathcal{L}_T = \max(0, m + \|f_i - f_j\|_2^2 - \|f_i - f_k\|_2^2), \quad c_i = c_j,\ c_i \neq c_k,    (8)
where the two m's are the margins. Both of the two loss functions optimize the normalized Euclidean distance between feature pairs. Note that after normalization, the reformulated softmax loss can
Figure 6: Illustration of how the C-contrastive loss works with two classes on a 3-d sphere (projected on a 2-d plane). Left: The special case of m = 0. In this case, the agents are only influenced by features from their own classes. The agents will finally converge to the centers of their corresponding classes. Right: Normal case of m = 1. In this case, the agents are influenced by all the features in the same classes and other classes' features in the margin. Hence the agents are shifted away from the boundary of the two classes. The features will follow their agents through the intra-class term \|\tilde{f}_i - \tilde{W}_j\|_2^2, c_i = j, as the gradients shown in the figure. Best viewed in color.
also be seen as optimizing the normalized Euclidean distance,
\mathcal{L}_{S'} = -\frac{1}{m}\sum_{i=1}^{m} \log \frac{e^{-\frac{s}{2}\|\tilde{f}_i - \tilde{W}_{y_i}\|_2^2}}{\sum_{j=1}^{n} e^{-\frac{s}{2}\|\tilde{f}_i - \tilde{W}_j\|_2^2}},    (9)
because \|\tilde{x} - \tilde{y}\|_2^2 = 2 - 2\tilde{x}^T\tilde{y}. Inspired by this formulation, we modify one of the features to be one column of a weight matrix W ∈ R^{d×n}, where d is the dimension of the feature and n is the class number. We call column W_i the 'agent' of the i-th class. The weight matrix W can be learned through back-propagation just as the inner-product layer. In this way, we can get a classification version of the contrastive loss,

\mathcal{L}_{C'} = \begin{cases} \|\tilde{f}_i - \tilde{W}_j\|_2^2, & c_i = j \\ \max(0, m - \|\tilde{f}_i - \tilde{W}_j\|_2^2), & c_i \neq j \end{cases}    (10)
and the triplet loss,

\mathcal{L}_{T'} = \max(0, m + \|\tilde{f}_i - \tilde{W}_j\|_2^2 - \|\tilde{f}_i - \tilde{W}_k\|_2^2), \quad c_i = j,\ c_i \neq k.    (11)

To distinguish these two loss functions from their metric learning versions, we call them C-contrastive loss and C-triplet loss respectively, denoting that these loss functions are designed for classification.
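The sketch below illustrates one possible implementation of the C-contrastive loss of Equation (10) in PyTorch, using the identity \|\tilde{x} - \tilde{y}\|_2^2 = 2 - 2\tilde{x}^T\tilde{y} mentioned above to get squared distances from cosine similarities. The class name, the initialization, and the way the push terms are aggregated over classes are assumptions of this sketch, not details from the paper.

```python
# Sketch of the C-contrastive loss with one learnable agent per class.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CContrastiveLoss(nn.Module):
    def __init__(self, feat_dim, num_classes, margin=1.0):   # margin ~1 as suggested below
        super().__init__()
        self.agents = nn.Parameter(torch.randn(num_classes, feat_dim) * 0.01)
        self.margin = margin

    def forward(self, features, labels):
        f = F.normalize(features, dim=1)                  # (B, d)
        w = F.normalize(self.agents, dim=1)               # (C, d)
        sq_dist = 2.0 - 2.0 * f @ w.t()                   # ||f~ - W~_j||^2 for unit vectors
        same = F.one_hot(labels, w.size(0)).float()
        pull = (same * sq_dist).sum(dim=1)                        # c_i = j term
        push = ((1 - same) * F.relu(self.margin - sq_dist)).sum(dim=1)   # c_i != j term
        return (pull + push).mean()
```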
Intuitively, W_j acts as a summarizer of the features in the j-th class. If all classes are well-separated by the margin, the W_j's will roughly correspond to the means of features in each class (Figure 6 left). In more complicated tasks, features of different classes may be overlapped with each other. Then the W_j's will be shifted away from the boundaries. The marginal features (hard examples) are
Figure 7: Classification version of contrastive loss (Left) and triplet loss (Right). The shadowed points are the marginal features that got omitted due to the 'agent' strategy. In the original version of the two losses, the shadowed points are also optimized. Best viewed in color.
guided to have bigger gradients in this case (Figure 6 right), which means they move further than easier samples during update.
However, there are some side effect of the agent strategy. After reformulation, some of the marginal features may not be optimized if we still use the same margin as the original version (Figure 7). Thus, we need larger margins to make more features get optimized. Mathematically, the error caused by the agent approximation is given by the following proposition.
Proposition 3. Using an agent for each class instead of a specific sample would cause a distortion of \frac{1}{|C_i|}\sum_{j \in C_i}(d(f_0, f_j) - d(f_0, W_i))^2, where W_i is the agent of the i-th class and C_i is the set of samples of that class. The distortion is bounded by \frac{1}{|C_i|}\sum_{j \in C_i} d(f_j, W_i)^2.

The proof is given in Appendix 8.3. This bound gives us theoretical guidance for setting the margins. We can compute it on-the-fly during training using a moving average and display it to get a better feeling about the progress. Empirically, the bound \frac{1}{|C_i|}\sum_{j \in C_i} d(f_j, W_i)^2 is usually 0.5 ∼ 0.6. The recommended values of the margins of the modified contrastive loss and triplet loss are 1 and 0.8 respectively.
Note that setting the margin used to be complicated work[40]. Following their work, we have to suspend training and search for a new margin for every several epochs. However, we no longer need to perform such a searching algorithm after applying normalization. Through normalization, the scale of featuresâ magnitude is fixed, which makes it possible to fix the margin, too. In this strategy, we will not try to train models using the C-contrastive loss or the C-triplet loss without normalization because this is difficult.
5 EXPERIMENT In this section, we first describe the experiment settings in Section 5.1. Then we evaluate our method on two different datasets with two different models in Section 5.2 and 5.3. Codes and models are released at https://github.com/happynear/NormFace.
5.1 Implementation Details Baseline works. To verify our algorithm's universality, we choose two works as our baseline, Wu et al.'s model [38]2 (Wu's model,
2https://github.com/AlfredXiangWu/face_verification_experiment
for short) and Wen et al.'s model [36]3 (Wen's model, for short). Wu's model is a 10-layer plain CNN with Maxout[6] activation unit. Wen's model is a 28-layer ResNet[7] trained with both softmax loss and center loss. Neither of these two models applies feature normalization or weight normalization. We strictly follow all the experimental settings as their papers, including the datasets4, the image resolution, the pre-processing methods and the evaluation criteria.

Training. The proposed loss functions are appended after the feature layer, i.e. the second last inner-product layer. The features and columns of the weight matrix are normalized to make their L2 norm 1. Then the features and columns of the weight matrix are sent into a pairwise distance layer, i.e. an inner-product layer to produce a cosine similarity or a Euclidean distance layer to produce a normalized Euclidean distance. After calculating all the similarities or distances between each feature and each column, the proposed loss functions give the final loss and gradients to the distances. The whole network models are trained end to end. To speed up the training procedure, we fine-tune the networks from the baseline models. Thus, a relatively small learning rate, say 1e-4 for Wu's model and 1e-3 for Wen's model, is applied to update the network through stochastic gradient descent (SGD) with momentum of 0.9.

Evaluation. Two datasets are utilized to evaluate the performance, one is Labeled Faces in the Wild (LFW)[10] and the other is YouTube Faces (YTF)[37]. 10-fold validation is used to evaluate the performance for both datasets. After the training models converge, we continue to train them for 5,000 iterations5, during which we save a snapshot every 1,000 iterations. Then we run the evaluation codes on the five saved snapshots separately and calculate an average score to reduce disturbance. We extract features from both the frontal face and its mirror image and merge the two features by element-wise summation. Principal Component Analysis (PCA) is then applied on the training subset of the evaluation dataset to fit the features to the target domain. The similarity score is computed by the cosine distance of two samples' features after PCA. All the evaluations are based on the similarity scores of image pairs.
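The evaluation-time feature pipeline described above can be sketched as follows (NumPy; `extract_features` is a stand-in for the trained CNN, an H×W×C image layout is assumed, and the PCA step is omitted for brevity).

```python
# Sketch of test-time feature extraction with mirror merging and cosine scoring.
import numpy as np

def merged_feature(extract_features, image):
    f = extract_features(image)
    f_mirror = extract_features(image[:, ::-1, :])    # horizontal flip (H, W, C assumed)
    return f + f_mirror                                # merge by element-wise summation

def pair_score(f1, f2):
    return float(np.dot(f1, f2) / (np.linalg.norm(f1) * np.linalg.norm(f2)))
```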
5.2 Experiments on LFW The LFW dataset[10] contains 13,233 images from 5,749 identities, with large variations in pose, expression and illumination. All the images are collected from the Internet. We evaluate our methods through two different protocols on LFW, one is the standard unrestricted with labeled outside data [9], which is evaluated on 6,000 image pairs, and the other is BLUFR [15] which utilizes all 13,233 images. It is noteworthy that there are three identities appearing in both CASIA-Webface[40] and LFW[10]. We delete them during training to build a completely open-set validation.
We carefully test almost all combinations of the loss functions on the standard unrestricted with labeled outside data protocol. The results are listed in Table 2. Cosine similarity is used by softmax + any loss functions. The distance used by C-contrastive and C-triplet loss is the squared normalized Euclidean distance. The C-triplet
3https://github.com/ydwen/caffe-face 4 Since the identity label of the Celebrity+[18] dataset is not publicly available, we follow Wenâs released model which is trained on CASIA-Webface [40] only. Wuâs model is also trained on CASIA-Webface [40] only. 5In each iteration we train 256 samples, i.e. the batch size is 256.
Figure 8: LFW accuracies as a function of the loss weight of C-contrastive loss or center loss with error bars. All these methods use the normalization strategy except for the base- line.
Table 2: Results on LFW 6,000 pairs using Wenâs model[36]
Figure 9: (a): Illustration of how to generate a histogram feature for a pair of videos. We firstly create a pairwise score matrix by computing the cosine similarity between two face images from different video sequences. Then we ac- cumulate all the scores in the matrix to create a histogram. (b): Visualization of histogram features extracted from 200 video pairs with both same identities and different identi- ties. After collecting all histogram features, support vector machine (SVM) using histogram intersection kernel(HIK) is utilized to make a binary classification.
| loss function | Normalization | Accuracy |
|---|---|---|
| softmax | No | 98.28% |
| softmax + dropout | No | 98.35% |
| softmax + center[36] | No | 99.03% |
| softmax | feature only | 98.72% |
| softmax | weight only | 98.95% |
| softmax | Yes | 99.16% ± 0.025% |
| softmax + center | Yes | 99.17% ± 0.017% |
| C-contrastive | Yes | 99.15% ± 0.017% |
| C-triplet | Yes | 99.11% ± 0.008% |
| C-triplet + center | Yes | 99.13% ± 0.017% |
| softmax + C-contrastive | Yes | 99.19% ± 0.008% |
normalizing the weights only will cause the network to collapse, while normalizing the features only will lead to a worse accuracy, 98.45%, which is better than the conventional softmax loss, but much worse than state-of-the-art loss functions.
The C-triplet + center loss is implemented by forcing the optimization of \|\tilde{f}_i - \tilde{W}_j\|_2^2 (c_i = j) even if m + \|\tilde{f}_i - \tilde{W}_j\|_2^2 - \|\tilde{f}_i - \tilde{W}_k\|_2^2 is less than 0. From Table 2 we can conclude that the loss functions have minor influence on the accuracy, and the normalization is the key factor to promote the performance. When combining the softmax loss with the C-contrastive loss or center loss, we need to add a hyper-parameter to balance the two losses. The highest accuracy, 99.2167%, is obtained by softmax + 0.01 * C-contrastive. However, pure softmax with normalization already works reasonably well. We have also designed two ablation experiments of normalizing the features only or normalizing the columns of the weight matrix only. During experiments we find that the scale parameter is necessary when normalizing the feature, while normalizing the weight does not need it. We cannot explain it so far. This is tricky but the network will collapse if the scale parameter is not properly added. From Table 2 we can conclude that normalizing the feature causes performance degradation, while normalizing the weight has little influence on the accuracy. Note that these two special cases of softmax loss are also fine-tuned based on Wen's model. When training from scratch,
In Figure 8, we show the effect of the loss weights when using two loss functions. As shown in the figure, the C-contrastive loss is more robust to the loss weight. This is not surprising because C- contrastive loss can train a model by itself only, while the center loss, which only optimizes the intra-class variance, should be trained with other supervised losses together.
To make our experiment more convincing, we also train some of the loss functions on Wuâs model[38]. The results are listed in Table 4. Note that in [38], Wu et. al. did not perform face mirroring when they evaluated their methods. In Table 4, we also present the result of their model after face mirroring and feature merging. As is shown in the table, the normalization operation still gives a significant boost to the performance.
On BLUFR protocol, the normalization technique works even better. Here we only compare some of the models with the baseline (Table 3). From Table 3 we can see that normalization could boost the performance significantly, which reveals that normalization technique could perform much better when the false alarm rate (FAR) is low.
Table 3: Results on LFW BLUFR[15] protocol
| model | loss function | Normalization | TPR@FAR=0.1% | DIR@FAR=1% |
|---|---|---|---|---|
| ResNet | softmax + center[36] | No | 93.35% | 67.86% |
| ResNet | softmax | Yes | 95.77% | 73.92% |
| ResNet | C-triplet + center | Yes | 95.73% | 76.12% |
| ResNet | softmax + C-contrastive | Yes | 95.83% | 77.18% |
| MaxOut | softmax[38] | No | 89.12% | 61.79% |
| MaxOut | softmax | Yes | 90.64% | 65.22% |
| MaxOut | C-contrastive | Yes | 90.32% | 68.14% |
# Table 4: Results on LFW 6,000 pairs using Wuâs model[38]
| loss function | Normalization | Accuracy |
|---|---|---|
| softmax | No | 98.13% |
| softmax + mirror | No | 98.41% |
| softmax | Yes | 98.75% ± 0.008% |
| C-contrastive | Yes | 98.78% ± 0.017% |
| softmax + C-contrastive | Yes | 98.71% ± 0.017% |
# Table 5: Results on YTF with Wenâs model[36]
| loss function | Normalization | Accuracy |
|---|---|---|
| softmax + center[36] | No | 93.74% |
| softmax | Yes | 94.24% |
| softmax + HIK-SVM | Yes | 94.56% |
| C-triplet + center | Yes | 94.3% |
| C-triplet + center + HIK-SVM | Yes | 94.58% |
| softmax + C-contrastive | Yes | 94.34% |
| softmax + C-contrastive + HIK-SVM | Yes | 94.72% |
The results are listed in Table 5. The models that perform better on LFW also show superior performance on YTF. Moreover, the newly proposed score histogram technique (HIK-SVM in the table) can improve the accuracy further by a significant gap.
6 CONCLUSION AND FUTURE WORK In this paper, we propose to apply L2 normalization operation on the features and the weight of the last inner-product layer when training a classification model. We explain the necessity of the nor- malization operation from both analytic and geometric perspective. Two kinds of loss functions are proposed to effectively train the normalized feature. One is a reformulated softmax loss with a scale layer inserted between the cosine score and the loss. Another is designed inspired by metric learning. We introduce an agent strat- egy to avoid the need of hard sample mining, which is a tricky and time-consuming work. Experiments on two different models both show superior performance over models without normalization. From three theoretical propositions, we also provide some guidance on the hyper-parameter setting, such as the bias term (Proposition 1), the scale parameter (Proposition 2) and the margin (Proposition 3).
5.3 Experiments on YTF The YTF dataset[37] consists of 3,425 videos of 1,595 different peo- ple, with an average of 2.15 videos per person. We follow the unre- stricted with labeled outside data protocol, which takes 5, 000 video pairs to evaluate the performance.
Previous works usually extract face features from all frames or some selected frames in a video. Then two videos can construct a confidence matrix C in which each element C_{ij} is the cosine distance of face features extracted from the i-th frame of the first video and the j-th frame of the second video. The final score is computed by the average of all elements in C. The one-dimensional score is then used to train a classifier, say SVM, to get the threshold of same identity or different identity.
Here we propose to use the histogram of elements in C as the feature to train the classifier. The bin of the histogram is set to 100 (Figure 9(a)). Then SVM with histogram intersection kernel (HIK-SVM)[2] is utilized to make a two-class classification (Figure 9(b)). This method encodes more information compared to the one dimensional mean value, and leads to better performance on video face verification.
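A sketch of the histogram feature and the histogram intersection kernel is given below (NumPy; the frame features are assumed to be L2-normalized so the cosine score reduces to a dot product). An off-the-shelf SVM, e.g. sklearn.svm.SVC(kernel='precomputed') on the HIK Gram matrix, can then be trained on these histograms.

```python
# Sketch of the video-pair histogram feature and the histogram intersection kernel.
import numpy as np

def pair_histogram(feats_a, feats_b, bins=100):
    # feats_a: (n_a, d), feats_b: (n_b, d), rows assumed L2-normalized
    sims = feats_a @ feats_b.T                        # all frame-to-frame cosine scores
    hist, _ = np.histogram(sims, bins=bins, range=(-1.0, 1.0))
    return hist / hist.sum()                          # normalized histogram feature

def hik_gram(X, Y):
    # Histogram intersection kernel: K(x, y) = sum_i min(x_i, y_i)
    return np.minimum(X[:, None, :], Y[None, :, :]).sum(axis=2)
```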
Currently we can only fine-tune the network with normalization techniques based on other models. If we train a model with C- contrastive loss function, the final result is just as good as center loss[36]. But if we fine-tune a model, either Wenâs model[36] or Wuâs model[38], the performance could be further improved as shown in Table 2 and Table 4. More efforts are needed to find a way to train a model from scratch, while preserving at least a similar performance as fine-tuning.
Our methods and analysis in this paper are general. They can be used in other metric learning tasks, such as person re-identification or image retrieval. We will apply the proposed methods on these tasks in the future.
7 ACKNOWLEDGEMENT This paper is funded by Office of Naval Research (N00014-15-1- 2356), National Science Foundation (CCF-1317376), the National Natural Science Foundation of China (61671125, 61201271, 61301269) and the State Key Laboratory of Synthetical Automation for Process Industries (NO. PAL-N201401).
We thank Chenxu Luo and Hao Zhu for their assistance in formula derivation.
REFERENCES [1]
Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. 2016. Layer normaliza- tion. arXiv preprint arXiv:1607.06450 (2016).
[2] Annalisa Barla, Francesca Odone, and Alessandro Verri. 2003. Histogram in- tersection kernel for image classification. In Image Processing, 2003. ICIP 2003. Proceedings. 2003 International Conference on, Vol. 3. IEEE, IIIâ513.
[3] Xinyuan Cai, Chunheng Wang, Baihua Xiao, Xue Chen, and Ji Zhou. 2012. Deep nonlinear metric learning with independent subspace analysis for face verifica- tion. In ACM international conference on Multimedia. ACM, 749â752.
[4] Sumit Chopra, Raia Hadsell, and Yann LeCun. 2005. Learning a similarity metric discriminatively, with application to face verification. In IEEE Conference on Computer Vision and Pattern Recognition, Vol. 1. IEEE, 539â546.
[5] Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. 2014. Rich feature hierarchies for accurate object detection and semantic segmentation. In IEEE Conference on Computer Vision and Pattern Recognition. 580–587.

[6] Ian J Goodfellow, David Warde-Farley, Mehdi Mirza, Aaron C Courville, and Yoshua Bengio. 2013. Maxout Networks. International Conference on Machine Learning 28 (2013), 1319–1327.
[7] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition. 770â778.
[8] Chen Huang, Chen Change Loy, and Xiaoou Tang. 2016. Local similarity-aware deep feature embedding. In Advances in Neural Information Processing Systems. 1262â1270.
[9] Gary B Huang and Erik Learned-Miller. 2014. Labeled faces in the wild: Updates and new reporting procedures. Dept. Comput. Sci., Univ. Massachusetts Amherst, Amherst, MA, USA, Tech. Rep (2014), 14â003.
[10] Gary B Huang, Manu Ramesh, Tamara Berg, and Erik Learned-Miller. 2007. La- beled faces in the wild: A database for studying face recognition in unconstrained environments. Technical Report. Technical Report 07-49, University of Mas- sachusetts, Amherst.
[11] Sergey Ioffe and Christian Szegedy. 2015. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167 (2015).
[12] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. 2012. Imagenet classifica- tion with deep convolutional neural networks. In Advances in neural information processing systems. 1097â1105.
[13] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. 1998. Gradient- based learning applied to document recognition. Proc. IEEE 86, 11 (1998), 2278â 2324.
[14] Yann LeCun, Corinna Cortes, and Christopher Burges. 1998. The mnist database of handwritten digits. (1998). http://yann.lecun.com/exdb/mnist/
[15] Shengcai Liao, Zhen Lei, Dong Yi, and Stan Z Li. 2014. A benchmark study of large-scale unconstrained face recognition. In IEEE International Joint Conference on Biometrics. IEEE, 1â8.
[16] Weiyang Liu, Yandong Wen, Zhiding Yu, and Meng Yang. 2016. Large-Margin Softmax Loss for Convolutional Neural Networks. In International Conference on Machine Learning. 507â516.
[17] Yu Liu, Hongyang Li, and Xiaogang Wang. 2017. Learning Deep Features via Congenerous Cosine Loss for Person Recognition. arXiv preprint arXiv:1702.06890 (2017).
[18] Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. 2015. Deep learning face attributes in the wild. In Proceedings of the IEEE International Conference on Computer Vision. 3730–3738.

[19] Jonathan Long, Evan Shelhamer, and Trevor Darrell. 2015. Fully convolutional networks for semantic segmentation. In IEEE Conference on Computer Vision and Pattern Recognition. 3431–3440.
[20] Chaochao Lu and Xiaoou Tang. 2014. Surpassing human-level face verification performance on LFW with GaussianFace. arXiv preprint arXiv:1404.3840 (2014).

[21] Md. Abul Hasnat, Julien Bohné, Jonathan Milgram, Stéphane Gentric, and Liming Chen. 2017. von Mises-Fisher Mixture Model-based Deep learning: Application to Face Verification. arXiv preprint arXiv:1706.04264 (2017).

[22] Hyun Oh Song, Yu Xiang, Stefanie Jegelka, and Silvio Savarese. 2016. Deep metric learning via lifted structured feature embedding. In IEEE Conference on Computer Vision and Pattern Recognition. 4004–4012.
[23] Omkar M Parkhi, Andrea Vedaldi, and Andrew Zisserman. 2015. Deep Face Recognition. In BMVC, Vol. 1. 6.
[24] Rajeev Ranjan, Carlos D. Castillo, and Rama Chellappa. 2017. L2-constrained Softmax Loss for Discriminative Face Verification. arXiv preprint arXiv:1703.09507 (2017).
[25] Sam Roweis, Geoffrey Hinton, and Ruslan Salakhutdinov. 2004. Neighbourhood component analysis. Advances in Neural Information Processing Systems 17 (2004), 513â520.
[26] Walter Rudin and others. 1964. Principles of mathematical analysis, Chapter 10. Vol. 3. McGraw-Hill New York.
[27] Tim Salimans and Diederik P Kingma. 2016. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. In Advances in Neural Information Processing Systems. 901â901.
[28] Florian Schroff, Dmitry Kalenichenko, and James Philbin. 2015. Facenet: A unified embedding for face recognition and clustering. In IEEE Conference on Computer Vision and Pattern Recognition. 815â823.
[29] Karen Simonyan and Andrew Zisserman. 2014. Very Deep Convolutional Net- works for Large-Scale Image Recognition. arXiv preprint arXiv:1409.1556 (2014). [30] Kihyuk Sohn. 2016. Improved deep metric learning with multi-class n-pair loss objective. In Advances in Neural Information Processing Systems. 1849â1857. [31] Yi Sun, Yuheng Chen, Xiaogang Wang, and Xiaoou Tang. 2014. Deep learning face representation by joint identification-verification. In Advances in neural information processing systems. 1988â1996.
[32] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. 2015. Going deeper with convolutions. In IEEE Conference on Computer Vision and Pattern Recognition. 1â9.
[33] Yaniv Taigman, Ming Yang, Marc'Aurelio Ranzato, and Lior Wolf. 2014. Deepface: Closing the gap to human-level performance in face verification. In IEEE Conference on Computer Vision and Pattern Recognition. 1701–1708.
[34] Kilian Q Weinberger and Lawrence K Saul. 2009. Distance metric learning for large margin nearest neighbor classification. Journal of Machine Learning Research 10, Feb (2009), 207â244.
[35] Weiyang Liu, Yandong Wen, Zhiding Yu, Ming Li, Bhiksha Raj, and Le Song. 2017. SphereFace: Deep Hypersphere Embedding for Face Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
[36] Yandong Wen, Kaipeng Zhang, Zhifeng Li, and Yu Qiao. 2016. A Discriminative Feature Learning Approach for Deep Face Recognition. In European Conference on Computer Vision. Springer, 499â515.
[37] Lior Wolf, Tal Hassner, and Itay Maoz. 2011. Face recognition in unconstrained videos with matched background similarity. In IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 529â534.
[38] Xiang Wu, Ran He, and Zhenan Sun. 2015. A Lightened CNN for Deep Face Representation. arXiv preprint arXiv:1511.02683 (2015).
[39] Xiang Xiang and Trac D Tran. 2016. Pose-Selective Max Pooling for Measuring Similarity. Lecture Notes in Computer Science 10165 (2016).
[40] Dong Yi, Zhen Lei, Shengcai Liao, and Stan Z Li. 2014. Learning face representa- tion from scratch. arXiv preprint arXiv:1411.7923 (2014).
[41] Xiao Zhang, Zhiyuan Fang, Yandong Wen, Zhifeng Li, and Yu Qiao. 2016. Range Loss for Deep Face Recognition with Long-tail. arXiv preprint arXiv:1611.08976 (2016).
8 APPENDIX

8.1 Proof of Proposition 1

Proposition 1. For the softmax loss with no-bias inner-product similarity as its metric, let P_i(f) = \frac{e^{W_i^T f}}{\sum_{j=1}^{n} e^{W_j^T f}} denote the probability of f being classified as class i. For a given scale s > 1, if i = \arg\max_j (W_j^T f), then P_i(sf) ≥ P_i(f) always holds.

Proof: Let t = s - 1. After scaling, we have,
P_i(sf) = \frac{e^{W_i^T[(1+t)f]}}{\sum_{j=1}^{n} e^{W_j^T[(1+t)f]}} = \frac{e^{W_i^T f}}{\sum_{j=1}^{n} e^{W_j^T f + t(W_j^T f - W_i^T f)}}.    (12)
Recall that W_i^T f - W_j^T f ≥ 0 if i = \arg\max_j (W_j^T f), so t(W_j^T f - W_i^T f) ≤ 0 always holds. Then
P_i(sf) ≥ \frac{e^{W_i^T f}}{\sum_{j=1}^{n} e^{W_j^T f}} = P_i(f).    (13)

The equality holds only if W^T f = 0 or W_i = W_j, ∀i, j ∈ [1, n], which is almost impossible in practice.
8.2 Proof of Proposition 2

Proposition 2. (Loss Bound After Normalization) Assume that every class has the same number of samples, and all the samples are well-separated, i.e. each sample's feature is exactly the same as its corresponding class's weight. If we normalize both the features and every column of the weights to have a norm of ℓ, the softmax loss will have a lower bound, \log\left(1 + (n-1)e^{-\frac{n}{n-1}ℓ^2}\right), where n is the class number.

Proof: Assume \|W_i\|_2 = ℓ, ∀i ∈ [1, n] for convenience. Since we have already assumed that all samples are well-separated, we directly use W_i to represent the i-th class' feature.
The definition of the softmax loss is,

\mathcal{L}_S = -\frac{1}{n}\sum_{i=1}^{n} \log \frac{e^{W_i^T W_i}}{\sum_{j=1}^{n} e^{W_i^T W_j}}.    (14)
This formula is different from Equation (1) because we assume that every class has the same number of samples. By dividing e^{W_i^T W_i} = e^{ℓ^2} from both the numerator and denominator,
\mathcal{L}_S = -\frac{1}{n}\sum_{i=1}^{n} \log \frac{1}{1 + \sum_{j=1, j\neq i}^{n} e^{W_i^T W_j - ℓ^2}} = \frac{1}{n}\sum_{i=1}^{n} \log\left(1 + \sum_{j=1, j\neq i}^{n} e^{W_i^T W_j - ℓ^2}\right).    (15)
Since f(x) = e^x is a convex function, \frac{1}{n-1}\sum_{j=1, j\neq i}^{n} e^{x_j} ≥ e^{\frac{1}{n-1}\sum_{j=1, j\neq i}^{n} x_j}, and then we have,
\mathcal{L}_S ≥ \frac{1}{n}\sum_{i=1}^{n} \log\left(1 + (n-1)\, e^{\frac{1}{n-1}\sum_{j=1, j\neq i}^{n} W_i^T W_j - ℓ^2}\right).    (16)
The equality holds if and only if all W_i^T W_j, 1 ≤ i < j ≤ n, have the same value, i.e., features from different classes have the same distance. Unfortunately, in d-dimensional space, there are only d + 1 unique vertices to ensure that every two vertices have the same distance. All these vertices will form a regular d-simplex[26], e.g., a regular 2-simplex is an equilateral triangle and a regular 3-simplex is a regular tetrahedron. Since the class number is usually much bigger than the dimension of the feature in face verification datasets, this equality actually cannot hold in practice. One improvement over this inequality is taking the feature dimension into consideration, because we have actually omitted the feature dimension term in this step.
Similar to f(x) = e^x, the softplus function s(x) = \log(1 + Ce^x) is also a convex function when C > 0, so that \frac{1}{n}\sum_{i=1}^{n} \log(1 + Ce^{x_i}) ≥ \log(1 + Ce^{\frac{1}{n}\sum_{i=1}^{n} x_i}), and then we have
\mathcal{L}_S ≥ \log\left(1 + (n-1)\, e^{\frac{1}{n(n-1)}\sum_{i=1}^{n}\sum_{j=1, j\neq i}^{n} W_i^T W_j - ℓ^2}\right).    (17)
This equality holds if and only if, for every W_i, the sums \sum_{j=1, j\neq i}^{n} W_i^T W_j over the other classes' weights are all the same.
# Note that
\left\|\sum_{i=1}^{n} W_i\right\|_2^2 = nℓ^2 + \sum_{i=1}^{n}\sum_{j=1, j\neq i}^{n} W_i^T W_j,    (18)
# so
\sum_{i=1}^{n}\sum_{j=1, j\neq i}^{n} W_i^T W_j ≥ -nℓ^2.    (19)
The equality holds if and only if \sum_{i=1}^{n} W_i = 0. Thus,

\mathcal{L}_S ≥ \log\left(1 + (n-1)\, e^{-\frac{ℓ^2}{n-1} - ℓ^2}\right) = \log\left(1 + (n-1)\, e^{-\frac{n}{n-1}ℓ^2}\right).    (20)
8.3 Proof of Proposition 3

Proposition 3. Using an agent for each class instead of a specific sample would cause a distortion of \frac{1}{|C_i|}\sum_{j \in C_i}(d(f_0, f_j) - d(f_0, W_i))^2, where W_i is the agent of the i-th class. The distortion is bounded by \frac{1}{|C_i|}\sum_{j \in C_i} d(f_j, W_i)^2.

Proof: Since d(x, y) is a metric, through the triangle inequality we have
d(f_0, W_i) - d(f_j, W_i) ≤ d(f_0, f_j) ≤ d(f_0, W_i) + d(f_j, W_i),    (21)
# so
-d(f_j, W_i) ≤ d(f_0, f_j) - d(f_0, W_i) ≤ d(f_j, W_i),    (22)
and thus,
(d(f_0, f_j) - d(f_0, W_i))^2 ≤ d(f_j, W_i)^2.    (23)
As a result,

\frac{1}{|C_i|}\sum_{j \in C_i}(d(f_0, f_j) - d(f_0, W_i))^2 ≤ \frac{1}{|C_i|}\sum_{j \in C_i} d(f_j, W_i)^2.    (24)
8.4 Inference of Equation 4

Equation 4:

\frac{\partial L}{\partial x_i} = \frac{\partial L}{\partial \tilde{x}_i}\frac{1}{\|x\|_2} - \frac{\tilde{x}_i}{\|x\|_2}\sum_j \frac{\partial L}{\partial \tilde{x}_j}\tilde{x}_j    (25)

Inference: Here we treat \|x\|_2 as an independent variable. Note that \tilde{x}_i = \frac{x_i}{\|x\|_2} and \|x\|_2 = \sqrt{\sum_j x_j^2 + ε}. We have,

\frac{\partial L}{\partial x_i} = \frac{\partial L}{\partial \tilde{x}_i}\frac{\partial \tilde{x}_i}{\partial x_i} + \sum_j \frac{\partial L}{\partial \tilde{x}_j}\frac{\partial \tilde{x}_j}{\partial \|x\|_2}\frac{\partial \|x\|_2}{\partial x_i}
= \frac{\partial L}{\partial \tilde{x}_i}\frac{1}{\|x\|_2} + \sum_j \frac{\partial L}{\partial \tilde{x}_j}\left(-\frac{x_j}{\|x\|_2^2}\right)\frac{x_i}{\|x\|_2}
= \frac{1}{\|x\|_2}\left(\frac{\partial L}{\partial \tilde{x}_i} - \tilde{x}_i \sum_j \frac{\partial L}{\partial \tilde{x}_j}\tilde{x}_j\right).    (26)
8.5 Proof of \langle x, \frac{\partial L}{\partial x} \rangle = 0

Proof: The vectorized version of Equation 4 is

\frac{\partial L}{\partial x} = \frac{1}{\|x\|_2}\left(\frac{\partial L}{\partial \tilde{x}} - \tilde{x}\left\langle \frac{\partial L}{\partial \tilde{x}}, \tilde{x} \right\rangle\right).    (27)

So,

\left\langle x, \frac{\partial L}{\partial x} \right\rangle = \frac{\langle x, \frac{\partial L}{\partial \tilde{x}} \rangle - \langle x, \tilde{x} \rangle \langle \frac{\partial L}{\partial \tilde{x}}, \tilde{x} \rangle}{\|x\|_2} = \frac{\langle x, \frac{\partial L}{\partial \tilde{x}} \rangle - \langle \frac{\partial L}{\partial \tilde{x}}, x \rangle}{\|x\|_2} = 0.    (28)
| {
"id": "1502.03167"
} |
1704.06440 | Equivalence Between Policy Gradients and Soft Q-Learning | Two of the leading approaches for model-free reinforcement learning are
policy gradient methods and $Q$-learning methods. $Q$-learning methods can be
effective and sample-efficient when they work, however, it is not
well-understood why they work, since empirically, the $Q$-values they estimate
are very inaccurate. A partial explanation may be that $Q$-learning methods are
secretly implementing policy gradient updates: we show that there is a precise
equivalence between $Q$-learning and policy gradient methods in the setting of
entropy-regularized reinforcement learning, that "soft" (entropy-regularized)
$Q$-learning is exactly equivalent to a policy gradient method. We also point
out a connection between $Q$-learning methods and natural policy gradient
methods. Experimentally, we explore the entropy-regularized versions of
$Q$-learning and policy gradients, and we find them to perform as well as (or
slightly better than) the standard variants on the Atari benchmark. We also
show that the equivalence holds in practical settings by constructing a
$Q$-learning method that closely matches the learning dynamics of A3C without
using a target network or $\epsilon$-greedy exploration schedule. | http://arxiv.org/pdf/1704.06440 | John Schulman, Xi Chen, Pieter Abbeel | cs.LG | null | null | cs.LG | 20170421 | 20181014 |
# Equivalence Between Policy Gradients and Soft Q-Learning
John Schulman1, Xi Chen1,2, and Pieter Abbeel1,2
# 1OpenAI
2UC Berkeley, EECS Dept.
{joschu, peter, pieter}@openai.com

# Abstract
Two of the leading approaches for model-free reinforcement learning are policy gradient methods and Q-learning methods. Q-learning methods can be effective and sample-efficient when they work, however, it is not well-understood why they work, since empirically, the Q-values they estimate are very inaccurate. A partial explanation may be that Q-learning methods are secretly implementing policy gradient updates: we show that there is a precise equivalence between Q-learning and policy gradient methods in the setting of entropy-regularized reinforcement learning, that "soft" (entropy-regularized) Q-learning is exactly equivalent to a policy gradient method. We also point out a connection between Q-learning methods and natural policy gradient methods.
Experimentally, we explore the entropy-regularized versions of Q-learning and policy gradients, and we find them to perform as well as (or slightly better than) the standard variants on the Atari benchmark. We also show that the equivalence holds in practical settings by constructing a Q-learning method that closely matches the learning dynamics of A3C without using a target network or ε-greedy exploration schedule.
# 1 Introduction
Policy gradient methods (PG) and Q-learning (QL) methods perform updates that are qualitatively similar. In both cases, if the return following an action a_t is high, then that action is reinforced: in policy gradient methods, the probability π(a_t | s_t) is increased; whereas in Q-learning methods, the Q-value Q(s_t, a_t) is increased. The connection becomes closer when we add entropy regularization to these algorithms. With an entropy cost added to the returns, the optimal policy has the form π(a | s) ∝ exp(Q(s, a)); hence policy gradient methods solve for the optimal Q-function, up to an additive constant (Ziebart [2010]). O'Donoghue et al. [2016] also discuss the connection between the fixed points and updates of PG and QL methods, though the discussion of fixed points is restricted to the tabular setting, and the discussion comparing updates is informal and shows an approximate equivalence. Going beyond past work, this paper shows that under appropriate conditions, the gradient of the loss function used in n-step Q-learning is equal to the gradient of the loss used in an n-step policy gradient method, including a squared-error term on the value function. Altogether, the update matches what is typically done in 'actor-critic' policy gradient methods such as A3C, which explains why Mnih et al. [2016] obtained qualitatively similar results from policy gradients and n-step Q-learning.
Section 2 uses the bandit setting to provide the reader with a simplified version of our main calculation. (The main calculation applies to the MDP setting.) Section 3 discusses the entropy-regularized formulation of RL, which is not original to this work, but is included for the reader's convenience. Section 4 shows that the soft Q-learning loss gradient can be interpreted as a policy gradient term plus a baseline-error-gradient term, corresponding to policy gradient instantiations such as A3C [Mnih et al., 2016]. Section 5 draws a connection between QL methods that use batch updates or replay buffers, and natural policy gradient methods.
Some previous work on entropy regularized reinforcement learning (e.g., O'Donoghue et al. [2016], Nachum et al. [2017]) uses entropy bonuses, whereas we use a penalty on Kullback-Leibler (KL) divergence, which is a bit more general. However, in the text, we often refer to 'entropy' terms; this refers to 'relative entropy', i.e., the KL divergence.
# 2 Bandit Setting
Let's consider a bandit problem with a discrete or continuous action space: at each timestep the agent chooses an action a, and the reward r is sampled according to P(r | a), where P is unknown to the agent. Let
\bar{r}(a) = E[r | a], and let π denote a policy, where π(a) is the probability of action a. Then the expected per-timestep reward of the policy π is E_{a∼π}[r] = \sum_a π(a)\bar{r}(a) or \int da\, π(a)\bar{r}(a). Let's suppose we are maximizing η(π), an entropy-regularized version of this objective:
η(π) = E_{a∼π, r}[r] - τ D_{KL}[π \,\|\, \bar{π}]    (1)
where \bar{π} is some 'reference' policy, τ is a 'temperature' parameter, and D_{KL} is the Kullback-Leibler divergence. Note that the temperature τ can be eliminated by rescaling the rewards. However, we will leave it so that our calculations are checkable through dimensional analysis, and to make the temperature-dependence more explicit.
First, let us calculate the policy π that maximizes η. We claim that η(π) is maximized by π^B, defined as
π^B(a) = \bar{π}(a)\exp(\bar{r}(a)/τ) \,/\, E_{a'∼\bar{π}}[\exp(\bar{r}(a')/τ)],    (2)

where the expectation in the denominator is the normalizing constant.
To derive this, consider the KL divergence between π and π^B:

D_{KL}[π \,\|\, π^B] = E_{a∼π}[\log π(a) - \log π^B(a)]    (3)
= E_{a∼π}[\log π(a) - \log \bar{π}(a) - \bar{r}(a)/τ + \log E_{a'∼\bar{π}}[\exp(\bar{r}(a')/τ)]]    (4)
= D_{KL}[π \,\|\, \bar{π}] - E_{a∼π}[\bar{r}(a)/τ] + \log E_{a'∼\bar{π}}[\exp(\bar{r}(a')/τ)]    (5)
Rearranging and multiplying by τ,
E_{a∼π}[\bar{r}(a)] - τ D_{KL}[π \,\|\, \bar{π}] = τ \log E_{a∼\bar{π}}[\exp(\bar{r}(a)/τ)] - τ D_{KL}[π \,\|\, π^B]    (6)
Clearly the left-hand side is maximized (with respect to π) when the KL term on the right-hand side is minimized (as the other term does not depend on π), and D_{KL}[π \,\|\, π^B] is minimized at π = π^B.
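The claim is easy to check numerically. In the sketch below (NumPy; the mean rewards, temperature, and number of arms are made-up values), the Boltzmann policy of Equation (2) attains a higher entropy-regularized objective η than randomly drawn alternative policies.

```python
# Numeric illustration of Equation (2): pi_B maximizes the regularized objective.
import numpy as np

rng = np.random.default_rng(0)
n, tau = 5, 0.5
r_bar = rng.normal(size=n)              # mean reward of each arm (hypothetical)
pi_ref = np.full(n, 1.0 / n)            # reference policy: uniform

def eta(pi):
    kl = np.sum(pi * np.log(pi / pi_ref))
    return np.sum(pi * r_bar) - tau * kl

pi_b = pi_ref * np.exp(r_bar / tau)
pi_b /= pi_b.sum()                      # Equation (2)

for _ in range(5):
    pi = rng.dirichlet(np.ones(n))      # a random alternative policy
    assert eta(pi_b) >= eta(pi)
print(eta(pi_b))
```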
The preceding calculation gives us the optimal policy when ¯r is known, but in the entropy-regularized bandit problem, it is initially unknown, and the agent learns about it by sampling. There are two approaches for solving the entropy-regularized bandit problem:
1. A direct, policy-based approach, where we incrementally update the agent's policy π based on stochastic gradient ascent on η.

2. An indirect, value-based approach, where we learn an action-value function q_θ that estimates and approximates \bar{r}, and we define π based on our current estimate of q_θ.
For the policy-based approach, we can obtain unbiased estimates of the gradient of η. For a parameterized policy π_θ, the gradient is given by
θη(Ïθ) = Eaâ¼Ïθ,r [ θ log Ïθ(a)r Ï Î¸DKL [Ïθ Ï]] . (7)
â
â
â
â
We can obtain an unbiased gradient estimate using a single sample (a, r).
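To make the single-sample estimator concrete, here is a minimal sketch for a tabular softmax policy; the three-armed reward distribution, temperature, and learning rate are illustrative assumptions, not details from the paper.

```python
import numpy as np

def soft_bandit_pg_step(logits, pi_bar, tau, sample_reward, lr=0.1, rng=None):
    """One stochastic-gradient ascent step on eta(pi_theta), as in Equation (7)."""
    if rng is None:
        rng = np.random.default_rng()
    pi = np.exp(logits - logits.max())
    pi /= pi.sum()

    a = rng.choice(len(pi), p=pi)          # sample an action from pi_theta
    r = sample_reward(a)                   # observe a single reward

    # grad_theta log pi(a) for a softmax policy: one-hot(a) - pi
    grad_logpi = -pi.copy()
    grad_logpi[a] += 1.0

    # Exact gradient of D_KL[pi_theta || pi_bar] w.r.t. softmax logits:
    # component j is pi_j * ((log pi_j - log pi_bar_j) - KL).
    kl_terms = np.log(pi) - np.log(pi_bar)
    kl = np.dot(pi, kl_terms)
    grad_kl = pi * (kl_terms - kl)

    grad_eta = grad_logpi * r - tau * grad_kl   # single-sample estimate of Equation (7)
    return logits + lr * grad_eta

# illustrative usage with a made-up 3-armed bandit
true_means = np.array([0.1, 0.5, 0.3])
reward = lambda a: np.random.normal(true_means[a], 0.1)
logits = np.zeros(3)
for _ in range(2000):
    logits = soft_bandit_pg_step(logits, pi_bar=np.ones(3) / 3, tau=0.05, sample_reward=reward)
```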
In the indirect, value-based approach, it is natural to use a squared-error loss:
L_π(θ) = ½ E_{a∼π,r}[(q_θ(a) − r)²]    (8)
Taking the gradient of this loss, with respect to the parameters of qθ, we get
∇_θ L_π(θ) = E_{a∼π,r}[∇_θ q_θ(a)(q_θ(a) − r)]    (9)

Soon, we will calculate the relationship between this loss gradient and the policy gradient from Equation (7). In the indirect, value-based approach, a natural choice for policy π is the one that would be optimal if q_θ = ¯r. Let's denote this policy, called the Boltzmann policy, by π^B_{q_θ}, where
π^B_{q_θ}(a) = π̄(a) exp(q_θ(a)/τ) / E_{a'∼π̄}[exp(q_θ(a')/τ)],    (10)

where the denominator is the normalizing constant.
It will be convenient to introduce a bit of notation for the normalizing factor; namely, we deï¬ne the scalar
v_θ = τ log E_{a∼π̄}[exp(q_θ(a)/τ)]    (11)
Then the Boltzmann policy can be written as
π^B_{q_θ}(a) = π̄(a) exp((q_θ(a) − v_θ)/τ).    (12)
Note that the term τ log E_{a∼π̄}[exp(¯r(a)/τ)] appeared earlier in Equation (6). Repeating the calculation from Equation (2) through Equation (6), but with q_θ instead of ¯r,

v_θ = E_{a∼π^B_{q_θ}}[q_θ(a)] − τ D_KL[π^B_{q_θ} ∥ π̄].    (13)

Hence, v_θ is an estimate of η(π^B_{q_θ}), obtained by plugging in q_θ for ¯r.
Now we shall show the connection between the gradient of the squared-error loss (Equation (9)) and the policy gradient (Equation (7)). Rearranging Equation (12), we can write q_θ in terms of v_θ and the Boltzmann policy π^B_{q_θ}:

q_θ(a) = v_θ + τ log(π^B_{q_θ}(a)/π̄(a))    (14)
Let's substitute this expression for q_θ into the squared-error loss gradient (Equation (9)):

∇_θ L_π(q_θ) = E_{a∼π,r}[∇_θ q_θ(a)(q_θ(a) − r)]    (15)
= E_{a∼π,r}[∇_θ(v_θ + τ log(π^B_{q_θ}(a)/π̄(a)))(v_θ + τ log(π^B_{q_θ}(a)/π̄(a)) − r)]    (16)
= E_{a∼π,r}[τ ∇_θ log π^B_{q_θ}(a)(v_θ + τ log(π^B_{q_θ}(a)/π̄(a)) − r) + ∇_θ v_θ (v_θ + τ log(π^B_{q_θ}(a)/π̄(a)) − r)]    (17)
Note that we have not yet decided on a sampling distribution π; henceforth, we'll assume actions were sampled by π = π^B_{q_θ}. The gradient of the KL-divergence term can be written as

∇_θ D_KL[π^B_{q_θ} ∥ π̄] = ∇_θ ∫ da π^B_{q_θ}(a) log(π^B_{q_θ}(a)/π̄(a))    (18)
= ∫ da ∇_θ π^B_{q_θ}(a) log(π^B_{q_θ}(a)/π̄(a))    (19)
    moving the gradient inside and using the identity ∫ da ∇_θ π^B_{q_θ}(a) = 0
= ∫ da π^B_{q_θ}(a) ∇_θ log π^B_{q_θ}(a) log(π^B_{q_θ}(a)/π̄(a))    (20)
= E_{a∼π^B_{q_θ}}[∇_θ log π^B_{q_θ}(a) log(π^B_{q_θ}(a)/π̄(a))]    (21)
Continuing from Equation (17) but setting π = π^B_{q_θ},

∇_θ L(q_θ)|_{π=π^B_{q_θ}} = E_{a∼π^B_{q_θ},r}[τ ∇_θ log π^B_{q_θ}(a)(v_θ − r)] + τ² ∇_θ D_KL[π^B_{q_θ} ∥ π̄]|_{π=π^B_{q_θ}} + ∇_θ v_θ · E_{a∼π^B_{q_θ},r}[v_θ + τ D_KL[π^B_{q_θ} ∥ π̄] − r]    (22)
= −τ ∇_θ E_{a∼π^B_{q_θ},r}[r − τ D_KL[π^B_{q_θ} ∥ π̄]]  (policy gradient)  + ∇_θ E_r[½(v_θ − (r − τ D_KL[π^B_{q_θ} ∥ π̄]))²]|_{π=π^B_{q_θ}}  (value error gradient)    (23)

Hence, the gradient of the squared error for our action-value function can be broken into two parts: the first part is the policy gradient of the Boltzmann policy corresponding to q_θ, and the second part arises from a value error objective, where we are fitting v_θ to the entropy-augmented expected reward ¯r(a) − τ D_KL[π^B ∥ π̄]. Soon we will derive an equivalent interpretation of Q-function regression in the MDP setting, where we are approximating the Q-function of the entropy-regularized problem. However, we first need to introduce an entropy-regularized version of the reinforcement learning problem.
# 3 Entropy-Regularized Reinforcement Learning
We shall consider an entropy-regularized version of the reinforcement learning problem, following various prior work (Ziebart [2010], Fox et al. [2015], Haarnoja et al. [2017], Nachum et al. [2017]). Specifically, let us define the entropy-augmented return to be Σ_t γ^t(r_t − τ KL_t), where r_t is the reward, γ ∈ [0, 1] is the discount factor, τ is a scalar temperature coefficient, and KL_t is the Kullback-Leibler divergence between the current policy π and a reference policy π̄ at timestep t: KL_t = D_KL[π(· | s_t) ∥ π̄(· | s_t)]. We will sometimes use the notation KL(s) = D_KL[π ∥ π̄](s) = D_KL[π(· | s) ∥ π̄(· | s)]. To emulate the effect of a standard entropy bonus (up to a constant), one can define π̄ to be the uniform distribution. The subsequent sections will generalize some of the concepts from reinforcement learning to the setting where we are maximizing the entropy-augmented discounted return.
# 3.1 Value Functions
We are obliged to alter our deï¬nitions of value functions to include the new KL penalty terms. We shall deï¬ne the state-value function as the expected return:
V_π(s) = E[Σ_{t=0}^∞ γ^t(r_t − τ KL_t) | s_0 = s]    (24)
and we shall deï¬ne the Q-function as
Q_π(s, a) = E[r_0 + Σ_{t=1}^∞ γ^t(r_t − τ KL_t) | s_0 = s, a_0 = a]    (25)
Note that this Q-function does not include the ï¬rst KL penalty term, which does not depend on the action a0. This deï¬nition makes some later expressions simpler, and it leads to the following relationship between QÏ and VÏ:
V_π(s) = E_{a∼π}[Q_π(s, a)] − τ KL(s),    (26)
which follows from matching terms in the sums in Equations (24) and (25).
# 3.2 Boltzmann Policy
In standard reinforcement learning, the "greedy policy" for Q is defined as [GQ](s) = arg max_a Q(s, a). With entropy regularization, we need to alter our notion of a greedy policy, as the optimal policy is stochastic. Since Q_π omits the first entropy term, it is natural to define the following stochastic policy, which is called the Boltzmann policy, and is analogous to the greedy policy:
π^B_Q(· | s) = arg max_π {E_{a∼π}[Q(s, a)] − τ D_KL[π ∥ π̄](s)}    (27)
π^B_Q(a | s) = π̄(a | s) exp(Q(s, a)/τ) / E_{a'∼π̄}[exp(Q(s, a')/τ)],    (28)

where the denominator is the normalizing constant, and
where the second equation is analogous to Equation (2) from the bandit setting.
Also analogously to the bandit setting, it is natural to deï¬ne VQ (a function of Q) as
V_Q(s) = τ log E_{a'∼π̄}[exp(Q(s, a')/τ)]    (29)
so that

π^B_Q(a | s) = π̄(a | s) exp((Q(s, a) − V_Q(s))/τ)    (30)
Under this deï¬nition, it also holds that
V_Q(s) = E_{a∼π^B_Q(· | s)}[Q(s, a)] − τ D_KL[π^B_Q ∥ π̄](s)    (31)
in analogy with Equation (13). Hence, VQ(s) can be interpreted as an estimate of the expected entropy- augmented return, under the Boltzmann policy ÏB Q.
Another way to interpret the Boltzmann policy is as the exponentiated advantage function. Defining the advantage function as A_Q(s, a) = Q(s, a) − V_Q(s), Equation (30) implies that π^B_Q(a | s) / π̄(a | s) = exp(A_Q(s, a)/τ).
# 3.3 Fixed-Policy Backup Operators
The T_π operators (for Q and V) in standard reinforcement learning correspond to computing the expected return with a one-step lookahead: they take the expectation over one step of dynamics, and then fall back on the value function at the next timestep. We can easily generalize these operators to the entropy-regularized setting. We define
[T_π V](s) = E_{a∼π, (r,s')∼P(r,s' | s,a)}[r − τ KL(s) + γ V(s')]    (32)
[T_π Q](s, a) = E_{(r,s')∼P(r,s' | s,a)}[r + γ(E_{a'∼π}[Q(s', a')] − τ KL(s'))].    (33)

Repeatedly applying the T_π operator (T_π^n V = T_π(T_π(... T_π(V))), n times) corresponds to computing the expected return with a multi-step lookahead. That is, repeatedly expanding the definition of T_π, we obtain
[T_π^n V](s) = E[Σ_{t=0}^{n−1} γ^t(r_t − τ KL_t) + γ^n V(s_n) | s_0 = s]    (34)

[T_π^n Q](s, a) − τ KL(s) = E[Σ_{t=0}^{n−1} γ^t(r_t − τ KL_t) + γ^n(Q(s_n, a_n) − τ KL_n) | s_0 = s, a_0 = a].    (35)
As a sanity check, note that in both equations, the left-hand side and right-hand side correspond to estimates of the total discounted return Σ_{t=0}^∞ γ^t(r_t − τ KL_t).
The right-hand side of these backup formulas can be rewritten using âBellman errorâ terms δt. To rewrite the state-value (V ) backup, deï¬ne
δ_t = (r_t − τ KL_t) + γ V(s_{t+1}) − V(s_t)    (36)
Then we have
[T_π^n V](s) = V(s) + E[Σ_{t=0}^{n−1} γ^t δ_t | s_0 = s].    (37)
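Equations (36) and (37) translate directly into code for a single sampled trajectory segment; the sketch below is illustrative, with the reward, KL, and value arrays standing in for whatever a real implementation would supply.

```python
import numpy as np

def n_step_backup(rewards, kls, values, v_last, gamma, tau):
    """Monte Carlo estimate of [T_pi^n V](s_0) from one trajectory segment.

    rewards, kls, values: arrays of r_t, KL_t, V(s_t) for t = 0..n-1
    v_last: V(s_n), the bootstrap value at the end of the segment
    """
    n = len(rewards)
    next_values = np.append(values[1:], v_last)
    # Bellman errors, Equation (36): delta_t = (r_t - tau*KL_t) + gamma*V(s_{t+1}) - V(s_t)
    deltas = (rewards - tau * kls) + gamma * next_values - values
    # Equation (37): [T_pi^n V](s_0) = V(s_0) + sum_t gamma^t * delta_t
    discounts = gamma ** np.arange(n)
    return values[0] + np.sum(discounts * deltas)
```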
# 3.4 Boltzmann Backups
We can define another set of backup operators corresponding to the Boltzmann policy, π(a | s) ∝ π̄(a | s) exp(Q(s, a)/τ). We define the following Boltzmann backup operator:

[TQ](s, a) = E_{(r,s')∼P(r,s' | s,a)}[r + γ(E_{a'∼π^B_Q}[Q(s', a')] − τ D_KL[π^B_Q ∥ π̄](s'))]    (*)    (38)
           = E_{(r,s')∼P(r,s' | s,a)}[r + γ τ log E_{a'∼π̄}[exp(Q(s', a')/τ)]]    (**)    (39)

where the simplification from (*) to (**) follows from the same calculation that we performed in the bandit setting (Equations (11) and (13)).
The n-step operator T_π^n for Q-functions also simplifies in the case that we are executing the Boltzmann policy. Starting with the equation for T_π^n Q (Equation (35)), setting π = π^B_Q, and then using Equation (31) to rewrite the expected Q-function terms in terms of V_Q, we obtain

[(T_{π^B_Q})^n Q](s, a) − τ KL(s) = E[Σ_{t=0}^{n−1} γ^t(r_t − τ KL_t) + γ^n(Q(s_n, a_n) − τ KL_n) | s_0 = s, a_0 = a]    (40)
= E[Σ_{t=0}^{n−1} γ^t(r_t − τ KL_t) + γ^n V_Q(s_n) | s_0 = s, a_0 = a].    (41)
From now on, let's denote this n-step backup operator by T_{π^B_Q, n}. (Note that T_{π^B_Q, n} Q ≠ T^n Q, even though T_{π^B_Q, 1} Q = T Q, because T_{π^B_Q} depends on Q.)
One can similarly deï¬ne the TD(λ) version of this backup operator
[T_{π^B_Q, λ} Q] = (1 − λ)(1 + λ T_{π^B_Q} + (λ T_{π^B_Q})² + ...) T_{π^B_Q} Q.    (42)
One can straightforwardly verify by comparing terms that it satisï¬es
[T_{π^B_Q, λ} Q](s, a) = Q(s, a) + E[Σ_{t=0}^∞ (γλ)^t δ_t | s_0 = s, a_0 = a], where δ_t = (r_t − τ KL_t) + γ V_Q(s_{t+1}) − V_Q(s_t).    (43)
# 3.5 Soft Q-Learning
The Boltzmann backup operators defined in the preceding section can be used to define practical variants of Q-learning that can be used with nonlinear function approximation. These methods, which optimize the entropy-augmented return, will be called soft Q-learning. Following Mnih et al. [2015], modern implementations of Q-learning, and n-step Q-learning (see Mnih et al. [2016]) update the Q-function incrementally to compute the backup against a fixed target Q-function, which we'll call Q̄. In the interval between each target network update, the algorithm is approximately performing the backup operation Q ← T_{π^B_Q̄} Q̄ (1-step) or Q ← T_{π^B_Q̄, n} Q̄ (n-step). To perform this approximate minimization, the algorithms minimize the least squares loss
L(Q) = E_{t,s_t,a_t}[½(Q(s_t, a_t) − y_t)²], where    (44)

y_t = r_t + γ V_Q̄(s_{t+1})    (1-step Q-learning)    (45)

y_t = τ KL_t + Σ_{d=0}^{n−1} γ^d(r_{t+d} − τ KL_{t+d}) + γ^n V_Q̄(s_{t+n})    (n-step Q-learning)    (46)
    = τ KL_t + V_Q̄(s_t) + Σ_{d=0}^{n−1} γ^d δ_{t+d},  where δ_t = (r_t − τ KL_t) + γ V_Q̄(s_{t+1}) − V_Q̄(s_t)    (47)
In one-step Q-learning (Equation (45)), y_t is an unbiased estimator of [T_{π^B_Q̄} Q̄](s_t, a_t), regardless of what behavior policy was used to collect the data. In n-step Q-learning (Equation (46)), for n > 1, y_t is only an unbiased estimator of [T_{π^B_Q̄, n} Q̄](s_t, a_t) if the intermediate actions were sampled from π^B_Q̄.
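For concreteness, here is a minimal sketch of the 1-step and n-step targets from Equations (45) and (46), written against a row of target Q-values per state; the uniform reference policy and the array arguments are assumptions for illustration.

```python
import numpy as np
from scipy.special import logsumexp

def v_soft(q_row, tau, pi_bar):
    """V_Q(s) = tau * log E_{a~pi_bar}[exp(Q(s,a)/tau)]."""
    return tau * logsumexp(q_row / tau, b=pi_bar)

def one_step_target(r, q_target_next, tau, pi_bar, gamma):
    """y_t = r_t + gamma * V_Qbar(s_{t+1})   (Equation (45))."""
    return r + gamma * v_soft(q_target_next, tau, pi_bar)

def n_step_target(rewards, kls, q_target_last, tau, pi_bar, gamma):
    """y_t = tau*KL_t + sum_d gamma^d (r_{t+d} - tau*KL_{t+d}) + gamma^n V_Qbar(s_{t+n})."""
    n = len(rewards)
    discounts = gamma ** np.arange(n)
    return (tau * kls[0]
            + np.sum(discounts * (rewards - tau * kls))
            + gamma ** n * v_soft(q_target_last, tau, pi_bar))
```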
# 3.6 Policy Gradients
Entropy regularization is often used in policy gradient algorithms, with gradient estimators of the form
E_{t,s_t,a_t}[∇_θ log π_θ(a_t | s_t) Σ_{t'≥t} r_{t'} − τ ∇_θ D_KL[π_θ ∥ π̄](s_t)]    (48)
(Williams [1992], Mnih et al. [2016]).
However, these are not proper estimators of the entropy-augmented return Σ_t γ^t(r_t − τ KL_t), since they don't account for how actions affect entropy at future timesteps. Intuitively, one can think of the KL terms as a cost for "mental effort". Equation (48) only accounts for the instantaneous effect of actions on mental effort, not delayed effects.
To compute proper gradient estimators, we need to include the entropy terms in the return. We will define the discounted policy gradient in the following two equivalent ways: first, in terms of the empirical return; second, in terms of the value functions V_π and Q_π:

g_γ(π_θ) = E[Σ_{t=0}^∞ ∇_θ log π_θ(a_t | s_t)(r_t + Σ_{d=1}^∞ γ^d(r_{t+d} − τ KL_{t+d})) − τ ∇_θ D_KL[π_θ ∥ π̄](s_t)]    (49)
         = E[Σ_{t=0}^∞ ∇_θ log π_θ(a_t | s_t)(Q_π(s_t, a_t) − V_π(s_t)) − τ ∇_θ D_KL[π_θ ∥ π̄](s_t)]    (50)
In the special case of a finite-horizon problem, i.e., r_t = KL_t = 0 for all t > T, the undiscounted (γ = 1) return is finite, and it is meaningful to compute its gradient. In this case, g_1(π_θ) equals the undiscounted policy gradient:

g_1(π_θ) = ∇_θ E[Σ_{t=0}^T (r_t − τ KL_t)]    (51)
This result is obtained directly by considering the stochastic computation graph for the loss (Schulman et al. [2015a]), shown in the figure on the right. The edges from θ to the KL loss terms lead to the −τ ∇_θ D_KL[π_θ ∥ π̄](s_t) terms in the gradient; the edges to the stochastic actions a_t lead to the return terms (r_{t+d} − τ KL_{t+d}) in the gradient.
Since g_1(π_θ) computes the gradient of the entropy-regularized return, one interpretation of g_γ(π_θ) is that it is an approximation of the undiscounted policy gradient g_1(π_θ), but that it allows for lower-variance gradient estimators by ignoring some long-term dependencies. A different interpretation of g_γ(π) is that it gives a gradient flow whose fixed point π* satisfies π* = π^B_{Q_{π*}}.
As in the standard MDP setting, one can deï¬ne approximations to gγ that use a value function to truncate the returns for variance reduction. These approximations can take the form of n-step methods (Mnih et al. [2016]) or TD(λ)-like methods (Schulman et al. [2015b]), though we will focus on n-step returns here. Based on the deï¬nition of gγ above, the natural choice of variance-reduced estimator is
E_{t,s_t,a_t}[∇_θ log π_θ(a_t | s_t) Σ_{d=0}^{n−1} γ^d δ_{t+d}]    (52)
where δt was deï¬ned in Equation (36).
The state-value function V we use in the above formulas should approximate the entropy-augmented return Σ_t γ^t(r_t − τ KL_t). We can fit V iteratively by approximating the n-step backup V ← T_π^n V, by minimizing a squared-error loss
L(V) = E_{t,s_t}[(V(s_t) − y_t)²],    (53)

where y_t = Σ_{d=0}^{n−1} γ^d(r_{t+d} − τ KL_{t+d}) + γ^n V(s_{t+n}) = V(s_t) + Σ_{d=0}^{n−1} γ^d δ_{t+d}.    (54)
# 4 Soft Q-learning Gradient Equals Policy Gradient
This section shows that the gradient of the squared-error loss from soft Q-learning (Section 3.5) equals the policy gradient (in the family of policy gradients described in Section 3.6) plus the gradient of a squared-error term for ï¬tting the value function. We will not make any assumption about the parameterization of the Q-function, but we deï¬ne Vθ and Ïθ as the following functions of the parameterized Q-function Qθ:
V_θ(s) := τ log E_{a∼π̄}[exp(Q_θ(s, a)/τ)]    (55)
π_θ(a | s) := π̄(a | s) exp((Q_θ(s, a) − V_θ(s))/τ)    (56)

Here, π_θ is the Boltzmann policy for Q_θ, and V_θ is the normalizing factor we described above. From these definitions, it follows that the Q-function can be written as

Q_θ(s, a) = V_θ(s) + τ log(π_θ(a | s)/π̄(a | s))    (57)
We will substitute this expression into the squared-error loss function. First, for convenience, let us define Δ_t = Σ_{d=0}^{n−1} γ^d δ_{t+d}.
Now, let's consider the gradient of the n-step soft Q-learning objective:

∇_θ E_{t,s_t,a_t∼π}[½‖Q_θ(s_t, a_t) − y_t‖²]|_{π=π_θ}    (58)

swap gradient and expectation, treating the state-action distribution as fixed:

= E_{t,s_t,a_t∼π}[∇_θ Q_θ(s_t, a_t)(Q_θ(s_t, a_t) − y_t)]|_{π=π_θ}    (59)

replace Q_θ using Equation (57), and replace the Q-value backup y_t by Equation (46):

= E_{t,s_t,a_t∼π}[∇_θ Q_θ(s_t, a_t)(τ log(π_θ(a_t | s_t)/π̄(a_t | s_t)) + V_θ(s_t) − (V_θ(s_t) + τ D_KL[π_θ ∥ π̄](s_t) + Δ_t))]|_{π=π_θ}    (60)

cancel out V_θ(s_t):

= E_{t,s_t,a_t∼π}[∇_θ Q_θ(s_t, a_t)(τ log(π_θ(a_t | s_t)/π̄(a_t | s_t)) − τ D_KL[π_θ ∥ π̄](s_t) − Δ_t)]|_{π=π_θ}    (61)

replace the other Q_θ by Equation (57):

= E_{t,s_t,a_t∼π}[(τ ∇_θ log π_θ(a_t | s_t) + ∇_θ V_θ(s_t)) · (τ log(π_θ(a_t | s_t)/π̄(a_t | s_t)) − τ D_KL[π_θ ∥ π̄](s_t) − Δ_t)]|_{π=π_θ}    (62)

expand out terms:

= E_{t,s_t,a_t∼π}[τ² ∇_θ log π_θ(a_t | s_t) log(π_θ(a_t | s_t)/π̄(a_t | s_t)) − τ² ∇_θ log π_θ(a_t | s_t) D_KL[π_θ ∥ π̄](s_t)  (*)
  − τ ∇_θ log π_θ(a_t | s_t) Δ_t + τ ∇_θ V_θ(s_t) log(π_θ(a_t | s_t)/π̄(a_t | s_t)) − τ ∇_θ V_θ(s_t) D_KL[π_θ ∥ π̄](s_t)  (**)
  − ∇_θ V_θ(s_t) Δ_t]|_{π=π_θ}    (63)

(*) vanishes because E_{a_t∼π_θ(·|s_t)}[∇_θ log π_θ(a_t | s_t) · const] = 0; the pair of terms marked (**) vanishes because E_{a_t∼π_θ(·|s_t)}[log(π_θ(a_t | s_t)/π̄(a_t | s_t))] = D_KL[π_θ ∥ π̄](s_t); and the first term becomes τ² ∇_θ D_KL[π_θ ∥ π̄](s_t) by Equation (21):

= E_{t,s_t,a_t∼π}[τ² ∇_θ D_KL[π_θ ∥ π̄](s_t) + 0 − τ ∇_θ log π_θ(a_t | s_t) Δ_t + 0 − ∇_θ V_θ(s_t) Δ_t]|_{π=π_θ}    (64)

rearrange terms:

= E_{t,s_t,a_t∼π}[−τ ∇_θ log π_θ(a_t | s_t) Δ_t + τ² ∇_θ D_KL[π_θ ∥ π̄](s_t)  (policy grad)  + ∇_θ ½‖V_θ(s_t) − V̂_t‖²  (value function grad)]|_{π=π_θ}    (65)

where V̂_t = V_θ(s_t) + Δ_t is treated as a fixed target.
Note that the equivalent policy gradient method multiplies the policy gradient by a factor of τ, relative to the value function error. Effectively, the value function error has a coefficient of τ⁻¹, which is larger than what is typically used in practice (Mnih et al. [2016]). We will analyze this choice of coefficient in the experiments.
# 5 Soft Q-learning and Natural Policy Gradients
The previous section gave a ï¬rst-order view on the equivalence between policy gradients and soft Q-learning; this section gives a second-order, coordinate-free view. As previous work has pointed out, the natural gradient is the solution to a regression problem; here we will explore the relation between that problem and the nonlinear regression in soft Q-learning.
The natural gradient is defined as F⁻¹g, where F is the average Fisher information matrix, F = E_{s,a∼π}[(∇_θ log π_θ(a | s))ᵀ(∇_θ log π_θ(a | s))], and g is the policy gradient estimate g ∝ E[∇_θ log π_θ(a | s) Â], where Â is an estimate of the advantage function. As pointed out by Kakade [2002], the natural gradient step can be computed as the solution to a least squares problem. Given timesteps t = 1, 2, ..., T, define ψ_t = ∇_θ log π_θ(a_t | s_t). Define Ψ as the matrix whose t-th row is ψ_t, let Â denote the vector whose t-th element is the advantage estimate Â_t, and let ε denote a scalar stepsize parameter. Consider the least squares problem
min_w ½‖Ψ w − ε Â‖²    (66)
The least-squares solution is w = ε(ΨᵀΨ)⁻¹ΨᵀÂ. Note that E[ψᵀψ] is the Fisher information matrix F, and E[ψᵀÂ] is the policy gradient g, so w is the estimated natural gradient.
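A small sketch of this least-squares view: stacking the per-timestep score vectors ψ_t into Ψ and regressing onto εÂ recovers ε F⁻¹g up to sampling error. All inputs below are synthetic placeholders, used only to check the algebra.

```python
import numpy as np

def natural_gradient_lstsq(psi, advantages, epsilon):
    """Solve min_w ||Psi w - eps*A||^2   (Equation (66)).

    psi:        (T, d) matrix whose t-th row is grad_theta log pi(a_t|s_t)
    advantages: (T,) advantage estimates A_t
    """
    w, *_ = np.linalg.lstsq(psi, epsilon * advantages, rcond=None)
    return w  # approximately epsilon * F^{-1} g

# illustrative check on synthetic data
rng = np.random.default_rng(0)
psi = rng.normal(size=(500, 4))
adv = rng.normal(size=500)
w = natural_gradient_lstsq(psi, adv, epsilon=0.1)
F = psi.T @ psi / len(psi)                 # empirical Fisher matrix
g = psi.T @ adv / len(psi)                 # empirical policy gradient
assert np.allclose(w, 0.1 * np.linalg.solve(F, g), atol=1e-6)
```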
Now let us interpret the least squares problem in Equation (66). Ψw is the vector whose t-th element is ∇_θ log π_θ(a | s) · w. According to the definition of the gradient, if we perform a parameter update with θ − θ_old = εw, the change in log π_θ(a | s) is as follows, to first order in ε:

log π_θ(a | s) − log π_{θ_old}(a | s) ≈ ∇_θ log π_θ(a | s) · εw = ε ψ · w    (67)
Thus, we can interpret the least squares problem (Equation (66)) as solving
min_θ Σ_{t=1}^T ½(log π_θ(a_t | s_t) − log π_{θ_old}(a_t | s_t) − ε Â_t)²    (68)
That is, we are adjusting each log-probability log π_{θ_old}(a_t | s_t) by the advantage function Â_t, scaled by ε. In entropy-regularized reinforcement learning, we have an additional term for the gradient of the KL-divergence:
g = E[∇_θ log π_θ(a_t | s_t) Â_t − τ ∇_θ KL[π_θ, π̄](s_t)]    (69)
  = E[∇_θ log π_θ(a_t | s_t)(Â_t − τ[log(π_θ(a_t | s_t)/π̄(a_t | s_t)) − KL[π_θ, π̄](s_t)])]    (70)

where the second line used the formula for the KL-divergence gradient (Equation (21)) and the identity that E_{a_t∼π_θ}[∇_θ log π_θ(a_t | s_t)] = 0. Hence, the corresponding least squares problem (to compute F⁻¹g) is
min_θ Σ_{t=1}^T ½(log π_θ(a_t | s_t) − log π_{θ_old}(a_t | s_t) − ε(Â_t − τ[log(π_{θ_old}(a_t | s_t)/π̄(a_t | s_t)) − KL[π_{θ_old}, π̄](s_t)]))².    (71)
Now let's consider Q-learning. Let's assume that the value function is unchanged by optimization, so V_θ = V_{θ_old}. (Otherwise, the equivalence will not hold, since the value function will try to explain the measured advantage Â_t, shrinking the advantage update.)
½(Q_θ(s_t, a_t) − y_t)² = ½((V_θ(s_t) + τ log(π_θ(a_t | s_t)/π̄(a_t | s_t))) − (V_{θ_old}(s_t) + τ KL[π_{θ_old}, π̄](s_t) + Â_t))²    (72)
                        = ½(τ log(π_θ(a_t | s_t)/π̄(a_t | s_t)) − (Â_t + τ KL[π_{θ_old}, π̄](s_t)))²    (73)
Evidently, we are regressing log π_θ(a_t | s_t) towards log π_{θ_old}(a_t | s_t) + Â_t/τ + KL[π_{θ_old}, π̄](s_t). This loss is not equivalent to the natural policy gradient loss that we obtained above.
We can recover the natural policy gradient by instead solving a damped version of the Q-function regression problem. Define Q̃_t = (1 − ε)Q_{θ_old}(s_t, a_t) + εQ_t, i.e., we are interpolating between the old value and the backed-up value.

Q̃_t = (1 − ε)Q_{θ_old}(s_t, a_t) + εQ_t = Q_{θ_old}(s_t, a_t) + ε(Q_t − Q_{θ_old}(s_t, a_t))    (74)
Q_t − Q_{θ_old}(s_t, a_t) = (V_{θ_old}(s_t) + τ KL[π_{θ_old}, π̄](s_t) + Â_t) − (V_{θ_old}(s_t) + τ log(π_{θ_old}(a_t | s_t)/π̄(a_t | s_t)))    (75)
                          = Â_t + τ[KL[π_{θ_old}, π̄](s_t) − log(π_{θ_old}(a_t | s_t)/π̄(a_t | s_t))]    (76)
Q_θ(s_t, a_t) − Q̃_t = Q_θ(s_t, a_t) − (Q_{θ_old}(s_t, a_t) + ε(Q_t − Q_{θ_old}(s_t, a_t)))    (77)
  = V_θ(s_t) + τ log(π_θ(a_t | s_t)/π̄(a_t | s_t)) − {V_{θ_old}(s_t) + τ log(π_{θ_old}(a_t | s_t)/π̄(a_t | s_t)) + ε(Â_t + τ[KL[π_{θ_old}, π̄](s_t) − log(π_{θ_old}(a_t | s_t)/π̄(a_t | s_t))])}
  = log π_θ(a_t | s_t) − log π_{θ_old}(a_t | s_t) − ε(Â_t − τ[log(π_{θ_old}(a_t | s_t)/π̄(a_t | s_t)) − KL[π_{θ_old}, π̄](s_t)])    (78)
which exactly matches the expression in the least squares problem in Equation (71), corresponding to entropy-regularized natural policy gradient. Hence, the "damped" Q-learning update corresponds to a natural gradient step.
# 6 Experiments
To complement our theoretical analyses, we designed experiments to study the following questions:
1. Though one-step entropy bonuses are used in PG methods for neural network policies (Williams [1992], Mnih et al. [2016]), how do the entropy-regularized RL versions of policy gradients and Q-learning described in Section 3 perform on challenging RL benchmark problems? How does the âproperâ entropy-regularized policy gradient method (with entropy in the returns) compare to the naive one (with one-step entropy bonus)? (Section 6.1)
2. How do the entropy-regularized versions of Q-learning (with logsumexp) compare to the standard DQN of Mnih et al. [2015]? (Section 6.2)
3. The equivalence between PG and soft Q-learning is established in expectation, however, the actual gradient estimators are slightly diï¬erent due to sampling. Furthermore, soft Q-learning is equivalent to PG with a particular penalty coeï¬cient on the value function error. Does the equivalence hold under practical conditions? (Section 6.3)
# 6.1 A2C on Atari: Naive vs Proper Entropy Bonuses
Here we investigated whether there is an empirical eï¬ect of including entropy terms when computing returns, as described in Section 3. In this section, we compare the naive and proper policy gradient estimators:
naive / 1-step:  ∇_θ log π_θ(a_t | s_t)(Σ_{d=0}^{n−1} γ^d r_{t+d} − V(s_t)) − τ ∇_θ D_KL[π_θ ∥ π̄](s_t)    (79)

proper:  ∇_θ log π_θ(a_t | s_t)(Σ_{d=0}^{n−1} γ^d(r_{t+d} − τ D_KL[π_θ ∥ π̄](s_{t+d})) − V(s_t)) − τ ∇_θ D_KL[π_θ ∥ π̄](s_t)    (80)
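The only difference between the two returns is whether the per-step KL penalty is folded into them; a tiny sketch (with placeholder reward and KL arrays, and the baseline and KL-gradient terms omitted) is below.

```python
import numpy as np

def n_step_return(rewards, kls, tau, gamma, proper):
    """Return used in Equation (79) (proper=False) or Equation (80) (proper=True)."""
    per_step = rewards - tau * kls if proper else rewards
    discounts = gamma ** np.arange(len(rewards))
    return np.sum(discounts * per_step)
```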
In the experiments on Atari, we take Ï to be the uniform distribution, which gives a standard entropy bonus up to a constant.
We start with a well-tuned (synchronous, deterministic) version of A3C (Mnih et al. [2016]), henceforth called A2C (advantage actor critic), to optimize the entropy-regularized return. We use the parameter Ï = 0.01 and train for 320 million frames. We did not tune any hyperparameters for the âproperâ algorithmâ we used the same hyperparameters that had been tuned for the ânaiveâ algorithm.
As shown in Figure 1, the âproperâ version yields performance that is the same or possibly greater than the ânaiveâ version. Hence, besides being attractive theoretically, the entropy-regularized formulation could lead to practical performance gains.
[Figure 1 panels: SpaceInvaders, Breakout, BeamRider, Pong, Q-bert, and Seaquest, showing learning curves over 320M frames for A2C (1-step) and A2C (proper).]
Figure 1: Atari performance with diï¬erent RL objectives. EntRL is A2C modiï¬ed to optimize for return augmented with entropy (instead of KL penalty). Solid lines are average evaluation return over 3 random seeds and shaded area is one standard deviation.
# 6.2 DQN on Atari: Standard vs Soft
Here we investigated whether soft Q-learning (which optimizes the entropy-augmented return) performs diï¬erently from standard âhardâ Q-learning on Atari. We made a one-line change to a DQN implementation:
y_t = r_t + γ max_{a'} Q(s_{t+1}, a')    Standard    (81)

y_t = r_t + γ τ (log Σ_{a'} exp(Q(s_{t+1}, a')/τ) − log |A|)    "Soft": KL penalty    (82)

y_t = r_t + γ τ log Σ_{a'} exp(Q(s_{t+1}, a')/τ)    "Soft": Entropy bonus    (83)
The difference between the entropy bonus and KL penalty (against uniform) is simply a constant, however, this constant made a big difference in the experiments, since a positive constant added to the reward encourages longer episodes. Note that we use the same epsilon-greedy exploration in all conditions; the only difference is the backup equation used for computing y_t and defining the loss function.
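The "one-line change" can be written directly against a vector of next-state Q-values; which of the three targets from Equations (81) through (83) is used is the only difference. This is a sketch, not the authors' implementation.

```python
import numpy as np
from scipy.special import logsumexp

def dqn_target(r, q_next, gamma, tau, mode):
    """q_next: array of Q(s_{t+1}, a') over actions a'."""
    if mode == "standard":                      # Equation (81)
        return r + gamma * np.max(q_next)
    if mode == "soft_kl":                       # Equation (82), KL penalty vs. uniform
        return r + gamma * tau * (logsumexp(q_next / tau) - np.log(len(q_next)))
    if mode == "soft_entropy":                  # Equation (83), entropy bonus
        return r + gamma * tau * logsumexp(q_next / tau)
    raise ValueError(mode)
```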
The results of two runs on each game are shown in Figure 2. The entropy-bonus version with τ = 0.1 seems to perform a bit better than standard DQN, however, the KL-penalty version performs worse, so the benefit may be due to the effect of adding a small constant to the reward. We have also shown the results for 5-step Q-learning, where the algorithm is otherwise the same. The performance is better on Pong and Q-bert but worse on other games; this is the same pattern of performance found with n-step policy gradients. (E.g., see the A2C results in the preceding section.)
# 6.3 Entropy Regularized PG vs Online Q-Learning on Atari
Next we investigate if the equivalence between soft Q-learning and PG is relevant in practiceâwe showed above that the gradients are the same in expectation, but their variance might be diï¬erent, causing diï¬erent
[Figure 2 panels: BeamRider, Breakout, Enduro, Pong, Q-bert, and Seaquest (NoFrameskip-v3), comparing standard Q-learning (1-step and n=5) with soft Q-learning using entropy and KL backups at τ = 0.1 and τ = 0.01, over 40M frames.]
Figure 2: Diï¬erent variants of soft Q-learning and standard Q-learning, applied to Atari games. Note that 4 frames = 1 timestep.
learning dynamics. For these experiments, we modified the gradient update rule used in A2C while making no changes to any algorithmic component, i.e. parallel rollouts, updating parameters every 5 steps, etc. The Q-function was represented as Q_θ(s, a) = V_θ(s) + τ log π_θ(a | s), which can be seen as a form of dueling architecture with τ log π_θ(a | s) being the "advantage stream" (Wang et al. [2015]). V_θ, π_θ are parametrized by the same neural network as A2C, where convolutional layers and the first fully connected layer are shared, and π_θ(a | s) is given by a softmax output layer.
A2C can be seen as optimizing a combination of a policy surrogate loss and a value function loss, weighted by hyperparameter c:
L_policy = −log π_θ(a_t | s_t) Â_t + τ D_KL[π_θ ∥ π̄](s_t)    (84)
L_value = ½‖V_θ(s_t) − V̂_t‖²    (85)
L_A2C = L_policy + c L_value    (86)
In normal A2C, we have found c = 0.5 to be a robust setting that works across multiple environments. On the other hand, our theory suggests that if we use this Q-function parametrization, soft Q-learning has the same expected gradient as entropy-regularized A2C with a specific weighting c = 1/τ. Hence, for the usual entropy bonus coefficient setting τ = 0.01, soft Q-learning is implicitly weighting the value function loss a lot more than the usual A2C setup (c = 100 versus c = 0.5). We have found that such emphasis on the value function (c = 100) results in unstable learning for both soft Q-learning and entropy-regularized A2C. Therefore, to make Q-learning exactly match known good hyperparameters used in A2C, we scale gradients that go into the advantage stream by 1/τ and scale gradients that go into the value function stream by c = 0.5. With the same default A2C hyperparameters, learning curves of PG and QL are almost identical in most games (Figure 3), which indicates that the learning dynamics of both update rules are essentially the same even when the gradients are approximated with a small number of samples. Notably, the Q-learning method here demonstrates stable learning without the use of a target network or epsilon schedule.
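A minimal sketch of this Q-function parametrization, assuming the network produces policy logits and a scalar value for each state (the function and argument names are illustrative):

```python
import numpy as np
from scipy.special import logsumexp

def q_from_policy_and_value(policy_logits, value, tau):
    """Q(s, a) = V(s) + tau * log pi(a|s), with pi = softmax(policy_logits)."""
    log_pi = policy_logits - logsumexp(policy_logits)   # log softmax over actions
    return value + tau * log_pi
```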
[Figure 3 panels: SpaceInvaders, Breakout, BeamRider, Pong, Q-bert, and Seaquest, comparing the PG and QL update rules over 320M frames.]
Figure 3: Atari performance with policy gradient vs Q-learning update rules. Solid lines are average evalu- ation return over 3 random seeds and shaded area is one standard deviation.
# 7 Related Work
Three recent papers have drawn the connection between policy-based methods and value-based methods, which becomes close with entropy regularization.
OâDonoghue et al. [2016] begin with a similar motivation as the current paper: that a possible expla- nation for Q-learning and SARSA is that their updates are similar to policy gradient updates. They decompose the Q-function into a policy part and a value part, inspired by dueling Q-networks (Wang et al. [2015]):
Q(s, a) = V(s) + τ(log π(a | s) + S[π(· | s)])    (87)
This form is chosen so that the term multiplying Ï has expectation zero under Ï, which is a property that the true advantage function satisï¬es: EÏ [AÏ] = 0. Note that our work omits that S term, because it is most natural to deï¬ne the Q-function to not include the ï¬rst entropy term. The authors show that taking the gradient of the Bellman error of the above Q-function leads to a result similar to the policy gradient. They then propose an algorithm called PGQ that mixes together the updates from diï¬erent prior algorithms.
Nachum et al. [2017] also discuss the entropy-regularized reinforcement learning setting, and develop an off-policy method that applies in this setting. Their argument (modified to use our notation and KL penalty instead of entropy bonus) is as follows. The advantage function A_π(s, a) = Q_π(s, a) − V_π(s) lets us define a multi-step consistency equation, which holds even if the actions were sampled from a different (suboptimal) policy. In the setting of deterministic dynamics, Q_π(s_t, a_t) = r_t + γ V_π(s_{t+1}), hence
Σ_{t=0}^{n−1} γ^t A_π(s_t, a_t) = Σ_{t=0}^{n−1} γ^t(r_t + γ V_π(s_{t+1}) − V_π(s_t)) = Σ_{t=0}^{n−1} γ^t r_t + γ^n V_π(s_n) − V_π(s_0)    (88)
If π is the optimal policy (for the discounted, entropy-augmented return), then it is the Boltzmann policy for Q_π, thus

τ(log π(a | s) − log π̄(a | s)) = A_{Q_π}(s, a)    (89)
This expression for the advantage can be substituted into Equation (88), giving the consistency equation

Σ_{t=0}^{n−1} γ^t τ(log π(a_t | s_t) − log π̄(a_t | s_t)) = Σ_{t=0}^{n−1} γ^t r_t + γ^n V_π(s_n) − V_π(s_0),    (90)
which holds when π is optimal. The authors define a squared error objective formed by taking LHS minus RHS in Equation (90), and jointly minimize it with respect to the parameters of π and V. The resulting algorithm is a kind of Bellman residual minimization: it optimizes with respect to the future target values, rather than treating them as fixed (Scherrer [2010]).
Haarnoja et al. [2017] work in the same setting of soft Q-learning as the current paper, and they are concerned with tasks with high-dimensional action spaces, where we would like to learn stochastic policies that are multi-modal, and we would like to use Q-functions for which there is no closed-form way of sampling from the Boltzmann distribution π(a | s) ∝ π̄(a | s) exp(Q(s, a)/τ). Hence, they use a method called Stein Variational Gradient Descent to derive a procedure that jointly updates the Q-function and a policy π, which approximately samples from the Boltzmann distribution; this resembles variational inference, where one makes use of an approximate posterior distribution.
# 8 Conclusion
We study the connection between two of the leading families of RL algorithms used with deep neural networks. In a framework of entropy-regularized RL we show that soft Q-learning is equivalent to a policy gradient method (with value function fitting) in terms of expected gradients (first-order view). In addition, we also analyze how a damped Q-learning method can be interpreted as implementing natural policy gradient (second-order view). Empirically, we show that the entropy regularized formulation considered in our theoretical analysis works in practice on the Atari RL benchmark, and that the equivalence holds in a practically relevant regime.
# 9 Acknowledgements
We would like to thank Matthieu Geist for pointing out an error in the ï¬rst version of this manuscript, Chao Gao for pointing out several errors in the second version, and colleagues at OpenAI for insightful discussions.
# References
Roy Fox, Ari Pakman, and Naftali Tishby. Taming the noise in reinforcement learning via soft updates. arXiv preprint arXiv:1512.08562, 2015.
Tuomas Haarnoja, Haoran Tang, Pieter Abbeel, and Sergey Levine. Reinforcement learning with deep energy-based policies. arXiv preprint arXiv:1702.08165, 2017.
Sham Kakade. A natural policy gradient. Advances in neural information processing systems, 2:1531â1538, 2002.
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529â533, 2015.
Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy P Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. arXiv preprint arXiv:1602.01783, 2016.
Ofir Nachum, Mohammad Norouzi, Kelvin Xu, and Dale Schuurmans. Bridging the gap between value and policy based reinforcement learning. arXiv preprint arXiv:1702.08892, 2017.
Brendan OâDonoghue, Remi Munos, Koray Kavukcuoglu, and Volodymyr Mnih. Pgq: Combining policy gradient and q-learning. arXiv preprint arXiv:1611.01626, 2016.
Bruno Scherrer. Should one compute the temporal difference fix point or minimize the bellman residual? the unified oblique projection view. arXiv preprint arXiv:1011.4362, 2010.
John Schulman, Nicolas Heess, Theophane Weber, and Pieter Abbeel. Gradient estimation using stochastic computation graphs. In Advances in Neural Information Processing Systems, pages 3528â3536, 2015a.
John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, and Pieter Abbeel. High-dimensional continuous control using generalized advantage estimation. arXiv preprint arXiv:1506.02438, 2015b.
Ziyu Wang, Nando de Freitas, and Marc Lanctot. Dueling network architectures for deep reinforcement learning. arXiv preprint arXiv:1511.06581, 2015.
Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229â256, 1992.
Brian D Ziebart. Modeling purposeful adaptive behavior with the principle of maximum causal entropy. 2010.
1704.05426 | A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference | This paper introduces the Multi-Genre Natural Language Inference (MultiNLI)
corpus, a dataset designed for use in the development and evaluation of machine
learning models for sentence understanding. In addition to being one of the
largest corpora available for the task of NLI, at 433k examples, this corpus
improves upon available resources in its coverage: it offers data from ten
distinct genres of written and spoken English--making it possible to evaluate
systems on nearly the full complexity of the language--and it offers an
explicit setting for the evaluation of cross-genre domain adaptation. | http://arxiv.org/pdf/1704.05426 | Adina Williams, Nikita Nangia, Samuel R. Bowman | cs.CL | 10 pages, 1 figures, 5 tables. v2 corrects a misreported accuracy
number for the CBOW model in the 'matched' setting. v3 adds a discussion of
the difficulty of the corpus to the analysis section. v4 is the version that
was accepted to NAACL2018 | null | cs.CL | 20170418 | 20180219 | 2018:
8 1 0 2
b e F 9 1 ] L C . s c [
4 v 6 2 4 5 0 . 4 0 7 1 : v i X r a
# A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference
Adina Williams1 adinawilliams@nyu.edu
Nikita Nangia2 nikitanangia@nyu.edu
# Samuel R. Bowman1,2,3 bowman@nyu.edu
# 1Department of Linguistics New York University
2Center for Data Science New York University 3Department of Computer Science New York University
# Abstract
This paper introduces the Multi-Genre Natu- ral Language Inference (MultiNLI) corpus, a dataset designed for use in the development and evaluation of machine learning models for sentence understanding. At 433k examples, this resource is one of the largest corpora avail- able for natural language inference (a.k.a. rec- ognizing textual entailment), improving upon available resources in both its coverage and difï¬culty. MultiNLI accomplishes this by of- fering data from ten distinct genres of written and spoken English, making it possible to eval- uate systems on nearly the full complexity of the language, while supplying an explicit set- ting for evaluating cross-genre domain adap- tation. In addition, an evaluation using exist- ing machine learning models designed for the Stanford NLI corpus shows that it represents a substantially more difï¬cult task than does that corpus, despite the two showing similar levels of inter-annotator agreement.
The task of natural language inference (NLI) is well positioned to serve as a benchmark task for research on NLU. In this task, also known as recognizing textual entailment (Fyodorov et al., 2000; Condoravdi et al., 2003; Bos and Mark- ert, 2005; Dagan et al., 2006; MacCartney and Manning, 2009), a model is presented with a pair of sentencesâlike one of those in Figure 1â and asked to judge the relationship between their meanings by picking a label from a small set: typi- cally ENTAILMENT, NEUTRAL, and CONTRADIC- TION. Succeeding at NLI does not require a sys- tem to solve any difï¬cult machine learning prob- lems except, crucially, that of extracting an effec- tive and thorough representations for the mean- ings of sentences (i.e., their lexical and compo- sitional semantics). In particular, a model must handle phenomena like lexical entailment, quan- tiï¬cation, coreference, tense, belief, modality, and lexical and syntactic ambiguity.
# 1 Introduction
Many of the most actively studied problems in NLP, including question answering, translation, and dialog, depend in large part on natural language understanding (NLU) for success. While there has been a great deal of work that uses representation learning techniques to pursue progress on these applied NLU problems directly, in order for a representation learning model to fully succeed at one of these problems, it must simultaneously succeed both at NLU, and at one or more additional hard machine learning problems like structured prediction or memory access. This makes it difficult to accurately judge the degree to which current models extract reasonable representations of language meaning in these settings.
As the only large human-annotated corpus for NLI currently available, the Stanford NLI Cor- pus (SNLI; Bowman et al., 2015) has enabled a good deal of progress on NLU, serving as a ma- jor benchmark for machine learning work on sen- tence understanding and spurring work on core representation learning techniques for NLU, such as attention (Wang and Jiang, 2016; Parikh et al., 2016), memory (Munkhdalai and Yu, 2017), and the use of parse structure (Mou et al., 2016b; Bow- man et al., 2016; Chen et al., 2017). However, SNLI falls short of providing a sufï¬cient testing ground for machine learning models in two ways.
Premise: Met my first girlfriend that way. (FACE-TO-FACE)
Gold label: contradiction; validator labels: C, C, N, C
Hypothesis: I didn't meet my first girlfriend until later.

Premise: 8 million in relief in the form of emergency housing. (GOVERNMENT)
Gold label: neutral; validator labels: N, N, N, N
Hypothesis: The 8 million dollars for emergency housing was still not enough to solve the problem.

Premise: Now, as children tend their gardens, they have a new appreciation of their relationship to the land, their cultural heritage, and their community. (LETTERS)
Gold label: neutral; validator labels: N, N, N, N
Hypothesis: All of the children love working in their gardens.

Premise: At 8:34, the Boston Center controller received a third transmission from American 11 (9/11)
Gold label: entailment; validator labels: E, E, E, E
Hypothesis: The Boston Center controller got a third transmission from American 11.

Premise: I am a lacto-vegetarian. (SLATE)
Gold label: neutral; validator labels: N, N, E, N
Hypothesis: I enjoy eating cheese too much to abstain from dairy.

Premise: someone else noticed it and i said well i guess that's true and it was somewhat melodious in other words it wasn't just you know it was really funny (TELEPHONE)
Gold label: contradiction; validator labels: C, C, C, C
Hypothesis: No one noticed and it wasn't funny at all.
Table 1: Randomly chosen examples from the development set of our new corpus, shown with their genre labels, their selected gold labels, and the validation labels (abbreviated E, N, C) assigned by individual annotators.
First, the sentences in SNLI are derived from only a single text genreâimage captionsâand are thus limited to descriptions of concrete visual scenes, rendering the hypothesis sentences used to de- scribe these scenes short and simple, and ren- dering many important phenomenaâlike tempo- ral reasoning (e.g., yesterday), belief (e.g., know), and modality (e.g., should)ârare enough to be ir- relevant to task performance. Second, because of these issues, SNLI is not sufï¬ciently demanding to serve as an effective benchmark for NLU, with the best current model performance falling within a few percentage points of human accuracy and limited room left for ï¬ne-grained comparisons be- tween strong models.
techniques have made it possible to train general- purpose feature extractors that, with no or min- imal retraining, can extract useful features for a variety of styles of data (Krizhevsky et al., 2012; Zeiler and Fergus, 2014; Donahue et al., 2014). However, attempts to bring this kind of general purpose representation learning to NLU have seen only very limited success (see, for example, Mou et al., 2016a). Nearly all successful applications of representation learning to NLU have involved models that are trained on data that closely resem- bles the target evaluation data, both in task and style. This fact limits the usefulness of these tools for problems involving styles of language not rep- resented in large annotated training sets.
This paper introduces a new challenge dataset, the Multi-Genre NLI Corpus (MultiNLI), whose chief purpose is to remedy these limitations by making it possible to run large-scale NLI evalua- tions that capture more of the complexity of mod- ern English. While its size (433k pairs) and mode of collection are modeled closely on SNLI, unlike that corpus, MultiNLI represents both written and spoken speech in a wide range of styles, degrees of formality, and topics.
Our chief motivation in creating this corpus is to provide a benchmark for ambitious machine learn- ing research on the core problems of NLU, but we are additionally interested in constructing a cor- pus that facilitates work on domain adaptation and cross-domain transfer learning. In many applica- tion areas outside NLU, artiï¬cial neural network
With this in mind, we construct MultiNLI so as to make it possible to explicitly evaluate models both on the quality of their sentence representa- tions within the training domain and on their abil- ity to derive reasonable representations in unfa- miliar domains. The corpus is derived from ten different genres of written and spoken English, which are collectively meant to approximate the full diversity of ways in which modern standard American English is used. All of the genres ap- pear in the test and development sets, but only ï¬ve are included in the training set. Models thus can be evaluated on both the matched test examples, which are derived from the same sources as those in the training set, and on the mismatched exam- ples, which do not closely resemble any of those seen at training time.
This task will involve reading a line from a non-ï¬ction article and writing three sentences that relate to it. The line will describe a situation or event. Using only this description and what you know about the world:
⢠Write one sentence that is deï¬nitely correct about the situation or event in the line.
⢠Write one sentence that might be correct about the situation or event in the line.
⢠Write one sentence that is deï¬nitely incorrect about the situation or event in the line.
Figure 1: The main text of a prompt (truncated) that was presented to our annotators. This version is used for the written non-ï¬ction genres.
# 2 The Corpus
# 2.1 Data Collection
The data collection methodology for MultiNLI is similar to that of SNLI: We create each sentence pair by selecting a premise sentence from a preex- isting text source and asking a human annotator to compose a novel sentence to pair with it as a hy- pothesis. This section discusses the sources of our premise sentences, our collection method for hy- potheses, and our validation (relabeling) strategy.
Premise Text Sources The MultiNLI premise sentences are derived from ten sources of freely available text which are meant to be maximally diverse and roughly represent the full range of American English. We selected nine sources from the second release of the Open American National Corpus (OANC; Fillmore et al., 1998; Macleod et al., 2000; Ide and Macleod, 2001; Ide and Su- derman, 2006, downloaded 12/20161), balancing the volume of source text roughly evenly across genres, and avoiding genres with content that would be too difï¬cult for untrained annotators.
OANC data constitutes the following nine gen- transcriptions from the Charlotte Narrative res: and Conversation Collection of two-sided, in- person conversations that took place in the early 2000s (FACE-TO-FACE); reports, speeches, letters, and press releases from public domain govern- ment websites (GOVERNMENT); letters from the Indiana Center for Intercultural Communication of Philanthropic Fundraising Discourse written in the late 1990sâearly 2000s (LETTERS); the public re-
1 http://www.anc.org/
port from the National Commission on Terrorist Attacks Upon the United States released on July 22, 20042 (9/11); ï¬ve non-ï¬ction works on the textile industry and child development published by the Oxford University Press (OUP); popular culture articles from the archives of Slate Maga- zine (SLATE) written between 1996â2000; tran- scriptions from University of Pennsylvaniaâs Lin- guistic Data Consortium Switchboard corpus of two-sided, telephone conversations that took place in 1990 or 1991 (TELEPHONE); travel guides pub- lished by Berlitz Publishing in the early 2000s (TRAVEL); and short posts about linguistics for non-specialists from the Verbatim archives written between 1990 and 1996 (VERBATIM).
For our tenth genre, FICTION, we compile sev- eral freely available works of contemporary ï¬ction written between 1912 and 2010, spanning genres including mystery, humor, western, science ï¬c- tion, and fantasy by authors Isaac Asimov, Agatha Christie, Ben Essex (Elliott Gesswell), Nick Name (Piotr Kowalczyk), Andre Norton, Lester del Ray, and Mike Shea.
We construct premise sentences from these ten source texts with minimal preprocessing; unique the sentences within genres, exclude very short sentences (under eight characters), and manu- ally remove certain types of non-narrative writing, such as mathematical formulae, bibliographic ref- erences, and lists.
Although SNLI is collected in largely the same way as MultiNLI, and is also permissively li- censed, we do not include SNLI in the MultiNLI corpus distribution. SNLI can be appended and treated as an unusually large additional CAPTIONS genre, built on image captions from the Flickr30k corpus (Young et al., 2014).
Hypothesis Collection To collect a sentence pair, we present a crowdworker with a sentence from a source text and ask them to compose three novel sentences (the hypotheses): one which is necessarily true or appropriate whenever the premise is true (paired with the premise and la- beled ENTAILMENT), one which is necessarily false or inappropriate whenever the premise is true (CONTRADICTION), and one where neither condi- tion applies (NEUTRAL). This method of data col- lection ensures that the three classes will be repre- sented equally in the raw corpus.
2https://9-11commission.gov/
Statistic                                   SNLI    MultiNLI
Pairs w/ unanimous gold label               58.3%   58.2%
Individual label = gold label               89.0%   85.8%
Individual label = author's label           88.7%   85.2%
Gold label = author's label                 91.2%   92.6%
Gold label ≠ author's label                 6.8%    5.6%
No gold label (no 3 labels match)           2.0%    1.8%
Table 2: Key validation statistics for SNLI (copied from Bowman et al., 2015) and MultiNLI.
The prompts that surround each premise sen- tence during hypothesis collection are slightly tai- lored to ï¬t the genre of that premise sentence. We pilot these prompts prior to data collection to ensure that the instructions are clear and that they yield hypothesis sentences that ï¬t the in- tended meanings of the three classes. There are ï¬ve unique prompts in total: one for written non-ï¬ction genres (SLATE, OUP, GOVERNMENT, VERBATIM, TRAVEL; Figure 1), one for spoken genres (TELEPHONE, FACE-TO-FACE), one for each of the less formal written genres (FICTION, LETTERS), and a specialized one for 9/11, tai- lored to ï¬t its potentially emotional content. Each prompt is accompanied by example premises and hypothesis that are speciï¬c to each genre.
Below the instructions, we present three text ï¬eldsâone for each labelâfollowed by a ï¬eld for reporting issues, and a link to the frequently asked questions (FAQ) page. We provide one FAQ page per prompt. FAQs are modeled on their SNLI counterparts (supplied by the authors of that work) and include additional curated examples, answers to genre-speciï¬c questions arising from our pilot phase, and information about logistical concerns like payment.
For both hypothesis collection and validation, we present prompts to annotators using Hybrid (gethybrid.io), a crowdsoucring platform similar to the Amazon Mechanical Turk platform used for SNLI. We used this platform to hire an organized group of workers. 387 annotators con- tributed through this group, and at no point was any identifying information about them, including demographic information, available to the authors.
Validation We perform an additional round of annotation on test and development examples The validation to ensure accurate labelling. phase follows the same procedure used for SICK (Marelli et al., 2014b) and SNLI: Workers are pre-
sented with pairs of sentences and asked to supply a single label (ENTAILMENT, CONTRADICTION, NEUTRAL) for the pair. Each pair is relabeled by four workers, yielding a total of ï¬ve labels per example. Validation instructions are tailored by genre, based on the main data collection prompt (Figure 1); a single FAQ, modeled after the valida- tion FAQ from SNLI, is provided for reference. In order to encourage thoughtful labeling, we manu- ally label one percent of the validation examples and offer a $1 bonus each time a worker selects a label that matches ours.
For each validated sentence pair, we assign a gold label representing a majority vote between the initial label assigned to the pair by the original annotator, and the four additional labels assigned by validation annotators. A small number of ex- amples did not receive a three-vote consensus on any one label. These examples are included in the distributed corpus, but are marked with â-â in the gold label ï¬eld, and should not be used in stan- dard evaluations. Table 2 shows summary statis- tics capturing the results of validation, alongside corresponding ï¬gures for SNLI. These statistics indicate that the labels included in MultiNLI are about as reliable as those included in SNLI, de- spite MultiNLIâs more diverse text contents.
# 2.2 The Resulting Corpus
Table 1 shows randomly chosen development set examples from the collected corpus. Hypotheses tend to be ï¬uent and correctly spelled, though not all are complete sentences. Punctuation is often omitted. Hypotheses can rely heavily on knowl- edge about the world, and often donât correspond closely with their premises in syntactic structure. Unlabeled test data is available on Kaggle for both matched and mismatched sets as competi- tions that will be open indeï¬nitely; Evaluations on a subset of the test set have previously been conducted with different leaderboards through the RepEval 2017 Workshop (Nangia et al., 2017).
The corpus is available in two formats: tab separated text and JSON Lines (jsonl), following SNLI. For each example, premise and hypothesis strings, unique identifiers for the pair and prompt, and the following additional fields are specified:

• gold_label: label used for classification. In examples rejected during the validation process, the value of this field will be "-".

• sentence{1,2}_parse: Each sentence as parsed by the Stanford PCFG Parser 3.5.2 (Klein and Manning, 2003).
Genre Train #Examples Dev. Test #Wds. Prem. âSâ parses Prem. Hyp. Agrmt. Model Acc. ESIM CBOW SNLI 550,152 10,000 10,000 14.1 74% 88% 89.0% 86.7% 80.6 % FICTION GOVERNMENT SLATE TELEPHONE TRAVEL 77,348 77,350 77,306 83,348 77,350 2,000 2,000 2,000 2,000 2,000 2,000 2,000 2,000 2,000 2,000 14.4 24.4 21.4 25.9 24.9 94% 97% 90% 97% 94% 98% 71% 97% 97% 98% 89.4% 73.0% 87.4% 74.8% 87.1% 67.9% 88.3% 72.2% 89.9% 73.7% 67.5% 67.5% 60.6% 63.7% 64.6% 9/11 FACE-TO-FACE LETTERS OUP VERBATIM 0 0 0 0 0 2,000 2,000 2,000 2,000 2,000 2,000 2,000 2,000 2,000 2,000 20.6 18.1 20.0 25.7 28.3 98% 99% 91% 96% 95% 98% 96% 98% 93% 97% 90.1% 71.9% 89.5% 71.2% 90.1% 74.7% 88.1% 71.7% 87.3% 71.9% 63.2% 66.3% 68.3% 62.8% 62.7% MultiNLI Overall 392,702 20,000 20,000 22.3 91% 98% 88.7% 72.2% 64.7%
Table 3: Key statistics for the corpus by genre. The ï¬rst ï¬ve genres represent the matched section of the develop- ment and test sets, and the remaining ï¬ve represent the mismatched section. The ï¬rst three statistics provide the number of examples in each genre. #Wds. Prem. is the mean token count among premise sentences. âSâ parses is the percentage of sentences for which the Stanford Parser produced a parse rooted with an âSâ (sentence) node. Agrmt. is the percent of individual labels that match the gold label in validated examples. Model Acc. gives the test accuracy for ESIM and CBOW models (trained on either SNLI or MultiNLI), as described in Section 3.
• sentence{1,2}_binary_parse: parses in unlabeled binary-branching format.

• label[1]: The label assigned during the creation of the sentence pair. In rare cases this may be different from gold_label, if a consensus of annotators chose a different label during the validation phase.

• label[2...5]: The four labels assigned during validation by individual annotators to each development and test example. These fields will be empty for training examples.
The corpus is freely available at nyu.edu/projects/bowman/multinli/ for typical machine learning uses, and may be modified and redistributed. The majority of the corpus is released under the OANC's license, which allows all content to be freely used, modified, and shared under permissive terms. The data in the FICTION section falls under several permissive licenses; Seven Swords is available under a Creative Commons Share-Alike 3.0 Unported License, and with the explicit permission of the author, Living History and Password Incorrect are available under Creative Commons Attribution 3.0 Unported Licenses; the remaining works of fiction are in the public domain in the United States (but may be licensed differently elsewhere).
Partition The distributed corpus comes with an explicit train/test/development split. The test and development sets contain 2,000 randomly selected examples each from each of the genres, resulting in a total of 20,000 examples per set. No premise sentence occurs in more than one set.
Statistics Table 3 shows some additional statis- tics. Premise sentences in MultiNLI tend to be longer (max 401 words, mean 22.3 words) than their hypotheses (max 70 words), and much longer, on average, than premises in SNLI (mean 14.1 words); premises in MultiNLI also tend to be parsed as complete sentences at a much higher rate on average (91%) than their SNLI counter- parts (74%). We observe that the two spoken gen- res differ in thisâwith FACE-TO-FACE showing more complete sentences (91%) than TELEPHONE (71%)âand speculate that the lack of visual feed- back in a telephone setting may result in a high incidence of interrupted or otherwise incomplete sentences.
Hypothesis sentences in MultiNLI generally cannot be derived from their premise sentences us- ing only trivial editing strategies. While 2.5% of the hypotheses in SNLI differ from their premises by deletion, only 0.9% of those in MultiNLI (170 examples total) are constructed in this way. Sim- ilarly, in SNLI, 1.6% of hypotheses differ from their premises by addition, substitution, or shuf- ï¬ing a single word, while in MultiNLI this only happens in 1.2% of examples. The percentage of
Train         Model        SNLI   Match.   Mis.
-             Most freq.   34.3   36.5     35.6
SNLI          CBOW         80.6   -        -
SNLI          BiLSTM       81.5   -        -
SNLI          ESIM         86.7   -        -
MNLI          CBOW         51.5   64.8     64.5
MNLI          BiLSTM       50.8   66.9     66.9
MNLI          ESIM         60.7   72.3     72.1
MNLI + SNLI   CBOW         74.7   65.2     64.6
MNLI + SNLI   BiLSTM       74.0   67.5     67.1
MNLI + SNLI   ESIM         79.7   72.4     71.9
Table 4: Test set accuracies (%) for all models; Match. represents test set performance on the MultiNLI genres that are also represented in the training set, Mis. repre- sents test set performance on the remaining ones; Most freq. is a trivial âmost frequent classâ baseline.
hypothesis-premise pairs with high token overlap (>37%) was comparable between MultiNLI (30% of pairs) and SNLI (29%). These statistics sug- gest that MultiNLIâs annotations are comparable in quality to those of SNLI.
# 3 Baselines
To test the difï¬culty of the corpus, we experiment with three neural network models. The ï¬rst is a simple continuous bag of words (CBOW) model in which each sentence is represented as the sum of the embedding representations of its words. The second computes representations by averag- ing the states of a bidirectional LSTM RNN (BiL- STM; Hochreiter and Schmidhuber, 1997) over words. For the third, we implement and evalu- ate Chen et al.âs Enhanced Sequential Inference Model (ESIM), which is roughly tied for the state of the art on SNLI at the time of writing. We use the base ESIM without ensembling with a TreeL- STM (as in the âHIMâ runs in that work).
The ï¬rst two models produce separate vec- tor representations for each sentence and com- pute label predictions for pairs of representations. To do this, they concatenate the representations for premise and hypothesis, their difference, and their element-wise product, following Mou et al. (2016b), and pass the result to a single tanh layer followed by a three-way softmax classiï¬er.
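A minimal sketch of this matching layer is shown below (written in PyTorch; the 300-dimensional sizes, class name, and layer names are illustrative assumptions rather than the authors' exact configuration):

```python
import torch
import torch.nn as nn

class PairClassifier(nn.Module):
    """Illustrative Mou et al. (2016b)-style matching layer: concatenate
    [premise; hypothesis; difference; product], apply a tanh layer, then a
    three-way classifier.  Dimensions here are hypothetical."""

    def __init__(self, sent_dim=300, hidden_dim=300, num_classes=3):
        super().__init__()
        self.hidden = nn.Linear(4 * sent_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, num_classes)

    def forward(self, prem, hyp):
        # prem, hyp: (batch, sent_dim) sentence vectors from CBOW or BiLSTM
        feats = torch.cat([prem, hyp, prem - hyp, prem * hyp], dim=-1)
        # returns unnormalized logits; softmax / cross-entropy is applied by the loss
        return self.out(torch.tanh(self.hidden(feats)))
```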
All models are initialized with 300D reference GloVe vectors (840B token version; Pennington et al., 2014). Out-of-vocabulary (OOV) words are initialized randomly and word embeddings are ï¬ne-tuned during training. The models use 300D
hidden states, as in most prior work on SNLI. We use Dropout (Srivastava et al., 2014) for regular- ization. For ESIM, we use a dropout rate of 0.5, following the paper. For CBOW and BiLSTM models, we tune Dropout on the SNLI dev. set and ï¬nd that a drop rate of 0.1 works well. We use the Adam (Kingma and Ba, 2015) optimizer with default parameters.
We train models on SNLI, MultiNLI, and a mix- ture; Table 4 shows the results. In the mixed set- ting, we use the full MultiNLI training set and ran- domly select 15% of the SNLI training set at each epoch, ensuring that each available genre is seen during training with roughly equal frequency.
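A rough sketch of the per-epoch mixing described above (plain Python; the dataset objects are treated as simple lists of examples, and genre balancing is assumed to be handled upstream):

```python
import random

def mixed_epoch(multinli, snli, snli_fraction=0.15):
    """One epoch of the mixed setting: all of MultiNLI plus a fresh random
    15% sample of SNLI, reshuffled together."""
    sampled_snli = random.sample(snli, int(snli_fraction * len(snli)))
    epoch = list(multinli) + sampled_snli
    random.shuffle(epoch)
    return epoch
```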
We also train a separate CBOW model on each individual genre to establish the degree to which simple models already allow for effective transfer across genres, using a dropout rate of 0.2. When training on SNLI, a single random sample of 15% of the original training set is used. For each genre represented in the training set, the model that per- forms best on it was trained on that genre; a model trained only on SNLI performs worse on every genre than comparable models trained on any genre from MultiNLI.
Models trained on a single genre from MultiNLI perform well on similar genres; for example, the model trained on TELEPHONE attains the best accuracy (63%) on FACE-TO-FACE, which was nearly one point better than it received on itself. SLATE seems to be a difficult and relatively unusual genre and performance on it is relatively poor in this setting; when averaging over runs trained on SNLI and all genres in the matched section of the training set, average performance on SLATE was only 57.5%. Sentences in SLATE cover a wide range of topics and phenomena, making it hard to do well on, but also forcing models trained on it to be broadly capable; the model trained on SLATE achieves the highest accuracy of any model on 9/11 (55.6%) and VERBATIM (57.2%), and relatively high accuracy on TRAVEL (57.4%) and GOVERNMENT (58.3%). We also observe that our models perform similarly on both the matched and mismatched test sets of MultiNLI. We expect genre mismatch issues to become more conspicuous as models are developed that can better fit MultiNLI's training genres.
# 4 Discussion and Analysis
# 4.1 Data Collection
In data collection for NLI, different annotator de- cisions about the coreference between entities and events across the two sentences in a pair can lead to very different assignments of pairs to labels (de Marneffe et al., 2008; Marelli et al., 2014a; Bowman et al., 2015). Drawing an example from Bowman et al., the pair âa boat sank in the Paciï¬c Oceanâ and âa boat sank in the Atlantic Oceanâ can be labeled either CONTRADICTION or NEU- TRAL depending on (among other things) whether the two mentions of boats are assumed to refer to the same entity in the world. This uncertainty can present a serious problem for inter-annotator agreement, since it is not clear that it is possible to deï¬ne an explicit set of rules around coreference that would be easily intelligible to an untrained an- notator (or any non-expert).
Bowman et al. attempt to avoid this problem by using an annotation prompt that is highly depen- dent on the concreteness of image descriptions; but, as we engage with the much more abstract writing that is found in, for example, government documents, there is no reason to assume a pri- ori that any similar prompt and annotation strat- egy can work. We are surprised to ï¬nd that this is not a major issue. Through a relatively straight- forward trial-and-error piloting phase, followed by discussion with our annotators, we manage to de- sign prompts for abstract genres that yield high inter-annotator agreement scores nearly identical to those of SNLI (see Table 2). These high scores suggest that our annotators agreed on a single task deï¬nition, and were able to apply it consistently across genres.
# 4.2 Overall Difï¬culty
As expected, both the increase in the diver- sity of linguistic phenomena in MultiNLI and its longer average sentence length conspire to make MultiNLI dramatically more difï¬cult than SNLI. Our three baseline models perform better on SNLI than MultiNLI by about 15% when trained on the respective datasets. All three models achieve accuracy above 80% on the SNLI test set when trained only on SNLI. However, when trained on MultiNLI, only ESIM surpasses 70% accuracy on MultiNLIâs test sets. When we train mod- els on MultiNLI and downsampled SNLI, we see an expected signiï¬cant improvement on SNLI,
but no signiï¬cant change in performance on the MultiNLI test sets, suggesting including SNLI in training doesnât drive substantial improvement. These results attest to MultiNLIâs difï¬culty, and with its relatively high inter-annotator agreement, suggest that it presents a problem with substantial headroom for future work.
# 4.3 Analysis by Linguistic Phenomenon
To better understand the types of language un- derstanding skills that MultiNLI tests, we analyze the collected corpus using a set of annotation tags chosen to reï¬ect linguistic phenomena which are known to be potentially difï¬cult. We use two methods to assign tags to sentences. First, we use the Penn Treebank (PTB; Marcus et al., 1993) part-of-speech tag set (via the included Stanford Parser parses) to automatically isolate sentences containing a range of easily-identiï¬ed phenomena like comparatives. Second, we isolate sentences that contain hand-chosen key words indicative of additional interesting phenomena.
The hand-chosen tag set covers the follow- ing phenomena: QUANTIFIERS contains single words with quantiï¬cational force (see, for exam- ple, Heim and Kratzer, 1998; Szabolcsi, 2010, e.g., many, all, few, some); BELIEF VERBS con- tains sentence-embedding verbs denoting mental states (e.g., know, believe, think), including irregu- lar past tense forms; TIME TERMS contains single words with abstract temporal interpretation, (e.g., then, today) and month names and days of the week; DISCOURSE MARKERS contains words that facilitate discourse coherence (e.g., yet, however, but, thus, despite); PRESUPPOSITION TRIGGERS contains words with lexical presuppositions (Stal- naker, 1974; Schlenker, 2016, e.g., again, too, anymore3); CONDITIONALS contains the word if. Table 5 presents the frequency of the tags in SNLI and MultiNLI, and model accuracy on MultiNLI (trained only on MultiNLI).
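A rough sketch of the keyword-based tagging step is given below; the keyword lists are abbreviated, illustrative stand-ins for the paper's actual lexicons, and the PTB-based tags are assumed to be assigned separately from parser output:

```python
# Hypothetical, abbreviated keyword lists (the real lexicons are larger).
TAG_KEYWORDS = {
    "QUANTIFIERS": {"many", "all", "few", "some"},
    "BELIEF_VERBS": {"know", "knew", "believe", "think", "thought"},
    "TIME_TERMS": {"then", "today", "monday", "january"},
    "DISCOURSE_MARKERS": {"yet", "however", "but", "thus", "despite"},
    "PRESUP_TRIGGERS": {"again", "too", "anymore"},
    "CONDITIONALS": {"if"},
}

def tag_sentence(tokens):
    """Return the set of hand-chosen tags triggered by a tokenized sentence."""
    lowered = {t.lower() for t in tokens}
    return {tag for tag, words in TAG_KEYWORDS.items() if lowered & words}

# e.g. tag_sentence("I will never eat there again".split()) -> {"PRESUP_TRIGGERS"}
```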
The distributions of labels within each tagged subset of the corpus roughly mirrors the balanced overall distribution. The most frequent class over- all (in this case, ENTAILMENT) occurs with a fre- quency of roughly one third (see Table 4) in most. Only two annotation tags differ from the baseline percentage of the most frequent class in the cor- pus by at least 5%: sentences containing negation,
3 Because of their high frequency in the corpus, extremely common triggers such as the word "the" were excluded from this tag.
                       Dev. Freq.                Most Frequent Label     Model Acc.
Tag                    SNLI   MultiNLI   Diff.   Label           %       CBOW   BiLSTM   ESIM
Entire Corpus          100    100        0       entailment      ~35     ~65    ~67      ~72
Pronouns (PTB)         34     68         34      entailment      34      66     68       73
Quantifiers            33     63         30      contradiction   36      66     68       73
Modals (PTB)           <1     28         28      entailment      35      65     67       72
Negation (PTB)         5      31         26      contradiction   48      67     70       75
WH terms (PTB)         5      30         25      entailment      35      64     65       72
Belief Verbs           <1     19         18      entailment      34      64     67       71
Time Terms             19     36         17      neutral         35      64     66       71
Discourse Mark.        <1     14         14      neutral         34      62     64       70
Presup. Triggers       8      22         14      neutral         34      65     67       73
Compr./Supr. (PTB)     3      17         14      neutral         39      61     63       69
Conditionals           4      15         11      neutral         35      65     68       73
Tense Match (PTB)      62     69         7       entailment      37      67     68       73
Interjections (PTB)    <1     5          5       entailment      36      67     70       75
>20 words              <1     5          5       entailment      42      65     67       76

Table 5: Dev. Freq. is the percentage of dev. set examples that include each phenomenon, ordered by greatest difference in frequency of occurrence (Diff.) between MultiNLI and SNLI. Most Frequent Label specifies which label is the most frequent for each tag in the MultiNLI dev. set, and % is its incidence. Model Acc. is the dev. set accuracy (%) by annotation tag for each baseline model (trained on MultiNLI only). (PTB) marks a tag as derived from Penn Treebank-style parser output tags (Marcus et al., 1993).
and sentences exceeding 20 words. Sentences that contain negation are slightly more likely than av- erage to be labeled CONTRADICTION, reï¬ecting a similar ï¬nding in SNLI, while long sentences are slightly more likely to be labeled ENTAILMENT.
None of the baseline models perform substan- tially better on any tagged set than they do on the corpus overall, with average model accuracies on sentences containing speciï¬c tags falling within about 3 points of overall averages. Using base- line model test accuracy overall as a metric (see Table 4), our baseline models had the most trouble on sentences containing comparatives or superla- tives (losing 3-4 points each). Despite the fact that 17% of sentence pairs in the corpus contained at least one instance of comparative or superlative, our baseline models donât utilize the information present in these sentences to predict the correct la- bel for the pair, although presence of a compara- tive or superlative is slightly more predictive of a NEUTRAL label.
Moreover, the baseline models perform below average on discourse markers, such as despite and however, losing roughly 2 to 3 points each. Un- surprisingly, the attention-based ESIM model per- forms better than the other two on sentences with greater than 20 words. Additionally, our baseline models do show slight improvements in accuracy on negation, suggesting that they may be tracking it as a predictor of CONTRADICTION.
# 5 Conclusion
Natural language inference makes it easy to judge the degree to which neural network models for sentence understanding capture the full meanings of natural language sentences. Existing NLI datasets like SNLI have facilitated substantial advances in modeling, but have limited headroom and coverage of the full diversity of meanings expressed in English. This paper presents a new dataset that offers dramatically greater linguistic difficulty and diversity, and also serves as a benchmark for cross-genre domain adaptation.
MultiNLI improves upon SNLI in its empirical coverage (it includes a representative sample of text and speech from ten different genres, as opposed to just simple image captions) and in its difficulty, containing a much higher percentage of sentences tagged with one or more elements from our tag set of thirteen difficult linguistic phenomena. This greater diversity is reflected in the dramatically lower baseline model performance on MultiNLI than on SNLI (see Table 5) and comparable inter-annotator agreement, suggesting that MultiNLI has a lot of headroom remaining for future work. The MultiNLI corpus was first released in draft form in the first half of 2017, and in the time since its initial release, work by others (Conneau et al., 2017) has shown that NLI can also be an effective source task for pre-training and transfer learning in the context of sentence-to-vector models, with
models trained on SNLI and MultiNLI substan- tially outperforming all prior models on a suite of established transfer learning benchmarks. We hope that this corpus will continue to serve for many years as a resource for the development and evaluation of methods for sentence understanding.
# Acknowledgments
This work was made possible by a Google Faculty Research Award to SB and AL. SB also gratefully acknowledges gift support from Tencent Holdings. We also thank George Dahl, the organizers of the RepEval 2016 and RepEval 2017 workshops, An- drew Drozdov, Angeliki Lazaridou, and our other colleagues at NYU for their help and advice.
# References
Johan Bos and Katja Markert. 2005. Recognising textual entailment with logical inference. In Proceedings of the 2005 Conference on Empirical Methods in Natural Language Processing (EMNLP). pages 628–635. http://www.aclweb.org/anthology/H05-1079.
Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large anno- tated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Associ- ation for Computational Linguistics, pages 632â642. https://doi.org/10.18653/v1/D15-1075.
Samuel R. Bowman, Jon Gauthier, Abhinav Ras- togi, Raghav Gupta, Christopher D. Manning, and Christopher Potts. 2016. A fast uniï¬ed model for parsing and sentence understanding. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguis- tics (Volume 1: Long Papers). Association for Computational Linguistics, pages 1466â1477. https://doi.org/10.18653/v1/P16-1139.
Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui Jiang, and Diana Inkpen. 2017. Enhanced LSTM for natural language inference. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, pages 1657–1668. https://doi.org/10.18653/v1/P17-1152.
Cleo Condoravdi, Dick Crouch, Valeria de Paiva, Rein- hard Stolle, and Daniel G. Bobrow. 2003. Entail- ment, intensionality and text understanding. In Pro- ceedings of the Human Language Technology-North American Association for Computational Linguis- tics 2003 Workshop on Text Meaning.
Alexis Conneau, Douwe Kiela, Holger Schwenk, Loic Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from arXiv preprint natural language inference data. arXiv:1705.02364 .
Ido Dagan, Oren Glickman, and Bernardo Magnini. 2006. The PASCAL recognising textual entailment challenge. In Machine learning challenges. Evalu- ating predictive uncertainty, visual object classiï¬ca- tion, and recognising textual entailment, Springer, pages 177â190.
Marie-Catherine de Marneffe, Anna N. Rafferty, and Christopher D. Manning. 2008. Finding contradictions in text. In Proceedings of ACL-08: HLT. Association for Computational Linguistics, pages 1039–1047. http://www.aclweb.org/anthology/P08-1118.
Jeff Donahue, Yangqing Jia, Oriol Vinyals, Judy Hoff- man, Ning Zhang, Eric Tzeng, and Trevor Darrell. 2014. DeCAF: A deep convolutional activation fea- ture for generic visual recognition. In Proceedings of the International Conference on Machine Learn- ing (ICML).
Charles Fillmore, Nancy Ide, Daniel Jurafsky, and Catherine Macleod. 1998. An American National Corpus: A proposal. In Proceedings of the First An- nual Conference on Language Resources and Eval- uation. pages 965â969.
Yaroslav Fyodorov, Yoad Winter, and Nissim Francez. 2000. A natural logic inference system. In Proceed- ings of the 2nd Workshop on Inference in Computa- tional Semantics.
Irene Heim and Angelika Kratzer. 1998. Semantics in generative grammar. Blackwell Publishers.
Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Neural computation Long short-term memory. 9(8):1735â1780.
Nancy Ide and Catherine Macleod. 2001. The Amer- ican National Corpus: A standardized resource of American English. In Proceedings of Corpus Lin- guistics. Lancaster University Centre for Computer Corpus Research on Language, volume 3, pages 1â 7.
Nancy Ide and Keith Suderman. 2006. The American National Corpus first release. In Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC). European Language Resources Association (ELRA).
Diederik Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Repre- sentations (ICLR).
Dan Klein and Christopher D. Manning. 2003. In Proc. ACL. Accurate unlexicalized parsing. https://doi.org/10.3115/1075096.1075150.
Nikita Nangia, Adina Williams, Angeliki Lazaridou, and Samuel R. Bowman. 2017. The RepEval 2017 shared task: Multi-genre natural language inference with sentence representations. In Proceedings of RepEval 2017: The Second Workshop on Evaluating Vector Space Representations for NLP.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hin- ton. 2012. Imagenet classiï¬cation with deep convo- lutional neural networks. In Advances in Neural In- formation Processing Systems 25, pages 1097â1105.
Ankur Parikh, Oscar T¨ackstr¨om, Dipanjan Das, and Jakob Uszkoreit. 2016. A decomposable attention model for natural language inference. In Proceed- ings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 2249â2255. https://doi.org/10.18653/v1/D16-1244.
Bill MacCartney and Christopher D. Manning. 2009. An extended model of natural logic. In Proceedings of the Eighth International Conference on Computational Semantics. pages 140–156.
Catherine Macleod, Nancy Ide, and Ralph Grishman. 2000. The American National Corpus: A standard- ized resource for American English. In Conference on Language Resources and Evaluation (LREC).
Mitchell P Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large annotated corpus of English: The Penn treebank. Computa- tional linguistics 19(2):313â330.
Jeffrey Pennington, Richard Socher, and Christo- pher D. Manning. 2014. GloVe: Global vectors In Proceedings of the for word representation. 2014 Conference on Empirical Methods in Natu- ral Language Processing (EMNLP). Association for Computational Linguistics, pages 1532â1543. https://doi.org/10.3115/v1/D14-1162.
Marco Marelli, Luisa Bentivogli, Marco Baroni, Raffaella Bernardi, Stefano Menini, and Roberto Zamparelli. 2014a. Evaluation of compositional distributional semantic models on full sentences through semantic relatedness and textual entailment. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014). Associ- ation for Computational Linguistics, pages 1â8. https://doi.org/10.3115/v1/S14-2001.
Philippe Schlenker. 2016. The Cambridge Handbook of Formal Semantics, chapter The Semantics/Pragmatics Interface, pages 664–727. Cambridge University Press. https://doi.org/10.1017/CBO9781139236157.023.
Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overï¬tting. Journal of Machine Learning Re- search (JMLR) 15:1929â1958.
Marco Marelli, Stefano Menini, Marco Baroni, Luisa Bentivogli, Raffaella Bernardi, and Roberto Zam- parelli. 2014b. A SICK cure for the evaluation of compositional distributional semantic models. In Proceedings of the Twelfth International Conference on Language Resources and Evaluation (LREC).
Robert Stalnaker. 1974. Semantics and Philosophy, New York, NY: New York University Press, chap- ter Pragmatic Presupposition, pages 329â355.
Anna Szabolcsi. 2010. Quantiï¬cation. Cambridge University Press.
Lili Mou, Zhao Meng, Rui Yan, Ge Li, Yan Xu, Lu Zhang, and Zhi Jin. 2016a. How transferable are neural networks in NLP applications? In Proceed- ings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP). Associ- ation for Computational Linguistics, pages 479â489. https://doi.org/10.18653/v1/D16-1046.
Shuohang Wang and Jing Jiang. 2016. Learning natural language inference with LSTM. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, pages 1442–1451. https://doi.org/10.18653/v1/N16-1170.
Lili Mou, Men Rui, Ge Li, Yan Xu, Lu Zhang, Rui Yan, and Zhi Jin. 2016b. Natural language inference by tree-based convolution and heuristic the 54th Annual In Proceedings of matching. Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Associa- tion for Computational Linguistics, pages 130â136. https://doi.org/10.18653/v1/P16-2022.
Peter Young, Alice Lai, Micah Hodosh, and Julia Hockenmaier. 2014. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. Transactions of the Association for Computational Linguistics 2:67–78.
Tsendsuren Munkhdalai and Hong Yu. 2017. Neural semantic encoders. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers. Association for Computational Linguistics, pages 397–407. http://www.aclweb.org/anthology/E17-1038.
Matthew D. Zeiler and Rob Fergus. 2014. Visualizing and understanding convolutional networks. In Pro- ceedings of the European Conference on Computer Vision (ECCV). pages 818â833.
This figure "sentence_dist.png" is available in "png" format from:
http://arxiv.org/ps/1704.05426v4 | {
"id": "1705.02364"
} |
1704.05179 | SearchQA: A New Q&A Dataset Augmented with Context from a Search Engine | We publicly release a new large-scale dataset, called SearchQA, for machine
comprehension, or question-answering. Unlike recently released datasets, such
as DeepMind CNN/DailyMail and SQuAD, the proposed SearchQA was constructed to
reflect a full pipeline of general question-answering. That is, we start not
from an existing article and generate a question-answer pair, but start from an
existing question-answer pair, crawled from J! Archive, and augment it with
text snippets retrieved by Google. Following this approach, we built SearchQA,
which consists of more than 140k question-answer pairs with each pair having
49.6 snippets on average. Each question-answer-context tuple of the SearchQA
comes with additional meta-data such as the snippet's URL, which we believe
will be valuable resources for future research. We conduct human evaluation as
well as test two baseline methods, one simple word selection and the other deep
learning based, on the SearchQA. We show that there is a meaningful gap between
the human and machine performances. This suggests that the proposed dataset
could well serve as a benchmark for question-answering. | http://arxiv.org/pdf/1704.05179 | Matthew Dunn, Levent Sagun, Mike Higgins, V. Ugur Guney, Volkan Cirik, Kyunghyun Cho | cs.CL | null | null | cs.CL | 20170418 | 20170611 |
# SearchQA: A New Q&A Dataset Augmented with Context from a Search Engine
# Matt Dunn Center for Data Science, NYU
# Levent Sagun Courant Institute, NYU
# Mike Higgins Center for Data Science, NYU
# V. UËgur G ¨uney Senior Data Scientist, Driversiti
# Volkan Cirik School of Computer Science, CMU
# Kyunghyun Cho Courant Institute and Center for Data Science, NYU
# Abstract
We publicly release a new large-scale dataset, called SearchQA, for machine comprehension, or question-answering. Unlike recently released datasets, such as DeepMind CNN/DailyMail and SQuAD, the proposed SearchQA was constructed to reï¬ect a full pipeline of general question-answering. That is, we start not from an existing article and generate a question-answer pair, but start from an ex- isting question-answer pair, crawled from J! Archive, and augment it with text snip- pets retrieved by Google. Following this approach, we built SearchQA, which con- sists of more than 140k question-answer pairs with each pair having 49.6 snippets on average. Each question-answer-context tuple of the SearchQA comes with ad- ditional meta-data such as the snippetâs URL, which we believe will be valuable resources for future research. We conduct human evaluation as well as test two base- line methods, one simple word selection and the other deep learning based, on the SearchQA. We show that there is a mean- ingful gap between the human and ma- chine performances. This suggests that the proposed dataset could well serve as a benchmark for question-answering.
# Introduction
One of the driving forces behind the recent suc- cess of deep learning in challenging tasks, such as object recognition (Krizhevsky et al., 2012), speech recognition (Xiong et al., 2016) and ma- chine translation (Bahdanau et al., 2014), has been the increasing availability of large-scale annotated data.
This observation has also led to the interest in building a large-scale annotated dataset for question-answering. In 2015, Bordes et al. (2015) released a large-scale dataset of 100k open-world question-answer pairs constructed from Freebase, and Hermann et al. (2015) released two datasets, each consisting of closed-world question-answer pairs automatically generated from news articles. The latter was followed by Hill et al. (2015), Ra- jpurkar et al. (2016) and Onishi et al. (2016), each of which has released a set of large-scale closed- world question-answer pairs focused on a speciï¬c aspect of question-answering.
Let us ï¬rst take a step back, and ask what a full end-to-end pipeline for question-answering would look like. A general question-answering system would be able to answer a question about any do- main, based on the world knowledge. This system would consist of three stages. A given question is read and reformulated in the ï¬rst stage, followed by information retrieval via a search engine. An answer is then synthesized based on the query and a set of retrieved documents.
We notice a gap between the existing closed- world question-answering data sets and this con- ceptual picture of a general question-answering system. The general question-answering system must deal with a noisy set of retrieved documents, which likely consist of many irrelevant docu- ments as well as semantically and syntactically ill- formed documents. On the other hand, most of the existing closed-world question-answering datasets were constructed in a way that the context pro- vided for each question is guaranteed relevant and well-written. This guarantee comes from the fact that each question-answer-context tuple was gen- erated starting from the context from which the question and answer were extracted.
In this paper, we build a new closed-world question-answering dataset that narrows this gap.
Unlike most of the existing work, we start by building a set of question-answer pairs from Jeop- ardy!. We augment each question-answer pair, which does not have any context attached to it, by querying Google with the question. This pro- cess enables us to retrieve a realistic set of rel- evant/irrelevant documents, or more speciï¬cally their snippets. We ï¬lter out those questions whose answers could not be found within the retrieved snippets and those with less than forty web pages returned by Google. We end up with 140k+ question-answer pairs, and in total 6.9M snippets.1 We evaluate this new dataset, to which we re- fer as SearchQA, with a variant of recently pro- posed attention sum reader (Kadlec et al., 2016) and with human volunteers. The evaluation shows that the proposed SearchQA is a challenging task both for humans and machines but there is still a signiï¬cant gap between them. This suggests that the new dataset would be a valuable resource for further research and advance our ability to build a better automated question-answering system.
# 2 SearchQA
Collection A major goal of the new dataset is to build and provide to the public a machine compre- hension dataset that better reï¬ects a noisy informa- tion retrieval system. In order to achieve this goal, we need to introduce a natural, realistic noise to the context of each question-answer pair. We use a production-level search engine âGoogleâ for this purpose.
We crawled the entire set of question-answer pairs from J! Archive2 which has archived all the question-answer pairs from the popular television show Jeopardy!. We used the question from each pair to query Google in order to retrieve a set of relevant web page snippets. The relevancy in this case was fully determined by an unknown, but in- production, algorithm underlying Googleâs search engine, making it much closer to a realistic sce- nario of question-answering.
Cleaning Because we do not have any control over the internals of Google search engine, we extensively cleaned up the entire set of question- answer-context tuples. First, we removed any snippet returned that included the air-date of the Jeopardy! episode, the exact copy of the question,
1 The dataset can be found at https://github.com/nyu-dl/SearchQA.
# 2http://j-archive.com
[Figure 1 content: a truncated .json question-answer-context example about the Klingon language, showing snippet text, the source Wikipedia URL, and the associated answer field.]
# Figure 1: One example in .json format.
or a term âJeopardy!â, âquizâ or âtriviaâ, to en- sure that the answer could not be found trivially by a process of word/phrase matching. Furthermore, we manually checked any URL, from which these removed snippets were taken, that occurs more than 50 times and removed any that explicitly con- tains Jeopardy! question-answer pairs.
Among the remaining question-answer-context tuples, we removed any tuple whose context did not include the answer. This was done mainly for computational efï¬ciency in building a question- answering system using the proposed dataset. We kept only those tuples whose answers were three or less words long.
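A hedged sketch of these filtering rules is given below; the function name and the exact string matching are illustrative, and the real pipeline also involved the manual URL inspection described above:

```python
def keep_tuple(question, answer, snippets, air_date):
    """Return (keep, cleaned_snippets) for one question-answer-context tuple,
    following the cleaning rules described in the text (illustrative only)."""
    banned = ("jeopardy!", "quiz", "trivia", air_date.lower(), question.lower())
    cleaned = [s for s in snippets if not any(b in s.lower() for b in banned)]
    answer_found = any(answer.lower() in s.lower() for s in cleaned)
    keep = answer_found and len(answer.split()) <= 3
    return keep, cleaned
```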
Basic Statistics After all these processes, we have ended up with 140,461 question-answer pairs. Each pair is coupled with a set of 49.6±2.10 snippets on average. Each snippet is 37.3±11.7 tokens long on average. Answers are on aver- age 1.47±0.58 tokens long. There are 1,257,327 unique tokens.
Meta-Data We collected for each question- answer-context tuple additional metadata from Jeopardy! and returned by Google. More speciï¬- cally, from Jeopardy! we have the category, dollar value, show number and air date for each ques- tion. From Google, we have the URL, title and a set of related links (often none) for each snip- pet. Although we do not use them in this paper, these items are included in the public release of SearchQA and may be used in the future. An ex- ample of one question-answer pair with just one snippet is presented in Fig. 1.
Training, Validation and Test Sets In order to maximize its reusability and reproducibility, we provide a predeï¬ned split of the dataset into train- ing, validation and test sets. One of the most im- portant aspects in question-answering is whether a question-answering machine would generalize to unseen questions from the future. We thus ensure that these three sets consist of question- answer pairs from non-overlapping years, and that
the validation and test question-answer pairs are from years later than the training setâs pairs. The training, validation and test sets consist of 99,820, 13,393 and 27,248 examples, respectively. Among these, examples with unigram answers are respec- tively 55,648, 8,672 and 17,056.
# 3 Related Work
Open-World Question-Answering An open- world question-answering dataset consists of a set of question-answer pairs and the knowledge database. It does not come with an explicit link be- tween each question-answer pair and any speciï¬c entry in the knowledge database. A representative example of such a dataset is SimpleQA by (Bordes et al., 2015). SimpleQA consists of 100k question- answer pairs, and uses Freebase as a knowledge database. The major limitation of this dataset is that all the questions are simple in that all of them are in the form of (subject, relationship, ?).
Closed-World Question-Answering Although we use open-world snippets, the ï¬nal SearchQA is a closed-world question-answering dataset since each question can be answered entirely based on the associated snippets. One family of such datasets includes Childrenâs Book dataset (Hill et al., 2015), CNN and DailyMail (Hermann et al., 2015). Each question-answer-context tuple in these datasets was constructed by ï¬rst selecting the context article and then creating a question- answer pair, where the question is a sentence with a missing word and the answer is the miss- ing word. This family differs from SearchQA in two aspects. First, in SearchQA we start from a question-answer pair, and, second, our question is not necessarily of a ï¬ll-in-a-word type.
Another family is an extension of the for- family in- mer cludes SQuAD (Rajpurkar et al., 2016) and NEWSQA (Trischler et al., 2016). Unlike the ï¬rst family, answers in this family are often multi- word phrases, and they do not necessarily appear as they are in the corresponding context. In con- trast, in SearchQA we ensure that all multi-word phrase answers appear in their corresponding con- text. Answers, often as well as questions, are thus often crowd-sourced in this family of datasets. Nonetheless, each tuple in these datasets was how- ever also constructed starting from a correspond- ing context article, making them less realistic than the proposed SearchQA.
                        Unigram    n-gram
Per-question Average    66.97%     42.86%
Per-user Average        64.85%     43.85%
Per-user Std. Dev.      10.43%     8.16%
F1 score (for n-gram answers): 57.62%

Table 1: The accuracies achieved by the volunteers.
MS MARCO (Nguyen et al., 2016)âthe most recently released dataset to our knowledgeâ is perhaps most similar to the proposed SearchQA. Nguyen et al. (2016) selected a subset of actual user-generated queries to Microsoft Bing that cor- respond to questions. These questions are aug- mented with a manually selected subset of snip- pets returned by Bing. The question is then an- swered by a human. Two major differences be- tween MS MARCO and SearchQA are the choice of questions and search engine. We believe the comparison between MS MARCO and the pro- posed SearchQA would be valuable for expand- ing our understanding on how the choice of search engines as well as types of questions impact question-answering systems in the future.
# 4 Experiments and Results
As a part of our release of SearchQA, we provide a set of baseline performances against which other researchers may compare their future approaches. Unlike most of the previous datasets, SearchQA augments each question-answer pair with a noisy, real context retrieved from the largest search en- gine in the world. This implies that the human per- formance is not necessarily the upper-bound but we nevertheless provide it as a guideline.
# 4.1 Human Evaluation
We designed a web interface that displays a query and retrieved snippets and lets a user select an an- swer by clicking words on the screen. A user is given up to 40 minutes to answer as many ques- tions as possible. We randomly select question- answer-context pairs from the test set.
We recruited thirteen volunteers from the mas- terâs program in the Center for Data Science at NYU. They were uniform-randomly split into two groups. The ï¬rst group was presented with ques- tions that have single-word (unigram) answers only, and the other group with questions that have either single-word or multi-word (n-gram) an- swers. On average, each participant answers 47.23 questions with the standard deviation of 30.42.
We report the average and standard deviation of the accuracy achieved by the volunteers in Table 1. We notice the signiï¬cant gap between the accura- cies by the ï¬rst and second groups, suggesting that the difï¬culty of question-answering grows as the length of the answer increases. Also, according to the F1 scores, we observe a large gap between the ASR and humans. This suggests the potential for the proposed SearchQA as a benchmark for ad- vancing question-answering research. Overall, we found the performance of human volunteers much lower than expected and suspect the following un- derlying reasons. First, snippets are noisy, as they are often excerpts not full sentences. Second, hu- man volunteers may have become exhausted over the trial. We leave more detailed analysis of the performance of human subjects on the proposed SearchQA for the future.
# 4.2 Machine Baselines
TF-IDF Max An interesting property of the proposed SearchQA is that the context of each question-answer pair was retrieved by Google with the question as a query. This implies that the information about the question itself may be implicitly embedded in the snippets. We therefore test a naive strategy (TF-IDF Max) of selecting the word with the highest TF-IDF score in the context as an answer. Note that this can only be used for the questions with a unigram answer.
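A minimal sketch of this baseline follows; it uses one common TF-IDF weighting, and the exact variant behind the reported numbers is an assumption here:

```python
from collections import Counter
import math

def tfidf_max_answer(context_tokens, document_freq, num_docs):
    """TF-IDF Max baseline sketch: return the context word with the highest
    TF-IDF weight.  `document_freq` maps a word to the number of contexts
    in the corpus that contain it."""
    tf = Counter(context_tokens)

    def tfidf(word):
        idf = math.log(num_docs / (1 + document_freq.get(word, 0)))
        return tf[word] * idf

    return max(tf, key=tfidf)
```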
Attention Sum Reader Attention sum reader (ASR, Kadlec et al., 2016) is a variant of a pointer network (Vinyals et al., 2015) that was specifically constructed to solve a cloze-style question-answering task. ASR consists of two encoding recurrent networks. The first network encodes a given context c, which is the concatenation of all the snippets in the case of SearchQA, into a set of hidden vectors {h_t^c}, and the second network encodes a question q into a single vector h^q. The dot product between each hidden vector from the context and the question vector is exponentiated to form word scores β_t = exp((h_t^c)^T h^q). ASR then pools these word scores by summing the scores of the same word, resulting in a set of unique-word scores β'_i = Σ_{t ∈ D_i} β_t, where D_i indicates the positions at which the word i appears in the context. These unique-word scores are normalized, and we obtain an answer distribution p(i | c, q) = β'_i / Σ_j β'_j. The ASR is trained to maximize this (log-)probability of the correct answer word in the context.
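A NumPy sketch of the attention-sum scoring step, assuming the context hidden states and question vector have already been computed (this is not the authors' implementation):

```python
import numpy as np

def attention_sum(context_tokens, H_c, h_q):
    """H_c is a (T, d) array of context hidden states, h_q a (d,) question
    vector.  Scores of repeated words are pooled by summation, then normalized."""
    logits = H_c @ h_q
    beta = np.exp(logits - logits.max())          # per-position word scores (stabilized)
    scores = {}
    for t, word in enumerate(context_tokens):     # sum scores over positions of the same word
        scores[word] = scores.get(word, 0.0) + beta[t]
    total = sum(scores.values())
    return {w: s / total for w, s in scores.items()}   # p(word | context, question)
```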
                          Unigram          n-gram
Model         Set         Acc    Acc@5     F1
TF-IDF Max    Valid       13.0   49.3      -
              Test        12.7   49.0      -
ASR           Valid       43.9   67.3      24.2
              Test        41.3   65.1      22.8

Table 2: The accuracies on the validation and test sets using the non-trainable baseline (TF-IDF Max) and the trainable baseline (ASR). We report top-1/5 accuracies for unigram answers, and otherwise, F1 scores.
This vanilla ASR only works with a unigram answer and is not suitable for an n-gram answer. We avoid this issue by introducing another recurrent network which encodes the previous answer words (a_1, ..., a_{l−1}) into a vector h^a. This vector is added to the question vector, i.e., h^q ← h^q + h^a. During training, we use the correct previous answer words, while at test time we let the model, called n-gram ASR, predict one answer word at a time until it predicts ⟨answer⟩. This special token, appended to the context, indicates the end of the answer.
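The resulting greedy decoding loop might look as follows; `step_fn` is a hypothetical callable standing in for a forward pass that re-scores the context after folding in the words predicted so far:

```python
def decode_ngram_answer(step_fn, h_q, max_len=3, end_token="<answer>"):
    """Greedy decoding sketch for the n-gram ASR variant.  `step_fn(h_q, answer)`
    is assumed to return a word -> probability dict over context words plus the
    end token, conditioned on the answer words predicted so far."""
    answer = []
    for _ in range(max_len + 1):
        word = max(step_fn(h_q, answer).items(), key=lambda kv: kv[1])[0]
        if word == end_token:
            break
        answer.append(word)
    return answer
```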
We try both the vanilla and n-gram ASRâs on the unigram-answer-only subset and on the whole set, respectively. We use recurrent networks with 100 gated recurrent units (GRU, Cho et al., 2014) for both unigram and n-gram models, respec- tively. We use Adam (Kingma and Ba, 2014) and dropout (Srivastava et al., 2014) for training.
Result We report the results in Table 2. We see that the attention sum reader is below human eval- uation, albeit by a rather small margin. Also, TF- IDF Max scores are not on par when compared to ASR which is perhaps not surprising. Given the unstructured nature of SearchQA, we believe im- provements on the benchmarks presented are cru- cial for developing a real-world Q&A system.
# 5 Conclusion
We constructed a new dataset for question-answering research, called SearchQA. It was built using an in-production, commercial search engine. It closely reflects the full pipeline of a (hypothetical) general question-answering system, which consists of information retrieval and answer synthesis. We conducted human evaluation as well as machine evaluation. Using the latest technique, ASR, we show that there is a meaningful gap between humans and machines, which suggests the potential of SearchQA as a benchmark
task for question-answering research. We release SearchQA publicly, including our own implemen- tation of ASR and n-gram ASR in PyTorch.3
# Acknowledgments
KC thanks support by Google, NVIDIA, eBay and Facebook. MD conducted this work as a part of DS-GA 1010: Independent Study in Data Science at the Center for Data Science, New York Univer- sity.
# References
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2014. Neural machine translation by jointly arXiv preprint learning to align and translate. arXiv:1409.0473 .
Antoine Bordes, Nicolas Usunier, Sumit Chopra, and Jason Weston. 2015. Large-scale simple question answering with memory networks. arXiv preprint arXiv:1506.02075 .
Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Conference on Empirical Methods in Natural Language Processing (EMNLP 2014).
Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Su- leyman, and Phil Blunsom. 2015. Teaching ma- chines to read and comprehend. In Advances in Neu- ral Information Processing Systems. pages 1693â 1701.
Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. 2015. The goldilocks principle: Reading childrenâs books with explicit memory representa- tions. arXiv preprint arXiv:1511.02301 .
Rudolf Kadlec, Martin Schmid, Ondrej Bajgar, and Jan Kleindienst. 2016. Text understanding with the attention sum reader network. arXiv preprint arXiv:1603.01547 .
Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 .
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hin- ton. 2012. Imagenet classiï¬cation with deep con- volutional neural networks. In Advances in neural information processing systems. pages 1097â1105.
Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. Ms marco: A human generated machine arXiv preprint reading comprehension dataset. arXiv:1611.09268 .
3http://pytorch.org/
Takeshi Onishi, Hai Wang, Mohit Bansal, Kevin Gim- pel, and David McAllester. 2016. Who did what: A large-scale person-centered cloze dataset. arXiv preprint arXiv:1608.05457 .
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250.
Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overï¬tting. Journal of Machine Learning Re- search 15(1):1929â1958.
Adam Trischler, Tong Wang, Xingdi Yuan, Justin Har- ris, Alessandro Sordoni, Philip Bachman, and Ka- heer Suleman. 2016. NewsQA: A machine compre- hension dataset. arXiv preprint arXiv:1611.09830 .
Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In Advances in Neural In- formation Processing Systems. pages 2692â2700.
Wayne Xiong, Jasha Droppo, Xuedong Huang, Frank Seide, Mike Seltzer, Andreas Stolcke, Dong Yu, and Geoffrey Zweig. 2016. Achieving human parity in conversational speech recognition. arXiv preprint arXiv:1610.05256 . | {
"id": "1511.02301"
} |
1704.05119 | Exploring Sparsity in Recurrent Neural Networks | Recurrent Neural Networks (RNN) are widely used to solve a variety of
problems and as the quantity of data and the amount of available compute have
increased, so have model sizes. The number of parameters in recent
state-of-the-art networks makes them hard to deploy, especially on mobile
phones and embedded devices. The challenge is due to both the size of the model
and the time it takes to evaluate it. In order to deploy these RNNs
efficiently, we propose a technique to reduce the parameters of a network by
pruning weights during the initial training of the network. At the end of
training, the parameters of the network are sparse while accuracy is still
close to the original dense neural network. The network size is reduced by 8x
and the time required to train the model remains constant. Additionally, we can
prune a larger dense network to achieve better than baseline performance while
still reducing the total number of parameters significantly. Pruning RNNs
reduces the size of the model and can also help achieve significant inference
time speed-up using sparse matrix multiply. Benchmarks show that using our
technique model size can be reduced by 90% and speed-up is around 2x to 7x. | http://arxiv.org/pdf/1704.05119 | Sharan Narang, Erich Elsen, Gregory Diamos, Shubho Sengupta | cs.LG, cs.CL | Published as a conference paper at ICLR 2017 | null | cs.LG | 20170417 | 20171106 |
Published as a conference paper at ICLR 2017
# EXPLORING SPARSITY IN RECURRENT NEURAL NETWORKS
Sharan Narang, Erich Elsen∗, Greg Diamos & Shubho Sengupta† Baidu Research {sharan,gdiamos}@baidu.com
# ABSTRACT
Recurrent Neural Networks (RNN) are widely used to solve a variety of problems and as the quantity of data and the amount of available compute have increased, so have model sizes. The number of parameters in recent state-of-the-art networks makes them hard to deploy, especially on mobile phones and embedded devices. The challenge is due to both the size of the model and the time it takes to eval- In order to deploy these RNNs efï¬ciently, we propose a technique to uate it. reduce the parameters of a network by pruning weights during the initial training of the network. At the end of training, the parameters of the network are sparse while accuracy is still close to the original dense neural network. The network size is reduced by 8à and the time required to train the model remains constant. Additionally, we can prune a larger dense network to achieve better than base- line performance while still reducing the total number of parameters signiï¬cantly. Pruning RNNs reduces the size of the model and can also help achieve signiï¬cant inference time speed-up using sparse matrix multiply. Benchmarks show that us- ing our technique model size can be reduced by 90% and speed-up is around 2à to 7Ã.
# INTRODUCTION
Recent advances in multiple ï¬elds such as speech recognition (Graves & Jaitly, 2014; Amodei et al., 2015), language modeling (J´ozefowicz et al., 2016) and machine translation (Wu et al., 2016) can be at least partially attributed to larger training datasets, larger models and more compute that allows larger models to be trained on larger datasets.
For example, the deep neural network used for acoustic modeling in Hannun et al. (2014) had 11 million parameters which grew to approximately 67 million for bidirectional RNNs and further to 116 million for the latest forward only GRU models in Amodei et al. (2015). And in language mod- eling the size of the non-embedding parameters (mostly in the recurrent layers) have exploded even as various ways of hand engineering sparsity into the embeddings have been explored in J´ozefowicz et al. (2016) and Chen et al. (2015a).
These large models face two signiï¬cant challenges in deployment. Mobile phones and embedded devices have limited memory and storage and in some cases network bandwidth is also a concern. In addition, the evaluation of these models requires a signiï¬cant amount of computation. Even in cases when the networks can be evaluated fast enough, it will still have a signiï¬cant impact on battery life in mobile devices (Han et al., 2015).
Inference performance of RNNs is dominated by the memory bandwidth of the hardware, since most of the work is simply reading in the parameters at every time step. Moving from a dense calculation to a sparse one comes with a penalty, but if the sparsity factor is large enough, then the smaller amount of data required by the sparse routines becomes a win. Furthermore, this suggests that if the parameter sizes can be reduced to ï¬t in cache or other very fast memory, then large speedups could be realized, resulting in a super-linear increase in performance.
# ∗Now at Google Brain: eriche@google.com   †Now at Facebook AI Research: ssengupta@fb.com
The more powerful server class GPUs used in data centers can generally perform inference quickly enough to serve one user, but in the data center performance per dollar is very important. Techniques that allow models to be evaluated faster enable more users to be served per GPU increasing the effective performance per dollar.
We propose a method to reduce the number of weights in recurrent neural networks. While the network is training we progressively set more and more weights to zero using a monotonically increasing threshold. By controlling the shape of the function that maps iteration count to threshold value, we can control how sparse the ï¬nal weight matrices become. We prune all the weights of a recurrent layer; other layer types with signiï¬cantly fewer parameters are not pruned. Separate threshold functions can be used for each layer, although in practice we use one threshold function per layer type. With this approach, we can achieve sparsity of 90% with a small loss in accuracy. We show this technique works with Gated Recurrent Units (GRU) (Cho et al., 2014) as well as vanilla RNNs.
In addition to the beneï¬ts of less storage and faster inference, this technique can also improve the accuracy over a dense baseline. By starting with a larger dense matrix than the baseline and then pruning it down, we can achieve equal or better accuracy compared to the baseline but with a much smaller number of parameters.
This approach can be implemented easily in current training frameworks and is agnostic to the optimization algorithm. Furthermore, training time does not increase unlike previous approaches such as in Han et al. (2015). State of the art results in speech recognition generally require days to weeks of training time, so a further 3-4Ã increase in training time is undesirable.
# 2 RELATED WORK
There have been several proposals to reduce the memory footprint of weights and activations in neural networks. One method is to use a ï¬xed point representation to quantize weights to signed bytes and activations to unsigned bytes (Vanhoucke et al., 2011). Another technique that has been tried in the past is to learn a low rank factorization of the weight matrices. One method is to carefully construct one of the factors and learn the other (Denil et al., 2013). Inspired by this technique, a low rank approximation for the convolution layers achieves twice the speed while staying within 1% of the original model in terms of accuracy (Denton et al., 2014). The convolution layer can also be approximated by a smaller set of basis ï¬lters (Jaderberg et al., 2014). By doing this they achieve a 2.5x speedup with no loss in accuracy. Quantization techniques like k-means clustering of weights can also reduce the storage size of the models by focusing only on the fully connected layers (Gong et al., 2014). A hash function can also reduce memory footprint by tying together weights that fall in the same hash bucket (Chen et al., 2015b). This reduces the model size by a factor of 8.
Yet another approach to reduce compute and network size is through network pruning. One method is to use several bias techniques to decay weights (Hanson & Pratt, 1989). Yet another approach is to use the diagonal terms of a Hessian matrix to construct a saliency threshold and used this to drop weights that fall below a given saliency threshold (LeCun et al., 1989). In this technique, once a weight has been set to 0, the network is retrained with these weights frozen at 0. Optimal Brain Surgeon is another work in the same vein that prunes weights using the inverse of a Hessian matrix with the additional advantage of no re-training after pruning (Hassibi et al., 1993).
Both pruning and quantization techniques can be combined to get impressive gains on AlexNet trained on the ImageNet dataset (Han et al., 2015). In this case, pruning, quantization and subsequent Huffman encoding results in a 35x reduction in model size without affecting accuracy. There has also been some recent work to shrink model size for recurrent and LSTM networks used in automatic speech recognition (ASR) (Lu et al., 2016). By using a hybrid strategy of using Toeplitz matrices for the bottom layer and shared low-rank factors on the top layers, they were able to reduce the parameters of a LSTM by 75% while incurring a 0.3% increase in word error rate (WER).
Our method is a pruning technique that is computationally efï¬cient for large recurrent networks that have become the norm for automatic speech recognition. Unlike the methods that need to approximate a Hessian (LeCun et al., 1989; Hassibi et al., 1993) our method uses a simple heuristic to choose the threshold used to drop weights. Yet another advantage, when compared to methods that need re-training (Han et al., 2015), is that our pruning technique is part of training and needs
Table 1: Hyper-parameters used for determining the threshold (ε)

HYPER-PARAM        DESCRIPTION                                       HEURISTIC VALUES
start_itr          Iteration to start pruning                        Start of second epoch
ramp_itr           Iteration to increase the rate of pruning         Start of 25% of total epochs
end_itr            Iteration to stop pruning more parameters         Start of 50% of total epochs
start_slope (θ)    Initial slope to prune the weights                See Equation 1
ramp_slope (φ)     Ramp slope to change the rate of pruning          1.5θ to 2θ
freq               Number of iterations after which ε is updated     100
no additional re-training. Even though our technique requires a judicious choice of pruning hyper-parameters, we feel that this is easier than choosing the structure of matrices to guide the sparsification of recurrent networks (Lu et al., 2016). Another approach for pruning feed-forward neural networks for speech recognition is to use a simple threshold to prune all weights at a particular epoch (Yu et al., 2012). However, we find that gradual pruning produces better results than hard pruning.
3
# IMPLEMENTATION
Our pruning approach involves maintaining a set of masks, a monotonically increasing threshold and a set of hyper parameters that are used to determine the threshold. During model initialization, we create a set of binary masks, one for each weight in the network that are all initially set to one. After every optimizer update step, each weight is multiplied with its corresponding mask. At regular intervals, the masks are updated by setting all parameters that are lower than the threshold to zero.
The threshold is computed using hyper-parameters shown in Table 1. The hyper-parameters control the duration, rate and frequency of pruning the parameters for each layer. We use a different set of hyper-parameters for each layer type resulting in a different threshold for each layer type. The threshold is updated at regular intervals using the hyper-parameters according to Algorithm 1. We donât modify the gradients in the back-propagation step. It is possible for the updates of a pruned weight to be larger than the threshold of that layer. In this case, the weight will be involved in the forward pass again.
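A NumPy sketch of this mask mechanism follows (a stand-in for the actual training framework; parameter and mask containers are assumed to be plain dictionaries of arrays):

```python
import numpy as np

def pruning_step(params, masks, epsilon, refresh_masks):
    """Apply the mask mechanism described above: weights are multiplied by their
    binary masks after every optimizer update, and at regular intervals the masks
    are refreshed so that weights whose magnitude is below the current threshold
    are set (and kept) at zero."""
    for name, w in params.items():
        if refresh_masks:
            masks[name] = (np.abs(w) >= epsilon).astype(w.dtype)
        params[name] = w * masks[name]
```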
We provide heuristics to help determine start_itr, ramp_itr and end_itr in Table 1. After picking these hyper-parameters and assuming that ramp_slope (φ) is 1.5× start_slope (θ), we calculate θ using Equation 1.

θ = (2 · q · freq) / (2 · (ramp_itr − start_itr) + 3 · (end_itr − ramp_itr))     (1)
In order to determine q in Equation 1, we use an existing weight array from a previously trained model. The weights are sorted using absolute values and we pick the weight corresponding to the 90th percentile as q. This allows us to pick reasonable values for the hyper-parameters required for pruning. A validation set can be used to fine-tune these parameters.
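A sketch of this heuristic, assuming the dense weights are available as a NumPy array and φ is fixed at 1.5θ as in the text:

```python
import numpy as np

def pruning_slopes(dense_weights, freq, start_itr, ramp_itr, end_itr):
    """q is the 90th-percentile absolute weight taken from a previously trained
    dense model; theta follows Equation 1 and phi is assumed to be 1.5 * theta."""
    q = np.percentile(np.abs(dense_weights), 90)
    theta = (2.0 * q * freq) / (2.0 * (ramp_itr - start_itr) + 3.0 * (end_itr - ramp_itr))
    phi = 1.5 * theta
    return theta, phi
```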
We only prune the weights of the recurrent and linear layers but not the biases or batch norm pa- rameters since they are much fewer in number compared to the weights. For the recurrent layers, we prune both the input weight matrix and the recurrent weight matrix. Similarly, we prune all the weights in gated recurrent units including those of the reset and update gates.
# Algorithm 1 Pruning Algorithm
current_itr = 0
while training do
    for all parameters do
        param = param ⊙ mask
        if current_itr > start_itr and current_itr < end_itr then
            if (current_itr mod freq) == 0 then
                if current_itr < ramp_itr then
                    ε = θ · (current_itr − start_itr + 1) / freq
                else
                    ε = (θ · (ramp_itr − start_itr + 1) + φ · (current_itr − ramp_itr + 1)) / freq
                end if
                mask = abs(param) ≥ ε
            end if
        end if
    end for
    current_itr += 1
end while
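The same logic as a small, self-contained Python sketch (the names are ours; the real training loop, optimizer and per-layer-type hyper-parameters are omitted):

```python
def threshold(itr, theta, phi, freq, start_itr, ramp_itr):
    # Monotonically increasing threshold: gentle slope theta until ramp_itr,
    # then the steeper slope phi, following Algorithm 1 and equation (1).
    if itr < ramp_itr:
        return theta * (itr - start_itr + 1) / freq
    return (theta * (ramp_itr - start_itr + 1)
            + phi * (itr - ramp_itr + 1)) / freq

def pruning_step(params, masks, itr, hp):
    # One iteration of the pruning bookkeeping; `params` and `masks` map layer
    # names to NumPy arrays and `hp` holds the hyper-parameters of Table 1.
    for name in masks:
        params[name] = params[name] * masks[name]
        if hp["start_itr"] < itr < hp["end_itr"] and itr % hp["freq"] == 0:
            eps = threshold(itr, hp["theta"], hp["phi"], hp["freq"],
                            hp["start_itr"], hp["ramp_itr"])
            masks[name] = (abs(params[name]) >= eps).astype(params[name].dtype)
    return params, masks
```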
# 4 EXPERIMENTS
We run all our experiments on a training set of 2100 hours of English speech data and a validation set of 3.5 hours of multi-speaker data. This is a small subset of the datasets that we use to train our state-of-the-art automatic speech recognition models. We train the models using Nesterov SGD for 20 epochs. Besides the hyper-parameters for determining the threshold, all other hyper-parameters remain unchanged between the dense and sparse training runs. We find that our pruning approach works well for vanilla bidirectional recurrent layers and forward-only gated recurrent units.
4.1 BIDIRECTIONAL RNNS
We use the Deep Speech 2 model for these experiments. As shown in Table 2, this model has 2 convolution layers, followed by 7 bidirectional recurrent layers and a CTC cost layer. Each recurrent linear layer has 1760 hidden units, creating a network of approximately 67 million parameters. For these experiments, we prune the linear layers that feed into the recurrent layers, the forward and backward recurrent layers, and the fully connected layer before the CTC layer. These experiments use clipped rectified-linear units (ReLU) σ(x) = min(max(x, 0), 20) as the activation function. In the sparse run, the pruning begins shortly after the first epoch and continues until the 10th epoch. We chose these hyper-parameters so that the model has an overall sparsity of 88% at the end of pruning, which is 8x smaller than the original dense model. The character error rate (CER) on the dev set is about 20% worse relative to the dense model as shown in Table 3.

An argument against this sparsity result might be that we are taking advantage of a large model that overfits our relatively small dataset. In order to test this hypothesis, we train a dense model with 704 hidden units in each layer, which has approximately the same number of parameters as the final sparse model. Table 3 shows that this model performs worse than the sparse models. Thus a sparse model is a better approach to reduce parameters than using a dense model with fewer hidden units.

In order to recover the loss in accuracy, we train sparse models with larger recurrent layers with 2560 and 3072 hidden units. Figure 1a shows the training and dev curves for these sparse models compared to the dense baseline model. These experiments use the same hyper-parameters (except for small changes in the pruning hyper-parameters) and the same dataset as the baseline model. As we see in Table 3, the model with 2560 hidden units achieves a 0.75% relative improvement compared to the dense baseline model, while the model with 3072 hidden units has a 3.95% improvement. The dense 2560 model also improves the CER by 11.85% relative to the dense baseline model. The sparse 2560 model is about 12% worse than the corresponding dense model. Both these large models are pruned to achieve a final sparsity of around 92%. These sparse larger models have significantly fewer parameters than the baseline dense model.
Table 2: Deep Speech 2 architecture with 1760 hidden units
LAYER ID | TYPE                           | # PARAMS
layer 0  | 2D Convolution                 | 19616
layer 1  | 2D Convolution                 | 239168
layer 2  | Bidirectional Recurrent Linear | 8507840
layer 3  | Bidirectional Recurrent Linear | 9296320
layer 4  | Bidirectional Recurrent Linear | 9296320
layer 5  | Bidirectional Recurrent Linear | 9296320
layer 6  | Bidirectional Recurrent Linear | 9296320
layer 7  | Bidirectional Recurrent Linear | 9296320
layer 8  | Bidirectional Recurrent Linear | 9296320
layer 9  | FullyConnected                 | 3101120
layer 10 | CTCCost                        | 95054
We also compare our gradual pruning approach to the hard pruning approach proposed in Yu et al. (2012). In their approach, all parameters below a certain threshold are pruned at a particular epoch. Table 4 shows the results of pruning the RNN dense baseline model at different epochs to achieve a final parameter count ranging from 8 million to 11 million. The network is trained for the same number of epochs as in the gradual pruning experiments. These hard threshold results are compared with the RNN Sparse 1760 model in Table 3. For approximately the same number of parameters, gradual pruning is 7% to 9% better than hard pruning.
We conclude that pruning models to achieve sparsity of around 90% reduces the relative accuracy of the model by 10% to 20%. However, for a given performance requirement, it is better to prune a larger model than to use a smaller dense model. Gradually pruning a model produces better results than hard pruning.
Table 3: GRU & bidirectional RNN model results
MODEL              | # UNITS | CER   | # PARAMS     | RELATIVE PERF
RNN Dense Baseline | 1760    | 10.67 | 67 million   | 0.0%
RNN Dense Small    | 704     | 14.50 | 11.6 million | -35.89%
RNN Dense Medium   | 2560    | 9.43  | 141 million  | 11.85%
RNN Sparse 1760    | 1760    | 12.88 | 8.3 million  | -20.71%
RNN Sparse Medium  | 2560    | 10.59 | 11.1 million | 0.75%
RNN Sparse Big     | 3072    | 10.25 | 16.7 million | 3.95%
GRU Dense          | 2560    | 9.55  | 115 million  | 0.0%
GRU Sparse         | 2560    | 10.87 | 13 million   | -13.82%
GRU Sparse Medium  | 3568    | 9.76  | 17.8 million | -2.20%
Table 4: RNN dense baseline model with hard pruning
# UNITS | PRUNED EPOCH | CER   | # PARAMS    | RELATIVE PERF
1760    | 5            | 13.82 | 8 million   | -29.52%
1760    | 7            | 13.27 | 11 million  | -24.37%
1760    | 10           | 13.41 | 8.4 million | -25.68%
1760    | 12           | 13.63 | 8 million   | -27.74%
1760    | 15           | 26.33 | 9.2 million | -146.77%
Figure 1: Training and dev curves for baseline (dense) and sparse training. Figure 1a includes training and dev curves for models with larger recurrent layers with 2560 and 3072 hidden units compared to the 1760 dense baseline. Figure 1b plots the training and dev curves for GRU models (sparse and dense) with 2560 hidden units.
Table 5: Gated recurrent units model
LAYER ID | TYPE                   | # PARAMS
layer 0  | 2D Convolution         | 19616
layer 1  | 2D Convolution         | 239168
layer 2  | Gated Recurrent Linear | 29752320
layer 3  | Gated Recurrent Linear | 39336960
layer 4  | Gated Recurrent Linear | 39336960
layer 5  | Row Convolution        | 107520
layer 6  | FullyConnected         | 6558720
layer 7  | CTCCost                | 74269
4.2 GATED RECURRENT UNITS
We also experimented with GRU models, shown in Table 5, that have 2560 hidden units in the GRU layer and a total of 115 million parameters. For these experiments, we prune all layers except the convolution layers since they have relatively fewer parameters.

Figure 1b compares the training and dev curves of a sparse GRU model and a dense GRU model. The sparse GRU model has a 13.8% drop in accuracy relative to the dense model. As shown in Table 3, the sparse model has an overall sparsity of 88.6% with 13 million parameters. Similar to the RNN models, we train a sparse GRU model with 3568 hidden units. The dataset and the hyper-parameters are not changed from the previous GRU experiments. This model has an overall sparsity of 91.82% with 17.8 million parameters. As shown in Table 3, the model with 3568 hidden units is only 2.2% worse than the baseline dense GRU model. We expect to match the performance of the GRU dense network by slightly lowering the sparsity of this network or by increasing the hidden units for the layers.
In addition, we experimented with pruning only the GRU layers and keeping all the parameters in fully connected layers. The accuracy for these experiments is around 7% worse than the baseline dense model. However, this model only achieves 50% compression due to the size of the fully connected layers.
Table 6: GEMM times for recurrent layers with different sparsity
LAYER SIZE | SPARSITY | LAYER TYPE | TIME (µsec) | SPEEDUP
1760       | 0%       | RNN        | 56          | 1
1760       | 95%      | RNN        | 20          | 2.8
2560       | 95%      | RNN        | 29          | 1.93
3072       | 95%      | RNN        | 48          | 1.16
2560       | 0%       | GRU        | 313         | 1
2560       | 95%      | GRU        | 46          | 6.80
3568       | 95%      | GRU        | 89          | 3.5
# 5 PERFORMANCE
5.1 COMPUTE TIME
The success of deep learning in recent years has been driven by large models trained on large datasets. However this also increases the inference time after the models have been deployed. We can mitigate this effect by using sparse layers.

A General Matrix-Matrix Multiply (GEMM) is the most compute intensive operation in evaluating a neural network model. Table 6 compares times for GEMM for recurrent layers with different numbers of hidden units that are 95% sparse. The performance benchmark was run using NVIDIA's CUDNN and cuSPARSE libraries on a TitanX Maxwell GPU and compiled using CUDA 7.5. All experiments are run on a minibatch of 1 and in this case, the operation is known as a sparse matrix-vector product (SpMV). We can achieve speed-ups ranging from 3x to 1.15x depending on the size of the recurrent layer. Similarly, for the GRU models, the speed-ups range from 7x to 3.5x. However, we notice that cuSPARSE performance is substantially lower than the approximately 20x speedup that we would expect by comparing the bandwidth requirements of the 95% sparse and dense networks. State of the art SpMV routines can achieve close to device memory bandwidth for a wide array of matrix shapes and sparsity patterns (see Baxter (2016) and Liu et al. (2013)). This means that the performance should improve by the factor that parameter counts are reduced. Additionally, we find that the cuSPARSE performance degrades with larger batch sizes. It should be possible for a better implementation to further exploit the significant reuse of the weight matrix provided by large batch sizes.
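To reproduce the flavor of this comparison, the sketch below times a dense matrix-vector product against a SciPy CSR sparse product on the CPU. It only illustrates the measurement; it is not the paper's benchmark, which used cuSPARSE/CUDNN on a TitanX GPU, and the sparsity here is random rather than learned.

```python
import time
import numpy as np
import scipy.sparse as sp

def spmv_speedup(hidden=1760, sparsity=0.95, trials=200):
    rng = np.random.default_rng(0)
    W = rng.standard_normal((hidden, hidden)).astype(np.float32)
    W[rng.random(W.shape) < sparsity] = 0.0      # impose random 95% sparsity
    W_csr = sp.csr_matrix(W)
    x = rng.standard_normal(hidden).astype(np.float32)

    t0 = time.perf_counter()
    for _ in range(trials):
        _ = W @ x                                # dense matrix-vector product
    dense_t = time.perf_counter() - t0

    t0 = time.perf_counter()
    for _ in range(trials):
        _ = W_csr @ x                            # sparse matrix-vector product
    sparse_t = time.perf_counter() - t0
    return dense_t / sparse_t

print(spmv_speedup())
```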
5.2 COMPRESSION
Pruning allows us to reduce the memory footprint of a model which allows them to be deployed on phones and other embedded devices. The Deep Speech 2 model can be compressed from 268 MB to around 32 MB (1760 hidden units) or 64 MB (3072 hidden units). The GRU model can be compressed from 460 MB to 50 MB. These pruned models can be further quantized down to float16 or other smaller datatypes to further reduce the memory requirements without impacting accuracy.
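These figures follow almost directly from the parameter counts in Table 3. A back-of-the-envelope check (our own sketch, ignoring sparse-index overhead and any format metadata, so the pruned sizes are slightly optimistic):

```python
def size_mb(n_params, bytes_per_weight=4):
    # Raw weight storage in MB (10**6 bytes), float32 by default.
    return n_params * bytes_per_weight / 1e6

print(size_mb(67e6))         # dense bidirectional baseline -> ~268 MB
print(size_mb(8.3e6))        # pruned 1760-unit model       -> ~33 MB
print(size_mb(16.7e6))       # pruned 3072-unit model       -> ~67 MB
print(size_mb(8.3e6, 2))     # the same pruned model stored in float16 -> ~17 MB
```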
# 6 DISCUSSION
6.1 PRUNING CHARACTERISTICS
Figure 2a shows the sparsity of all the recurrent layers with the same hyper-parameters used to prune the layers. The layers are ordered such that layer 1 is closest to the input and layer 14 is the final recurrent layer before the cost layer. We see that the initial layers are pruned more aggressively compared to the final layers. We also performed experiments where the hyper-parameters are different for the recurrent layers, resulting in equal sparsity for all the layers. However, we get higher CER for these experiments. We conclude that to get good accuracy, it is important to prune the final layers slightly less than the initial ones.
Figure 2: Pruning characteristics. Figure 2a plots sparsity of recurrent layers in the network with the same hyper-parameters used for pruning. Figure 2b plots the pruning schedule of a single layer during a training run.

In Figure 2b, we plot the pruning schedule of a 95% sparse recurrent layer of the bidirectional model trained for 20 epochs (55000 iterations). We begin pruning the network at the start of the second epoch at 2700 iterations. We stop pruning a layer after 10 epochs (half the total epochs) are complete at 27000 iterations. We see that nearly 25000 weights are pruned before 5 epochs are complete at around 15000 iterations. In our experiments, we've noticed that pruning schedules that are a convex curve tend to outperform schedules with a linear slope.
6.2 PERSISTENT KERNELS
Persistent Recurrent Neural Networks (Diamos et al., 2016) is a technique that increases the computational intensity of evaluating an RNN by caching the weights in on-chip memory such as caches, block RAM, or register files across multiple timesteps. A high degree of sparsity allows significantly larger Persistent RNNs to be stored in on-chip memory. When all the weights are stored in float16, an NVIDIA P100 GPU can support a vanilla RNN size of about 2600 hidden units. With the same datatype, at 90% sparsity and 99% sparsity, a P100 can support RNNs with about 8000 and 24000 hidden units respectively. We expect these kernels to be bandwidth limited out of the memory that is used to store the parameters. This offers the potential of a 146x speedup compared to the TitanX GPU if the entire RNN layer can be stored in registers rather than the GPU DRAM of a TitanX.

Additionally, sparse matrix multiplication involves scheduling and load balancing phases to divide the work up evenly over thousands of threads and to route corresponding weights and activations to individual threads. Since the sparsity patterns for RNNs are fixed over many timesteps, these scheduling and load balancing operations can be factored outside of the loop, performed once, and reused many times.
# 7 CONCLUSION AND FUTURE WORK
We have demonstrated that by pruning the weights of RNNs during training we can find sparse models that are more accurate than dense models while significantly reducing model size. These sparse models are especially suited for deployment on mobile devices and on back-end server farms due to their small size and increased computational efficiency. Even with existing sub-optimal sparse matrix-vector libraries we realize speed-ups with these models. This technique is orthogonal to quantization techniques which would allow for even further reductions in model size and corresponding increase in performance.
We wish to investigate whether these techniques can generalize to language modeling tasks and if they can effectively reduce the size of embedding layers. We also wish to compare the sparsity generated by our pruning technique to that obtained by L1 regularization.
We are investigating training techniques that don't require maintaining dense matrices for a significant portion of the calculation. Further work remains to implement an optimal small-batch sparse matrix-dense vector routine for GPUs and ARM processors that would help in deployment.
# ACKNOWLEDGMENTS
We would like to thank Bryan Catanzaro for helpful discussions related to this work.
# REFERENCES
Dario Amodei, Rishita Anubhai, Eric Battenberg, Carl Case, Jared Casper, Bryan Catanzaro, Jing- dong Chen, Mike Chrzanowski, Adam Coates, Greg Diamos, et al. Deep speech 2: End-to-end speech recognition in english and mandarin. arXiv preprint arXiv:1512.02595, 2015.
Sean Baxter. Moderngpu, 2016. URL https://nvlabs.github.io/moderngpu/ segreduce.html.
Welin Chen, David Grangier, and Michael Auli. Strategies for training large vocabulary neural language models. CoRR, abs/1512.04906, 2015a. URL http://arxiv.org/abs/1512. 04906.
Wenlin Chen, James T. Wilson, Stephen Tyree, Kilian Q. Weinberger, and Yixin Chen. Compressing neural networks with the hashing trick. CoRR, abs/1504.04788, 2015b. URL http://arxiv. org/abs/1504.04788.
Kyunghyun Cho, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014.
Misha Denil, Babak Shakibi, Laurent Dinh, Marc'Aurelio Ranzato, and Nando de Freitas. Predicting parameters in deep learning. CoRR, abs/1306.0543, 2013. URL http://arxiv.org/abs/1306.0543.
Emily Denton, Wojciech Zaremba, Joan Bruna, Yann LeCun, and Rob Fergus. Exploiting linear structure within convolutional networks for efï¬cient evaluation. CoRR, abs/1404.0736, 2014. URL http://arxiv.org/abs/1404.0736.
Greg Diamos, Shubho Sengupta, Bryan Catanzaro, Mike Chrzanowski, Adam Coates, Erich Elsen, Jesse Engel, Awni Hannun, and Sanjeev Satheesh. Persistent rnns: Stashing recurrent weights on- chip. In Proceedings of The 33rd International Conference on Machine Learning, pp. 2024â2033, 2016.
Yunchao Gong, Liu Liu, Ming Yang, and Lubomir D. Bourdev. Compressing deep convolutional networks using vector quantization. CoRR, abs/1412.6115, 2014. URL http://arxiv.org/ abs/1412.6115.
Alex Graves and Navdeep Jaitly. Towards end-to-end speech recognition with recurrent neural networks. In ICML, volume 14, pp. 1764â1772, 2014.
Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural network with pruning, trained quantization and huffman coding. CoRR, abs/1510.00149, 2, 2015.
Awni Hannun, Carl Case, Jared Casper, Bryan Catanzaro, Greg Diamos, Erich Elsen, Ryan Prenger, Sanjeev Satheesh, Shubho Sengupta, Adam Coates, et al. Deep speech: Scaling up end-to-end speech recognition. arXiv preprint arXiv:1412.5567, 2014.
Stephen José Hanson and Lorien Pratt. Comparing biases for minimal network construction with back-propagation. In Advances in Neural Information Processing Systems 1, pp. 177–185. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 1989. ISBN 1-558-60015-9. URL http://dl.acm.org/citation.cfm?id=89851.89872.
9
Published as a conference paper at ICLR 2017
Babak Hassibi, David G Stork, and Gregory J Wolff. Optimal brain surgeon and general network pruning. In Neural Networks, 1993., IEEE International Conference on, pp. 293–299. IEEE, 1993.
Max Jaderberg, Andrea Vedaldi, and Andrew Zisserman. Speeding up convolutional neural networks with low rank expansions. CoRR, abs/1405.3866, 2014. URL http://arxiv.org/abs/ 1405.3866.
Rafal Józefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. Exploring the limits of language modeling. CoRR, abs/1602.02410, 2016. URL http://arxiv.org/abs/1602.02410.
Yann LeCun, John S Denker, Sara A Solla, Richard E Howard, and Lawrence D Jackel. Optimal brain damage. In NIPs, volume 2, pp. 598â605, 1989.
Xing Liu, Mikhail Smelyanskiy, Edmond Chow, and Pradeep Dubey. Efï¬cient sparse matrix-vector multiplication on x86-based many-core processors. In Proceedings of the 27th International ACM Conference on International Conference on Supercomputing, ICS â13, pp. 273â282, New York, NY, USA, 2013. ACM. ISBN 978-1-4503-2130-3. doi: 10.1145/2464996.2465013. URL http: //doi.acm.org/10.1145/2464996.2465013.
Zhiyun Lu, Vikas Sindhwani, and Tara N. Sainath. Learning compact recurrent neural networks. CoRR, abs/1604.02594, 2016. URL http://arxiv.org/abs/1604.02594.
Vincent Vanhoucke, Andrew Senior, and Mark Z. Mao. Improving the speed of neural networks on cpus. In Deep Learning and Unsupervised Feature Learning Workshop, NIPS 2011, 2011.
Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin John- son, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. Googleâs neural machine translation system: Bridging the gap between human and machine translation. CoRR, abs/1609.08144, 2016. URL http://arxiv.org/abs/1609.08144.
Dong Yu, Frank Seide, Gang Li, and Li Deng. Exploiting sparseness in deep neural networks for large vocabulary speech recognition. In 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4409â4412. IEEE, 2012.
| { "id": "1512.02595" } |
1704.04861 | MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications | We present a class of efficient models called MobileNets for mobile and
embedded vision applications. MobileNets are based on a streamlined
architecture that uses depth-wise separable convolutions to build light weight
deep neural networks. We introduce two simple global hyper-parameters that
efficiently trade off between latency and accuracy. These hyper-parameters
allow the model builder to choose the right sized model for their application
based on the constraints of the problem. We present extensive experiments on
resource and accuracy tradeoffs and show strong performance compared to other
popular models on ImageNet classification. We then demonstrate the
effectiveness of MobileNets across a wide range of applications and use cases
including object detection, finegrain classification, face attributes and large
scale geo-localization. | http://arxiv.org/pdf/1704.04861 | Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam | cs.CV | null | null | cs.CV | 20170417 | 20170417 |
# MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications

Andrew G. Howard Menglong Zhu Bo Chen Dmitry Kalenichenko Weijun Wang Tobias Weyand Marco Andreetto Hartwig Adam
# Google Inc. {howarda,menglong,bochen,dkalenichenko,weijunw,weyand,anm,hadam}@google.com
# Abstract
We present a class of efficient models called MobileNets for mobile and embedded vision applications. MobileNets are based on a streamlined architecture that uses depthwise separable convolutions to build light weight deep neural networks. We introduce two simple global hyper-parameters that efficiently trade off between latency and accuracy. These hyper-parameters allow the model builder to choose the right sized model for their application based on the constraints of the problem. We present extensive experiments on resource and accuracy tradeoffs and show strong performance compared to other popular models on ImageNet classification. We then demonstrate the effectiveness of MobileNets across a wide range of applications and use cases including object detection, finegrain classification, face attributes and large scale geo-localization.
# 1. Introduction

Convolutional neural networks have become ubiquitous in computer vision ever since AlexNet [19] popularized deep convolutional neural networks by winning the ImageNet Challenge: ILSVRC 2012 [24]. The general trend has been to make deeper and more complicated networks in order to achieve higher accuracy [27, 31, 29, 8]. However, these advances to improve accuracy are not necessarily making networks more efficient with respect to size and speed. In many real world applications such as robotics, self-driving cars and augmented reality, the recognition tasks need to be carried out in a timely fashion on a computationally limited platform.

This paper describes an efficient network architecture and a set of two hyper-parameters in order to build very small, low latency models that can be easily matched to the design requirements for mobile and embedded vision applications. Section 2 reviews prior work in building small models. Section 3 describes the MobileNet architecture and two hyper-parameters, width multiplier and resolution multiplier, to define smaller and more efficient MobileNets. Section 4 describes experiments on ImageNet as well as a variety of different applications and use cases. Section 5 closes with a summary and conclusion.

Figure 1. MobileNet models can be applied to various recognition tasks for efficient on device intelligence. [Figure panels: Object Detection, Finegrain Classification, Face Attributes, Landmark Recognition.]

# 2. Prior Work

There has been rising interest in building small and efficient neural networks in the recent literature, e.g. [16, 34, 12, 36, 22]. Many different approaches can be generally categorized into either compressing pretrained networks or training small networks directly. This paper proposes a class of network architectures that allows a model developer to specifically choose a small network that matches the resource restrictions (latency, size) for their application. MobileNets primarily focus on optimizing for latency but also yield small networks. Many papers on small networks focus only on size but do not consider speed.

MobileNets are built primarily from depthwise separable convolutions initially introduced in [26] and subsequently used in Inception models [13] to reduce the computation in the first few layers. Flattened networks [16] build a network out of fully factorized convolutions and showed the potential of extremely factorized networks. Independent of this current paper, Factorized Networks [34] introduces a similar factorized convolution as well as the use of topological connections. Subsequently, the Xception network [3] demonstrated how to scale up depthwise separable filters to outperform Inception V3 networks. Another small network is Squeezenet [12] which uses a bottleneck approach to design a very small network. Other reduced computation networks include structured transform networks [28] and deep fried convnets [37].

A different approach for obtaining small networks is shrinking, factorizing or compressing pretrained networks. Compression based on product quantization [36], hashing [2], and pruning, vector quantization and Huffman coding [5] have been proposed in the literature. Additionally various factorizations have been proposed to speed up pretrained networks [14, 20]. Another method for training small networks is distillation [9] which uses a larger network to teach a smaller network. It is complementary to our approach and is covered in some of our use cases in section 4. Another emerging approach is low bit networks [4, 22, 11].
# 3. MobileNet Architecture

In this section we first describe the core layers that MobileNet is built on which are depthwise separable filters. We then describe the MobileNet network structure and conclude with descriptions of the two model shrinking hyper-parameters, width multiplier and resolution multiplier.

# 3.1. Depthwise Separable Convolution

The MobileNet model is based on depthwise separable convolutions which is a form of factorized convolutions which factorize a standard convolution into a depthwise convolution and a 1 × 1 convolution called a pointwise convolution. For MobileNets the depthwise convolution applies a single filter to each input channel. The pointwise convolution then applies a 1 × 1 convolution to combine the outputs of the depthwise convolution. A standard convolution both filters and combines inputs into a new set of outputs in one step. The depthwise separable convolution splits this into two layers, a separate layer for filtering and a separate layer for combining. This factorization has the effect of drastically reducing computation and model size. Figure 2 shows how a standard convolution 2(a) is factorized into a depthwise convolution 2(b) and a 1 × 1 pointwise convolution 2(c).

A standard convolutional layer takes as input a DF × DF × M feature map F and produces a DF × DF × N feature map G where DF is the spatial width and height of a square input feature map¹, M is the number of input channels (input depth), DG is the spatial width and height of a square output feature map and N is the number of output channels (output depth).

The standard convolutional layer is parameterized by convolution kernel K of size DK × DK × M × N where DK is the spatial dimension of the kernel assumed to be square, M is the number of input channels and N is the number of output channels as defined previously.

The output feature map for standard convolution assuming stride one and padding is computed as:

G_{k,l,n} = Σ_{i,j,m} K_{i,j,m,n} · F_{k+i−1, l+j−1, m}    (1)

Standard convolutions have the computational cost of:

DK · DK · M · N · DF · DF    (2)

where the computational cost depends multiplicatively on the number of input channels M, the number of output channels N, the kernel size DK × DK and the feature map size DF × DF. MobileNet models address each of these terms and their interactions. First it uses depthwise separable convolutions to break the interaction between the number of output channels and the size of the kernel.

The standard convolution operation has the effect of filtering features based on the convolutional kernels and combining features in order to produce a new representation. The filtering and combination steps can be split into two steps via the use of factorized convolutions called depthwise separable convolutions for substantial reduction in computational cost.

¹We assume that the output feature map has the same spatial dimensions as the input and both feature maps are square. Our model shrinking results generalize to feature maps with arbitrary sizes and aspect ratios.
Depthwise separable convolutions are made up of two layers: depthwise convolutions and pointwise convolutions. We use depthwise convolutions to apply a single filter per input channel (input depth). Pointwise convolution, a simple 1 × 1 convolution, is then used to create a linear combination of the output of the depthwise layer. MobileNets use both batchnorm and ReLU nonlinearities for both layers.

Depthwise convolution with one filter per input channel (input depth) can be written as:

Ĝ_{k,l,m} = Σ_{i,j} K̂_{i,j,m} · F_{k+i−1, l+j−1, m}    (3)

where K̂ is the depthwise convolutional kernel of size DK × DK × M where the mth filter in K̂ is applied to the mth channel in F to produce the mth channel of the filtered output feature map Ĝ.
Depthwise convolution has a computational cost of:
DK · DK · M · DF · DF (4)
Depthwise convolution is extremely efficient relative to standard convolution. However it only filters input channels, it does not combine them to create new features. So an additional layer that computes a linear combination of the output of depthwise convolution via 1 × 1 convolution is needed in order to generate these new features.

The combination of depthwise convolution and 1 × 1 (pointwise) convolution is called depthwise separable convolution which was originally introduced in [26].

Depthwise separable convolutions cost:

DK · DK · M · DF · DF + M · N · DF · DF    (5)

which is the sum of the depthwise and 1 × 1 pointwise convolutions.
By expressing convolution as a two step process of filtering and combining we get a reduction in computation of:

(DK · DK · M · DF · DF + M · N · DF · DF) / (DK · DK · M · N · DF · DF) = 1/N + 1/DK²

MobileNet uses 3 × 3 depthwise separable convolutions which use between 8 to 9 times less computation than standard convolutions at only a small reduction in accuracy as seen in Section 4.
Additional factorization in the spatial dimension such as in [16, 31] does not save much additional computation as very little computation is spent in depthwise convolutions.
[Figure 2 panels: (a) Standard Convolution Filters; (b) Depthwise Convolutional Filters; (c) 1×1 Convolutional Filters, called Pointwise Convolution in the context of Depthwise Separable Convolution.]

Figure 2. The standard convolutional filters in (a) are replaced by two layers: depthwise convolution in (b) and pointwise convolution in (c) to build a depthwise separable filter.
# 3.2. Network Structure and Training
The MobileNet structure is built on depthwise separable convolutions as mentioned in the previous section except for the first layer which is a full convolution. By defining the network in such simple terms we are able to easily explore network topologies to find a good network. The MobileNet architecture is defined in Table 1. All layers are followed by a batchnorm [13] and ReLU nonlinearity with the exception of the final fully connected layer which has no nonlinearity and feeds into a softmax layer for classification. Figure 3 contrasts a layer with regular convolutions, batchnorm and ReLU nonlinearity to the factorized layer with depthwise convolution, 1 × 1 pointwise convolution as well as batchnorm and ReLU after each convolutional layer. Down sampling is handled with strided convolution in the depthwise convolutions as well as in the first layer. A final average pooling reduces the spatial resolution to 1 before the fully connected layer. Counting depthwise and pointwise convolutions as separate layers, MobileNet has 28 layers.
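As an illustration of this block structure (a PyTorch sketch, not the authors' TensorFlow implementation; the helper name `depthwise_separable_block` is ours), one depthwise separable unit, i.e. the right-hand side of Figure 3, could be written as:

```python
import torch.nn as nn

def depthwise_separable_block(in_ch, out_ch, stride=1):
    # 3x3 depthwise conv (one filter per input channel, via groups=in_ch),
    # then a 1x1 pointwise conv, each followed by batchnorm and ReLU.
    return nn.Sequential(
        nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride, padding=1,
                  groups=in_ch, bias=False),
        nn.BatchNorm2d(in_ch),
        nn.ReLU(inplace=True),
        nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )
```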
It is not enough to simply define networks in terms of a small number of Mult-Adds. It is also important to make sure these operations can be efficiently implementable.
Figure 3. Left: Standard convolutional layer with batchnorm and ReLU. Right: Depthwise Separable convolutions with Depthwise and Pointwise layers followed by batchnorm and ReLU.
For instance, unstructured sparse matrix operations are not typically faster than dense matrix operations until a very high level of sparsity. Our model structure puts nearly all of the computation into dense 1 × 1 convolutions. This can be implemented with highly optimized general matrix multiply (GEMM) functions. Often convolutions are implemented by a GEMM but require an initial reordering in memory called im2col in order to map it to a GEMM. For instance, this approach is used in the popular Caffe package [15]. 1 × 1 convolutions do not require this reordering in memory and can be implemented directly with GEMM which is one of the most optimized numerical linear algebra algorithms. MobileNet spends 95% of its computation time in 1 × 1 convolutions which also contain 75% of the parameters as can be seen in Table 2. Nearly all of the additional parameters are in the fully connected layer.
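The reason a 1 × 1 convolution maps directly onto a GEMM is that it is just a matrix product over channels at every spatial position. A small NumPy sketch of this equivalence (our own illustration, not library code):

```python
import numpy as np

def pointwise_conv_as_gemm(x, w):
    # x: feature map of shape (batch, in_channels, height, width)
    # w: 1x1 kernel flattened to shape (in_channels, out_channels)
    b, m, h, wid = x.shape
    x_mat = x.transpose(0, 2, 3, 1).reshape(-1, m)    # (batch*h*w, in_channels)
    y_mat = x_mat @ w                                 # a single plain GEMM
    return y_mat.reshape(b, h, wid, -1).transpose(0, 3, 1, 2)
```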
MobileNet models were trained in TensorFlow [1] using RMSprop [33] with asynchronous gradient descent similar to Inception V3 [31]. However, contrary to training large models we use less regularization and data augmentation techniques because small models have less trouble with overfitting. When training MobileNets we do not use side heads or label smoothing and additionally reduce the amount of image distortions by limiting the size of small crops that are used in large Inception training [31]. Additionally, we found that it was important to put very little or no weight decay (l2 regularization) on the depthwise filters since there are so few parameters in them. For the ImageNet benchmarks in the next section all models were trained with the same training parameters regardless of the size of the model.
# 3.3. Width Multiplier: Thinner Models
Although the base MobileNet architecture is already small and low latency, many times a specific use case or application may require the model to be smaller and faster. In order to construct these smaller and less computationally expensive models we introduce a very simple parameter α called width multiplier. The role of the width multiplier α is to thin a network uniformly at each layer. For a given layer
Table 1. MobileNet Body Architecture

Type / Stride   | Filter Shape        | Input Size
Conv / s2       | 3 × 3 × 3 × 32      | 224 × 224 × 3
Conv dw / s1    | 3 × 3 × 32 dw       | 112 × 112 × 32
Conv / s1       | 1 × 1 × 32 × 64     | 112 × 112 × 32
Conv dw / s2    | 3 × 3 × 64 dw       | 112 × 112 × 64
Conv / s1       | 1 × 1 × 64 × 128    | 56 × 56 × 64
Conv dw / s1    | 3 × 3 × 128 dw      | 56 × 56 × 128
Conv / s1       | 1 × 1 × 128 × 128   | 56 × 56 × 128
Conv dw / s2    | 3 × 3 × 128 dw      | 56 × 56 × 128
Conv / s1       | 1 × 1 × 128 × 256   | 28 × 28 × 128
Conv dw / s1    | 3 × 3 × 256 dw      | 28 × 28 × 256
Conv / s1       | 1 × 1 × 256 × 256   | 28 × 28 × 256
Conv dw / s2    | 3 × 3 × 256 dw      | 28 × 28 × 256
Conv / s1       | 1 × 1 × 256 × 512   | 14 × 14 × 256
5× Conv dw / s1 | 3 × 3 × 512 dw      | 14 × 14 × 512
5× Conv / s1    | 1 × 1 × 512 × 512   | 14 × 14 × 512
Conv dw / s2    | 3 × 3 × 512 dw      | 14 × 14 × 512
Conv / s1       | 1 × 1 × 512 × 1024  | 7 × 7 × 512
Conv dw / s2    | 3 × 3 × 1024 dw     | 7 × 7 × 1024
Conv / s1       | 1 × 1 × 1024 × 1024 | 7 × 7 × 1024
Avg Pool / s1   | Pool 7 × 7          | 7 × 7 × 1024
FC / s1         | 1024 × 1000         | 1 × 1 × 1024
Softmax / s1    | Classifier          | 1 × 1 × 1000
Table 2. Resource Per Layer Type
Type            | Mult-Adds | Parameters
Conv 1 × 1      | 94.86%    | 74.59%
Conv DW 3 × 3   | 3.06%     | 1.06%
Conv 3 × 3      | 1.19%     | 0.02%
Fully Connected | 0.18%     | 24.33%
and width multiplier α, the number of input channels M be- comes αM and the number of output channels N becomes αN .
The computational cost of a depthwise separable convo- lution with width multiplier α is:
DK · DK · αM · DF · DF + αM · αN · DF · DF
where α ∈ (0, 1] with typical settings of 1, 0.75, 0.5 and 0.25. α = 1 is the baseline MobileNet and α < 1 are reduced MobileNets. Width multiplier has the effect of reducing computational cost and the number of parameters quadratically by roughly α². Width multiplier can be applied to any model structure to define a new smaller model with a reasonable accuracy, latency and size trade off. It is used to define a new reduced structure that needs to be trained from scratch.
# 3.4. Resolution Multiplier: Reduced Representation

The second hyper-parameter to reduce the computational cost of a neural network is a resolution multiplier ρ.
Table 3. Resource usage for modifications to standard convolution. Note that each row is a cumulative effect adding on top of the previous row. This example is for an internal MobileNet layer with DK = 3, M = 512, N = 512, DF = 14.

Layer/Modification       | Million Mult-Adds | Million Parameters
Convolution              | 462               | 2.36
Depthwise Separable Conv | 52.3              | 0.27
α = 0.75                 | 29.6              | 0.15
ρ = 0.714                | 15.1              | 0.15
We apply this to the input image and the internal representation of every layer is subsequently reduced by the same multiplier. In practice we implicitly set ρ by setting the input resolution.

We can now express the computational cost for the core layers of our network as depthwise separable convolutions with width multiplier α and resolution multiplier ρ:

DK · DK · αM · ρDF · ρDF + αM · αN · ρDF · ρDF    (7)

where ρ ∈ (0, 1] which is typically set implicitly so that the input resolution of the network is 224, 192, 160 or 128. ρ = 1 is the baseline MobileNet and ρ < 1 are reduced computation MobileNets. Resolution multiplier has the effect of reducing computational cost by ρ².
As an example we can look at a typical layer in MobileNet and see how depthwise separable convolutions, width multiplier and resolution multiplier reduce the cost and parameters. Table 3 shows the computation and number of parameters for a layer as architecture shrinking methods are sequentially applied to the layer. The first row shows the Mult-Adds and parameters for a full convolutional layer with an input feature map of size 14 × 14 × 512 with a kernel K of size 3 × 3 × 512 × 512. We will look in detail in the next section at the trade offs between resources and accuracy.
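Those numbers can be checked directly from the cost formulas above. A short sketch (our own helper; rounding of the scaled channel counts and resolution is an assumption):

```python
def layer_cost(dk, m, n, df, alpha=1.0, rho=1.0, separable=True):
    # Mult-Adds and parameters (in millions) for one layer under width
    # multiplier alpha and resolution multiplier rho.
    m, n, df = round(alpha * m), round(alpha * n), round(rho * df)
    if separable:
        ops = dk * dk * m * df * df + m * n * df * df
        params = dk * dk * m + m * n
    else:
        ops = dk * dk * m * n * df * df
        params = dk * dk * m * n
    return ops / 1e6, params / 1e6

# The internal layer of Table 3: DK = 3, M = N = 512, DF = 14
print(layer_cost(3, 512, 512, 14, separable=False))          # ~ (462, 2.36)
print(layer_cost(3, 512, 512, 14))                           # ~ (52.3, 0.27)
print(layer_cost(3, 512, 512, 14, alpha=0.75))               # ~ (29.6, 0.15)
print(layer_cost(3, 512, 512, 14, alpha=0.75, rho=0.714))    # ~ (15.1, 0.15)
```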
# 4. Experiments
In this section we first investigate the effects of depthwise convolutions as well as the choice of shrinking by reducing the width of the network rather than the number of layers. We then show the trade offs of reducing the network based on the two hyper-parameters, width multiplier and resolution multiplier, and compare results to a number of popular models. We then investigate MobileNets applied to a number of different applications.
# 4.1. Model Choices
First we show results for MobileNet with depthwise separable convolutions compared to a model built with full convolutions. In Table 4 we see that using depthwise separable convolutions compared to full convolutions only reduces
Table 4. Depthwise Separable vs Full Convolution MobileNet (Million Parameters: 29.3 for the full-convolution model vs. 4.2 for MobileNet)
Table 5. Narrow vs Shallow MobileNet Model 0.75 MobileNet Shallow MobileNet ImageNet Accuracy Mult-Adds Million 68.4% 65.3% 325 307 Million Parameters 2.6 2.9
Table 6. MobileNet Width Multiplier

Width Multiplier   | ImageNet Accuracy | Million Mult-Adds | Million Parameters
1.0 MobileNet-224  | 70.6%             | 569               | 4.2
0.75 MobileNet-224 | 68.4%             | 325               | 2.6
0.5 MobileNet-224  | 63.7%             | 149               | 1.3
0.25 MobileNet-224 | 50.6%             | 41                | 0.5
Table 7. MobileNet Resolution

Resolution        | ImageNet Accuracy | Million Mult-Adds | Million Parameters
1.0 MobileNet-224 | 70.6%             | 569               | 4.2
1.0 MobileNet-192 | 69.1%             | 418               | 4.2
1.0 MobileNet-160 | 67.2%             | 290               | 4.2
1.0 MobileNet-128 | 64.4%             | 186               | 4.2
accuracy by 1% on ImageNet while saving tremendously on Mult-Adds and parameters.

We next show results comparing thinner models with width multiplier to shallower models using fewer layers. To make MobileNet shallower, the 5 layers of separable filters with feature size 14 × 14 × 512 in Table 1 are removed. Table 5 shows that at similar computation and number of parameters, making MobileNets thinner is 3% better than making them shallower.
# 4.2. Model Shrinking Hyperparameters
Table 6 shows the accuracy, computation and size trade offs of shrinking the MobileNet architecture with the width multiplier α. Accuracy drops off smoothly until the architecture is made too small at α = 0.25.

Table 7 shows the accuracy, computation and size trade offs for different resolution multipliers by training MobileNets with reduced input resolutions. Accuracy drops off smoothly across resolution.

Figure 4 shows the trade off between ImageNet accuracy and computation for the 16 models made from the cross product of width multiplier α ∈ {1, 0.75, 0.5, 0.25} and resolutions {224, 192, 160, 128}. Results are log linear with a jump when models get very small at α = 0.25.
Figure 4. This ï¬gure shows the trade off between computation (Mult-Adds) and accuracy on the ImageNet benchmark. Note the log linear dependence between accuracy and computation.
Figure 5. This ï¬gure shows the trade off between the number of parameters and accuracy on the ImageNet benchmark. The colors encode input resolutions. The number of parameters do not vary based on the input resolution.
Figure 5 shows the trade off between ImageNet accuracy and number of parameters for the 16 models made from the cross product of width multiplier α ∈ {1, 0.75, 0.5, 0.25} and resolutions {224, 192, 160, 128}.

Table 8 compares the full MobileNet to the original GoogleNet [30] and VGG16 [27]. MobileNet is nearly as accurate as VGG16 while being 32 times smaller and 27 times less compute intensive. It is more accurate than GoogleNet while being smaller and requiring more than 2.5 times less computation.
Table 9 compares a reduced MobileNet with width mul- tiplier α = 0.5 and reduced resolution 160 à 160. Reduced MobileNet is 4% better than AlexNet [19] while being 45à smaller and 9.4à less compute than AlexNet. It is also 4% better than Squeezenet [12] at about the same size and 22à less computation.
Table 8. MobileNet Comparison to Popular Models

Model             | ImageNet Accuracy | Million Mult-Adds | Million Parameters
1.0 MobileNet-224 | 70.6%             | 569               | 4.2
GoogleNet         | 69.8%             | 1550              | 6.8
VGG 16            | 71.5%             | 15300             | 138
Table 9. Smaller MobileNet Comparison to Popular Models

Model              | ImageNet Accuracy | Million Mult-Adds | Million Parameters
0.50 MobileNet-160 | 60.2%             | 76                | 1.32
Squeezenet         | 57.5%             | 1700              | 1.25
AlexNet            | 57.2%             | 720               | 60
Table 10. MobileNet for Stanford Dogs

Model              | Top-1 Accuracy | Million Mult-Adds | Million Parameters
Inception V3 [18]  | 84%            | 5000              | 23.2
1.0 MobileNet-224  | 83.3%          | 569               | 3.3
0.75 MobileNet-224 | 81.9%          | 325               | 1.9
1.0 MobileNet-192  | 81.9%          | 418               | 3.3
0.75 MobileNet-192 | 80.5%          | 239               | 1.9
Table 11. Performance of PlaNet using the MobileNet architec- ture. Percentages are the fraction of the Im2GPS test dataset that were localized within a certain distance from the ground truth. The numbers for the original PlaNet model are based on an updated version that has an improved architecture and training dataset. PlaNet MobileNet 79.3% 60.3% 45.2% 31.7% 11.4%
Scale Im2GPS [7] PlaNet [35] 51.9% 35.4% 32.1% 21.9% 2.5% 77.6% 64.0% 51.1% 31.7% 11.0% Continent (2500 km) Country (750 km) Region (200 km) City (25 km) Street (1 km)
# 4.3. Fine Grained Recognition
We train MobileNet for ï¬ne grained recognition on the Stanford Dogs dataset [17]. We extend the approach of [18] and collect an even larger but noisy training set than [18] from the web. We use the noisy web data to pretrain a ï¬ne grained dog recognition model and then ï¬ne tune the model on the Stanford Dogs training set. Results on Stanford Dogs test set are in Table 10. MobileNet can almost achieve the state of the art results from [18] at greatly reduced compu- tation and size.
# 4.4. Large Scale Geolocalizaton
PlaNet [35] casts the task of determining where on earth a photo was taken as a classiï¬cation problem. The approach divides the earth into a grid of geographic cells that serve as the target classes and trains a convolutional neural network
on millions of geo-tagged photos. PlaNet has been shown to successfully localize a large variety of photos and to out- perform Im2GPS [6, 7] that addresses the same task.
We re-train PlaNet using the MobileNet architecture on the same data. While the full PlaNet model based on the In- ception V3 architecture [31] has 52 million parameters and 5.74 billion mult-adds. The MobileNet model has only 13 million parameters with the usual 3 million for the body and 10 million for the ï¬nal layer and 0.58 Million mult-adds. As shown in Tab. 11, the MobileNet version delivers only slightly decreased performance compared to PlaNet despite being much more compact. Moreover, it still outperforms Im2GPS by a large margin.
# 4.5. Face Attributes
Another use-case for MobileNet is compressing large systems with unknown or esoteric training procedures. In a face attribute classification task, we demonstrate a synergistic relationship between MobileNet and distillation [9], a knowledge transfer technique for deep networks. We seek to reduce a large face attribute classifier with 75 million parameters and 1600 million Mult-Adds. The classifier is trained on a multi-attribute dataset similar to YFCC100M [32].

We distill a face attribute classifier using the MobileNet architecture. Distillation [9] works by training the classifier to emulate the outputs of a larger model² instead of the ground-truth labels, hence enabling training from large (and potentially infinite) unlabeled datasets. Marrying the scalability of distillation training and the parsimonious parameterization of MobileNet, the end system not only requires no regularization (e.g. weight-decay and early-stopping), but also demonstrates enhanced performance. It is evident from Tab. 12 that the MobileNet-based classifier is resilient to aggressive model shrinking: it achieves a similar mean average precision across attributes (mean AP) as the in-house model while consuming only 1% of the Multi-Adds.
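A minimal sketch of this kind of emulation training (our own illustration; beyond the footnote that emulation quality is an average per-attribute cross-entropy, the exact loss is not published, so the binary-cross-entropy choice below is an assumption):

```python
import torch
import torch.nn.functional as F

def emulation_loss(student_logits, teacher_logits):
    # Train the small model to match the large model's per-attribute
    # probabilities instead of ground-truth labels.
    teacher_probs = torch.sigmoid(teacher_logits).detach()
    return F.binary_cross_entropy_with_logits(student_logits, teacher_probs)
```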
# 4.6. Object Detection
MobileNet can also be deployed as an effective base network in modern object detection systems. We report results for MobileNet trained for object detection on COCO data based on the recent work that won the 2016 COCO challenge [10]. In Table 13, MobileNet is compared to VGG and Inception V2 [13] under both the Faster-RCNN [23] and SSD [21] frameworks. In our experiments, SSD is evaluated with 300 input resolution (SSD 300) and Faster-RCNN is compared with both 300 and 600 input resolution (Faster-RCNN 300, Faster-RCNN 600). The Faster-RCNN model evaluates 300 RPN proposal boxes per image. The models are trained on COCO train+val excluding 8k minival images
2The emulation quality is measured by averaging the per-attribute cross-entropy over all attributes.
Table 12. Face attribute classification using the MobileNet architecture. Each row corresponds to a different hyper-parameter setting (width multiplier α and image resolution).

Width Multiplier / Resolution | Mean AP | Million Mult-Adds | Million Parameters
1.0 MobileNet-224             | 88.7%   | 568               | 3.2
0.5 MobileNet-224             | 88.1%   | 149               | 0.8
0.25 MobileNet-224            | 87.2%   | 45                | 0.2
1.0 MobileNet-128             | 88.1%   | 185               | 3.2
0.5 MobileNet-128             | 87.7%   | 48                | 0.8
0.25 MobileNet-128            | 86.4%   | 15                | 0.2
Baseline                      | 86.9%   | 1600              | 7.5
Table 13. COCO object detection results comparison using different frameworks and network architectures. mAP is reported with the COCO primary challenge metric (AP at IoU=0.50:0.05:0.95).

Framework / Resolution | Model        | mAP   | Billion Mult-Adds | Million Parameters
SSD 300                | deeplab-VGG  | 21.1% | 34.9              | 33.1
SSD 300                | Inception V2 | 22.0% | 3.8               | 13.7
SSD 300                | MobileNet    | 19.3% | 1.2               | 6.8
Faster-RCNN 300        | VGG          | 22.9% | 64.3              | 138.5
Faster-RCNN 300        | Inception V2 | 15.4% | 118.2             | 13.3
Faster-RCNN 300        | MobileNet    | 16.4% | 25.2              | 6.1
Faster-RCNN 600        | VGG          | 25.7% | 149.6             | 138.5
Faster-RCNN 600        | Inception V2 | 21.9% | 129.6             | 13.3
Faster-RCNN 600        | MobileNet    | 19.8% | 30.5              | 6.1
Figure 6. Example object detection results using MobileNet SSD.
and evaluated on minival. For both frameworks, MobileNet achieves comparable results to other networks with only a fraction of computational complexity and model size.
# 4.7. Face Embeddings
The FaceNet model is a state of the art face recognition model [25]. It builds face embeddings based on the triplet loss. To build a mobile FaceNet model we use distillation to train by minimizing the squared differences of the output
Table 14. MobileNet Distilled from FaceNet
Model              | 1e-4 Accuracy | Million Mult-Adds | Million Parameters
FaceNet [25]       | 83%           | 1600              | 7.5
1.0 MobileNet-160  | 79.4%         | 286               | 4.9
1.0 MobileNet-128  | 78.3%         | 185               | 5.5
0.75 MobileNet-128 | 75.2%         | 166               | 3.4
0.75 MobileNet-128 | 72.5%         | 108               | 3.8
of FaceNet and MobileNet on the training data. Results for very small MobileNet models can be found in table 14.
# 5. Conclusion
We proposed a new model architecture called MobileNets based on depthwise separable convolutions. We investigated some of the important design decisions leading to an efficient model. We then demonstrated how to build smaller and faster MobileNets using width multiplier and resolution multiplier by trading off a reasonable amount of accuracy to reduce size and latency. We then compared different MobileNets to popular models demonstrating superior size, speed and accuracy characteristics. We concluded by demonstrating MobileNet's effectiveness when applied to a wide variety of tasks. As a next step to help adoption and exploration of MobileNets, we plan on releasing models in TensorFlow.
# References
[1] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, et al. Tensorï¬ow: Large-scale machine learning on heterogeneous systems, 2015. Software available from tensorï¬ow. org, 1, 2015. 4
[2] W. Chen, J. T. Wilson, S. Tyree, K. Q. Weinberger, and Y. Chen. Compressing neural networks with the hashing trick. CoRR, abs/1504.04788, 2015. 2
[3] F. Chollet. Xception: Deep learning with depthwise separa- ble convolutions. arXiv preprint arXiv:1610.02357v2, 2016. 1
[4] M. Courbariaux, J.-P. David, and Y. Bengio. Training deep neural networks with low precision multiplications. arXiv preprint arXiv:1412.7024, 2014. 2
[5] S. Han, H. Mao, and W. J. Dally. Deep compression: Com- pressing deep neural network with pruning, trained quantiza- tion and huffman coding. CoRR, abs/1510.00149, 2, 2015. 2
[6] J. Hays and A. Efros. IM2GPS: estimating geographic in- formation from a single image. In Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition, 2008. 7
[7] J. Hays and A. Efros. Large-Scale Image Geolocalization. In J. Choi and G. Friedland, editors, Multimodal Location Estimation of Videos and Images. Springer, 2014. 6, 7
[8] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learn- ing for image recognition. arXiv preprint arXiv:1512.03385, 2015. 1
[9] G. Hinton, O. Vinyals, and J. Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015. 2, 7
[10] J. Huang, V. Rathod, C. Sun, M. Zhu, A. Korattikara, A. Fathi, I. Fischer, Z. Wojna, Y. Song, S. Guadarrama, et al. Speed/accuracy trade-offs for modern convolutional object detectors. arXiv preprint arXiv:1611.10012, 2016. 7 [11] I. Hubara, M. Courbariaux, D. Soudry, R. El-Yaniv, and Y. Bengio. Quantized neural networks: Training neural net- works with low precision weights and activations. arXiv preprint arXiv:1609.07061, 2016. 2
[12] F. N. Iandola, M. W. Moskewicz, K. Ashraf, S. Han, W. J. Dally, and K. Keutzer. Squeezenet: Alexnet-level accuracy with 50x fewer parameters and <1MB model size. arXiv preprint arXiv:1602.07360, 2016. 1, 6
[13] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015. 1, 3, 7
[14] M. Jaderberg, A. Vedaldi, and A. Zisserman. Speeding up convolutional neural networks with low rank expansions. arXiv preprint arXiv:1405.3866, 2014. 2
[15] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Gir- shick, S. Guadarrama, and T. Darrell. Caffe: Convolu- tional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014. 4
[16] J. Jin, A. Dundar, and E. Culurciello. Flattened convolutional neural networks for feedforward acceleration. arXiv preprint arXiv:1412.5474, 2014. 1, 3
[17] A. Khosla, N. Jayadevaprakash, B. Yao, and L. Fei-Fei. Novel dataset for ï¬ne-grained image categorization. In First Workshop on Fine-Grained Visual Categorization, IEEE Conference on Computer Vision and Pattern Recognition, Colorado Springs, CO, June 2011. 6
[18] J. Krause, B. Sapp, A. Howard, H. Zhou, A. Toshev, T. Duerig, J. Philbin, and L. Fei-Fei. The unreasonable ef- fectiveness of noisy data for ï¬ne-grained recognition. arXiv preprint arXiv:1511.06789, 2015. 6
[19] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105, 2012. 1, 6
I. Oseledets, and V. Lempitsky. Speeding-up convolutional neural net- works using ï¬ne-tuned cp-decomposition. arXiv preprint arXiv:1412.6553, 2014. 2
[21] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, and S. Reed. Ssd: Single shot multibox detector. arXiv preprint arXiv:1512.02325, 2015. 7
[22] M. Rastegari, V. Ordonez, J. Redmon, and A. Farhadi. Xnor- net: Imagenet classiï¬cation using binary convolutional neu- ral networks. arXiv preprint arXiv:1603.05279, 2016. 1, 2
[23] S. Ren, K. He, R. Girshick, and J. Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. In Advances in neural information processing systems, pages 91â99, 2015. 7
[24] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al. Imagenet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211â252, 2015. 1
[25] F. Schroff, D. Kalenichenko, and J. Philbin. Facenet: A uni- ï¬ed embedding for face recognition and clustering. In Pro- ceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 815â823, 2015. 7, 8
[26] L. Sifre. Rigid-motion scattering for image classiï¬cation. PhD thesis, Ph. D. thesis, 2014. 1, 3
[27] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014. 1, 6
[28] V. Sindhwani, T. Sainath, and S. Kumar. Structured trans- In Advances in forms for small-footprint deep learning. Neural Information Processing Systems, pages 3088â3096, 2015. 1
Inception-v4, inception-resnet and the impact of residual connections on learning. arXiv preprint arXiv:1602.07261, 2016. 1
[30] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1â9, 2015. 6
[31] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna. Rethinking the inception architecture for computer vision. arXiv preprint arXiv:1512.00567, 2015. 1, 3, 4, 7
[32] B. Thomee, D. A. Shamma, G. Friedland, B. Elizalde, K. Ni, D. Poland, D. Borth, and L.-J. Li. Yfcc100m: The new data in multimedia research. Communications of the ACM, 59(2):64â73, 2016. 7
[33] T. Tieleman and G. Hinton. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4(2), 2012. 4
[34] M. Wang, B. Liu, and H. Foroosh. Factorized convolutional neural networks. arXiv preprint arXiv:1608.04337, 2016. 1 [35] T. Weyand, I. Kostrikov, and J. Philbin. PlaNet - Photo Ge- olocation with Convolutional Neural Networks. In European Conference on Computer Vision (ECCV), 2016. 6
[36] J. Wu, C. Leng, Y. Wang, Q. Hu, and J. Cheng. Quantized convolutional neural networks for mobile devices. arXiv preprint arXiv:1512.06473, 2015. 1
[37] Z. Yang, M. Moczulski, M. Denil, N. de Freitas, A. Smola, L. Song, and Z. Wang. Deep fried convnets. In Proceedings of the IEEE International Conference on Computer Vision, pages 1476–1483, 2015. 1 | { "id": "1602.07360" } |
1704.04683 | RACE: Large-scale ReAding Comprehension Dataset From Examinations | We present RACE, a new dataset for benchmark evaluation of methods in the
reading comprehension task. Collected from the English exams for middle and
high school Chinese students in the age range between 12 to 18, RACE consists
of near 28,000 passages and near 100,000 questions generated by human experts
(English instructors), and covers a variety of topics which are carefully
designed for evaluating the students' ability in understanding and reasoning.
In particular, the proportion of questions that requires reasoning is much
larger in RACE than that in other benchmark datasets for reading comprehension,
and there is a significant gap between the performance of the state-of-the-art
models (43%) and the ceiling human performance (95%). We hope this new dataset
can serve as a valuable resource for research and evaluation in machine
comprehension. The dataset is freely available at
http://www.cs.cmu.edu/~glai1/data/race/ and the code is available at
https://github.com/qizhex/RACE_AR_baselines. | http://arxiv.org/pdf/1704.04683 | Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, Eduard Hovy | cs.CL, cs.AI, cs.LG | EMNLP 2017 | null | cs.CL | 20170415 | 20171205 |
# RACE: Large-scale ReAding Comprehension Dataset From Examinations
Guokun Lai* and Qizhe Xie* and Hanxiao Liu and Yiming Yang and Eduard Hovy
{guokun, qzxie, hanxiaol, yiming, hovy}@cs.cmu.edu
Language Technologies Institute, Carnegie Mellon University, Pittsburgh, PA 15213
# Abstract
We present RACE, a new dataset for benchmark evaluation of methods in the reading comprehension task. Collected from the English exams for middle and high school Chinese students in the age range between 12 to 18, RACE consists of near 28,000 passages and near 100,000 questions generated by human experts (English instructors), and covers a variety of topics which are carefully designed for evaluating the students' ability in understanding and reasoning. In particular, the proportion of questions that requires reasoning is much larger in RACE than that in other benchmark datasets for reading comprehension, and there is a significant gap between the performance of the state-of-the-art models (43%) and the ceiling human performance (95%). We hope this new dataset can serve as a valuable resource for research and evaluation in machine comprehension. The dataset is freely available at http://www.cs.cmu.edu/~glai1/data/race/ and the code is available at https://github.com/qizhex/RACE_AR_baselines
# Introduction
Constructing an intelligence agent capable of un- derstanding text as people is the major challenge of NLP research. With recent advances in deep learning techniques, it seems possible to achieve human-level performance in certain language un- derstanding tasks, and a surge of effort has been devoted to the machine comprehension task where people aim to construct a system with the ability to
answer questions related to a document that it has to comprehend (Chen et al., 2016; Kadlec et al., 2016; Dhingra et al., 2016; Yang et al., 2017).
Towards this goal, several large-scale datasets (Rajpurkar et al., 2016; Onishi et al., 2016; Hill et al., 2015; Trischler et al., 2016; Hermann et al., 2015) have been proposed, which allow re- searchers to train deep learning systems and ob- tain results comparable to the human performance. While having a suitable dataset is crucial for eval- uating the systemâs true ability in reading compre- hension, the existing datasets suffer several critical limitations. Firstly, in all datasets, the candidate options are directly extracted from the context (as a single entity or a text span), which leads to the fact that lots of questions can be solved trivially via word-based search and context-matching with- out deeper reasoning; this constrains the types of questions as well. Secondly, answers and ques- tions of most datasets are either crowd-sourced or automatically-generated, bringing a signiï¬cant amount of noises in the datasets and limits the ceil- ing performance by domain experts, such as 82% for Childrens Book Test and 84% for Who-did- What. Yet another issue in existing datasets is that the topic coverages are often biased due to the spe- ciï¬c ways that the data were initially collected, making it hard to evaluate the ability of systems in text comprehension over a broader range of topics. To address the aforementioned limitations, we constructed a new dataset by collecting a large set of questions, answers and associated pas- sages in the English exams for middle-school and high-school Chinese students within the 12â18 age range. Those exams were designed by do- main experts (instructors) for evaluating the read- ing comprehension ability of students, with en- Fur- sured quality and broad topic coverage. thermore, the answers by machines or by hu- mans can be objectively graded for evaluation
* indicates equal contribution
and comparison using the same evaluation met- rics. Although efforts have been made with a sim- ilar motivation, including the MCTest dataset cre- ated by (Richardson et al., 2013) (containing 500 passages and 2000 questions) and several others (PeËnas et al., 2014; Rodrigo et al., 2015; Khashabi et al., 2016; Shibuki et al., 2014), the usefulness of those datasets is signiï¬cantly restricted due to their small sizes, especially not suitable for train- ing powerful deep neural networks whose success relies on the availability of relatively large training sets.
Our new dataset, namely RACE, consists of 27,933 passages and 97,687 questions. After read- ing each passage, each student is asked to answer several questions where each question is provided with four candidate answers â only one of them is correct . Unlike existing datasets, both the ques- tions and candidate answers in RACE are not re- stricted to be the text spans in the original passage; instead, they can be described in any words. A sample from our dataset is presented in Table 1.
Our latter analysis shows that correctly answer- ing a large portion of questions in RACE requires the ability of reasoning, the most important fea- ture as a machine comprehension dataset (Chen et al., 2016). RACE also offers two important sub- divisions of the reasoning types in its questions, namely passage summarization and attitude anal- ysis, which have not been introduced by the any of the existing large-scale datasets to our knowledge. In addition, compared to other existing datasets where passages are either domain-speciï¬c or of a single ï¬xed style (namely news stories for CNN/- Daily Mail, NEWSQA and Who-did-What, ï¬ction stories for Childrenâs Book Test and Book Test, and Wikipedia articles for SQUAD), passages in RACE almost cover all types of human articles, such as news, stories, ads, biography, philosophy, etc., in a variety of styles. This comprehensiveness of topic/style coverage makes RACE a desirable resource for evaluating the reading comprehension ability of machine learning systems in general.
The advantages of our proposed dataset over ex- isting large datasets in machine reading compre- hension can be summarized as follows:
⢠All questions and candidate options are gen- erated by human experts, which are intention- ally designed to test human agentâs ability in reading comprehension. This makes RACE a relatively accurate indicator for reï¬ecting the
text comprehension ability of machine learn- ing systems under human judge.
⢠The questions are substantially more difï¬cult than those in existing datasets, in terms of the large portion of questions involving reason- ing. At the meantime, it is also sufï¬ciently large to support the training of deep learning models.
⢠Unlike existing large-scale datasets, candi- date options in RACE are human generated sentences which may not appear in the origi- nal passage. This makes the task more chal- lenging and allows a rich type of questions such as passage summarization and attitude analysis.
⢠Broad coverage in various domains and writ- ing styles: a desirable property for evaluating generic (in contrast to domain/style-speciï¬c) comprehension ability of learning models.
# 2 Related Work
In this section, we brieï¬y outline existing datasets for the machine reading comprehension task, in- cluding their strengths and weaknesses.
# 2.1 MCTest
MCTest (Richardson et al., 2013) is a popular dataset for question answering in the same format as RACE, where each question is associated with four candidate answers with a single correct answer. Although questions in MCTest are of high quality, ensured by careful examinations through crowdsourcing, it contains only 500 stories and 2000 questions, which substantially restricts its usage in training advanced machine comprehension models. Moreover, while MCTest is designed for 7-year-old children, RACE is constructed for middle and high school students at 12-18 years old, hence is more complicated and requires stronger reasoning skills. In other words, RACE can be viewed as a larger and more difficult version of the MCTest dataset.
# 2.2 Cloze-style datasets
The past few years have witnessed several large- scale cloze-style datasets (Hermann et al., 2015; Hill et al., 2015; Bajgar et al., 2016; Onishi et al., 2016), whose questions are formulated by obliter- ating a word or an entity in a sentence.
Passage: In a small village in England about 150 years ago, a mail coach was standing on the street. It didnât come to that village often. People had to pay a lot to get a letter. The person who sent the letter didnât have to pay the postage, while the receiver had to. âHereâs a letter for Miss Alice Brown,â said the mailman. â Iâm Alice Brown,â a girl of about 18 said in a low voice. Alice looked at the envelope for a minute, and then handed it back to the mailman. âIâm sorry I canât take it, I donât have enough money to pay itâ, she said. A gentleman standing around were very sorry for her. Then he came up and paid the postage for her. When the gentleman gave the letter to her, she said with a smile, â Thank you very much, This letter is from Tom. Iâm going to marry him. He went to London to look for work. Iâve waited a long time for this letter, but now I donât need it, there is nothing in it.â âReally? How do you know that?â the gentleman said in surprise. âHe told me that he would put some signs on the envelope. Look, sir, this cross in the corner means that he is well and this circle means he has found work. Thatâs good news.â The gentleman was Sir Rowland Hill. He didnât forgot Alice and her letter. âThe postage to be paid by the receiver has to be changed,â he said to himself and had a good plan. âThe postage has to be much lower, what about a penny? And the person who sends the letter pays the postage. He has to buy a stamp and put it on the envelope.â he said . The government accepted his plan. Then the ï¬rst stamp was put out in 1840. It was called the âPenny Blackâ. It had a picture of the Queen on it. Questions: 1): The ï¬rst postage stamp was made . A. in England B. in America C. by Alice D. in 1910 2): The girl handed the letter back to the mailman because . A. she didnât know whose letter it was B. she had no money to pay the postage C. she received the letter but she didnât want to open it D. she had already known what was written in the letter 3): We can know from Aliceâs words that A. Tom had told her what the signs meant before leaving B. Alice was clever and could guess the meaning of the signs C. Alice had put the signs on the envelope herself D. Tom had put the signs as Alice had told him to . 4): The idea of using stamps was thought of by . A. the government B. Sir Rowland Hill C. Alice Brown D. Tom 5): From the passage we know the high postage made . A. people never send each other letters B. lovers almost lose every touch with each other C. people try their best to avoid paying it D. receivers refuse to pay the coming letters Answer: ADABC
Table 1: Sample reading comprehension problems from our dataset.
CNN/Daily Mail (Hermann et al., 2015) are the largest machine comprehension datasets with 1.4M questions. However, both require limited reasoning ability (Chen et al., 2016). In fact, the best machine performance obtained by researchers (Chen et al., 2016; Dhingra et al., 2016) is close to humanâs performance on CNN/Daily Mail.
using one as the passage and the other as the ques- tion.
High noise is inevitable in cloze-style datasets due to their automatic generation process, which is reï¬ected in the human performance on these datasets: 82% for CBT and 84% for WDW.
Childrens Book Test (CBT) (Hill et al., 2015) and Book Test (BT) (Bajgar et al., 2016) are con- structed in a similar manner. Each passage in CBT consist of 20 contiguous sentences extracted from childrenâs books and the next (21st) sentence is used to make the question. The main difference between the two datasets is the size of BT being 60 times larger. Machine comprehension models have also matched human performance on CBT (Bajgar et al., 2016).
Who Did What (WDW) (Onishi et al., 2016) is yet another cloze-style dataset constructed from the LDC English Gigaword newswire corpus. The authors generate passages and questions by pick- ing two news articles describing the same event,
# 2.3 Datasets with Span-based Answers
In datasets such as SQUAD (Rajpurkar et al., 2016), NEWSQA (Trischler et al., 2016) MS MARCO (Nguyen et al., 2016) and recently pro- posed TriviaQA (Joshi et al., 2017). the answer to each question is in the form of a text span in the article. Articles of SQUAD, NEWSQA and MS MARCO come from Wikipedia, CNN news and the Bing search engine respectively. The answer to a certain question may not be unique and could be multiple spans. Instead of evaluating the accuracy, researchers need to use F1 score, BLEU (Papineni et al., 2002) or ROUGE (Lin and Hovy, 2003) as metrics, which measure the overlap between the prediction and ground truth answers since the
questions come without candidate spans.
Datasets with span-based answers are challeng- ing as the space of possible spans is usually large. However, restricting answers to be text spans in the context passage may be unrealistic and more importantly, may not be intuitive even for humans, indicated by the suffered human performance of 80.3% on SQUAD (or 65% claimed by Trischler et al. (2016)) and 46.5% on NEWSQA. In other words, the format of span-based answers may not necessarily be a good examination of reading com- prehension of machines whose aim is to approach the comprehension ability of humans.
# 2.4 Datasets from Examinations
There have been several datasets extracted from examinations, aiming at evaluating systems un- der the same conditions as how humans are evalu- ated in schools. E.g., the AI2 Elementary School Science Questions dataset (Khashabi et al., 2016) contains 1080 questions for students in elementary schools; NTCIR QA Lab (Shibuki et al., 2014) evaluates systems by the task of solving real-world university entrance exam questions; The Entrance Exams task at CLEF QA Track (PeËnas et al., 2014; Rodrigo et al., 2015) evaluates the systemâs read- ing comprehension ability. However, data pro- vided in these existing tasks are far from sufï¬cient for the training of advanced data-driven machine reading models, partially due to the expensive data generation process by human experts.
To the best of our knowledge, RACE is the ï¬rst large-scale dataset of this type, where questions are created based on exams designed to evaluate human performance in reading comprehension.
# 3 Data Analysis
In this section, we study the nature of questions covered in RACE at a detailed level. Speciï¬cally, we present the dataset statistics in Section 3.1, and then analyze different reasoning/question types in RACE in the remaining subsections.
# 3.1 Dataset Statistics
As mentioned in section 1, RACE is collected from English examinations designed for 12â15 year-old middle school students, and 15â18 year- old high school students in China. To distin- guish the two subgroups with drastic difï¬culty gap, RACE-M denotes the middle school exami- nations and RACE-H denotes high school exami-
nations. We split 5% data as the development set and 5% as the test set for RACE-M and RACE-H respectively. The number of samples in each set is shown in Table 2. The statistics for RACE-M and RACE-H is summarized in Table 3. We can ï¬nd that the length of the passages and the vocabulary size in the RACE-H are much larger than that of the RACE-M, an evidence of the higher difï¬culty of high school examinations.
However, notice that since the articles and ques- tions are selected and designed to test Chinese students learning English as a foreign language, the vocabulary size and the complexity of the lan- guage constructs are simpler than news articles and Wikipedia articles in other QA datasets.
# 3.2 Reasoning Types of the Questions
To get a comprehensive picture about the reason- ing difï¬culty requirement of RACE, we conduct human annotations of questions types. Following Chen et al. (2016); Trischler et al. (2016), we strat- ify the questions into ï¬ve classes as follows with ascending order of difï¬culty:
• Word matching: The question exactly matches a span in the article. The answer is self-evident.

• Paraphrasing: The question is entailed or paraphrased by exactly one sentence in the passage. The answer can be extracted within the sentence.

• Single-sentence reasoning: The answer could be inferred from a single sentence of the article by recognizing incomplete information or conceptual overlap.

• Multi-sentence reasoning: The answer must be inferred from synthesizing information distributed across multiple sentences.

• Insufficient/Ambiguous: The question has no answer or the answer is not unique based on the given passage.
We refer readers to (Chen et al., 2016; Trischler et al., 2016) for examples of each category.
To obtain the proportion of different question types, we sample 100 passages from RACE (50 from RACE-M and 50 from RACE-H), all of which have 5 questions hence there are 500 ques- tions in total. We put the passages on Amazon Me-
             RACE-M                    RACE-H                    RACE                      All
             Train    Dev     Test     Train     Dev     Test    Train     Dev     Test
# passages   6,409    368     362      18,728    1,021   1,045   25,137    1,389   1,407   27,933
# questions  25,421   1,436   1,436    62,445    3,451   3,498   87,866    4,887   4,934   97,687
Table 2: The separation of the training, development and test sets of RACE-M,RACE-H and RACE
Dataset   Passage Len   Question Len   Option Len   Vocab size
RACE-M    231.1         9.0            3.9          32,811
RACE-H    353.1         10.4           5.8          125,120
RACE      321.9         10.0           5.3          136,629
1. Detail reasoning: to answer the question, the agent should be clear about the details of the pas- sage. The answer appears in the passage but it can- not be found by simply matching the question with the passage. For example, Question 1 in the sam- ple passage falls into this category.
Table 3: Statistics of RACE where Len denotes length and Vocab denotes Vocabulary.
chanical Turk1, and a Hit is generated by a passage with 5 questions. Each question is labeled by two crowdworkers. We require the turkers to both an- swer the questions and label the reasoning type. We pay $0.70 and $1.00 per passage in RACE-M and RACE-H respectively, and restrict the access to master turkers only. Finally, we get 1000 labels for the 500 questions.
2. Whole-picture reasoning: the agent needs to understand the whole picture of the story to ob- tain the correct answer. For example, to answer the Question 2 in the sample passage, the agent is required to comprehend the entire story.
3. Passage summarization: The question re- quires the agent to select the best summarization of the passage among four candidate summariza- tions. A typical question of this type is âThe main idea of this passage is .â. An example question can be found in Appendix A.1.
The statistics about the reasoning type is sum- marized in Table 4. The higher difï¬culty level of RACE is justiï¬ed by its higher ratio of rea- soning questions in comparison to CNN, SQUAD and NEWSQA. Speciï¬cally, 59.2% questions of RACE are either in the category of single-sentence reasoning or in the category of multi-sentence reasoning, while the ratio is 21%, 20.5% and 33.9% for CNN, SQUAD and NEWSQA respec- tively. Also notice that the ratio of word match- ing questions on RACE is only 15.8%, the lowest among several categories. In addition, questions in RACE-H are more complex than questions in RACE-M since RACE-M has more word match- ing questions and fewer reasoning questions.
4. Attitude analysis: The question asks about the opinions/attitudes of the author or a character in the story towards somebody or something, e.g.,
• Evidence: "... Many people optimistically thought industry awards for better equipment would stimulate the production of quieter appliances. It was even suggested that noise from building sites could be alleviated ..."

• Question: What was the author's attitude towards the industry awards for quieter appliances?

• Options: A. suspicious  B. positive  C. enthusiastic  D. indifferent
# 3.3 Subdividing Reasoning Types
5. World knowledge: Certain external knowl- edge is needed. Most frequent questions under this category involve simple arithmetic.
To better understand our dataset and facilitate fu- ture research, we list the subdivisions of ques- tions under the reasoning category. We ï¬nd the most frequent reasoning subdivisions include: de- tail reasoning, whole-picture understanding, pas- sage summarization, attitude analysis and world knowledge. One question may fall into multiple divisions. Deï¬nition of these subdivisions and their associated examples are as follows:
1https://www.mturk.com/mturk/welcome
⢠Evidence: âThe park is open from 8 am to 5 pm.â
⢠Question: The park is open for hours a day.
⢠Options: A.eight B.nine C.ten D.eleven
To the best of our knowledge, questions like passage summarization and attitude analysis have not been introduced by any of the existing large- scale machine comprehension datasets. Both are
Dataset   Word Matching   Paraphrasing   Single-Sentence Reasoning   Multi-Sentence Reasoning   Ambiguous/Insufficient
RACE-M    29.4%           14.8%          31.3%                       22.6%                      1.8%
RACE-H    11.3%           20.6%          34.1%                       26.9%                      7.1%
RACE      15.8%           19.2%          33.4%                       25.8%                      5.8%
CNN       13.0%†          41.0%†         19.0%†                      2.0%†                      25.0%†
SQUAD     39.8%*          34.3%*         8.6%*                       11.9%*                     5.4%*
NEWSQA    32.7%*          27.0%*         13.2%*                      20.7%*                     6.4%*

Table 4: Statistic information about reasoning types in different datasets. * denotes numbers coming from (Trischler et al., 2016) based on 1000 samples per dataset, and numbers with † come from (Chen et al., 2016).
crucial components in evaluating humansâ reading comprehension abilities.
# 4 Collection Methodology
We collected the raw data from three large free public websites in China2, where the reading com- prehension problems are extracted from English examinations designed by teachers in China. The data before cleaning contains 137,918 passages and 519,878 questions in total, where there are 38,159 passages with 156,782 questions in the middle school group, and 99,759 passages with 363,096 questions in the high school group.
The following ï¬ltering steps are conducted to clean the raw data. Firstly, we remove all prob- lems and questions that do not have the same for- mat as our problem setting, e.g., a question would be removed if the number of its options is not four. Secondly, we ï¬lter all articles and questions that are not self-contained based on the text informa- tion, i.e. we remove the articles and questions con- taining images or tables. We also remove all ques- tions containing keywords âunderlinedâ or âpara- graphâ, since it is difï¬cult to reproduce the effect of underlines and the paragraph segment informa- tion. Thirdly, we remove all duplicated articles.
On one of the websites (xkw.com), the answers are stored as images. We used two standard OCR programs tesseract 3 and ABBYY FineReader 4 to process the images. We remove all the answers that two software disagree. The OCR task is easy since we only need to recognize printed alphabet A, B, C, D with a standard font. Finally, we get the cleaned dataset RACE, with 27,933 passages and 97,687 questions.
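The cleaning rules above can be summarized in a short filtering routine. The sketch below is illustrative only: the field names (`options`, `text`, `article`, `has_media`) are hypothetical and do not come from the released code.

```python
def keep_question(question_text, options):
    """Keep only self-contained questions with exactly four options."""
    if len(options) != 4:
        return False
    banned = ("underlined", "paragraph")  # effects we cannot reproduce from plain text
    return not any(word in question_text.lower() for word in banned)

def clean(problems):
    """problems: list of dicts with hypothetical keys 'article', 'questions', 'has_media'."""
    seen, cleaned = set(), []
    for p in problems:
        if p["has_media"] or p["article"] in seen:   # drop images/tables and duplicates
            continue
        seen.add(p["article"])
        qs = [q for q in p["questions"] if keep_question(q["text"], q["options"])]
        if qs:
            cleaned.append({"article": p["article"], "questions": qs})
    return cleaned
```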
# 5 Experiments
In this section, we compare the performance of several state-of-the-art reading comprehension models with human performance. We use accu- racy as the metric to evaluate different models.
# 5.1 Methods for Comparison
Sliding Window Algorithm Firstly, we build the rule-based baseline introduced by Richardson et al. (2013). It chooses the answer having the highest matching score. Specifically, it first concatenates the question and the answer and then calculates the TF-IDF style matching score between the concatenated sentence with every window (a span of text) of the article. The window size is decided by the model performance in the training and dev sets.
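As a concrete illustration, a minimal sketch of such a sliding-window baseline is given below. The tokenization, the inverse-count weighting, and the default window size are simplifications for illustration and are not taken from the original implementation.

```python
import math
from collections import Counter

def window_score(passage_tokens, qa_tokens, window_size):
    """Best weighted overlap between question+option tokens and any passage window."""
    counts = Counter(passage_tokens)
    weight = {w: math.log(1.0 + 1.0 / counts[w]) for w in counts}  # rare words count more
    qa_set = set(qa_tokens)
    best = 0.0
    for start in range(max(1, len(passage_tokens) - window_size + 1)):
        window = passage_tokens[start:start + window_size]
        best = max(best, sum(weight[w] for w in window if w in qa_set))
    return best

def predict(passage, question, options, window_size=10):
    p = passage.lower().split()
    scores = [window_score(p, (question + " " + o).lower().split(), window_size)
              for o in options]
    return max(range(len(options)), key=scores.__getitem__)  # index of best option
```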
Stanford Attentive Reader Stanford Attentive Reader (Stanford AR) (Chen et al., 2016) is a strong model that achieves state-of-the-art results on CNN/Daily Mail. Moreover, the authors claim that their model has nearly reached the ceiling per- formance on these two datasets.
Suppose that the triple of passage, question and options is denoted by $(p, q, o_{1,\cdots,4})$. We first employ bidirectional GRUs to encode $p$ and $q$ respectively into $h^p_1, \ldots, h^p_n$ and $h^q$. Then we summarize the most relevant part of the passage into $s^p$ with an attention model. Following Chen et al. (2016), we adopt a bilinear attention form. Specifically,

$\alpha_i = \mathrm{Softmax}_i\big((h^q)^\top W_1 h^p_i\big), \qquad s^p = \sum_i \alpha_i h^p_i \qquad (1)$
2We checked that our dataset does not include exam- ple questions of exams with copyright, such as SSAT, SAT, TOEFL and GRE.
# 3https://github.com/tesseract-ocr 4https://www.abbyy.com/FineReader
Similarly, we use bidirectional GRUs to encode option oi into a vector hoi. Finally, we com- pute the matching score between the i-th option (i = 1, · · · , 4) and the summarized passage using
Random Sliding Window Stanford AR GA Turkers Ceiling Performance RACE-M RACE-H RACE MCTest CNN DM CBT-N CBT-C WDW 32.0â 10.2 19.6â 48.0â 64.0â 67.3â 71.2â 24.6 37.3 44.2 43.7 85.1 95.4 25.0 30.4 43.0 44.2 69.4 94.2 24.9 32.2 43.3 44.1 73.3 94.5 24.8 51.5â â â â â 10.6 0.06 0.06 24.8 30.8 16.8â 73.6â 76.6â 77.9â 80.9â 70.1â â â â â â â â 81.6â â 81.6â â 84â
Table 5: Accuracy of models and human on the each dataset, where â denotes the results coming from previous publications. DM denotes Daily Mail and WDW denotes Who-Did-What .
Figure 1: Test accuracy of different baselines on each question type category introduced in Section 3.2, for (a) RACE-M and (b) RACE-H. Word-Match, Single-Reason, Multi-Reason and Ambiguous are the abbreviations for Word matching, Single-sentence Reasoning, Multi-sentence Reasoning and Insufficient/Ambiguous respectively.
a bilinear attention. We pass the scores through a softmax to get a probability distribution. Specifically, the probability of option $i$ being the right answer is calculated as

$p_i = \mathrm{Softmax}_i\big(h^{o_i} W_2\, s^p\big) \qquad (2)$
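A minimal NumPy sketch of this scoring step is given below; the GRU encoders are abstracted away as precomputed vectors, and the orientation of the bilinear terms is one plausible reading of Equations (1)-(2), not the authors' exact code.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def option_probabilities(h_p, h_q, h_o, W1, W2):
    """h_p: (n, d) passage token states, h_q: (d,) question vector,
    h_o: (4, d) option vectors, W1/W2: (d, d) bilinear matrices."""
    alpha = softmax(h_p @ (W1 @ h_q))    # Eq. (1): attention over passage tokens
    s_p = alpha @ h_p                    # summarized passage representation
    scores = h_o @ (W2 @ s_p)            # Eq. (2): bilinear match per option
    return softmax(scores)               # probability of each of the 4 options
```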
After obtaining a query speciï¬c document rep- resentation sd, we use the same method as bilinear operation listed in Equation 2 to get the output.
Note that our implementation slightly differs from the original GA reader. Speciï¬cally, the At- tention Sum layer is not applied at the ï¬nal layer and no character-level embeddings are used.
Gated-Attention Reader Gated AR (Dhingra et al., 2016) is the state-of-the-art model on multiple datasets. To build query-specific representations of tokens in the document, it employs an attention mechanism to model multiplicative interactions between the query embedding and the document representation. With a multi-hop architecture, GA also enables a model to scan the document and the question iteratively for multiple passes. In other words, the multi-hop structure makes it possible for the reader to refine token representations iteratively, and the attention mechanism finds the most relevant part of the document. We refer readers to (Dhingra et al., 2016) for more details.
Implementation Details We follow Chen et al. (2016) in our experiment settings. The vocabulary size is set to 50k. We choose word embedding size d = 100 and use the 100-dimensional Glove word embedding (Pennington et al., 2014) as embedding initialization. GRU weights are initialized from Gaussian distribution N(0, 0.1). Other parameters are initialized from a uniform distribution on (-0.01, 0.01). The hidden dimensionality is set to 128 and the number of layers is set to one for both Stanford AR and GA. We use vanilla stochastic gradient descent (SGD) to train our models. We apply dropout on word embeddings and the gradient is clipped when the norm
of the gradient is larger than 10. We use a grid search on validation set to choose the learning rate within {0.05, 0.1, 0.3, 0.5} and dropout rate within {0.2, 0.5, 0.7}. The highest accuracy on validation set is obtained by setting learning rate to 0.1 for Stanford AR and 0.3 for GA and dropout rate to 0.5. The data of RACE-M and RACE-H is used together to train our model and testing is performed separately.
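The grid search described above amounts to a very small configuration sweep; the sketch below only illustrates it, with `train_and_eval` standing in for a user-supplied training run that returns validation accuracy.

```python
import itertools

GRID = {"lr": [0.05, 0.1, 0.3, 0.5], "dropout": [0.2, 0.5, 0.7]}

def grid_search(train_and_eval):
    best_config, best_acc = None, -1.0
    for lr, dropout in itertools.product(GRID["lr"], GRID["dropout"]):
        acc = train_and_eval(lr=lr, dropout=dropout)   # validation accuracy
        if acc > best_acc:
            best_config, best_acc = (lr, dropout), acc
    return best_config, best_acc
```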
# 5.2 Human Evaluation
As described in section 3.2, a randomly sam- pled subset of test set has been labeled by Ama- zon Turkers, which contains 500 questions with half from RACE-H and with the other half from RACE-M. The turkersâ performance is 85% for RACE-M and 70% for RACE-H. However, it is hard to guarantee that every turker performs the survey carefully, given the difï¬cult and long pas- sages of high school problems. Therefore, to ob- tain the ceiling human performance on RACE, we manually labeled the proportion of valid ques- tions. A question is valid if it is unambiguous and has a correct answer. We found that 94.5% of the data is valid, which sets the ceiling human per- formance. Similarly, the ceiling performance on RACE-M and RACE-H is 95.4% and 94.2% re- spectively.
# 5.3 Main Results
We compare modelsâ and human ceiling perfor- mance on datasets which have the same evalua- tion metric with RACE. The compared datasets include RACE, MCTest, CNN/Daily Mail (CNN and DM), CBT and WDW. On CBT, we report per- formance on two subsets where the missing token is either a common noun (CBT-C) or name entity (CBT-N) since the language models have already reached human-level performance on other types (Hill et al., 2015). The comparison is shown in Table 5.
Performance of Sliding Window We ï¬rst com- pare MCTest with RACE using Sliding Window, where it is unable to train Stanford AR and Gated Slid- AR on MCTestâs limited training data. ing Window achieves an accuracy of 51.5% on MCTest while only 37.3% on RACE, meaning that to answer the questions of RACE requires more reasoning than MCTest.
The performance of sliding window on RACE is not directly comparable with CBT and WDW
since CBT has ten candidate answers for each question and WDW has an average of three. In- stead, we evaluate the performance improvement of sliding window on the random baseline. Larger improvement indicates more questions solvable by simple matching. On RACE, Sliding Window is 28.6% better than the random baseline, while the improvement is 58.5%, 92.2% and 50% for CBT- N, CBT-C and WDW.
The accuracy on RACE-M (37.3%) and RACE- H (30.4%) indicates that the middle school ques- tions are simpler based on the matching algorithm.
Performance of Neural Models We further compare the difï¬culty of different datasets by state-of-the-art neural modelsâ performance. A lower performance means that more problems are unsolvable by machines. The Stanford AR and Gated AR achieve an accuracy of only 43.3% and 44.1% on RACE while their accuracy is much higher on CNN/Daily Mail, Childrens Book Test and Who-Did-What. It justiï¬es the fact that, among current large-scale machine comprehen- sion datasets, RACE is the most challenging one.
Human Ceiling Performance The human performance is 94.5%, which shows our data is quite clean compared to other large-scale machine comprehension datasets. Since we cannot enforce every turker to do the test cautiously, the result shows a gap between turkers' performance and human performance. Reasonably, problems in the high school group with longer passages and more complex questions lead to more significant divergence. Nevertheless, the state-of-the-art models still have large room for improvement before reaching turkers' performance. The performance gap is 41% for the middle school problems and 25% for the high school problems. What's more, the performance of Stanford AR and GA is less than half of the ceiling human performance, which indicates that to match humans' reading comprehension ability, we still have a long way to go.
# 5.4 Reason Types Analysis
We evaluate human and models on different types of questions, shown in Figure 1. Turkers do the best on word matching problems while doing the worst on reasoning problems. Sliding window performs better on word matching than problems needing reasoning or paraphrasing. Surprisingly, Stanford AR does not have a stronger performance
on the word matching category than on the reasoning categories. A possible reason is that the proportion of data in the reasoning categories is larger than that of the word matching category. Also, the candidate answers of simple matching questions may share similar word embeddings. For example, if the question is about color, it is difficult to distinguish candidate answers "green", "red", "blue" and "yellow" in the embedding vector space. The similar performance on different categories also explains the reason that the performance of the neural models is close in the middle and high school groups in Table 5.
# 6 Conclusion
We introduce a large, high-quality dataset for read- ing comprehension that is carefully designed to examine human ability on this task. Some desir- able properties of RACE include the broad cover- age of domains/styles and the richness in the ques- tion format. Most importantly, it requires substan- tially more reasoning to do well on RACE than on other datasets, as there is a signiï¬cant gap be- tween the performance of state-of-the-art machine comprehension models and that of the human. We hope this dataset will stimulate the development of more advanced machine comprehension models.
# Acknowledgement
We would like to thank Graham Neubig for sug- gestions on the draft and Diyi Yangâs help on ob- taining the crowdsourced labels.
This research was supported in part by DARPA grant FA8750-12-2-0342 funded under the DEFT program.
# References
Ondrej Bajgar, Rudolf Kadlec, and Jan Kleindi- enst. 2016. Embracing data abundance: Booktest dataset for reading comprehension. arXiv preprint arXiv:1610.00956 .
Danqi Chen, Jason Bolton, and Christopher D. Manning. 2016. A thorough examination of the CNN/Daily Mail reading comprehension task. arXiv preprint arXiv:1606.02858.
Bhuwan Dhingra, Hanxiao Liu, William W Cohen, and Ruslan Salakhutdinov. 2016. Gated-attention arXiv preprint readers for text comprehension. arXiv:1606.01549 .
Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems, pages 1693-1701.
Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. 2015. The goldilocks principle: Reading childrenâs books with explicit memory representa- tions. arXiv preprint arXiv:1511.02301 .
Mandar Joshi, Eunsol Choi, Daniel S Weld, and Luke Zettlemoyer. 2017. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehen- sion. ACL .
Rudolf Kadlec, Martin Schmid, Ondrej Bajgar, and Jan Kleindienst. 2016. Text understanding with the attention sum reader network. arXiv preprint arXiv:1603.01547 .
Daniel Khashabi, Tushar Khot, Ashish Sabhar- wal, Peter Clark, Oren Etzioni, and Dan Roth. 2016. Question answering via integer programming arXiv preprint over semi-structured knowledge. arXiv:1604.06076 .
Chin-Yew Lin and Eduard Hovy. 2003. Automatic evaluation of summaries using n-gram co-occurrence statistics. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology - Volume 1. Association for Computational Linguistics, pages 71-78.
Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. Ms marco: A human generated machine arXiv preprint reading comprehension dataset. arXiv:1611.09268 .
Takeshi Onishi, Hai Wang, Mohit Bansal, Kevin Gim- pel, and David McAllester. 2016. Who did what: A large-scale person-centered cloze dataset. arXiv preprint arXiv:1608.05457 .
Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- In Proceedings of uation of machine translation. the 40th annual meeting on association for compu- tational linguistics. Association for Computational Linguistics, pages 311â318.
Anselmo PeËnas, Yusuke Miyao, ´Alvaro Rodrigo, Ed- uard H Hovy, and Noriko Kando. 2014. Overview of clef qa entrance exams task 2014. In CLEF (Work- ing Notes). pages 1194â1200.
Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. In EMNLP. volume 14, pages 1532â 1543.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250 .
Matthew Richardson, Christopher JC Burges, and Erin Renshaw. 2013. Mctest: A challenge dataset for the open-domain machine comprehension of text. In EMNLP. volume 3, page 4.
´Alvaro Rodrigo, Anselmo PeËnas, Yusuke Miyao, Ed- uard H Hovy, and Noriko Kando. 2015. Overview of clef qa entrance exams task 2015. In CLEF (Work- ing Notes).
Hideyuki Shibuki, Kotaro Sakamoto, Yoshinobu Kano, Teruko Mitamura, Madoka Ishioroshi, Kelly Y Itakura, Di Wang, Tatsunori Mori, and Noriko Kando. 2014. Overview of the ntcir-11 qa-lab task. In NTCIR.
Adam Trischler, Tong Wang, Xingdi Yuan, Justin Har- ris, Alessandro Sordoni, Philip Bachman, and Ka- heer Suleman. 2016. Newsqa: A machine compre- hension dataset. arXiv preprint arXiv:1611.09830 .
Zhilin Yang, Junjie Hu, Ruslan Salakhutdinov, and William W Cohen. 2017. Semi-supervised qa with arXiv preprint generative domain-adaptive nets. arXiv:1702.02206 .
# A Appendix
# A.1 Example Question of Passage Summarization
Passage: Do you love holidays but hate gaining weight? You are not alone. Holidays are times for celebrating. Many people are worried about their weight. With proper planning, though, it is pos- sible to keep normal weight during the holidays. The idea is to enjoy the holidays but not to eat too much. You donât have to turn away from the foods that you enjoy.
Here are some tips for preventing weight gain and maintaining physical ï¬tness:
Donât skip meals. Before you leave home, have a small, low-fat meal or snack. This may help to avoid getting too excited before delicious foods.
Control the amount of food. Use a small plate that may encourage you to âload upâ. You should be most comfortable eating an amount of food about the size of your ï¬st.
Begin with soup and fruit or vegetables. Fill up beforehand on water-based soup and raw fruit or vegetables, or drink a large glass of water before you eat to help you to feel full.
Avoid high-fat foods. Dishes that look oily or creamy may have large amount of fat. Choose lean meat . Fill your plate with salad and green vegeta- bles. Use lemon juice instead of creamy food.
Stick to physical activity. Donât let exercise take a break during the holidays. A 20-minute walk helps to burn off extra calories.
Questions: What is the best title of the passage? Options: A. How to avoid holiday feasting B. Doâs and donâts for keeping slim and ï¬t. C. How to avoid weight gain over holidays. D. Wonderful holidays, boring experiences. | {
"id": "1511.02301"
} |
1704.04651 | The Reactor: A fast and sample-efficient Actor-Critic agent for Reinforcement Learning | In this work we present a new agent architecture, called Reactor, which
combines multiple algorithmic and architectural contributions to produce an
agent with higher sample-efficiency than Prioritized Dueling DQN (Wang et al.,
2016) and Categorical DQN (Bellemare et al., 2017), while giving better
run-time performance than A3C (Mnih et al., 2016). Our first contribution is a
new policy evaluation algorithm called Distributional Retrace, which brings
multi-step off-policy updates to the distributional reinforcement learning
setting. The same approach can be used to convert several classes of multi-step
policy evaluation algorithms designed for expected value evaluation into
distributional ones. Next, we introduce the β-leave-one-out policy
gradient algorithm which improves the trade-off between variance and bias by
using action values as a baseline. Our final algorithmic contribution is a new
prioritized replay algorithm for sequences, which exploits the temporal
locality of neighboring observations for more efficient replay prioritization.
Using the Atari 2600 benchmarks, we show that each of these innovations
contribute to both the sample efficiency and final agent performance. Finally,
we demonstrate that Reactor reaches state-of-the-art performance after 200
million frames and less than a day of training. | http://arxiv.org/pdf/1704.04651 | Audrunas Gruslys, Will Dabney, Mohammad Gheshlaghi Azar, Bilal Piot, Marc Bellemare, Remi Munos | cs.AI | null | null | cs.AI | 20170415 | 20180619 |
Published as a conference paper at ICLR 2018
# THE REACTOR: A FAST AND SAMPLE-EFFICIENT ACTOR-CRITIC AGENT FOR REINFORCEMENT LEARNING
Audr ¯unas Gruslys, DeepMind audrunas@google.com
Will Dabney, DeepMind wdabney@google.com
Mohammad Gheshlaghi Azar, DeepMind mazar@google.com
# Bilal Piot, DeepMind piot@google.com
Marc G. Bellemare, Google Brain bellemare@google.com
Rémi Munos, DeepMind munos@google.com
# ABSTRACT
In this work, we present a new agent architecture, called Reactor, which combines multiple algorithmic and architectural contributions to produce an agent with higher sample-efï¬ciency than Prioritized Dueling DQN (Wang et al., 2017) and Categori- cal DQN (Bellemare et al., 2017), while giving better run-time performance than A3C (Mnih et al., 2016). Our ï¬rst contribution is a new policy evaluation algorithm called Distributional Retrace, which brings multi-step off-policy updates to the distributional reinforcement learning setting. The same approach can be used to convert several classes of multi-step policy evaluation algorithms, designed for expected value evaluation, into distributional algorithms. Next, we introduce the β-leave-one-out policy gradient algorithm, which improves the trade-off between variance and bias by using action values as a baseline. Our ï¬nal algorithmic con- tribution is a new prioritized replay algorithm for sequences, which exploits the temporal locality of neighboring observations for more efï¬cient replay prioritiza- tion. Using the Atari 2600 benchmarks, we show that each of these innovations contribute to both sample efï¬ciency and ï¬nal agent performance. Finally, we demonstrate that Reactor reaches state-of-the-art performance after 200 million frames and less than a day of training.
# INTRODUCTION
Model-free deep reinforcement learning has achieved several remarkable successes in domains ranging from super-human-level control in video games (Mnih et al., 2015) and the game of Go (Silver et al., 2016; 2017), to continuous motor control tasks (Lillicrap et al., 2015; Schulman et al., 2015).
Much of the recent work can be divided into two categories. First, those which, often building on the DQN framework, act ε-greedily according to an action-value function and train using mini-batches of transitions sampled from an experience replay buffer (Schaul et al., 2015; He et al., 2017; Anschel et al., 2017). These value-function agents benefit from improved sample complexity, but tend to suffer from long runtimes (e.g. DQN requires approximately a week to train on Atari). The second category are the actor-critic agents, which includes the asynchronous advantage actor-critic (A3C) algorithm introduced by Mnih et al. (2016). These agents train on transitions collected by multiple actors running, and often training, in parallel (Schulman et al., 2017; Wang et al., 2017). The deep actor-critic agents train on each trajectory only once, and thus tend to have worse sample complexity. However, their distributed nature allows significantly faster training in terms of wall-clock time. Still, not all existing algorithms can be put in the above two categories and various hybrid approaches do exist (Zhao et al., 2016; O'Donoghue et al., 2017; Gu et al., 2017).
Data-efï¬ciency and off-policy learning are essential for many real-world domains where interactions with the environment are expensive. Similarly, wall-clock time (time-efï¬ciency) directly impacts an algorithmâs applicability through resource costs. The focus of this work is to produce an agent that is sample- and time-efï¬cient. To this end, we introduce a new reinforcement learning agent, called Reactor (Retrace-Actor), which takes a principled approach to combining the sample-efï¬ciency of off-policy experience replay with the time-efï¬ciency of asynchronous algorithms. We combine recent advances in both categories of agents with novel contributions to produce an agent that inherits the beneï¬ts of both and reaches state-of-the-art performance over 57 Atari 2600 games.
Our primary contributions are (1) a novel policy gradient algorithm, β-LOO, which makes better use of action-value estimates to improve the policy gradient; (2) the ï¬rst multi-step off-policy distributional reinforcement learning algorithm, distributional Retrace(λ); (3) a novel prioritized replay for off-policy sequences of transitions; and (4) an optimized network and parallel training architecture.
We begin by reviewing background material, including relevant improvements to both value-function agents and actor-critic agents. In Section 3 we introduce each of our primary contributions and present the Reactor agent. Finally, in Section 4, we present experimental results on the 57 Atari 2600 games from the Arcade Learning Environment (ALE) (Bellemare et al., 2013), as well as a series of ablation studies for the various components of Reactor.
# 2 BACKGROUND
We consider a Markov decision process (MDP) with state space X and ï¬nite action space A. A (stochastic) policy Ï(·|x) is a mapping from states x â X to a probability distribution over actions. We consider a γ-discounted inï¬nite-horizon criterion, with γ â [0, 1) the discount factor, and deï¬ne for policy Ï the action-value of a state-action pair (x, a) as
$Q^\pi(x, a) \stackrel{\mathrm{def}}{=} \mathbb{E}\Big[\sum_{t \ge 0} \gamma^t r_t \,\Big|\, x_0 = x, a_0 = a, \pi\Big],$

where $(x_t)_{t \ge 0}$ is a trajectory generated by choosing $a$ in $x$ and following $\pi$ thereafter, i.e., $a_t \sim \pi(\cdot|x_t)$ (for $t \ge 1$), and $r_t$ is the reward signal. The objective in reinforcement learning is to find an optimal policy $\pi^*$, which maximises $Q^\pi(x, a)$. The optimal action-values are given by $Q^*(x, a) = \max_\pi Q^\pi(x, a)$.
2.1 VALUE-BASED ALGORITHMS
The Deep Q-Network (DQN) framework, introduced by Mnih et al. (2015), popularised the current line of research into deep reinforcement learning by reaching human-level, and beyond, performance across 57 Atari 2600 games in the ALE. While DQN includes many speciï¬c components, the essence of the framework, much of which is shared by Neural Fitted Q-Learning (Riedmiller, 2005), is to use of a deep convolutional neural network to approximate an action-value function, training this approximate action-value function using the Q-Learning algorithm (Watkins & Dayan, 1992) and mini-batches of one-step transitions (xt, at, rt, xt+1, γt) drawn randomly from an experience replay buffer (Lin, 1992). Additionally, the next-state action-values are taken from a target network, which is updated to match the current network periodically. Thus, the temporal difference (TD) error for transition t used by these algorithms is given by
$\delta_t = r_t + \gamma \max_{a'} Q(x_{t+1}, a'; \bar\theta) - Q(x_t, a_t; \theta), \qquad (1)$
where θ denotes the parameters of the network and ¯θ are the parameters of the target network.
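A sketch of this one-step TD error over a mini-batch is shown below, with `q_online` and `q_target` standing in for the two networks (each mapping a batch of states to a (batch, |A|) array of action values); this is an illustration of Equation (1), not the DQN codebase.

```python
import numpy as np

def dqn_td_errors(q_online, q_target, batch, gamma=0.99):
    """batch: dict of arrays 'x', 'a', 'r', 'x_next', 'done' (all length B)."""
    q_sa = q_online(batch["x"])[np.arange(len(batch["a"])), batch["a"]]
    q_next = q_target(batch["x_next"]).max(axis=1)           # bootstrap from target net
    target = batch["r"] + gamma * (1.0 - batch["done"]) * q_next
    return target - q_sa    # delta_t of Eq. (1); typically squared for the loss
```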
Since this seminal work, we have seen numerous extensions and improvements that all share the same underlying framework. Double DQN (van Hasselt et al., 2016) attempts to correct for the over-estimation bias inherent in Q-Learning by changing the second term of (1) to $\gamma\, Q(x_{t+1}, \arg\max_{a' \in \mathcal{A}} Q(x_{t+1}, a'; \theta); \bar\theta)$. The dueling architecture (Wang et al., 2015) changes the network to estimate the action-value $Q(x, a; \theta)$ through separate streams for the state value $V(x; \theta)$ and the advantage $A(x, a; \theta)$, which are combined to produce the action-value estimate.
Recently, Hessel et al. (2017) introduced Rainbow, a value-based reinforcement learning agent combining many of these improvements into a single agent and demonstrating that they are largely complementary. Rainbow signiï¬cantly out performs previous methods, but also inherits the poorer time-efï¬ciency of the DQN framework. We include a detailed comparison between Reactor and Rainbow in the Appendix. In the remainder of the section we will describe in more depth other recent improvements to DQN.
2.1.1 PRIORITIZED EXPERIENCE REPLAY
The experience replay buffer was ï¬rst introduced by Lin (1992) and later used in DQN (Mnih et al., 2015). Typically, the replay buffer is essentially a ï¬rst-in-ï¬rst-out queue with new transitions gradually replacing older transitions. The agent would then sample a mini-batch uniformly at random from the replay buffer. Drawing inspiration from prioritized sweeping (Moore & Atkeson, 1993), prioritized experience replay replaces the uniform sampling with prioritized sampling proportional to the absolute TD error (Schaul et al., 2016).
Speciï¬cally, for a replay buffer of size N , prioritized experience replay samples transition t with probability P (t), and applies weighted importance-sampling with wt to correct for the prioritization bias, where
$P(t) = \frac{p_t^\alpha}{\sum_k p_k^\alpha}, \qquad w_t = \Big(\frac{1}{N} \cdot \frac{1}{P(t)}\Big)^{\beta}, \qquad p_t = |\delta_t| + \epsilon, \qquad \alpha, \beta, \epsilon > 0. \qquad (2)$
Prioritized DQN signiï¬cantly increases both the sample-efï¬ciency and ï¬nal performance over DQN on the Atari 2600 benchmarks (Schaul et al., 2015).
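Equation (2) translates directly into a few lines; the sketch below computes the sampling distribution and importance weights for a batch of stored TD errors, normalizing the weights by their maximum, a common stabilization not spelled out in the equation itself.

```python
import numpy as np

def priorities_and_weights(td_errors, alpha=0.6, beta=0.4, eps=1e-6):
    p = np.abs(td_errors) + eps              # p_t = |delta_t| + eps
    probs = p**alpha / np.sum(p**alpha)      # P(t) proportional to p_t^alpha
    n = len(td_errors)
    w = (1.0 / (n * probs))**beta            # importance-sampling correction w_t
    return probs, w / w.max()

# Example usage: sample a prioritized mini-batch of replay indices
# probs, weights = priorities_and_weights(stored_td_errors)
# idx = np.random.choice(len(stored_td_errors), size=32, p=probs)
```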
# 2.1.2 RETRACE(λ)
Retrace(λ) is a convergent off-policy multi-step algorithm extending the DQN agent (Munos et al., 2016). Assume that some trajectory {x0, a0, r0, x1, a1, r1, . . . , xt, at, rt, . . . , } has been generated according to behaviour policy µ, i.e., at ⼠µ(·|xt). Now, we aim to evaluate the value of a different target policy Ï, i.e. we want to estimate QÏ. The Retrace algorithm will update our current estimate Q of QÏ in the direction of
$\Delta Q(x_t, a_t) \stackrel{\mathrm{def}}{=} \sum_{s \ge t} \gamma^{s-t} (c_{t+1} \cdots c_s)\, \delta_s Q, \qquad (3)$
# where
$\delta_s Q \stackrel{\mathrm{def}}{=} r_s + \gamma\, \mathbb{E}_\pi[Q(x_{s+1}, \cdot)] - Q(x_s, a_s)$ is the temporal difference at time $s$ under $\pi$, and

$c_s = \lambda \min(1, \rho_s), \qquad \rho_s = \frac{\pi(a_s|x_s)}{\mu(a_s|x_s)}. \qquad (4)$
The Retrace algorithm comes with the theoretical guarantee that in ï¬nite state and action spaces, repeatedly updating our current estimate Q according to (3) produces a sequence of Q functions which converges to QÏ for a ï¬xed Ï or to Qâ if we consider a sequence of policies Ï which become increasingly greedy w.r.t. the Q estimates (Munos et al., 2016).
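For intuition, a tabular sketch of the Retrace(λ) correction of Equations (3)-(4) along a single stored trajectory is shown below; `Q` is a (states x actions) array and `pi`/`mu` return action-probability vectors, which is a simplification of the function-approximation setting used in this paper.

```python
import numpy as np

def retrace_delta(Q, trajectory, pi, mu, gamma=0.99, lam=1.0):
    """trajectory: list of (x, a, r, x_next); returns Delta Q(x_0, a_0) of Eq. (3)."""
    total, c_prod = 0.0, 1.0
    for s, (x, a, r, x_next) in enumerate(trajectory):
        if s > 0:
            rho = pi(x)[a] / mu(x)[a]
            c_prod *= lam * min(1.0, rho)           # c_s of Eq. (4)
        v_next = np.dot(pi(x_next), Q[x_next])      # E_pi[Q(x_{s+1}, .)]
        delta = r + gamma * v_next - Q[x, a]        # temporal difference under pi
        total += (gamma ** s) * c_prod * delta
    return total
```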
# 2.1.3 DISTRIBUTIONAL RL
Distributional reinforcement learning refers to a class of algorithms that directly estimate the distri- bution over returns, whose expectation gives the traditional value function (Bellemare et al., 2017). Such approaches can be made tractable with a distributional Bellman equation, and the recently proposed algorithm C51 showed state-of-the-art performance in the Atari 2600 benchmarks. C51 parameterizes the distribution over returns with a mixture over Diracs centered on a uniform grid,
$Q(x, a; \theta) = \sum_{i=0}^{N-1} q_i(x, a; \theta)\, z_i, \qquad q_i(x, a; \theta) = \frac{e^{\theta_i(x,a)}}{\sum_j e^{\theta_j(x,a)}}, \qquad z_i = v_{\min} + i\,\frac{v_{\max} - v_{\min}}{N - 1},$
with hyperparameters vmin, vmax that bound the distribution support of size N .
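A short sketch of this categorical parameterization follows; the per-atom logits are assumed to be the network outputs, and the atom spacing matches the uniform grid described above.

```python
import numpy as np

def categorical_q_values(logits, v_min=-10.0, v_max=10.0):
    """logits: (num_actions, N) atom logits for one state; returns Q(x, a) per action."""
    n_atoms = logits.shape[-1]
    z = np.linspace(v_min, v_max, n_atoms)                      # support atoms z_i
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)                  # q_i(x, a; theta)
    return probs @ z                                            # sum_i q_i z_i
```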
# 2.2 ACTOR-CRITIC ALGORITHMS
In this section we review the actor-critic framework for reinforcement learning algorithms and then discuss recent advances in actor-critic algorithms along with their various trade-offs. The asynchronous advantage actor-critic (A3C) algorithm (Mnih et al., 2016) maintains a parameterized policy $\pi(a|x; \theta)$ and value function $V(x; \theta_v)$, which are updated with

$\Delta\theta = \nabla_\theta \log \pi(a_t|x_t; \theta)\, A(x_t, a_t; \theta_v), \qquad \Delta\theta_v = A(x_t, a_t; \theta_v)\, \nabla_{\theta_v} V(x_t), \qquad (6)$

where $A(x_t, a_t; \theta_v) = \sum_{k=0}^{n-1} \gamma^k r_{t+k} + \gamma^n V(x_{t+n}) - V(x_t). \qquad (7)$
A3C uses M = 16 parallel CPU workers, each acting independently in the environment and applying the above updates asynchronously to a shared set of parameters. In contrast to the previously discussed value-based methods, A3C is an on-policy algorithm, and does not use a GPU nor a replay buffer.
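The n-step advantage of Equation (7) can be computed with a simple backward pass over a rollout; the sketch below assumes the rewards and value predictions for one worker's rollout are already collected.

```python
import numpy as np

def nstep_advantages(rewards, values, bootstrap_value, gamma=0.99):
    """rewards, values: length-n arrays for one rollout; bootstrap_value: V(x_{t+n})."""
    n = len(rewards)
    ret = bootstrap_value
    adv = np.zeros(n)
    for t in reversed(range(n)):
        ret = rewards[t] + gamma * ret    # n-step return bootstrapped from V(x_{t+n})
        adv[t] = ret - values[t]          # A(x_t, a_t; theta_v) of Eq. (7)
    return adv
```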
Proximal Policy Optimization (PPO) is a closely related actor-critic algorithm (Schulman et al., 2017), which replaces the advantage (7) with,
$\min\big(\rho_t\, A(x_t, a_t; \theta_v),\ \mathrm{clip}(\rho_t, 1-\epsilon, 1+\epsilon)\, A(x_t, a_t; \theta_v)\big), \qquad \epsilon > 0,$
where $\rho_t$ is as defined in Section 2.1.2. Although both PPO and A3C run M parallel workers collecting trajectories independently in the environment, PPO collects these experiences to perform a single, synchronous, update in contrast with the asynchronous updates of A3C.
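The clipped surrogate can be sketched as follows for a batch of transitions; the log-probabilities under the old and new policies are assumed to be precomputed, and the sign convention (maximizing the objective) follows the text.

```python
import numpy as np

def ppo_clipped_objective(logp_new, logp_old, advantages, epsilon=0.2):
    rho = np.exp(logp_new - logp_old)                       # probability ratio rho_t
    unclipped = rho * advantages
    clipped = np.clip(rho, 1.0 - epsilon, 1.0 + epsilon) * advantages
    return np.minimum(unclipped, clipped).mean()            # quantity to maximize
```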
Actor-Critic Experience Replay (ACER) extends the A3C framework with an experience replay buffer, Retrace algorithm for off-policy corrections, and the Truncated Importance Sampling Likelihood Ratio (TISLR) algorithm used for off-policy policy optimization (Wang et al., 2017).
# 3 THE REACTOR
The Reactor is a combination of four novel contributions on top of recent improvements to both deep value-based RL and policy-gradient algorithms. Each contribution moves Reactor towards our goal of achieving both sample and time efï¬ciency.
# 3.1 β-LOO
The Reactor architecture represents both a policy $\pi(a|x)$ and action-value function $Q(x, a)$. We use a policy gradient algorithm to train the actor $\pi$ which makes use of our current estimate $Q(x, a)$ of $Q^\pi(x, a)$. Let $V^\pi(x_0)$ be the value function at some initial state $x_0$; the policy gradient theorem says that $\nabla V^\pi(x_0) = \mathbb{E}\big[\sum_t \gamma^t \sum_a Q^\pi(x_t, a)\,\nabla\pi(a|x_t)\big]$, where $\nabla$ refers to the gradient w.r.t. policy parameters (Sutton et al., 2000). We now consider several possible ways to estimate this gradient.
To simplify notation, we drop the dependence on the state x for now and consider the problem of estimating the quantity
$G = \sum_a Q^\pi(a)\,\nabla\pi(a). \qquad (8)$
In the off-policy case, we consider estimating G using a single action $\hat a$ drawn from a (possibly different from $\pi$) behaviour distribution $\hat a \sim \mu$. Let us assume that for the chosen action $\hat a$ we have access to an unbiased estimate $R(\hat a)$ of $Q^\pi(\hat a)$. Then, we can use the likelihood ratio (LR) method combined with an importance sampling (IS) ratio (which we call ISLR) to build an unbiased estimate of G:

$\hat G_{\mathrm{ISLR}} = \frac{\pi(\hat a)}{\mu(\hat a)}\,\big(R(\hat a) - V\big)\,\nabla \log \pi(\hat a),$

where V is a baseline that depends on the state but not on the chosen action. However this estimate suffers from high variance. A possible way of reducing variance is to estimate G directly from (8) by using the return $R(\hat a)$ for the chosen action $\hat a$ and our current estimate Q of $Q^\pi$ for the other actions, which leads to the so-called leave-one-out (LOO) policy-gradient estimate:

$\hat G_{\mathrm{LOO}} = R(\hat a)\,\nabla\pi(\hat a) + \sum_{a \ne \hat a} Q(a)\,\nabla\pi(a). \qquad (9)$
Figure 1: Single-step (left) and multi-step (right) distribution bootstrapping.
This estimate has low variance but may be biased if the estimated Q values differ from $Q^\pi$. A better bias-variance tradeoff may be obtained by the more general β-LOO policy-gradient estimate:

$\hat G_{\beta\text{-LOO}} = \beta\,\big(R(\hat a) - Q(\hat a)\big)\,\nabla\pi(\hat a) + \sum_a Q(a)\,\nabla\pi(a), \qquad (10)$
where $\beta = \beta(\mu, \pi, \hat a)$ can be a function of both policies, $\pi$ and $\mu$, and the selected action $\hat a$. Notice that when $\beta = 1$, (10) reduces to (9), and when $\beta = 1/\mu(\hat a)$, then (10) is

$\hat G_{1/\mu\text{-LOO}} = \frac{\pi(\hat a)}{\mu(\hat a)}\,\big(R(\hat a) - Q(\hat a)\big)\,\nabla\log\pi(\hat a) + \sum_a Q(a)\,\nabla\pi(a). \qquad (11)$
This estimate is unbiased and can be seen as a generalization of $\hat G_{\mathrm{ISLR}}$ where instead of using a state-only dependent baseline, we use a state-and-action-dependent baseline (our current estimate Q) and add the correction term $\sum_a \nabla\pi(a)\,Q(a)$ to cancel the bias. Proposition 1 gives our analysis of the bias of $\hat G_{\beta\text{-LOO}}$, with a proof left to the Appendix.

Proposition 1. Assume $\hat a \sim \mu$ and that $\mathbb{E}[R(\hat a)] = Q^\pi(\hat a)$. Then, the bias of $\hat G_{\beta\text{-LOO}}$ is $\big|\sum_a \big(1 - \mu(a)\beta(a)\big)\,\nabla\pi(a)\,[Q(a) - Q^\pi(a)]\big|$.
Thus the bias is small when $\beta(a)$ is close to $1/\mu(a)$, or when the Q-estimates are close to the true $Q^\pi$ values, and unbiased regardless of the estimates if $\beta(a) = 1/\mu(a)$. The variance is low when $\beta$ is small; therefore, in order to improve the bias-variance tradeoff we recommend using the β-LOO estimate with β defined as $\beta(\hat a) = \min\big(c, \frac{1}{\mu(\hat a)}\big)$, for some constant c > 1. This truncated $1/\mu$ coefficient shares similarities with the truncated IS gradient estimate introduced in Wang et al. (2017) (which we call TISLR for truncated-ISLR):
$\hat G_{\text{TISLR}} = \min\Big(c, \frac{\pi(\hat a)}{\mu(\hat a)}\Big)\,(R(\hat a) - V)\,\nabla \log \pi(\hat a) + \sum_a \Big[\frac{\pi(a)}{\mu(a)} - c\Big]_+ \mu(a)\,(Q^\pi(a) - V)\,\nabla \log \pi(a).$
The differences are: (i) we truncate $1/\mu(\hat a) = \pi(\hat a)/\mu(\hat a) \times 1/\pi(\hat a)$ instead of truncating $\pi(\hat a)/\mu(\hat a)$, which provides an additional variance reduction due to the variance of the LR $\nabla \log \pi(\hat a) = \frac{\nabla\pi(\hat a)}{\pi(\hat a)}$ (since this LR may be large when a low-probability action is chosen), and (ii) we use our Q-baseline instead of a V baseline, further reducing the variance of the LR estimate.
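To make the estimator concrete, the following is a minimal NumPy sketch of the β-LOO estimate in (10) for a single state, with β truncated as recommended above. It assumes the sampled return $R(\hat a)$, the critic estimates Q(a), the per-action policy gradients ∇π(a) and the behaviour probabilities µ(a) are given; the function name and array layout are ours, not taken from the paper's code.

```python
import numpy as np

def beta_loo_gradient(R_hat, a_hat, q, grad_pi, mu, c=1.0):
    """beta-LOO policy-gradient estimate (eq. 10) for a single state.

    R_hat:   sampled (e.g. Retrace-corrected) return for the chosen action a_hat
    a_hat:   index of the action chosen by the behaviour policy
    q:       critic estimates Q(a), shape (num_actions,)
    grad_pi: gradients of pi(a) w.r.t. policy parameters, shape (num_actions, num_params)
    mu:      behaviour-policy probabilities, shape (num_actions,)
    c:       truncation constant; beta = min(c, 1 / mu(a_hat))
    """
    beta = min(c, 1.0 / mu[a_hat])
    # Use the sampled return only for the chosen action (leave-one-out correction)...
    correction = beta * (R_hat - q[a_hat]) * grad_pi[a_hat]
    # ...and the critic estimates for every action: sum_a Q(a) grad pi(a).
    baseline_term = (q[:, None] * grad_pi).sum(axis=0)
    return correction + baseline_term
```

Setting c very large recovers the unbiased 1/µ-LOO estimate (11), while c = 1 recovers the plain LOO estimate (9).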
3.2 DISTRIBUTIONAL RETRACE
In off-policy learning it is very difficult to produce an unbiased sample $R(\hat a)$ of $Q^\pi(\hat a)$ when following another policy µ. This would require using full importance sampling correction along the trajectory. Instead, we use the off-policy corrected return computed by the Retrace algorithm, which produces a (biased) estimate of $Q^\pi(\hat a)$ but whose bias vanishes asymptotically (Munos et al., 2016).
In Reactor, we consider predicting an approximation of the return distribution function from any state-action pair (x, a) in a similar way as in Bellemare et al. (2017). The original algorithm C51 described in that paper considered single-step Bellman updates only. Here we need to extend this idea to multi-step updates and handle the off-policy correction performed by the Retrace algorithm, as deï¬ned in (3). Next, we describe these two extensions.
Multi-step distributional Bellman operator: First, we extend C51 to multi-step Bellman backups. We consider return-distributions from (x, a) of the form $\sum_i q_i(x, a)\,\delta_{z_i}$ (where $\delta_z$ denotes a Dirac in z)
which are supported on a finite uniform grid $\{z_i\}_{i=1}^m \subset [v_{\min}, v_{\max}]$, $z_i < z_{i+1}$, $z_1 = v_{\min}$, $z_m = v_{\max}$. The coefficients $q_i(x, a)$ (a discrete distribution) correspond to the probabilities assigned to each atom $z_i$ of the grid. From an observed n-step sequence $\{x_t, a_t, r_t, x_{t+1}, \ldots, x_{t+n}\}$, generated by the behavior policy µ (i.e., $a_s \sim \mu(\cdot|x_s)$ for $t \le s < t+n$), we build the n-step backed-up return-distribution from $(x_t, a_t)$. The n-step distributional Bellman target, whose expectation is $\sum_{s=t}^{t+n-1} \gamma^{s-t} r_s + \gamma^n Q(x_{t+n}, a)$, is given by:
$\sum_i q_i(x_{t+n}, a)\,\delta_{z_i^n}, \quad \text{with} \quad z_i^n = \sum_{s=t}^{t+n-1} \gamma^{s-t} r_s + \gamma^n z_i.$
Since this distribution is supported on the set of atoms $\{z_i^n\}$, which is not necessarily aligned with the grid $\{z_i\}$, we do a projection step and minimize the KL-loss between the projected target and the current estimate, just as with C51 except with a different target distribution (Bellemare et al., 2017).
Distributional Retrace: Now, the Retrace algorithm deï¬ned in (3) involves an off-policy correction which is not handled by the previous n-step distributional Bellman backup. The key to extending this distributional back-up to off-policy learning is to rewrite the Retrace algorithm as a linear combination of n-step Bellman backups, weighted by some coefï¬cients αn,a. Indeed, notice that (3) rewrites as
$\Delta Q(x_t, a_t) = \sum_{n \ge 1} \sum_{a \in A} \alpha_{n,a} \Big[\underbrace{\textstyle\sum_{s=t}^{t+n-1} \gamma^{s-t} r_s + \gamma^n Q(x_{t+n}, a)}_{\text{n-step Bellman backup}}\Big] - Q(x_t, a_t),$
where $\alpha_{n,a} = (c_{t+1} \cdots c_{t+n-1})\,\big(\pi(a|x_{t+n}) - \mathbb{I}\{a = a_{t+n}\}\,c_{t+n}\big)$. These coefficients depend on the degree of off-policy-ness (between µ and π) along the trajectory. We have that $\sum_{n \ge 1} \sum_a \alpha_{n,a} = \sum_{n \ge 1} (c_{t+1} \cdots c_{t+n-1})(1 - c_{t+n}) = 1$, but notice some coefficients may be negative. However, in expectation (over the behavior policy) they are non-negative. Indeed,
$\mathbb{E}_\mu[\alpha_{n,a}] = \mathbb{E}\Big[(c_{t+1} \cdots c_{t+n-1})\,\mathbb{E}_{a_{t+n} \sim \mu(\cdot|x_{t+n})}\big[\pi(a|x_{t+n}) - \mathbb{I}\{a = a_{t+n}\}\,c_{t+n} \,\big|\, x_{t+n}\big]\Big] = \mathbb{E}\Big[(c_{t+1} \cdots c_{t+n-1})\Big(\pi(a|x_{t+n}) - \mu(a|x_{t+n})\,\lambda \min\Big(1, \frac{\pi(a|x_{t+n})}{\mu(a|x_{t+n})}\Big)\Big)\Big] \ge 0,$
by definition of the $c_s$ coefficients (4). Thus in expectation (over the behavior policy), the Retrace update can be seen as a convex combination of n-step Bellman updates.
Then, the distributional Retrace algorithm can be deï¬ned as backing up a mixture of n-step distribu- tions. More precisely, we deï¬ne the Retrace target distribution as:
$\sum_i q_i^*(x_t, a_t)\,\delta_{z_i}, \quad \text{with} \quad q_i^*(x_t, a_t) = \sum_{n \ge 1} \sum_a \alpha_{n,a} \sum_j q_j(x_{t+n}, a)\, h_{z_i}(z_j^n),$
where hzi(x) is a linear interpolation kernel, projecting onto the support {zi}:
$$h_{z_i}(x) = \begin{cases} (x - z_{i-1})/(z_i - z_{i-1}) & \text{if } z_{i-1} \le x \le z_i \\ (z_{i+1} - x)/(z_{i+1} - z_i) & \text{if } z_i \le x \le z_{i+1} \\ 0 & \text{if } x \le z_{i-1} \text{ or } x \ge z_{i+1} \\ 1 & \text{if } (x \le v_{\min} \text{ and } z_i = v_{\min}) \text{ or } (x \ge v_{\max} \text{ and } z_i = v_{\max}) \end{cases}$$
We update the current probabilities q(xt, at) by performing a gradient step on the KL-loss
$\nabla \mathrm{KL}\big(q^*(x_t, a_t),\, q(x_t, a_t)\big) = -\sum_i q_i^*(x_t, a_t)\,\nabla \log q_i(x_t, a_t).$   (12)
Again, notice that some target "probabilities" $q_i^*(x_t, a_t)$ may be negative for some sample trajectory, but in expectation they will be non-negative. Since the gradient of a KL-loss is linear w.r.t. its first argument, our update rule (12) provides an unbiased estimate of the gradient of the KL between the expected (over the behavior policy) Retrace target distribution and the current predicted distribution.1
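A small NumPy sketch of how such a target distribution can be assembled is given below: each n-step target is projected onto the fixed support with the interpolation kernel $h_{z_i}$ and the projections are mixed with the Retrace coefficients $\alpha_{n,a}$. The coefficients, rewards and bootstrap distributions are assumed to be precomputed, and the function names are ours; this is an illustrative sketch rather than the paper's implementation.

```python
import numpy as np

def project_on_support(atoms, probs, support):
    """Project a discrete distribution (atoms, probs) onto a fixed uniform grid
    `support` using the linear interpolation kernel h_{z_i} (C51-style projection)."""
    support = np.asarray(support, dtype=float)
    v_min, v_max = support[0], support[-1]
    dz = support[1] - support[0]
    projected = np.zeros_like(support)
    for z, p in zip(np.clip(atoms, v_min, v_max), probs):
        b = (z - v_min) / dz                     # continuous index of z on the grid
        lo, hi = int(np.floor(b)), int(np.ceil(b))
        if lo == hi:                             # z falls exactly on an atom
            projected[lo] += p
        else:                                    # split mass between the two neighbours
            projected[lo] += p * (hi - b)
            projected[hi] += p * (b - lo)
    return projected

def distributional_retrace_target(rewards, gamma, alphas, q_next, support):
    """Mixture of projected n-step targets: q*_i = sum_n sum_a alpha[n, a] * Proj(n-step dist).

    rewards: r_t, ..., r_{t+N-1}
    alphas:  array [N, num_actions] of Retrace mixture coefficients alpha_{n,a}
             (may be negative for a given trajectory, as discussed above)
    q_next:  array [N, num_actions, num_atoms]; q_next[n-1, a] is q(x_{t+n}, a)
    """
    support = np.asarray(support, dtype=float)
    target = np.zeros_like(support)
    for n in range(1, len(rewards) + 1):
        n_step_return = sum(gamma ** k * rewards[k] for k in range(n))
        shifted_atoms = n_step_return + gamma ** n * support   # the atoms z_i^n
        for a in range(alphas.shape[1]):
            target += alphas[n - 1, a] * project_on_support(shifted_atoms,
                                                            q_next[n - 1, a], support)
    return target
```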
1We store past action probabilities µ together with actions taken in the replay memory.
Remark: The same method can be applied to other algorithms (such as TB(λ) (Precup et al., 2000) and importance sampling (Precup et al., 2001)) in order to derive distributional versions of other off-policy multi-step RL algorithms.
3.3 PRIORITIZED SEQUENCE REPLAY
Prioritized experience replay has been shown to boost both statistical efï¬ciency and ï¬nal performance of deep RL agents (Schaul et al., 2016). However, as originally deï¬ned prioritized replay does not handle sequences of transitions and weights all unsampled transitions identically. In this section we present an alternative initialization strategy, called lazy initialization, and argue that it better encodes prior information about temporal difference errors. We then brieï¬y describe our computationally efï¬cient prioritized sequence sampling algorithm, with full details left to the appendix.
It is widely recognized that TD errors tend to be temporally correlated, indeed the need to break this temporal correlation has been one of the primary justiï¬cations for the use of experience replay (Mnih et al., 2015). Our proposed algorithm begins with this fundamental assumption. Assumption 1. Temporal differences are temporally correlated, with correlation decaying on average with the time-difference between two transitions.
Prioritized experience replay adds new transitions to the replay buffer with a constant priority, but given the above assumption we can devise a better method. Speciï¬cally, we propose to add experience to the buffer with no priority, inserting a priority only after the transition has been sampled and used for training. Also, instead of sampling transitions, we assign priorities to all (overlapping) sequences of length n. When sampling, sequences with an assigned priority are sampled proportionally to that priority. Sequences with no assigned priority are sampled proportionally to the average priority of assigned priority sequences within some local neighbourhood. Averages are weighted to compensate for sampling biases (i.e. more samples are made in areas of high estimated priorities, and in the absence of weighting this would lead to overestimation of unassigned priorities).
The lazy initialization scheme starts with priorities $p_i$ corresponding to the sequences $\{x_t, \ldots, x_{t+n}\}$ for which a priority was already assigned. Then it extrapolates a priority to all other sequences in the following way. Let us define a partition $(I_i)_i$ of the states, ordered by increasing time, such that each cell $I_i$ contains exactly one state $s_i$ with an already assigned priority $p(s_i)$. We define the estimated priority $\hat p_t$ of all other sequences as $\hat p_t = \sum_{s_i \in J(t)} \frac{w_i}{\sum_{s_j \in J(t)} w_j}\, p(s_i)$, where $J(t)$ is a collection of contiguous cells $(I_i)$ containing time t, and $w_i = |I_i|$ is the length of the cell $I_i$ containing $s_i$. For sequences with already assigned priorities we set $\hat p_i = p_i$. Cell sizes work as estimates of inverse local density and are used as importance weights for priority estimation. For the algorithm to be unbiased, the partition $(I_i)_i$ must not be a function of the assigned priorities. So far we have defined a class of algorithms, each free to choose the partition $(I_i)$ and the collection of cells $J(t)$, as long as they satisfy the above constraints. Figure 4 in the Appendix illustrates the above description.
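As an illustration, the following sketch computes the lazy priority estimate $\hat p_t$ as a length-weighted average of the assigned priorities in a neighbourhood of cells; the particular choice of $J(t)$ (here, the containing cell and its two neighbours) is ours, since the paper leaves it free.

```python
def estimate_priority(cells, t):
    """Estimate the priority of the sequence starting at time t.

    `cells` is a list of (start, end, assigned_priority) triples partitioning time,
    each containing exactly one sequence with an already assigned priority.
    The neighbourhood J(t) used here is the cell containing t plus its two
    neighbours; the paper only requires J(t) to be a collection of contiguous cells.
    """
    idx = next(i for i, (s, e, _) in enumerate(cells) if s <= t <= e)
    neighbourhood = cells[max(0, idx - 1): idx + 2]
    total_len = sum(e - s + 1 for s, e, _ in neighbourhood)
    # Length-weighted average: cell sizes act as inverse local-density estimates.
    return sum((e - s + 1) / total_len * p for s, e, p in neighbourhood)
```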
Now, with probability ε we sample uniformly at random, and with probability 1 - ε we sample proportionally to $\hat p_t$. We implemented an algorithm satisfying the above constraints and called it the Contextual Priority Tree (CPT). It is based on AVL trees (Velskii & Landis, 1976) and can execute sampling, insertion, deletion and density evaluation in O(ln(n)) time. We describe CPT in detail in the Appendix in Section 6.3.
We treated prioritization as purely a variance reduction technique. Importance-sampling weights were evaluated as in prioritized experience replay, with ï¬xed β = 1 in (2). We used simple gradient magnitude estimates as priorities, corresponding to a mean absolute TD error along a sequence for Retrace, as deï¬ned in (3) for the classical RL case, and total variation in the distributional Retrace case.3
3.4 AGENT ARCHITECTURE
In order to improve CPU utilization we decoupled acting from learning. This is an important aspect of our architecture: an acting thread receives observations, submits actions to the environment, and
2Not to be confused with importance weights of produced samples. 3Sum of absolute discrete probability differences.
Algorithm       | Training Time | Type | # Workers
DQN             | 8 days        | GPU  | 1
Double DQN      | 8 days        | GPU  | 1
Dueling         | 8 days        | GPU  | 1
Prioritized DQN | 8 days        | GPU  | 1
Rainbow         | 10 days       | GPU  | 1
A3C             | 4 days        | CPU  | 16
Reactor         | < 2 days      | CPU  | 10+1
Reactor 500m    | 4 days        | CPU  | 10+1
Reactor*        | < 1 day       | CPU  | 20+1
Figure 2: (Left) The model of parallelism of DQN, A3C and Reactor architectures. Each row represents a separate thread. In Reactor's case, each worker, consisting of a learner and an actor, is run on a separate worker machine. (Right) Comparison of training times and resources for various algorithms. 500m denotes 500 million training frames; otherwise 200m training frames were used.
stores transitions in memory, while a learning thread re-samples sequences of experiences from memory and trains on them (Figure 2, left). We typically execute 4-6 acting steps per each learning step. We sample sequences of length n = 33 in batches of 4. A moving network is unrolled over frames 1-32 while the target network is unrolled over frames 2-33.
We allow the agent to be distributed over multiple machines each containing action-learner pairs. Each worker downloads the newest network parameters before each learning step and sends delta-updates at the end of it. Both the network and target network are stored on a shared parameter server while each machine contains its own local replay memory. Training is done by downloading a shared network, evaluating local gradients and sending them to be applied on the shared network. While the agent can also be trained on a single machine, in this work we present results of training obtained with either 10 or 20 actor-learner workers and one parameter server. In Figure 2 (right) we compare resources and runtimes of Reactor with related algorithms.4
3.4.1 NETWORK ARCHITECTURE
In some domains, such as Atari, it is useful to base decisions on a short history of past observations. The two techniques generally used to achieve this are frame stacking and recurrent network architectures. We chose the latter over the former for reasons of implementation simplicity and computational efficiency. As the Retrace algorithm requires evaluating action-values over contiguous sequences of trajectories, using a recurrent architecture allowed each frame to be processed by the convolutional network only once, as opposed to n times if n frame concatenations were used.
The Reactor architecture uses a recurrent neural network which takes an observation xt as input and produces two outputs: categorical action-value distributions qi(xt, a) (i here is a bin identifier), and policy probabilities π(a|xt). We use an architecture inspired by the duelling network architecture (Wang et al., 2015). We split action-value-distribution logits into state-value logits and advantage logits, which in turn are connected to the same LSTM network (Hochreiter & Schmidhuber, 1997). Final action-value logits are produced by summing state- and action-specific logits, as in Wang et al. (2015). Finally, a softmax layer on top for each action produces the distributions over discounted future returns.
The policy head uses a softmax layer mixed with a fixed uniform distribution over actions, where this mixing ratio is a hyperparameter (Wiering, 1999, Section 5.1.3). Policy and Q-networks have separate LSTMs. Both LSTMs are connected to a shared linear layer which is connected to a shared convolutional neural network (Krizhevsky et al., 2012). The precise network specification is given in Table 3 in the Appendix.
Gradients coming from the policy LSTM are blocked and only gradients originating from the Q- network LSTM are allowed to back-propagate into the convolutional neural network. We block gradients from the policy head for increased stability, as this avoids positive feedback loops between Ï and qi caused by shared representations. We used the Adam optimiser (Kingma & Ba, 2014),
4All results are reported with respect to the combined total number of observations obtained over all worker machines.
Figure 3: Reactor ablation and sample-efficiency comparison (left) and Reactor time-efficiency comparison (right): human normalized score as a function of millions of training samples (left) and hours of training (right), for Reactor (10+1 and 20+1 workers), Rainbow, Prioritized DQN, A3C (16 workers), DQN, and the Reactor ablations (minus distributional, minus prioritization, TISLR). Rainbow learning curve provided by Hessel et al. (2017).
with a learning rate of $5 \times 10^{-5}$ and zero momentum because asynchronous updates induce implicit momentum (Mitliagkas et al., 2016). Further discussion of hyperparameters and their optimization can be found in Appendix 6.1.
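The following PyTorch-style sketch summarizes the head structure described in this section: a shared trunk feeding separate policy and Q LSTMs, a dueling-style combination of state and advantage logits per return bin, an ε-mixed softmax policy, and gradient blocking from the policy head. Layer sizes are illustrative only; the exact specification is in Table 3, and this is not the authors' code.

```python
import torch
import torch.nn as nn

class ReactorHeads(nn.Module):
    """Schematic sketch of the Reactor head structure (sizes are illustrative)."""
    def __init__(self, feat_dim, num_actions, num_bins, eps=0.01):
        super().__init__()
        self.pi_lstm = nn.LSTM(feat_dim, 128, batch_first=True)
        self.q_lstm = nn.LSTM(feat_dim, 128, batch_first=True)
        self.pi_logits = nn.Linear(128, num_actions)
        self.value_logits = nn.Linear(128, num_bins)              # state-value logits per bin
        self.adv_logits = nn.Linear(128, num_actions * num_bins)  # advantage logits per bin
        self.num_actions, self.num_bins, self.eps = num_actions, num_bins, eps

    def forward(self, feats):
        # feats: [batch, time, feat_dim], output of the shared convnet + linear layer.
        # Block policy-head gradients from reaching the shared trunk.
        pi_h, _ = self.pi_lstm(feats.detach())
        q_h, _ = self.q_lstm(feats)
        # Epsilon-mixed policy: softmax mixed with a fixed uniform distribution.
        pi = torch.softmax(self.pi_logits(pi_h), dim=-1)
        pi = (1 - self.eps) * pi + self.eps / self.num_actions
        # Dueling-style combination of state and advantage logits, per return bin.
        v = self.value_logits(q_h).unsqueeze(-2)                                # [B, T, 1, bins]
        a = self.adv_logits(q_h).reshape(*q_h.shape[:2], self.num_actions, -1)  # [B, T, A, bins]
        logits = v + a - a.mean(dim=-2, keepdim=True)
        q_dist = torch.softmax(logits, dim=-1)   # distribution over return bins per action
        return pi, q_dist
```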
# 4 EXPERIMENTAL RESULTS
We trained and evaluated Reactor on 57 Atari games (Bellemare et al., 2013). Figure 3 compares the performance of Reactor with different versions of Reactor, each time leaving one of the algorithmic improvements out. We can see that each of the algorithmic improvements (distributional Retrace, β-LOO and prioritized replay) contributed to the final results. While prioritization was arguably the most important component, β-LOO clearly outperformed the TISLR algorithm. Although distributional and non-distributional versions performed similarly in terms of median human normalized scores, the distributional version of the algorithm generalized better when tested with random human starts (Table 1).
ALGORITHM     | MEAN RANK | NORMALIZED SCORE | ELO
RANDOM        | 11.65     | 0.00             | -563
HUMAN         | 6.82      | 1.00             | 0
DQN           | 9.05      | 0.69             | -172
DDQN          | 7.63      | 1.11             | -58
DUEL          | 6.35      | 1.17             | 32
PRIOR         | 6.63      | 1.13             | 13
PRIOR. DUEL.  | 6.25      | 1.15             | 40
A3C LSTM      | 6.30      | 1.13             | 37
RAINBOW       | 4.18      | 1.53             | 186
REACTOR ND⁵   | 4.98      | 1.51             | 126
REACTOR       | 4.58      | 1.65             | 156
REACTOR 500M  | 3.65      | 1.82             | 227

Table 1: Random human starts.

ALGORITHM     | MEAN RANK | NORMALIZED SCORE | ELO
RANDOM        | 10.93     | 0.00             | -673
HUMAN         | 6.89      | 1.00             | 0
DQN           | 8.65      | 0.79             | -167
DDQN          | 7.28      | 1.18             | -27
DUEL          | 5.19      | 1.51             | 143
PRIOR         | 6.11      | 1.24             | 70
PRIOR. DUEL.  | 5.44      | 1.72             | 126
ACER⁶ 500M    | -         | 1.9              | -
RAINBOW       | 3.63      | 2.31             | 270
REACTOR ND⁵   | 4.53      | 1.80             | 195
REACTOR       | 4.46      | 1.87             | 196
REACTOR 500M  | 3.47      | 2.30             | 280

Table 2: 30 random no-op starts.
4.1 COMPARING TO PRIOR WORK
We evaluated Reactor with target update frequency Tupdate = 1000, λ = 1.0 and β-LOO with β = 1 on 57 Atari games trained on 10 machines in parallel. We averaged scores over 200 episodes using 30 random human starts and noop starts (Tables 4 and 5 in the Appendix). We calculated mean and median human normalised scores across all games. We also ranked all algorithms (including random and human scores) for each game and evaluated mean rank of each algorithm across all 57 Atari games. We also evaluated mean Rank and Elo scores for each algorithm for both human and noop start settings. Please refer to Section 6.2 in the Appendix for more details.
Tables 1 & 2 compare versions of our algorithm⁵ with several other state-of-the-art algorithms across 57 Atari games, using a fixed random seed across all games (Bellemare et al., 2013). The algorithms we compare Reactor against are: DQN (Mnih et al., 2015), Double DQN (Van Hasselt et al., 2016), DQN with prioritised experience replay (Schaul et al., 2015), the dueling architecture and prioritised dueling (Wang et al., 2015), ACER (Wang et al., 2017), A3C (Mnih et al., 2016), and Rainbow (Hessel et al., 2017). Each algorithm was exposed to 200 million frames of experience, or 500 million frames when followed by 500M, and the same pre-processing pipeline, including 4 action repeats, was used as in the original DQN paper (Mnih et al., 2015).
In Table 1, we see that Reactor exceeds the performance of all algorithms across all metrics, despite requiring under two days of training. With 500 million frames and four days of training we see Reactor's performance continue to improve significantly. The difference in time-efficiency is especially apparent when comparing Reactor and Rainbow (see Figure 3, right). Additionally, unlike Rainbow, Reactor does not use Noisy Networks (Fortunato et al., 2017), which was reported to have contributed to the performance gains. When evaluating under the no-op starts regime (Table 2), Reactor outperforms all methods except for Rainbow. This suggests that Rainbow is more sample-efficient when training and evaluation regimes match exactly, but may be overfitting to particular trajectories due to the significant drop in performance when evaluated on the random human starts.
Regarding ACER, another Retrace-based actor-critic architecture, both classical and distributional versions of Reactor (Figure 3) exceeded the best reported median human normalized score of 1.9 with noop starts achieved in 500 million steps.6
# 5 CONCLUSION
In this work we presented a new off-policy agent based on the Retrace actor-critic architecture and showed that it achieves performance similar to the current state-of-the-art while giving significant real-time performance gains. We demonstrated the benefits of each of the suggested algorithmic improvements, including distributional Retrace, the β-LOO policy gradient and the contextual priority tree.
# REFERENCES
Oron Anschel, Nir Baram, and Nahum Shimkin. Averaged-dqn: Variance reduction and stabilization for deep reinforcement learning. In International Conference on Machine Learning, pp. 176â185, 2017.
Marc G Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning envi- ronment: An evaluation platform for general agents. J. Artif. Intell. Res.(JAIR), 47:253â279, 2013.
Marc G Bellemare, Will Dabney, and Rémi Munos. A distributional perspective on reinforcement learning. arXiv preprint arXiv:1707.06887, 2017.
Meire Fortunato, Mohammad Gheshlaghi Azar, Bilal Piot, Jacob Menick, Ian Osband, Alex Graves, Vlad Mnih, Remi Munos, Demis Hassabis, Olivier Pietquin, et al. Noisy networks for exploration. arXiv preprint arXiv:1706.10295, 2017.
Shixiang Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard E Turner, and Sergey Levine. Q-prop: Sample-efï¬cient policy gradient with an off-policy critic. International Conference on Learning Representations, 2017.
Frank S He, Yang Liu, Alexander G Schwing, and Jian Peng. Learning to play in a day: Faster deep reinforcement learning by optimality tightening. In International Conference on Learning Representations, 2017.
Matteo Hessel, Joseph Modayil, Hado Van Hasselt, Tom Schaul, Georg Ostrovski, Will Dabney, Dan Horgan, Bilal Piot, Mohammad Azar, and David Silver. Rainbow: Combining improvements in deep reinforcement learning. arXiv preprint arXiv:1710.02298, 2017.
5 âNDâ stands for a non-distributional (i.e. classical) version of Reactor using Retrace (Munos et al., 2016). 6 Score for ACER in Table 2 was obtained from (Figure 1 in Wang et al. (2017)), but is not directly comparable due to the authorsâ use of a cumulative maximization along each learning curve before taking the median.
Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8): 1735-1780, 1997.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classiï¬cation with deep convolu- tional neural networks. In Advances in neural information processing systems, pp. 1097â1105, 2012.
Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015.
Long-H Lin. Self-improving reactive agents based on reinforcement learning, planning and teaching. Machine learning, 8(3/4):69â97, 1992.
Ioannis Mitliagkas, Ce Zhang, Stefan Hadjis, and Christopher Ré. Asynchrony begets momentum, with an application to deep learning. In Communication, Control, and Computing (Allerton), 2016 54th Annual Allerton Conference on, pp. 997â1004. IEEE, 2016.
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529â533, 2015.
Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy P Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In International Conference on Machine Learning, 2016.
Andrew W Moore and Christopher G Atkeson. Prioritized sweeping: Reinforcement learning with less data and less time. Machine learning, 13(1):103â130, 1993.
Rémi Munos, Tom Stepleton, Anna Harutyunyan, and Marc Bellemare. Safe and efficient off-policy reinforcement learning. In Advances in Neural Information Processing Systems, pp. 1046-1054, 2016.
Brendan OâDonoghue, Remi Munos, Koray Kavukcuoglu, and Volodymyr Mnih. Combining policy gradient and q-learning. International Conference on Learning Representations, 2017.
Doina Precup, Richard S Sutton, and Satinder Singh. Eligibility traces for off-policy policy evaluation. In Proceedings of the Seventeenth International Conference on Machine Learning, 2000.
Doina Precup, Richard S Sutton, and Sanjoy Dasgupta. Off-policy temporal-difference learning with function approximation. In Proceedings of the 18th International Conference on Machine Laerning, pp. 417â424, 2001.
Martin Riedmiller. Neural fitted Q iteration - first experiences with a data efficient neural reinforcement learning method. In ECML, volume 3720, pp. 317-328. Springer, 2005.
Tom Schaul, John Quan, Ioannis Antonoglou, and David Silver. Prioritized experience replay. arXiv preprint arXiv:1511.05952, 2015.
Tom Schaul, John Quan, Ioannis Antonoglou, and David Silver. Prioritized experience replay. In International Conference on Learning Representations, 2016.
John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. Trust region policy optimization. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pp. 1889â1897, 2015.
John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of go with deep neural networks and tree search. Nature, 529(7587):484â489, 2016.
David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, Yutian Chen, Timothy Lillicrap, Fan Hui, Laurent Sifre, George van den Driessche, Thore Graepel, and Demis Hassabis. Mastering the game of go without human knowledge. Nature, 550(7676):354â359, 10 2017. URL http: //dx.doi.org/10.1038/nature24270.
Richard S. Sutton, David Mcallester, Satinder Singh, and Yishay Mansour. Policy gradient methods for reinforcement learning with function approximation. In In Advances in Neural Information Processing Systems 12, pp. 1057â1063. MIT Press, 2000.
Hado Van Hasselt, Arthur Guez, and David Silver. Deep reinforcement learning with double q- learning. In AAAI, pp. 2094â2100, 2016.
Adel'son G Velskii and E Landis. An algorithm for the organisation of information. Dokl. Akad. Nauk SSSR, 146:263-266, 1976.
Alexander Sasha Vezhnevets, Simon Osindero, Tom Schaul, Nicolas Heess, Max Jaderberg, David Silver, and Koray Kavukcuoglu. Feudal networks for hierarchical reinforcement learning. arXiv preprint arXiv:1703.01161, 2017.
Ziyu Wang, Tom Schaul, Matteo Hessel, Hado van Hasselt, Marc Lanctot, and Nando de Freitas. Dueling network architectures for deep reinforcement learning. International Conference on Machine Learning, pp. 1995â2003, 2015.
Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, and Nando de Freitas. Sample efï¬cient actor-critic with experience replay. In International Conference on Learning Representations, 2017.
C. J. C. H. Watkins and P. Dayan. Q-learning. Machine Learning, 8(3):272â292, 1992.
Marco A Wiering. Explorations in efficient reinforcement learning. PhD thesis, University of Amsterdam, 1999.
Dongbin Zhao, Haitao Wang, Kun Shao, and Yuanheng Zhu. Deep reinforcement learning with experience replay based on sarsa. In Computational Intelligence (SSCI), 2016 IEEE Symposium Series on, pp. 1â6. IEEE, 2016.
# 6 APPENDIX
Proposition 1. Assume $\hat a \sim \mu$ and that $\mathbb{E}[R(\hat a)] = Q^\pi(\hat a)$. Then, the bias of $\hat G_{\beta\text{-LOO}}$ is $\big|\sum_a (1 - \mu(a)\beta(a))\,\nabla\pi(a)\,[Q(a) - Q^\pi(a)]\big|$.
Proof. The bias of $\hat G_{\beta\text{-LOO}}$ is

$\mathbb{E}[\hat G_{\beta\text{-LOO}}] - G = \sum_a \mu(a)\big[\beta(a)\,(\mathbb{E}[R(a)] - Q(a))\big]\nabla\pi(a) + \sum_a Q(a)\,\nabla\pi(a) - G = \sum_a (1 - \mu(a)\beta(a))\,[Q(a) - Q^\pi(a)]\,\nabla\pi(a).$
# 6.1 HYPERPARAMETER OPTIMIZATION
As we believe that algorithms should be robust with respect to the choice of hyperparameters, we spent little effort on parameter optimization. In total, we explored three distinct values of the learning rate, two values of ADAM momentum (the default and zero) and two values of Tupdate, on a subset of 7 Atari games without prioritization, using the non-distributional version of Reactor. We later used those values for all experiments. We did not optimize batch sizes, sequence length or any prioritization hyperparameters.
6.2 RANK AND ELO EVALUATION
Commonly used mean and median human normalized scores have several disadvantages. A mean human normalized score implicitly puts more weight on games that computers are good and humans are bad at. Comparing algorithms by a mean human normalized score across 57 Atari games is almost equivalent to comparing algorithms on a small subset of games close to the median and thus dominating the signal. Typically a set of the ten most score-generous games, namely Assault, Asterix, Breakout, Demon Attack, Double Dunk, Gopher, Phoenix, Stargunner, Up'n Down and Video Pinball, can explain more than half of the inter-algorithm variance. A median human normalized score has the opposite disadvantage, effectively discarding very easy and very hard games from the comparison. As typical median human normalized scores are within the range of 1-2.5, an algorithm which scores zero points on Montezuma's Revenge is evaluated equal to one which scores 2500 points, as both performance levels are still below human performance, so incremental improvements on hard games are not reflected in the overall evaluation. In order to address both problems, we also evaluated mean rank and Elo metrics for inter-algorithm comparison. These metrics implicitly assign the same weight to each game, and as a result are more sensitive to relative performance on very hard and easy games: swapping the scores of two algorithms on any game would result in a change of both the mean rank and the Elo metrics.
We calculated separate mean rank and Elo scores for each algorithm using the results of test evaluations with 30 random no-op starts and 30 random human starts (Tables 5 and 4). All algorithms were ranked across each game separately, and a mean rank was evaluated across 57 Atari games. For the Elo score evaluation, algorithm A was considered to win over algorithm B if it obtained more score on a given Atari game. We produced an empirical win-probability matrix by summing wins across all games and used this matrix to evaluate Elo scores. A rating difference of 400 corresponds to winning odds of 10:1 under the Gaussian assumption.
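For concreteness, a simple way to turn such an empirical win matrix into Elo-style ratings is sketched below; the logistic win model and the gradient-ascent fit are our own choices for illustration, not necessarily the exact procedure used for the tables.

```python
import numpy as np

def elo_from_wins(wins, iters=5000, lr=1.0):
    """Fit Elo-style ratings to an empirical win-count matrix.

    wins[i, j] = number of games in which algorithm i scored higher than algorithm j.
    We model P(i beats j) = 1 / (1 + 10^((r_j - r_i)/400)), so that a 400-point gap
    corresponds to 10:1 winning odds, and fit r by gradient ascent on the
    log-likelihood (fitting procedure is our assumption, not the paper's).
    """
    n = wins.shape[0]
    r = np.zeros(n)
    for _ in range(iters):
        diff = (r[:, None] - r[None, :]) / 400.0
        p = 1.0 / (1.0 + 10.0 ** (-diff))          # model P(i beats j)
        grad = ((wins - (wins + wins.T) * p) * np.log(10) / 400.0).sum(axis=1)
        r += lr * grad
        r -= r.mean()                              # ratings are defined up to a constant
    return r
```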
6.3 CONTEXTUAL PRIORITY TREE
Contextual priority tree is one possible implementation of lazy prioritization (Figure 4). All sequence keys are put into a balanced binary search tree which maintains a temporal order. An AVL tree (Velskii & Landis (1976)) was chosen due to the ease of implementation and because it is on average more evenly balanced than a Red-Black Tree.
Each tree node has up to two children (left and right) and contains currently stored key and a priority of the key which is either set or is unknown. Some trees may only have a single child subtree while
Figure 4: Illustration of Lazy prioritization, where sequences with no explicitly assigned priorities get priorities estimated by a linear combination of nearby assigned priorities. Exact boundaries of blue and red intervals are arbitrary (as long as all conditions described in Section 3.3 are satisï¬ed) thus leading to many possible algorithms. Each square represents an individual sequence of size 32 (sequences overlap). Inverse sizes of blue regions work as local density estimates allowing to produce unbiased priority estimates.
Figure 5: Rules used to evaluate summary statistics on each node of a binary search tree where all sequence keys are kept sorted by temporal order. cl and cr are the total numbers of nodes within the left and right subtrees. ml and mr are the estimated mean priorities per node within each subtree. A central square node corresponds to a single key stored within the parent node, with its corresponding priority of p (if set) or ? if not set. Red subtrees do not have any single child with a set priority, and as a result do not have priority estimates. A red square shows that the priority of the key stored within the parent node is not known. Unknown mean priorities are marked by a question mark. Empty child nodes simply behave as if c = 0 with p = ?. Rules a-f illustrate how mean values are propagated down from children to parents when priorities are only partially known (rules d and e also apply symmetrically). Sampling is done by going from the root node up the tree by selecting one of the children (or the current key) stochastically, proportionally to the orange proportions. Sampling terminates once the current (square) key is chosen.
Figure 6: Example of a balanced priority tree. Dark blue nodes contain keys with known priorities, light blue nodes have at least one child with at least a single known priority, while pink nodes do not have any priority estimates. Nodes 1, 2 and 3 will obtain priority estimates equal to 2/3 of the priority of key 5 and 1/3 of the priority of node 4. This implies that the estimated priorities of keys 1, 2 and 3 are implicitly defined by keys 4 and 6. Nodes 8, 9 and 11 are estimated to have the same priority as node 10.
some may have none. In addition to this information, we were tracking other summary statistics at each node which was re-evaluated after each tree rotation. The summary statistics was evaluated by consuming previously evaluated summary statistics of both children and a priority of the key stored within the current node. In particular, we were tracking a total number of nodes within each subtree and mean-priority estimates updated according to rules shown in Figure 5. The total number of nodes within each subtree was always known (c in Figure 5), while mean priority estimates per key (m in Figure 5) could either be known or unknown.
If a mean priority of either one child subtree or a key stored within the current node is unknown then it can be estimated to by exploiting information coming from another sibling subtree or a priority stored within the parent node.
Sampling was done by traversing the tree from the root node up while sampling either one of the children subtrees or the currently held key proportionally to the total estimated priority masses contained within. The rules used to evaluate proportions are shown in orange in Figure 5. Similarly, probabilities of arbitrary keys can be queried by traversing the tree from the root node towards the child node of an interest while maintaining a product of probabilities at each branching point. Insertion, deletion, sampling and probability query operations can be done in O(ln(n)) time.
The suggested algorithm has the desired property that it becomes a simple proportional sampling algorithm once all the priorities are known. While some key priorities are unknown, they are estimated by using nearby known key priorities (Figure 6).
Each time when a new sequence key is added to the tree, it was set to have an unknown priority. Any priority was assigned only after the key got ï¬rst sampled and the corresponding sequence got passed through the learner. When a priority of a key is set or updated, the key node is deliberately removed from and placed back to the tree in order to become a leaf-node. This helped to set priorities of nodes in the immediate vicinity more accurately by using the freshest information available.
6.4 NETWORK ARCHITECTURE
The value of ε = 0.01 is the minimum probability of choosing a random action, and it is hard-coded into the policy network. Figure 7 shows the overall network topology while Table 3 specifies the network layer sizes.
[Diagram: a shared convnet and linear layer feed two separate LSTMs; the policy LSTM produces the current policy π(x, a), while the Q LSTM produces state-value V(x) and advantage A(x, a) logits that are summed into the action-value estimate Q(x, a).]
Figure 7: Network architecture.
Table 3: Speciï¬cation of the neural network used (illustrated in Figure 7)
CONVOLUTIONAL        | SIZE         | KERNEL WIDTH | CHANNELS | STRIDES
CONV 1               | [84, 84, 1]  | [8, 8]       | 16       | 4
CONCATRELU           | [20, 20, 16] |              |          |
CONV 2               | [20, 20, 32] | [4, 4]       | 32       | 2
CONCATRELU           | [9, 9, 32]   |              |          |
CONV 3               | [9, 9, 64]   | [3, 3]       | 32       | 1
CONCATRELU           | [7, 7, 32]   |              |          |

FULLY CONNECTED      | SIZE         | OUTPUT SIZE
LINEAR               | [7, 7, 64]   | 128
CONCATRELU           | [128]        |

RECURRENT (π)        | SIZE         | OUTPUT SIZE
LSTM                 | [256]        | 128
LINEAR               | [128]        | 32
CONCATRELU           | [32]         |
LINEAR               | [64]         | #ACTIONS
SOFTMAX              | [#ACTIONS]   | #ACTIONS
×(1−ε)+ε/#ACTIONS    | [#ACTIONS]   | #ACTIONS

RECURRENT (Q)        | SIZE         | OUTPUT SIZE
LSTM                 | [256]        | 128

VALUE LOGIT HEAD     | SIZE         | OUTPUT SIZE
LINEAR               | [128]        | 32
CONCATRELU           | [32]         |
LINEAR               | [64]         | #BINS

ADVANTAGE LOGIT HEAD | SIZE         | OUTPUT SIZE
LINEAR               | [128]        | 32
CONCATRELU           | [32]         |
LINEAR               | [64]         | #ACTIONS × #BINS
6.5 COMPARISONS WITH RAINBOW
In this section we compare Reactor with the recently published Rainbow agent (Hessel et al., 2017). While ACER is the most closely related algorithmically, Rainbow is most closely related in terms of performance and thus a deeper understanding of the trade-offs between Rainbow and Reactor may beneï¬t interested readers. There are many architectural and algorithmic differences between Rainbow and Reactor. We will therefore begin by highlighting where they agree. Both use a categorical action-value distribution critic (Bellemare et al., 2017), factored into state and state-action logits (Wang et al., 2015),
$q_i(x, a) = \frac{\exp(l_i(x, a))}{\sum_j \exp(l_j(x, a))}, \qquad l_i(x, a) = l_i(x) + l_i^A(x, a) - \frac{1}{|A|}\sum_{b \in A} l_i^A(x, b).$
Both use prioritized replay, and finally, both perform n-step Bellman updates.
Despite these similarities, Reactor and Rainbow are fundamentally different algorithms and are based upon different lines of research. While Rainbow uses Q-Learning and is based upon DQN (Mnih et al., 2015), Reactor is an actor-critic algorithm most closely based upon A3C (Mnih et al., 2016). Each inherits some design choices from their predecessors, and we have not performed an extensive ablation comparing these various differences. Instead, we will discuss four of the differences we believe are important but less obvious.
First, the network structures are substantially different. Rainbow uses noisy linear layers and ReLU activations throughout the network, whereas Reactor uses standard linear layers and concatenated ReLU activations throughout. To overcome partial observability, Rainbow, inheriting this choice from DQN, uses frame stacking. On the other hand, Reactor, inheriting its choice from A3C, uses LSTMs after the convolutional layers of the network. It is also difï¬cult to directly compare the number of parameters in each network because the use of noisy linear layers doubles the number of parameters, although half of these are used to control noise, while the LSTM units in Reactor require more parameters than a corresponding linear layer would.
Second, both algorithms perform n-step updates, however, the Rainbow n-step update does not use any form of off-policy correction. Because of this, Rainbow is restricted to using only small values of n (e.g. n = 3) because larger values would make sequences more off-policy and hurt performance. By comparison, Reactor uses our proposed distributional Retrace algorithm for off-policy correction of n-step updates. This allows the use of larger values of n (e.g. n = 33) without loss of performance.
Third, while both agents use prioritized replay buffers (Schaul et al., 2016), they each store different information and prioritize using different algorithms. Rainbow stores a tuple containing the state $x_{t-1}$, action $a_{t-1}$, the sum of n discounted rewards $\sum_{k=0}^{n-1}\gamma^k r_{t+k}$, the product of n discount factors $\prod_{k=0}^{n-1}\gamma_{t+k}$, and the next state n steps away $x_{t+n-1}$. Tuples are prioritized based upon the last observed TD error, and inserted into replay with a maximum priority. Reactor stores length-n sequences of tuples $(x_{t-1}, a_{t-1}, r_t, \gamma_t)$ and also prioritizes based upon the observed TD error. However, when a sequence is inserted into the buffer its priority is instead inferred based upon the known priorities of neighboring sequences. This priority inference was made efficient using the previously introduced contextual priority tree, and anecdotally we have seen it improve performance over a simple maximum-priority approach.
Finally, the two algorithms have different approaches to exploration. Rainbow, unlike DQN, does not use ε-greedy exploration, but instead replaces all linear layers with noisy linear layers which induce randomness throughout the network. This method, called Noisy Networks (Fortunato et al., 2017), creates an adaptive exploration integrated into the agent's network. Reactor does not use Noisy Networks, but instead uses the same entropy cost method used by A3C and many others, which penalizes deterministic policies, thus encouraging indifference between similarly valued actions. Because Rainbow can essentially learn not to explore, it may learn to become entirely greedy in the early parts of the episode, while still exploring in states not as frequently seen. In some sense, this is precisely what we want from an exploration technique, but it may also lead to highly deterministic trajectories in the early part of the episode and an increase in overfitting to those trajectories. We hypothesize that this may be the explanation for the significant difference in Rainbow's performance between evaluation under no-op and random human starts, and why Reactor does not show such a large difference.
# 6.6 ATARI RESULTS
Table 4: Scores for each game evaluated with 30 random human starts. Reactor was evaluated by averaging scores over 200 episodes. All scores (except for Reactor) were taken from Wang et al. (2015), Mnih et al. (2016) and Hessel et al. (2017).
Table 5: Scores for each game evaluated with 30 random noop starts. Reactor was evaluated by averaging scores over 200 episodes. All scores (except for Reactor) were taken from Wang et al. (2015) and Hessel et al. (2017).
GAME AGENT RANDOM HUMAN DQN DDQN DUEL PRIOR RAINBOW REACTOR
1704.04368 | Get To The Point: Summarization with Pointer-Generator Networks | Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text). However, these models have two shortcomings: they are liable to reproduce factual details inaccurately, and they tend to repeat themselves. In this work we propose a novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways. First, we use a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator. Second, we use coverage to keep track of what has been summarized, which discourages repetition. We apply our model to the CNN / Daily Mail summarization task, outperforming the current abstractive state-of-the-art by at least 2 ROUGE points. | http://arxiv.org/pdf/1704.04368 | Abigail See, Peter J. Liu, Christopher D. Manning | cs.CL | Add METEOR evaluation results, add some citations, fix some equations (what are now equations 1, 8 and 11 were missing a bias term), fix url to pyrouge package, add acknowledgments | null | cs.CL | 20170414 | 20170425
r p A 5 2 ] L C . s c [
2 v 8 6 3 4 0 . 4 0 7 1 : v i X r a
# Get To The Point: Summarization with Pointer-Generator Networks
# Abigail See Stanford University abisee@stanford.edu
Peter J. Liu Google Brain peterjliu@google.com
# Christopher D. Manning Stanford University manning@stanford.edu
# Abstract
Neural sequence-to-sequence models have provided a viable new approach for ab- stractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the origi- nal text). However, these models have two shortcomings: they are liable to reproduce factual details inaccurately, and they tend to repeat themselves. In this work we pro- pose a novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways. First, we use a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate repro- duction of information, while retaining the ability to produce novel words through the generator. Second, we use coverage to keep track of what has been summarized, which discourages repetition. We apply our model to the CNN / Daily Mail sum- marization task, outperforming the current abstractive state-of-the-art by at least 2 ROUGE points.
Original Text (truncated): lagos, nigeria (cnn) a day after winning nige- riaâs presidency, muhammadu buhari told cnnâs christiane amanpour that he plans to aggressively ï¬ght corruption that has long plagued nigeria and go after the root of the nationâs unrest. buhari said heâll ârapidly give attentionâ to curbing violence in the northeast part of nigeria, where the ter- rorist group boko haram operates. by cooperating with neighboring nations chad, cameroon and niger, he said his administration is conï¬dent it will be able to thwart criminals and others contributing to nigeriaâs instability. for the ï¬rst time in nigeriaâs history, the opposition defeated the ruling party in democratic elections. buhari defeated incumbent goodluck jonathan by about 2 million votes, according to nigeriaâs independent national electoral commission. the win comes after a long history of military rule, coups and botched attempts at democracy in africaâs most populous nation.
Baseline Seq2Seq + Attention: UNK UNK says his administration is conï¬- dent it will be able to destabilize nigeriaâs economy. UNK says his admin- istration is conï¬dent it will be able to thwart criminals and other nigerians. he says the country has long nigeria and nigeriaâs economy.
Pointer-Gen: muhammadu buhari says he plans to aggressively ï¬ght cor- ruption in the northeast part of nigeria. he says heâll ârapidly give at- tentionâ to curbing violence in the northeast part of nigeria. he says his administration is conï¬dent it will be able to thwart criminals.
Pointer-Gen + Coverage: muhammadu buhari says he plans to aggressively ï¬ght corruption that has long plagued nigeria. he says his administration is conï¬dent it will be able to thwart criminals. the win comes after a long his- tory of military rule, coups and botched attempts at democracy in africaâs most populous nation.
Figure 1: Comparison of output of 3 abstrac- tive summarization models on a news article. The baseline model makes factual errors, a nonsen- sical sentence and struggles with OOV words muhammadu buhari. The pointer-generator model is accurate but repeats itself. Coverage eliminates repetition. The ï¬nal summary is composed from several fragments.
1
# 1 Introduction
Summarization is the task of condensing a piece of text to a shorter version that contains the main in- formation from the original. There are two broad approaches to summarization: extractive and ab- stractive. Extractive methods assemble summaries exclusively from passages (usually whole sen- tences) taken directly from the source text, while abstractive methods may generate novel words and phrases not featured in the source text â as a human-written abstract usually does. The ex- tractive approach is easier, because copying large
chunks of text from the source document ensures baseline levels of grammaticality and accuracy. On the other hand, sophisticated abilities that are crucial to high-quality summarization, such as paraphrasing, generalization, or the incorporation of real-world knowledge, are possible only in an abstractive framework (see Figure 5).

Due to the difficulty of abstractive summarization, the great majority of past work has been extractive (Kupiec et al., 1995; Paice, 1990; Saggion and Poibeau, 2013). However, the recent success of sequence-to-sequence models (Sutskever
Figure 2: Baseline sequence-to-sequence model with attention. The model may attend to relevant words in the source text to generate novel words, e.g., to produce the novel word beat in the abstractive summary Germany beat Argentina 2-0 the model may attend to the words victorious and win in the source text.
et al., 2014), in which recurrent neural networks (RNNs) both read and freely generate text, has made abstractive summarization viable (Chopra et al., 2016; Nallapati et al., 2016; Rush et al., 2015; Zeng et al., 2016). Though these systems are promising, they exhibit undesirable behavior such as inaccurately reproducing factual details, an inability to deal with out-of-vocabulary (OOV) words, and repeating themselves (see Figure 1).
In this paper we present an architecture that addresses these three issues in the context of multi-sentence summaries. While most recent abstractive work has focused on headline generation tasks (reducing one or two sentences to a single headline), we believe that longer-text summarization is both more challenging (requiring higher levels of abstraction while avoiding repetition) and ultimately more useful. Therefore we apply our model to the recently-introduced CNN/Daily Mail dataset (Hermann et al., 2015; Nallapati et al., 2016), which contains news articles (39 sentences on average) paired with multi-sentence summaries, and show that we outperform the state-of-the-art abstractive system by at least 2 ROUGE points.

Our hybrid pointer-generator network facilitates copying words from the source text via pointing (Vinyals et al., 2015), which improves accuracy and handling of OOV words, while retaining the ability to generate new words. The network, which can be viewed as a balance between extractive and abstractive approaches, is similar to Gu et al.'s (2016) CopyNet and Miao and Blunsom's (2016) Forced-Attention Sentence Compression, that were applied to short-text summarization. We propose a novel variant of the coverage vector (Tu et al., 2016) from Neural Machine Translation, which we use to track and control coverage of the source document. We show that coverage is remarkably effective for eliminating repetition.

# 2 Our Models

In this section we describe (1) our baseline sequence-to-sequence model, (2) our pointer-generator model, and (3) our coverage mechanism that can be added to either of the first two models. The code for our models is available online.1
# 2.1 Sequence-to-sequence attentional model
Our baseline model is similar to that of Nallapati et al. (2016), and is depicted in Figure 2. The tokens of the article $w_i$ are fed one-by-one into the encoder (a single-layer bidirectional LSTM), producing a sequence of encoder hidden states $h_i$. On each step t, the decoder (a single-layer unidirectional LSTM) receives the word embedding of the previous word (while training, this is the previous word of the reference summary; at test time it is the previous word emitted by the decoder), and has decoder state $s_t$. The attention distribution $a^t$ is calculated as in Bahdanau et al. (2015):
$e_i^t = v^T \tanh(W_h h_i + W_s s_t + b_{attn})$   (1)

$a^t = \mathrm{softmax}(e^t)$   (2)
where v, $W_h$, $W_s$ and $b_{attn}$ are learnable parameters. The attention distribution can be viewed as
1www.github.com/abisee/pointer-generator
Figure 3: Pointer-generator model. For each decoder timestep a generation probability $p_{gen} \in [0, 1]$ is calculated, which weights the probability of generating words from the vocabulary, versus copying words from the source text. The vocabulary distribution and the attention distribution are weighted and summed to obtain the final distribution, from which we make our prediction. Note that out-of-vocabulary article words such as 2-0 are included in the final distribution. Best viewed in color.
a probability distribution over the source words, that tells the decoder where to look to produce the next word. Next, the attention distribution is used to produce a weighted sum of the encoder hidden states, known as the context vector $h_t^*$:

$h_t^* = \sum_i a_i^t h_i$   (3)

The context vector, which can be seen as a fixed-size representation of what has been read from the source for this step, is concatenated with the decoder state $s_t$ and fed through two linear layers to produce the vocabulary distribution $P_{vocab}$:

$P_{vocab} = \mathrm{softmax}\big(V'(V[s_t, h_t^*] + b) + b'\big)$   (4)
where V, V′, b and b′ are learnable parameters. $P_{vocab}$ is a probability distribution over all words in the vocabulary, and provides us with our final distribution from which to predict words w:
P(w) = Pvocab(w) (5)
During training, the loss for timestep t is the negative log likelihood of the target word $w_t^*$ for that timestep:
$\mathrm{loss}_t = -\log P(w_t^*)$   (6)

and the overall loss for the whole sequence is:

$\mathrm{loss} = \frac{1}{T}\sum_{t=0}^{T} \mathrm{loss}_t$   (7)
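A minimal NumPy sketch of one decoder step of the baseline model (equations 1-4) is given below for a single example; parameter names and shapes are illustrative and unbatched, unlike the released implementation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def decoder_step(h, s_t, params):
    """One decoder step of the baseline model (eqs. 1-4), for a single example.

    h:      encoder hidden states, shape (src_len, enc_hidden)
    s_t:    decoder state, shape (dec_hidden,)
    params: dict of learnable parameters (illustrative names and shapes)
    """
    # Attention distribution over source tokens (eqs. 1-2).
    scores = np.tanh(h @ params["W_h"].T + s_t @ params["W_s"].T + params["b_attn"]) @ params["v"]
    a_t = softmax(scores)                              # shape (src_len,)
    # Context vector: attention-weighted sum of encoder states (eq. 3).
    h_star = a_t @ h                                   # shape (enc_hidden,)
    # Vocabulary distribution from [s_t; h*] through two linear layers (eq. 4).
    hidden = np.concatenate([s_t, h_star]) @ params["V"].T + params["b"]
    p_vocab = softmax(hidden @ params["V_prime"].T + params["b_prime"])
    return a_t, h_star, p_vocab
```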
# 2.2 Pointer-generator network
Our pointer-generator network is a hybrid between our baseline and a pointer network (Vinyals et al., 2015), as it allows both copying words via pointing, and generating words from a fixed vocabulary. In the pointer-generator model (depicted in Figure 3) the attention distribution $a^t$ and context vector $h_t^*$ are calculated as in section 2.1. In addition, the generation probability $p_{gen} \in [0, 1]$ for timestep t is calculated from the context vector $h_t^*$, the decoder state $s_t$ and the decoder input $x_t$:
$p_{gen} = \sigma\big(w_{h^*}^T h_t^* + w_s^T s_t + w_x^T x_t + b_{ptr}\big)$   (8)

where vectors $w_{h^*}$, $w_s$, $w_x$ and scalar $b_{ptr}$ are learnable parameters and σ is the sigmoid function. Next, $p_{gen}$ is used as a soft switch to choose between generating a word from the vocabulary by sampling from $P_{vocab}$, or copying a word from the input sequence by sampling from the attention distribution $a^t$. For each document let the extended vocabulary denote the union of the vocabulary, and all words appearing in the source document. We obtain the following probability distribution over the extended vocabulary:

$P(w) = p_{gen}P_{vocab}(w) + (1 - p_{gen})\sum_{i:w_i=w} a_i^t$   (9)

Note that if w is an out-of-vocabulary (OOV) word, then $P_{vocab}(w)$ is zero; similarly if w does
not appear in the source document, then $\sum_{i:w_i=w} a_i^t$ is zero. The ability to produce OOV words is one of the primary advantages of pointer-generator models; by contrast models such as our baseline are restricted to their pre-set vocabulary.
The loss function is as described in equations (6) and (7), but with respect to our modified probability distribution P(w) given in equation (9).
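The following sketch assembles the extended-vocabulary distribution of equations (8) and (9) for one decoder step, again unbatched and with illustrative parameter names; repeated source words have their attention mass summed, as described above.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def final_distribution(p_vocab, a_t, src_ids, h_star, s_t, x_t, params, ext_vocab_size):
    """Extended-vocabulary distribution of the pointer-generator (eqs. 8-9).

    p_vocab:        (vocab_size,) generator distribution over the fixed vocabulary
    a_t:            (src_len,) attention distribution over source tokens
    src_ids:        (src_len,) ids of the source tokens in the extended vocabulary
    ext_vocab_size: vocab_size + number of source OOV words in this document
    """
    # Generation probability (eq. 8).
    p_gen = sigmoid(params["w_hstar"] @ h_star + params["w_s"] @ s_t
                    + params["w_x"] @ x_t + params["b_ptr"])
    p_final = np.zeros(ext_vocab_size)
    p_final[:len(p_vocab)] = p_gen * p_vocab
    # Copy distribution: attention mass summed over repeated source words (eq. 9).
    np.add.at(p_final, src_ids, (1.0 - p_gen) * a_t)
    return p_final
```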
# 2.3 Coverage mechanism
Repetition is a common problem for sequence-to-sequence models (Tu et al., 2016; Mi et al., 2016; Sankaran et al., 2016; Suzuki and Nagata, 2016), and is especially pronounced when generating multi-sentence text (see Figure 1). We adapt the coverage model of Tu et al. (2016) to solve the problem. In our coverage model, we maintain a coverage vector $c^t$, which is the sum of attention distributions over all previous decoder timesteps:
$c^t = \sum_{t'=0}^{t-1} a^{t'}$   (10)

Intuitively, $c^t$ is an (unnormalized) distribution over the source document words that represents the degree of coverage that those words have received from the attention mechanism so far. Note that $c^0$ is a zero vector, because on the first timestep, none of the source document has been covered.
The coverage vector is used as extra input to the attention mechanism, changing equation (1) to:
$e_i^t = v^T \tanh(W_h h_i + W_s s_t + w_c c_i^t + b_{attn})$   (11)
where $w_c$ is a learnable parameter vector of the same length as v. This ensures that the attention mechanism's current decision (choosing where to attend next) is informed by a reminder of its previous decisions (summarized in $c^t$). This should make it easier for the attention mechanism to avoid repeatedly attending to the same locations, and thus avoid generating repetitive text.
We find it necessary (see section 5) to additionally define a coverage loss to penalize repeatedly attending to the same locations:
$\mathrm{covloss}_t = \sum_i \min(a_i^t, c_i^t)$   (12)
Note that the coverage loss is bounded; in particular $\mathrm{covloss}_t \le \sum_i a_i^t = 1$. Equation (12) differs from the coverage loss used in Machine Translation. In MT, we assume that there should be a roughly one-to-one translation ratio; accordingly the final coverage vector is penalized if it is more or less than 1.
Our loss function is more flexible: because summarization should not require uniform coverage, we only penalize the overlap between each attention distribution and the coverage so far, preventing repeated attention. Finally, the coverage loss, reweighted by some hyperparameter λ, is added to the primary loss function to yield a new composite loss function:
$\mathrm{loss}_t = -\log P(w_t^*) + \lambda \sum_i \min(a_i^t, c_i^t)$   (13)
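A short sketch of the coverage bookkeeping and the composite loss of equations (10), (12) and (13) over one decoded sequence, assuming the per-step attention distributions and target log-probabilities have already been computed (teacher forcing):

```python
import numpy as np

def coverage_training_loss(attn_dists, target_log_probs, lam=1.0):
    """Composite loss of eq. (13) averaged over a decoded sequence.

    attn_dists:       (T, src_len) attention distribution a^t at each decoder step
    target_log_probs: (T,) log P(w*_t) of each reference target word
    lam:              coverage-loss weight (lambda)
    """
    coverage = np.zeros(attn_dists.shape[1])            # c^0 = 0 (eq. 10)
    total_loss = 0.0
    for a_t, log_p in zip(attn_dists, target_log_probs):
        cov_loss = np.minimum(a_t, coverage).sum()       # eq. 12
        total_loss += -log_p + lam * cov_loss            # eq. 13
        coverage += a_t                                   # accumulate attention (eq. 10)
    return total_loss / len(attn_dists)
```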
# 3 Related Work
Neural abstractive summarization. Rush et al. (2015) were the ï¬rst to apply modern neural net- works to abstractive text summarization, achiev- ing state-of-the-art performance on DUC-2004 and Gigaword, two sentence-level summarization datasets. Their approach, which is centered on the attention mechanism, has been augmented with re- current decoders (Chopra et al., 2016), Abstract Meaning Representations (Takase et al., 2016), hi- erarchical networks (Nallapati et al., 2016), vari- ational autoencoders (Miao and Blunsom, 2016), and direct optimization of the performance metric (Ranzato et al., 2016), further improving perfor- mance on those datasets.
However, large-scale datasets for summariza- tion of longer text are rare. Nallapati et al. (2016) adapted the DeepMind question-answering dataset (Hermann et al., 2015) for summarization, result- ing in the CNN/Daily Mail dataset, and provided the ï¬rst abstractive baselines. The same authors then published a neural extractive approach (Nal- lapati et al., 2017), which uses hierarchical RNNs to select sentences, and found that it signiï¬cantly outperformed their abstractive result with respect to the ROUGE metric. To our knowledge, these are the only two published results on the full data- set.
Prior to modern neural methods, abstractive summarization received less attention than extractive summarization, but Jing (2000) explored cutting unimportant parts of sentences to create summaries, and Cheung and Penn (2014) explored sentence fusion using dependency trees.
Pointer-generator networks. The pointer network (Vinyals et al., 2015) is a sequence-to-sequence model that uses the soft attention distribution of Bahdanau et al. (2015) to produce an output sequence consisting of elements from the input sequence. The pointer network has been used to create hybrid approaches for NMT (Gulcehre et al., 2016), language modeling (Merity et al., 2016), and summarization (Gu et al., 2016; Gulcehre et al., 2016; Miao and Blunsom, 2016; Nallapati et al., 2016; Zeng et al., 2016).
Our approach is close to the Forced-Attention Sentence Compression model of Miao and Blunsom (2016) and the CopyNet model of Gu et al. (2016), with some small differences: (i) We calculate an explicit switch probability pgen, whereas Gu et al. induce competition through a shared softmax function. (ii) We recycle the attention distribution to serve as the copy distribution, but Gu et al. use two separate distributions. (iii) When a word appears multiple times in the source text, we sum probability mass from all corresponding parts of the attention distribution, whereas Miao and Blunsom do not. Our reasoning is that (i) calculating an explicit pgen usefully enables us to raise or lower the probability of all generated words or all copy words at once, rather than individually, (ii) the two distributions serve such similar purposes that we find our simpler approach suffices, and (iii) we observe that the pointer mechanism often copies a word while attending to multiple occurrences of it in the source text.
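To make the mixture concrete, here is a minimal sketch of a final distribution that combines a vocabulary distribution with a copy distribution derived from the attention weights, summing attention mass over repeated source words as in point (iii). The extended-vocabulary indexing and the argument names are assumptions of the sketch, not a description of the released implementation.

```python
import numpy as np

def final_distribution(p_gen, p_vocab, a_t, src_ids, ext_vocab_size):
    """Mixture of generation and copying (illustrative sketch).

    p_gen          : scalar switch probability
    p_vocab        : (vocab_size,) distribution over the fixed vocabulary
    a_t            : (src_len,) attention distribution over source positions
    src_ids        : (src_len,) extended-vocabulary id of each source token
    ext_vocab_size : fixed vocabulary size plus number of source OOV words
    """
    p_final = np.zeros(ext_vocab_size)
    p_final[: len(p_vocab)] = p_gen * p_vocab
    # copy term: attention mass for every occurrence of a source word is summed
    np.add.at(p_final, src_ids, (1.0 - p_gen) * a_t)
    return p_final
```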
Our approach is considerably different from that of Gulcehre et al. (2016) and Nallapati et al. (2016). Those works train their pointer components to activate only for out-of-vocabulary words or named entities (whereas we allow our model to freely learn when to use the pointer), and they do not mix the probabilities from the copy distribution and the vocabulary distribution. We believe the mixture approach described here is better for abstractive summarization: in section 6 we show that the copy mechanism is vital for accurately reproducing rare but in-vocabulary words, and in section 7.2 we observe that the mixture model enables the language model and copy mechanism to work together to perform abstractive copying.
Coverage. Originating from Statistical Machine Translation (Koehn, 2009), coverage was adapted for NMT by Tu et al. (2016) and Mi et al. (2016), who both use a GRU to update the coverage vector each step. We find that a simpler approach, summing the attention distributions to obtain the coverage vector, suffices. In this respect our approach is similar to Xu et al. (2015), who apply a coverage-like method to image captioning, and Chen et al. (2016), who also incorporate a coverage mechanism (which they call "distraction") as described in equation (11) into neural summarization of longer text.
Temporal attention is a related technique that has been applied to NMT (Sankaran et al., 2016) and summarization (Nallapati et al., 2016). In this approach, each attention distribution is divided by the sum of the previous ones, which effectively dampens repeated attention. We tried this method but found it too destructive, distorting the signal from the attention mechanism and reducing performance. We hypothesize that an early intervention method such as coverage is preferable to a post hoc method such as temporal attention: it is better to inform the attention mechanism to help it make better decisions than to override its decisions altogether. This theory is supported by the large boost that coverage gives our ROUGE scores (see Table 1), compared to the smaller boost given by temporal attention for the same task (Nallapati et al., 2016).
# 4 Dataset
We use the CNN/Daily Mail dataset (Hermann et al., 2015; Nallapati et al., 2016), which contains online news articles (781 tokens on average) paired with multi-sentence summaries (3.75 sentences or 56 tokens on average). We used scripts supplied by Nallapati et al. (2016) to obtain the same version of the data, which has 287,226 training pairs, 13,368 validation pairs and 11,490 test pairs. Both the dataset's published results (Nallapati et al., 2016, 2017) use the anonymized version of the data, which has been pre-processed to replace each named entity, e.g., The United Nations, with its own unique identifier for the example pair, e.g., @entity5. By contrast, we operate directly on the original text (or non-anonymized version of the data),2 which we believe is the favorable problem to solve because it requires no pre-processing.
# 5 Experiments
For all experiments, our model has 256-dimensional hidden states and 128-dimensional word embeddings. For the pointer-generator models, we use a vocabulary of 50k words for both source and target; note that due to the pointer network's ability to handle OOV words, we can use
2at www.github.com/abisee/pointer-generator
| Model | ROUGE-1 | ROUGE-2 | ROUGE-L | METEOR (exact match) | METEOR (+ stem/syn/para) |
|---|---|---|---|---|---|
| abstractive model (Nallapati et al., 2016)* | 35.46 | 13.30 | 32.65 | - | - |
| seq-to-seq + attn baseline (150k vocab) | 30.49 | 11.17 | 28.08 | 11.65 | 12.86 |
| seq-to-seq + attn baseline (50k vocab) | 31.33 | 11.81 | 28.83 | 12.03 | 13.20 |
| pointer-generator | 36.44 | 15.66 | 33.42 | 15.35 | 16.65 |
| pointer-generator + coverage | 39.53 | 17.28 | 36.38 | 17.32 | 18.72 |
| lead-3 baseline (ours) | 40.34 | 17.70 | 36.57 | 20.48 | 22.21 |
| lead-3 baseline (Nallapati et al., 2017)* | 39.2 | 15.7 | 35.5 | - | - |
| extractive model (Nallapati et al., 2017)* | 39.6 | 16.2 | 35.3 | - | - |
Table 1: ROUGE F1 and METEOR scores on the test set. Models and baselines in the top half are abstractive, while those in the bottom half are extractive. Those marked with * were trained and evaluated on the anonymized dataset, and so are not strictly comparable to our results on the original text. All our ROUGE scores have a 95% confidence interval of at most ±0.25 as reported by the official ROUGE script. The METEOR improvement from the 50k baseline to the pointer-generator model, and from the pointer-generator to the pointer-generator+coverage model, were both found to be statistically significant using an approximate randomization test with p < 0.01.
a smaller vocabulary size than Nallapati et al.'s (2016) 150k source and 60k target vocabularies. For the baseline model, we also try a larger vocabulary size of 150k.
Note that the pointer and the coverage mechanism introduce very few additional parameters to the network: for the models with vocabulary size 50k, the baseline model has 21,499,600 parameters, the pointer-generator adds 1153 extra parameters (wh*, ws, wx and bptr in equation 8), and coverage adds 512 extra parameters (wc in equation 11).
Unlike Nallapati et al. (2016), we do not pre-train the word embeddings; they are learned from scratch during training. We train using Adagrad (Duchi et al., 2011) with learning rate 0.15 and an initial accumulator value of 0.1. (This was found to work best out of Stochastic Gradient Descent, Adadelta, Momentum, Adam and RMSProp.) We use gradient clipping with a maximum gradient norm of 2, but do not use any form of regularization. We use loss on the validation set to implement early stopping.
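To make the optimization settings concrete, a hedged sketch of the training step described above, written with the modern TensorFlow 2 API (the original implementation predates it); only the hyperparameter values are taken from the text, everything else (model, loss function, data pipeline) is an assumption:

```python
import tensorflow as tf

# Hyperparameters quoted in the text; all other details are assumptions.
HIDDEN_DIM = 256
EMB_DIM = 128
VOCAB_SIZE = 50_000
MAX_GRAD_NORM = 2.0

optimizer = tf.keras.optimizers.Adagrad(learning_rate=0.15,
                                        initial_accumulator_value=0.1)

def train_step(model, batch, loss_fn):
    with tf.GradientTape() as tape:
        loss = loss_fn(model, batch)
    grads = tape.gradient(loss, model.trainable_variables)
    # global-norm gradient clipping with a maximum norm of 2
    grads, _ = tf.clip_by_global_norm(grads, MAX_GRAD_NORM)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```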
During training and at test time we truncate the article to 400 tokens and limit the length of the summary to 100 tokens for training and 120 tokens at test time.3 This is done to expedite training and testing, but we also found that truncating the article can raise the performance of the model
3 The upper limit of 120 is mostly invisible: the beam search algorithm is self-stopping and almost never reaches the 120th step.
(see section 7.1 for more details). For training, we found it efficient to start with highly-truncated sequences, then raise the maximum length once converged. We train on a single Tesla K40m GPU with a batch size of 16. At test time our summaries are produced using beam search with beam size 4.
We trained both our baseline models for about 600,000 iterations (33 epochs); this is similar to the 35 epochs required by Nallapati et al.'s (2016) best model. Training took 4 days and 14 hours for the 50k vocabulary model, and 8 days 21 hours for the 150k vocabulary model. We found the pointer-generator model quicker to train, requiring less than 230,000 training iterations (12.8 epochs); a total of 3 days and 4 hours. In particular, the pointer-generator model makes much quicker progress in the early phases of training. To obtain our final coverage model, we added the coverage mechanism with coverage loss weighted by λ = 1 (as described in equation 13), and trained for a further 3000 iterations (about 2 hours). In this time the coverage loss converged to about 0.2, down from an initial value of about 0.5. We also tried a more aggressive value of λ = 2; this reduced coverage loss but increased the primary loss function, thus we did not use it.
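Sketched in Python, the two-phase schedule described above might look as follows; the model interface (train_step, use_coverage) is hypothetical and only the phase lengths and λ value come from the text:

```python
def train_pointer_generator(model, batches, main_steps, coverage_steps=3000, lam=1.0):
    """Two-phase schedule sketched from the text (illustrative assumptions only):
    train with the primary loss, then fine-tune briefly with the coverage term."""
    batch_iter = iter(batches)
    for _ in range(main_steps):
        model.train_step(next(batch_iter), use_coverage=False)          # NLL loss only
    for _ in range(coverage_steps):
        model.train_step(next(batch_iter), use_coverage=True, lam=lam)  # adds eq. (13) term
    return model
```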
We tried training the coverage model without the loss function, hoping that the attention mechanism may learn by itself not to attend repeatedly to the same locations, but we found this to be ineffective, with no discernible reduction in repetition. We also tried training with coverage from the first iteration rather than as a separate training phase, but found that in the early phase of training, the coverage objective interfered with the main objective, reducing overall performance.
# 6 Results
# 6.1 Preliminaries
Our results are given in Table 1. We evaluate our models with the standard ROUGE metric (Lin, 2004b), reporting the F1 scores for ROUGE-1, ROUGE-2 and ROUGE-L (which respectively measure the word-overlap, bigram-overlap, and longest common subsequence between the reference summary and the summary to be evaluated). We obtain our ROUGE scores using the pyrouge package.4 We also evaluate with the METEOR metric (Denkowski and Lavie, 2014), both in exact match mode (rewarding only exact matches between words) and full mode (which additionally rewards matching stems, synonyms and paraphrases).5
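For concreteness, the snippet below is a simplified, from-scratch illustration of what the ROUGE-N F1 scores measure (clipped n-gram overlap between a candidate and a reference). The scores reported in Table 1 come from the official ROUGE script via the pyrouge package, not from this code.

```python
from collections import Counter

def rouge_n_f1(reference, candidate, n=1):
    """Simplified ROUGE-N F1 (word-overlap for n=1, bigram-overlap for n=2)."""
    def ngrams(tokens, n):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    ref, cand = ngrams(reference.split(), n), ngrams(candidate.split(), n)
    overlap = sum((ref & cand).values())        # clipped n-gram matches
    if not ref or not cand or overlap == 0:
        return 0.0
    recall = overlap / sum(ref.values())
    precision = overlap / sum(cand.values())
    return 2 * precision * recall / (precision + recall)
```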
In addition to our own models, we also report the lead-3 baseline (which uses the first three sentences of the article as a summary), and compare to the only existing abstractive (Nallapati et al., 2016) and extractive (Nallapati et al., 2017) models on the full dataset. The output of our models is available online.6
Given that we generate plain-text summaries but Nallapati et al. (2016; 2017) generate anonymized summaries (see Section 4), our ROUGE scores are not strictly comparable. There is evidence to suggest that the original-text dataset may result in higher ROUGE scores in general than the anonymized dataset: the lead-3 baseline is higher on the former than the latter. One possible explanation is that multi-word named entities lead to a higher rate of n-gram overlap. Unfortunately, ROUGE is the only available means of comparison with Nallapati et al.'s work. Nevertheless, given that the disparity in the lead-3 scores is (+1.1 ROUGE-1, +2.0 ROUGE-2, +1.1 ROUGE-L) points respectively, and our best model scores exceed Nallapati et al. (2016) by (+4.07 ROUGE-1, +3.98 ROUGE-2, +3.73 ROUGE-L) points, we may estimate that we outperform the only previous abstractive system by at least 2 ROUGE points all-round.
4 pypi.python.org/pypi/pyrouge/0.1.3
5 www.cs.cmu.edu/~alavie/METEOR
6 www.github.com/abisee/pointer-generator
[Figure 4 plot: percentage of duplicate 1-grams, 2-grams, 3-grams, 4-grams and whole sentences ("% that are duplicates"), shown for the pointer-generator without coverage, the pointer-generator with coverage, and the reference summaries.]
Figure 4: Coverage eliminates undesirable repetition. Summaries from our non-coverage model contain many duplicated n-grams while our coverage model produces a similar number as the reference summaries.
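The precise counting convention behind Figure 4 is not spelled out in the text, so the following is one plausible reading (the fraction of a summary's n-grams that repeat an earlier n-gram in the same summary), offered purely as an illustration:

```python
from collections import Counter

def pct_duplicate_ngrams(summary_tokens, n):
    """Percentage of n-grams in a summary that are repeats of an earlier n-gram.

    Illustrative only; the exact convention used for Figure 4 is an assumption.
    """
    grams = [tuple(summary_tokens[i:i + n]) for i in range(len(summary_tokens) - n + 1)]
    if not grams:
        return 0.0
    counts = Counter(grams)
    duplicated = sum(c - 1 for c in counts.values())   # occurrences beyond the first
    return 100.0 * duplicated / len(grams)
```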
# 6.2 Observations
We find that both our baseline models perform poorly with respect to ROUGE and METEOR, and in fact the larger vocabulary size (150k) does not seem to help. Even the better-performing baseline (with 50k vocabulary) produces summaries with several common problems. Factual details are frequently reproduced incorrectly, often replacing an uncommon (but in-vocabulary) word with a more common alternative. For example in Figure 1, the baseline model appears to struggle with the rare word thwart, producing destabilize instead, which leads to the fabricated phrase destabilize nigeria's economy. Even more catastrophically, the summaries sometimes devolve into repetitive nonsense, such as the third sentence produced by the baseline model in Figure 1. In addition, the baseline model can't reproduce out-of-vocabulary words (such as muhammadu buhari in Figure 1). Further examples of all these problems are provided in the supplementary material.
Our pointer-generator model achieves much better ROUGE and METEOR scores than the baseline, despite many fewer training epochs. The difference in the summaries is also marked: out-of-vocabulary words are handled easily, factual details are almost always copied correctly, and there are no fabrications (see Figure 1). However, repetition is still very common.
Our pointer-generator model with coverage improves the ROUGE and METEOR scores further, convincingly surpassing the best abstractive model
Article: smugglers lure arab and african migrants by offering discounts to get onto overcrowded ships if people bring more potential passengers, a cnn investigation has revealed. (...) Summary: cnn investigation uncovers the business inside a human smuggling ring.
Article: eyewitness video showing white north charleston police officer michael slager shooting to death an unarmed black man has exposed discrepancies in the reports of the first officers on the scene. (...) Summary: more questions than answers emerge in controversial s.c. police shooting.
Figure 5: Examples of highly abstractive reference summaries (bold denotes novel words).
of Nallapati et al. (2016) by several ROUGE points. Despite the brevity of the coverage training phase (about 1% of the total training time), the repetition problem is almost completely eliminated, which can be seen both qualitatively (Figure 1) and quantitatively (Figure 4). However, our best model does not quite surpass the ROUGE scores of the lead-3 baseline, nor the current best extractive model (Nallapati et al., 2017). We discuss this issue in section 7.1.
# 7 Discussion
# 7.1 Comparison with extractive systems
It is clear from Table 1 that extractive systems tend to achieve higher ROUGE scores than abstractive, and that the extractive lead-3 baseline is extremely strong (even the best extractive system beats it by only a small margin). We offer two possible explanations for these observations.
Firstly, news articles tend to be structured with the most important information at the start; this partially explains the strength of the lead-3 baseline. Indeed, we found that using only the first 400 tokens (about 20 sentences) of the article yielded significantly higher ROUGE scores than using the first 800 tokens.
Secondly, the nature of the task and the ROUGE metric make extractive approaches and the lead-3 baseline difficult to beat. The choice of content for the reference summaries is quite subjective: sometimes the sentences form a self-contained summary; other times they simply showcase a few interesting details from the article. Given that the articles contain 39 sentences on average, there are many equally valid ways to choose 3 or 4 highlights in this style. Abstraction introduces even more options (choice of phrasing), further decreasing the likelihood of matching the reference summary. For example, smugglers profit from desperate migrants is a valid alternative abstractive summary for the first example in Figure 5, but it scores 0 ROUGE with respect to the reference summary. This inflexibility of ROUGE is exacerbated by only having one reference summary, which has been shown to lower ROUGE's reliability compared to multiple reference summaries (Lin, 2004a).
Due to the subjectivity of the task and thus the diversity of valid summaries, it seems that ROUGE rewards safe strategies such as selecting the first-appearing content, or preserving original phrasing. While the reference summaries do sometimes deviate from these techniques, those deviations are unpredictable enough that the safer strategy obtains higher ROUGE scores on average. This may explain why extractive systems tend to obtain higher ROUGE scores than abstractive, and even extractive systems do not significantly exceed the lead-3 baseline.
To explore this issue further, we evaluated our systems with the METEOR metric, which rewards not only exact word matches, but also matching stems, synonyms and paraphrases (from a predefined list). We observe that all our models receive over 1 METEOR point boost by the inclusion of stem, synonym and paraphrase matching, indicating that they may be performing some abstraction. However, we again observe that the lead-3 baseline is not surpassed by our models. It may be that news article style makes the lead-3 baseline very strong with respect to any metric. We believe that investigating this issue further is an important direction for future work.
# 7.2 How abstractive is our model?
We have shown that our pointer mechanism makes our abstractive system more reliable, copying factual details correctly more often. But does the ease of copying make our system any less abstractive? Figure 6 shows that our final model's summaries contain a much lower rate of novel n-grams (i.e., those that don't appear in the article) than the reference summaries, indicating a lower degree of abstraction. Note that the baseline model produces novel n-grams more frequently; however, this statistic includes all the incorrectly copied words, UNK tokens and fabrications alongside the good instances of abstraction.
[Figure 6 plot: percentage of novel 1-grams, 2-grams, 3-grams, 4-grams and whole sentences ("% that are novel"), shown for the pointer-generator with coverage, the sequence-to-sequence + attention baseline, and the reference summaries.]
Figure 6: Although our best model is abstractive, it does not produce novel n-grams (i.e., n-grams that don't appear in the source text) as often as the reference summaries. The baseline model produces more novel n-grams, but many of these are erroneous (see section 7.2).
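Analogously to the duplicate statistic above, here is a hedged sketch of how the novel-n-gram rate in Figure 6 could be measured, taking "novel" to mean a summary n-gram that never occurs in the article; the exact convention used for the figure is an assumption:

```python
def pct_novel_ngrams(article_tokens, summary_tokens, n):
    """Percentage of summary n-grams that do not appear anywhere in the article.

    One plausible reading of the statistic in Figure 6; illustrative only.
    """
    def ngrams(tokens):
        return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

    article_set = set(ngrams(article_tokens))
    summary_grams = ngrams(summary_tokens)
    if not summary_grams:
        return 0.0
    novel = sum(g not in article_set for g in summary_grams)
    return 100.0 * novel / len(summary_grams)
```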
Article: andy murray (...) is into the semi-finals of the miami open, but not before getting a scare from 21 year-old austrian dominic thiem, who pushed him to 4-4 in the second set before going down 3-6 6-4, 6-1 in an hour and three quarters. (...) Summary: andy murray defeated dominic thiem 3-6 6-4, 6-1 in an hour and three quarters.
Article: (...) wayne rooney smashes home during manchester united's 3-1 win over aston villa on saturday. (...) Summary: manchester united beat aston villa 3-1 at old trafford on saturday.
Figure 7: Examples of abstractive summaries produced by our model (bold denotes novel words).
In particular, Figure 6 shows that our final model copies whole article sentences 35% of the time; by comparison the reference summaries do so only 1.3% of the time. This is a main area for improvement, as we would like our model to move beyond simple sentence extraction. However, we observe that the other 65% encompasses a range of abstractive techniques. Article sentences are truncated to form grammatically correct shorter versions, and new sentences are composed by stitching together fragments. Unnecessary interjections, clauses and parenthesized phrases are sometimes omitted from copied passages. Some of these abilities are demonstrated in Figure 1, and the supplementary material contains more examples.
Figure 7 shows two examples of more impressive abstraction, both with similar structure. The dataset contains many sports stories whose summaries follow the X beat Y (score) on (day) template, which may explain why our model is most confidently abstractive on these examples. In general, however, our model does not routinely produce summaries like those in Figure 7, and is not close to producing summaries like in Figure 5.
The value of the generation probability pgen also gives a measure of the abstractiveness of our model. During training, pgen starts with a value of about 0.30 then increases, converging to about 0.53 by the end of training. This indicates that the model first learns to mostly copy, then learns to generate about half the time. However at test time, pgen is heavily skewed towards copying, with a mean value of 0.17. The disparity is likely due to the fact that during training, the model receives word-by-word supervision in the form of the reference summary, but at test time it does not. Nonetheless, the generator module is useful even when the model is copying. We find that pgen is highest at times of uncertainty such as the beginning of sentences, the join between stitched-together fragments, and when producing periods that truncate a copied sentence. Our mixture model allows the network to copy while simultaneously consulting the language model, enabling operations like stitching and truncation to be performed with grammaticality. In any case, encouraging the pointer-generator model to write more abstractively, while retaining the accuracy advantages of the pointer module, is an exciting direction for future work.
# 8 Conclusion
In this work we presented a hybrid pointer-generator architecture with coverage, and showed that it reduces inaccuracies and repetition. We applied our model to a new and challenging long-text dataset, and significantly outperformed the abstractive state-of-the-art result. Our model exhibits many abstractive abilities, but attaining higher levels of abstraction remains an open research question.
# 9 Acknowledgment
We thank the ACL reviewers for their helpful comments. This work was begun while the first author was an intern at Google Brain and continued at Stanford. Stanford University gratefully acknowledges the support of the DARPA DEFT Program AFRL contract no. FA8750-13-2-0040. Any opinions in this material are those of the authors alone.
# References
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In International Conference on Learning Representations.
Qian Chen, Xiaodan Zhu, Zhenhua Ling, Si Wei, and Hui Jiang. 2016. Distraction-based neural networks for modeling documents. In International Joint Conference on Artificial Intelligence.
Jackie Chi Kit Cheung and Gerald Penn. 2014. Unsupervised sentence enhancement for automatic summarization. In Empirical Methods in Natural Language Processing.
Sumit Chopra, Michael Auli, and Alexander M Rush. 2016. Abstractive sentence summarization with attentive recurrent neural networks. In North American Chapter of the Association for Computational Linguistics.
Michael Denkowski and Alon Lavie. 2014. Meteor universal: Language specific translation evaluation for any target language. In EACL 2014 Workshop on Statistical Machine Translation.
John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research 12:2121-2159.
Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O.K. Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In Association for Computational Linguistics.
Caglar Gulcehre, Sungjin Ahn, Ramesh Nallapati, Bowen Zhou, and Yoshua Bengio. 2016. Pointing the unknown words. In Association for Computational Linguistics.
Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Neural Information Processing Systems.
Hongyan Jing. 2000. Sentence reduction for automatic text summarization. In Applied natural language processing.
Philipp Koehn. 2009. Statistical machine translation. Cambridge University Press.
Julian Kupiec, Jan Pedersen, and Francine Chen. 1995. A trainable document summarizer. In International ACM SIGIR conference on Research and development in information retrieval.
Chin-Yew Lin. 2004a. Looking for a few good metrics: Automatic summarization evaluation - how many samples are enough? In NACSIS/NII Test Collection for Information Retrieval (NTCIR) Workshop.
Chin-Yew Lin. 2004b. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out: ACL workshop.
Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2016. Pointer sentinel mixture models. In NIPS 2016 Workshop on Multi-class and Multi-label Learning in Extremely Large Label Spaces.
Haitao Mi, Baskaran Sankaran, Zhiguo Wang, and Abe Ittycheriah. 2016. Coverage embedding models for neural machine translation. In Empirical Methods in Natural Language Processing.
Yishu Miao and Phil Blunsom. 2016. Language as a latent variable: Discrete generative models for sentence compression. In Empirical Methods in Natural Language Processing.
Ramesh Nallapati, Feifei Zhai, and Bowen Zhou. 2017. SummaRuNNer: A recurrent neural network based sequence model for extractive summarization of documents. In Association for the Advancement of Artificial Intelligence.
Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, Caglar Gulcehre, and Bing Xiang. 2016. Abstractive text summarization using sequence-to-sequence RNNs and beyond. In Computational Natural Language Learning.
Chris D Paice. 1990. Constructing literature abstracts by computer: techniques and prospects. Information Processing & Management 26(1):171-186.
Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2016. Sequence level training with recurrent neural networks. In International Conference on Learning Representations.
Alexander M Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. In Empirical Methods in Natural Language Processing.
Horacio Saggion and Thierry Poibeau. 2013. Automatic text summarization: Past, present and future. In Multi-source, Multilingual Information Extraction and Summarization, Springer, pages 3-21.
Baskaran Sankaran, Haitao Mi, Yaser Al-Onaizan, and Abe Ittycheriah. 2016. Temporal attention model for neural machine translation. arXiv preprint arXiv:1608.02927.
Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Neural Information Processing Systems.
Jun Suzuki and Masaaki Nagata. 2016. RNN-based encoder-decoder approach with word frequency estimation. arXiv preprint arXiv:1701.00138.
Sho Takase, Jun Suzuki, Naoaki Okazaki, Tsutomu Hirao, and Masaaki Nagata. 2016. Neural headline generation on abstract meaning representation. In Empirical Methods in Natural Language Processing.
Zhaopeng Tu, Zhengdong Lu, Yang Liu, Xiaohua Liu, and Hang Li. 2016. Modeling coverage for neural machine translation. In Association for Computational Linguistics.
Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In Neural Information Processing Systems.
Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron C Courville, Ruslan Salakhutdinov, Richard S Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In International Conference on Machine Learning.
Wenyuan Zeng, Wenjie Luo, Sanja Fidler, and Raquel Urtasun. 2016. Efficient summarization with read-again and copy mechanism. arXiv preprint arXiv:1611.03382.
# Supplementary Material
This appendix provides examples from the test set, with side-by-side comparisons of the reference summaries and the summaries produced by our models. In each example:
⢠italics denote out-of-vocabulary words
⢠red denotes factual errors in the summaries
⢠green shading intensity represents the value of the generation probability pgen
⢠yellow shading intensity represents ï¬nal value of the coverage vector at the end of ï¬nal modelâs summarization process.
Article (truncated): andy murray came close to giving himself some extra preparation time for
his wedding next week before ensuring that he still has unï¬nished tennis business to attend to . the world no 4 is into the semi-ï¬nals of the miami open , but not before getting a scare from 21 year-old austrian dominic thiem , who pushed him to 4-4 in the second set before going down 3-6 6-4 , 6-1 in an hour and three quarters . murray was awaiting the winner from the last eight match between tomas berdych and argentina âs juan monaco . prior to this tournament thiem lost in the second round of a challenger event to soon-to-be new brit aljaz bedene . andy murray pumps his ï¬rst after defeating dominic thiem to reach the miami open semi ï¬nals . muray throws his sweatband into the crowd after completing a 3-6 , 6-4 , 6-1 victory in ï¬orida . murray shakes hands with thiem who he described as a â strong guy â after the game . and murray has a fairly simple message for any of his fellow british tennis players who might be agitated about his imminent arrival into the home ranks : do nât complain . instead the british no 1 believes his colleagues should use the assimilation of the world number 83 , originally from slovenia , as motivation to better themselves .
Reference Summary: british no 1 defeated dominic thiem in miami open quarter ï¬nals . andy murray celebrated his 500th career win in the previous round . third seed will play the winner of tomas berdych and juan monaco in the semi ï¬nals of the atp masters 1000 event in key biscayne .
Baseline: andy murray beat UNK bedene 6-3 , 6-4 , 6-1 in an hour and three quarters . british no 1 believes his colleagues should use the maze of the world number 83 , originally from slovenia , as motivation to better themselves .
Pointer-Generator, No Coverage: andy murray came close to giving himself some extra preparation for his wedding next week before getting a scare from 21 year-old austrian dominic thiem . the world no 4 is into the semi-ï¬nals of the miami open , but not before getting a scare from 21 year-old austrian dominic thiem .
Pointer-Generator, With Coverage: andy murray defeated dominic thiem 3-6 6-4 , 6-1 in an hour and three quarters .
murray was awaiting the winner from the last eight match between tomas berdych and argentina âs juan monaco .
prior to this tournament thiem lost in the second round of a challenger event to soon-to-be new brit aljaz bedene .
Figure 8: The baseline model reports the wrong score 6-3, substitutes bedene for thiem and struggles with the uncommon word assimilation. The pointer-network models accurately reproduce the out-of-vocabulary words thiem and aljaz. Note that the final model produces the novel word defeated to incorporate several fragments into a single sentence.
Article (truncated): munster have signed new zealand international francis saili on a two-year deal . utility back saili , who made his all blacks debut against argentina in 2013 , will move to the province later this year after the completion of his 2015 contractual commitments . the 24-year-old currently plays for auckland-based super rugby side the blues and was part of the new zealand under-20 side that won the junior world championship in italy in 2011 . saili âs signature is something of a coup for munster and head coach anthony foley believes he will be a great addition to their backline . francis saili has signed a two-year deal to join munster and will link up with them later this year . â we are really pleased that francis has committed his future to the province , â foley told munster âs ofï¬cial website . â he is a talented centre with an impressive skill-set and he possesses the physical attributes to excel in the northern hemisphere . â i believe he will be a great addition to our backline and we look forward to welcoming him to munster . â saili has been capped twice by new zealand and was part of the under 20 side that won the junior championship in 2011 .
Reference Summary: utility back francis saili will join up with munster later this year . the new zealand international has signed a two-year contract . saili made his debut for the all blacks against argentina in 2013 .
Baseline: dutch international francis UNK has signed a two-year deal to join irish UNK super rugby side the blues . UNK âs signature is something of a coup for munster and his head coach anthony foley believes he will be a great addition to their respective prospects . UNK has been capped twice by new zealand .
Pointer-Generator, No Coverage: new zealand international francis saili will move to the province later this year . utility back saili made his all blacks debut against argentina in 2013 . utility back saili will move to the province later this year .
Pointer-Generator, With Coverage: francis saili has signed a two-year deal to join munster later this year .
the 24-year-old was part of the new zealand under-20 side that won the junior world
championship in italy in 2011 . saili âs signature is something of a coup for munster and head coach anthony foley .
Figure 9: The baseline model incorrectly substitutes dutch for new zealand (perhaps reflecting the European bias of the dataset), fabricates irish, and struggles with out-of-vocabulary words saili and auckland-based. Though it is not clear why, the phrase addition to our backline is changed to the nonsensical addition to their respective prospects. The pointer-generator model fixes these accuracy problems, and the addition of coverage fixes the repetition problem. Note that the final model skips over large passages of text to produce shorter sentences.
Article (truncated): right from the moment he breezed through the doors at old trafford , louis
van gaal was a man with a plan . the ï¬rst season , he stated , would see him deliver manchester united back into their rightful place in the champions league . he would restore them to the premier league top four but loftier aims of silverware would have to wait . his three-year vision would allow for such thoughts but , ï¬rst things ï¬rst , united needed to be dining from european football âs top table again . louis van gaal is close to delivering his ï¬rst-season aim of returning man united into champions league . wayne rooney smashes home during manchester united âs 3-1 win over aston villa on saturday . united âs win over aston villa took them third , eight points ahead of ï¬fth-placed liverpool in the table . april 12 manchester city ( h ) . april 18 chelsea ( a ) . april 26 everton ( a ) . may 2 west bromwich albion ( h ) . may 9 crystal palace ( a ) . may 17 arsenal ( h ) . may 24 hull city ( a ) . one season out of the champions league was far from ideal , but two seasons would be an absolute disaster and something , he understood , that would not be tolerated . in november , even that was looking optimistic . a 1-0 defeat to manchester city meant that united had taken just 13 points from their opening 10 matches - it was their worst start to a league campaign since 1986 , when ron atkinson only lasted until november .
Reference Summary: man united have an eight-point cushion from ï¬fth-place liverpool . van gaal looks likely to deliver on his promise of top four ï¬nish . but the dutchman has a three-year vision mapped out . next season will have to see united mount sustained challenge for title . they must also reach the later stages of the champions league .
Baseline: manchester united beat aston villa 3-1 at old trafford on saturday . louis van gaal is close to delivering his UNK aim of returning man united into the premier league top four . louis van gaal is close to delivering his UNK aim of returning man united into champions league .
Pointer-Generator, No Coverage: louis van gaal is close to delivering his ï¬rst-season aim of returning man united into champions league. united âs win over aston villa took them third , eight points ahead of ï¬fth-placed liverpool in the table . louis van gaal is close to delivering his ï¬rst-season aim of returning man united into champions league.
Pointer-Generator, With Coverage: manchester united beat aston villa 3-1 at old trafford on saturday . louis van gaal is close to delivering his ï¬rst-season aim of returning man united into champions league . united needed to be dining from european football âs top table again .
Figure 10: In this example, both our baseline model and final model produce a completely abstractive first sentence, using a novel word beat.
Article (truncated): having been on the receiving end of a 6-1 thumping , a defeat like that
could be justiï¬ably met with a backlash by angry supporters . watching a 3-1 ï¬rst leg aggregate advantage turn into a 7-4 deï¬cit come the end of the reverse encounter too could send many fans apoplectic at the capitulation of their side . however that does nât appear the case for those devoted to porto . porto supporters gave their team a hero âs welcome following their 6-1 defeat at bayern munich on tuesday . porto star striker jackson martinez was one of many players to look perplexed by their warm reception . porto boss julen lopetegui ( left ) was hugged by fans congratulating him on their champions league run . police escorts were needed to keep the delirious supporters at bay as the porto team bus drove past . the team bus was met with a cacophony of noise from porto supporters proudly chanting about their club . on their return from a humiliating champions league quarter-ï¬nal loss at the hands of bayern munich on tuesday night , the squad were given a heroes reception as they arrived back in portugal . in the early hours of wednesday morning , fans mobbed the squad congratulating them on their run in the tournament . star striker jackson martinez and ricardo quaresma were one of many porto players who looked perplexed as they were hugged by fans before they making their way on to the team bus - set upon a cacophony of ï¬ercely-proud chanting . it was the ï¬rst time that porto , who had been unbeaten in this season âs tournament up until tuesday night , had reached the quarter-ï¬nals of the champions league since the 2008-09 season .
Reference Summary: bayern munich beat porto 6-1 in their champions league tie on tuesday . result saw bayern win quarter-ï¬nal encounter 7-4 on aggregate . it was the ï¬rst-time porto had reached that stage since the 2008-09 season .
Baseline: porto beat bayern munich 2-0 in the champions league on tuesday night . porto star james UNK was one of many players involved in the match . the squad were given a trophy as they arrived back in portugal .
Pointer-Generator, No Coverage: porto star striker jackson martinez was one of many players to look perplexed by their warm reception. porto star striker jackson martinez was one of many players to look perplexed by their warm reception. porto star striker jackson martinez was one of many players to look perplexed by their warm reception.
Pointer-Generator, With Coverage: porto star striker jackson martinez was one of many players to look perplexed by their warm
reception .
porto boss julen lopetegui was hugged by fans congratulating him on their champions league run .
however that does nât appear the case for those devoted to porto .
police escorts were needed to keep the delirious supporters at bay .
Figure 11: The baseline model makes several factual inaccuracies: it claims porto beat bayern munich not vice versa, the score is changed from 7-4 to 2-0, jackson is changed to james and a heroes reception is replaced with a trophy. Our final model produces sentences that are individually accurate, but they do not make sense as a whole. Note that the final model omits the parenthesized phrase ( left ) from its second sentence.
Article: ( cnn ) â it âs showtime ! â michael keaton paid homage â ever so slightly â to his roles in â beetlejuice â and â batman â in his third turn hosting â saturday night live â this weekend . keaton acknowledged in his opening monologue that a lot has changed since he ï¬rst hosted the comedy sketch show in 1982 . â i had a baby â he âs 31 . i also have a new girlfriend â she âs 28 , â he said . fans who were hoping for a full-blown revival of keaton âs most memorable characters might have been a little disappointed . snl cast members taran killam and bobby moynihan begged the actor with a song to â play â batman and beetlejuice with them . all they got in response were a couple of one-liners . overall , keaton âs performance drew high marks from viewers and critics for its â deadpan â manner and â unpredictable weirdness , â in the words of a.v . club âs dennis perkins . fans also delighted in a cameo from â walking dead â star norman reedus during weekend update . keaton scored some laughs from the audience as an ad executive who âs not very good at his job , a confused grandfather and a high school teacher who gets asked to the prom in a riff on the romantic comedy â she âs all that . â other crowd-pleasing spots included a scientology parody music video and a news conference spooï¬ng the ncaa student-athlete debate . the show also poked fun at cnn with cast member cecily strong playing anchor brooke baldwin . baldwin said on twitter that she âs taking it as a â crazy compliment â and shared a clip from the skit .
Reference Summary: michael keaton hosted â saturday night live â for the ï¬rst time in 1982 . in 2015 , his nods to starring roles in â beetlejuice â and â batman â are brief .
Baseline: â snl â wins â saturday night live â . keaton acknowledged in his opening â saturday night live â . keaton acknowledged in his opening night âs â UNK â . the show also pokes fun at cnn with cast member mariah carey .
Pointer-Generator, No Coverage: â beetlejuice â michael keaton paid homage to his roles in â beetlejuice â and â batman â in his third turn hosting â saturday night live â this weekend . keaton acknowledged in his opening monologue that a lot has changed since he ï¬rst hosted the comedy sketch show in 1982 .
Pointer-Generator, With Coverage: michael keaton paid homage to his roles in â beetlejuice â and â batman â in his third turn
hosting â saturday night live â .
keaton acknowledged in his opening monologue that a lot has changed since he ï¬rst hosted the comedy sketch show in 1982 .
Figure 12: Baseline model replaces cecily strong with mariah carey, and produces generally nonsensical output. The baseline model may be struggling with the out-of-vocabulary word beetlejuice, or perhaps the unusual non-news format of the article. Note that the final model omits "ever so slightly" from its first sentence.
Article (truncated): they are supposed to be the dream team who can solve the conundrum of how to win the election . but david cameron and boris johnson were left scratching their heads today as they struggled with a children âs jigsaw teaching toddlers about the seasons . as the london mayor tried to hammer ill-ï¬tting pieces together with his hands , the prime minister tried out what could be a new election slogan , telling him : â if in doubt , wedge it in . â after being put right by a four-year-old who spotted their errors , the pair had more fun ï¬nger painting with tory blue paint . david cameron and boris johnson were left stumped by the puzzle at advantage children âs day nursery in surbiton , as three-year-old stephanie looked on . when they tried to put the puzzle back together , they hit trouble after it proved to be more difï¬cult than expected . the conservative duo made their ï¬rst appearance together on the campaign trail with a visit to advantage day nursery in surbiton , south west london . they were supposed to be highlighting tory plans to double free childcare for 600,000 children aged three and four . they sat at a table with joshua and stephanie , both aged three , and four-year-olds hamish and leo where they broke up a puzzle which depicted the four seasons . when they tried to put it back together , they hit trouble after it proved to be more difï¬cult than expected .
Reference Summary: david cameron and boris johnson visit nursery to promote childcare . duo left ï¬ummoxed by jigsaw aimed at teaching children about seasons . london mayor resorted to brute force to hammer wrong pieces together . prime minister gave his approval , telling boris : â if in doubt , wedge it in â .
Baseline: david cameron and boris johnson were left mystiï¬ed by the puzzle . they were at the nursery in UNK to promote a tory pledge to double free childcare for 25 children aged three and four . the pair were at the nursery in UNK to promote a tory pledge to double free childcare for children aged three and four .
Pointer-Generator, No Coverage: london mayor tried to hammer ill-ï¬tting pieces together with his hands . they were supposed to be highlighting tory plans to double free childcare . they were supposed to be highlighting tory plans to double free childcare . they were supposed to be highlighting tory plans to double free childcare .
Pointer-Generator, With Coverage: london mayor tried to hammer ill-ï¬tting pieces together with his hands .
prime minister tried out what could be a new election slogan .
but they hit trouble after it proved to be more difï¬cult than expected .
Figure 13: The baseline model appropriately replaces stumped with the novel word mystified. However, the reference summary chooses flummoxed (also novel) so the choice of mystified is not rewarded by the ROUGE metric. The baseline model also incorrectly substitutes 600,000 for 25. In the final model's output we observe that the generation probability is largest at the beginning of sentences (especially the first verb) and on periods.
Article (truncated): muhammadu buhari lagos , nigeria ( cnn ) a day after winning nigeria âs presidency , that he plans to aggressively ï¬ght corruption that has long plagued nigeria and go after the root of the nation âs unrest . buhari told cnn âs christiane amanpour said he âll â rapidly give attention â to curbing violence in the northeast part of nigeria , where the terrorist group boko haram operates . by cooperating with neighboring nations chad , cameroon and niger , he said his administration is conï¬dent it will be able to thwart criminals and others contributing to nigeria âs instability . for the ï¬rst time in nigeria âs history , the opposition defeated the ruling party in democratic elections . buhari defeated incumbent goodluck jonathan by about 2 million votes , according to nigeria âs independent national electoral commission . the win comes after a long history of military rule , coups and botched attempts at democracy in africa âs most populous nation .
Reference Summary: muhammadu buhari tells cnn âs christiane amanpour that he will ï¬ght corruption in nigeria . nigeria is the most populous country in africa and is grappling with violent boko haram extremists . nigeria is also africa âs biggest economy , but up to 70 % of nigerians live on less than a dollar a day .
Baseline: UNK UNK says his administration is conï¬dent it will be able to destabilize nigeria âs economy . UNK says his administration is conï¬dent it will be able to thwart criminals and other nigerians . he says the country has long nigeria and nigeria âs economy .
Pointer-Generator, No Coverage: muhammadu buhari says he plans to aggressively ï¬ght corruption in the northeast part of nigeria . he says he âll â rapidly give attention â to curbing violence in the northeast part of nigeria . he says his administration is conï¬dent it will be able to thwart criminals .
Pointer-Generator, With Coverage: muhammadu buhari says he plans to aggressively ï¬ght corruption that has long plagued nigeria
.
he says his administration is conï¬dent it will be able to thwart criminals . the win comes after a long history of military rule , coups and botched attempts at democracy in africa âs most populous nation .
Figure 14: The baseline model incorrectly changes thwart criminals and others contributing to nigeria's instability to destabilize nigeria's economy, which has a mostly opposite meaning. It also produces a nonsensical sentence. Note that our final model produces the novel word says to paraphrase told cnn's christiane amanpour.
Article: cairo ( cnn ) at least 12 people were killed sunday , and more injured , in separate
attacks on a police station , a checkpoint and along a highway in egypt âs northern sinai authorities said . six people , including one civilian , were killed when a car bomb exploded near the police station in al-arish , capital of north sinai , health ministry spokesman hossam abdel-ghafar told ahram online . he said 40 people were injured . ansar beit al-maqdis , an isis afï¬liate , claimed responsibility for the attack , which came hours after another operation that the group also claimed . in that earlier attack , a ï¬rst lieutenant , a sergeant and four conscripts were killed when their armored vehicle was attacked on the highway from al-arish to sheikh zuweid in northern sinai , the military said . two other soldiers were injured and taken to a military hospital . ansar beit al-maqdis has claimed many attacks against the army and police in sinai . a third attack sunday on a checkpoint in rafah left three security personnel injured , after unknown assailants opened ï¬re at them , according to state media . the attacks come as the military announced a reshufï¬e of several senior military positions , state media reported . among those being replaced are the generals in charge of military intelligence and egypt âs second ï¬eld army , which is spearheading the battle against the insurgents in the northern sinai . egypt âs army has been ï¬ghting a decade-long militant islamist insurgency , which has spiked since the ouster of muslim brotherhood president mohamed morsy in the summer of 2013 . hundreds of police and soldiers , as well as civilians , have been killed in militant attacks in the past months . ian lee reported from cairo . anas hamdan reported from atlanta . ,
Reference Summary: six people , including one civilian , are killed when a car bomb explodes near a police station . six others are killed when their armored vehicle is attacked on a highway in northern sinai . ansar beit al-maqdis , an isis afï¬liate , claims responsibility .
Baseline: he says the attacks come after another operation that killed 12 . the attacks come as military announced a u.n. peacekeeping force in northern sinai .
Pointer-Generator, No Coverage: ansar beit al-maqdis , an isis afï¬liate , claimed responsibility for the attack . ansar beit al-maqdis , an isis afï¬liate , claimed responsibility for the attack . the attacks come as the military announced a reshufï¬e of several senior military positions .
Pointer-Generator, With Coverage: six people ,
including one civilian , were killed when a car bomb explodes near the police
station . ansar beit al-maqdis , an isis afï¬liate , claimed responsibility for the attack .
egypt âs army has been ï¬ghting a decade-long militant islamist insurgency .
Figure 15: The baseline model fabricates a completely false detail about a u.n. peacekeeping force that is not mentioned in the article. This is most likely inspired by a connection between U.N. peacekeeping forces and northern sinai in the training data. The pointer-generator model is more accurate, correctly reporting the reshuffle of several senior military positions. | {
"id": "1701.00138"
} |
1704.03073 | Data-efficient Deep Reinforcement Learning for Dexterous Manipulation | Deep learning and reinforcement learning methods have recently been used to
solve a variety of problems in continuous control domains. An obvious
application of these techniques is dexterous manipulation tasks in robotics
which are difficult to solve using traditional control theory or
hand-engineered approaches. One example of such a task is to grasp an object
and precisely stack it on another. Solving this difficult and practically
relevant problem in the real world is an important long-term goal for the field
of robotics. Here we take a step towards this goal by examining the problem in
simulation and providing models and techniques aimed at solving it. We
introduce two extensions to the Deep Deterministic Policy Gradient algorithm
(DDPG), a model-free Q-learning based method, which make it significantly more
data-efficient and scalable. Our results show that by making extensive use of
off-policy data and replay, it is possible to find control policies that
robustly grasp objects and stack them. Further, our results hint that it may
soon be feasible to train successful stacking policies by collecting
interactions on real robots. | http://arxiv.org/pdf/1704.03073 | Ivaylo Popov, Nicolas Heess, Timothy Lillicrap, Roland Hafner, Gabriel Barth-Maron, Matej Vecerik, Thomas Lampe, Yuval Tassa, Tom Erez, Martin Riedmiller | cs.LG, cs.RO | 12 pages, 5 Figures | null | cs.LG | 20170410 | 20170410 | # Data-efï¬cient Deep Reinforcement Learning for Dexterous Manipulation
Ivaylo Popov, Nicolas Heess, Timothy Lillicrap, Roland Hafner, Gabriel Barth-Maron, Matej Vecerik, Thomas Lampe, Yuval Tassa, Tom Erez, Martin Riedmiller DeepMind
Abstract: Deep learning and reinforcement learning methods have recently been used to solve a variety of problems in continuous control domains. An obvious application of these techniques is dexterous manipulation tasks in robotics which are difficult to solve using traditional control theory or hand-engineered approaches. One example of such a task is to grasp an object and precisely stack it on another. Solving this difficult and practically relevant problem in the real world is an important long-term goal for the field of robotics. Here we take a step towards this goal by examining the problem in simulation and providing models and techniques aimed at solving it. We introduce two extensions to the Deep Deterministic Policy Gradient algorithm (DDPG), a model-free Q-learning based method, which make it significantly more data-efficient and scalable. Our results show that by making extensive use of off-policy data and replay, it is possible to find control policies that robustly grasp objects and stack them. Further, our results hint that it may soon be feasible to train successful stacking policies by collecting interactions on real robots.
# I. INTRODUCTION
Dexterous manipulation is a fundamental challenge in robotics. Researchers have long been seeking a way to enable robots to robustly and flexibly interact with fixed and free objects of different shapes, materials, and surface properties in the context of a broad range of tasks and environmental conditions. Such flexibility is very difficult to achieve with manually designed controllers. The recent resurgence of neural networks and "deep learning" has inspired hope that these methods will be as effective in the control domain as they are for perception. And indeed, in simulation, recent work has used neural networks to learn solutions to a variety of control problems from scratch (e.g. [7, 20, 32, 31, 11, 17]).
While the flexibility and generality of learning approaches is promising for robotics, these methods typically require a large amount of data that grows with the complexity of the task. What is feasible on a simulated system, where hundreds of millions of control steps are possible [23], does not necessarily transfer to real robot applications due to unrealistic learning times. One solution to this problem is to restrict the generality of the controller by incorporating task specific knowledge, e.g. in the form of dynamic movement primitives [30], or in the form of strong teaching signals, e.g. kinesthetic teaching of trajectories [24]. Recent works have had some success learning flexible neural network policies directly on real robots (e.g. [18, 5, 39]), but tasks as complex as grasping-and-stacking remain daunting.
An important issue for the application of learning methods in robotics is to understand how to make the best use of collected data, which can be expensive to obtain, both in terms of time and money. To keep learning times reasonably low even in complex scenarios, it is crucial to find a practical compromise between the generality of the controller and the necessary restrictions of the task setup. This is the gap that we aim to fill in this paper: exploring the potential of a learning approach that keeps prior assumptions low while keeping data consumption in reasonable bounds. Simultaneously, we are interested in approaches that are broadly applicable, robust, and practical.
In this paper we provide a simulation study that investigates the possibility of learning complex manipulation skills end- to-end with a general purpose model-free deep reinforcement learning algorithm. The express goal of this work is to assess the feasibility of performing analogous end-to-end learning experiments on real robotics hardware and to provide guidance with respect to the choice of learning algorithm and experi- mental setup and the performance that we can hope to achieve. The task which we consider to this end is that of picking up a Lego brick from the table and stacking it onto a second nearby brick using a robotic arm with 9 degrees of freedom (DoF), six in the arm and three for the ï¬ngers in the gripper. In addition to having a high-dimensional state and action space, the task exempliï¬es several of the challenges that are encountered in real-world manipulation problems. Firstly, it involves contact-rich interactions between the robotic arm and two freely moving objects. Secondly it requires mastering several sub-skills (reaching, grasping, and stacking). Each of these sub-skills is challenging in its own right as they require both precision (for instance, successful stacking requires ac- curate alignment of the two bricks) and as well as robust generalization over a large state space (e.g. different initial positions of the bricks and the initial conï¬guration of the arm). Finally, there exist non-trivial and long-ranging dependencies between the solutions for different subtasks: for instance, the ability to successfully stack the brick in the later part of the task depends critically on having picked up the brick in a sensible way beforehand.
On the algorithm side we build on the Deep Deterministic Policy Gradient (DDPG; [20]), a general purpose model-free reinforcement learning algorithm for continuous action spaces, and extend it in two ways (section V): firstly, we improve the data efficiency of the algorithm by scheduling updates
Fig. 1: Simulation rendering of the Lego task in different completion stages (also corresponding to different subtasks): (a) starting state, (b) reaching, (c) grasping, (also StackInHand starting state) and (d) stacking
of the network parameters independently of interactions with the environment. Secondly, we overcome the computational and experimental bottlenecks of single-machine single-robot learning by introducing a distributed version of DDPG which allows data collection and network training to be spread out over multiple computers and robots.
We further propose two broadly applicable strategies that allow us to inject prior knowledge into the learning process in order to help reliably find solutions to complex tasks and further reduce the amount of environmental interaction. The first of these strategies is a recipe for designing effective shaping rewards for compositional tasks (section VI), while the second (section VII) uses a suitable bias in the distribution of initial states to achieve an effect akin to a curriculum or a form of apprenticeship learning.
In combination these contributions allow us to reliably learn robust policies for the full task from scratch in less than 10 million environment transitions. This corresponds to less than 10 hours of interaction time on 16 robots, thus entering a regime that no longer seems unrealistic with modern experimental setups. In addition, when states from successful trajectories are used as the start states for learning trials the full task can be learned with 1 million transitions (i.e. less than 1 hour of interaction on 16 robots). To our knowledge our results provide the first demonstration of solving complex manipulation problems involving multiple freely moving objects. They are also encouraging as a sensible lower bound for real-world experiments, suggesting that it may indeed be possible to learn such non-trivial manipulation skills directly on real robots.
# II. RELATED WORK
Reinforcement learning approaches solve tasks through repeated interactions with the environment guided by a reward signal that indicates the success or failure of a trial. A wide variety of techniques have been developed that exploit this idea [34], with a broad distinction often made between value-based and policy search methods. While the former estimate and improve a value function, policy search methods directly optimize the parameters of a policy to maximize cumulative reward. The latter have been routinely applied in robotics, in part because they straightforwardly handle continuous and high-dimensional action spaces [3], and applications include manipulation [26, 13, 25, 37, 18, 5, 39, 8], locomotion e.g. [16, 21], and a range of other challenges such as helicopter flight [1].
One limitation that has hampered policy search methods is that they can scale poorly with the number of parameters that need to be estimated. This limitation, and other constraints when working with real robotics hardware, has led research to focus on the use of manually engineered and restrictive features and movement representations, particularly trajectory-based ones such as spline based dynamic movement primitives. Simplifying the policy space can make learning on real hardware tractable, but it also limits the kinds of problems that can be solved. In order to solve a problem such as picking up and manipulating an object, more expressive function classes are likely to be needed.
The use of rich and flexible function approximators such as neural networks in RL dates back many years, e.g. [38, 35, 12, 10]. In the last few years there has been a resurgence of interest in end-to-end training of neural networks for challenging control problems, and several algorithms, both value and policy focused, have been developed and applied to challenging problems including continuous control, e.g. [22, 23, 6, 7, 20, 32, 31, 11, 17]. These methods work well with large neural networks and can learn directly from raw visual input streams. With few exceptions, e.g. [10, 5, 18, 39], they have been considered too data-inefficient for robotics applications.
One exception are guided policy search methods (GPS) [18, 39]. These have recently been applied to several manip- ulation problems and employ a teacher algorithm to locally optimize trajectories which are then summarized by a neu- ral network policy. GPS algorithms gain data-efï¬ciency by employing aggressive local policy updates and by performing extensive training of their neural network policy before col- lecting more real-world data. The teacher can use model-based [18] or model-free [39] trajectory optimization. The former can struggle in situations with strong discontinuities in the
dynamics, and both rely on access to a well deï¬ned and fully observed state space.
Model-free value function approaches offer an alternative way to handle the issue of data-efficiency in robotics. Such approaches enable effective reuse of data and do not require full access to the state space or to a model of the environment. One recent work [5], closely related to the ideas followed in this paper, provides a proof of concept demonstration that value-based methods using neural network approximators can be used for robotic manipulation in the real world. This work applied a Q-learning approach [7] to a door opening task in which a robotic arm fitted with an unactuated hook needed to reach to a handle and pull a door to a given angle. The starting state of the arm and door were fixed across trials and the reward structure was smooth and structured, with one term expressing the distance from the hook to the handle and a second term expressing the distance of the door to the desired angle. This task was learned in approximately 2 hours across 2 robots pooling their experience into a shared replay buffer. This work thus made use of a complementary solution to the need for large amounts of interaction data: the use of experimental rigs that allow large scale data collection, e.g. [27], including the use of several robots from which experience is gathered in parallel [19, 5, 39]. This can be combined with single machine or distributed training depending on whether the bottleneck is primarily one of data collection or also one of network training [23].
Finally, the use of demonstration data has played an impor- tant role in robot learning, both as a means to obtain suitable cost functions [2, 14, 4, 8] but also to bootstrap and thus speed up learning. For the latter, kinesthetic teaching is widely used [26, 13, 25, 39]. It integrates naturally with trajectory-based movement representations but the need for a human operator to be able to guide the robot through the full movement can be limiting. Furthermore, when the policy representation is not trajectory based (e.g. direct torque control with neural networks) the use of human demonstration trajectories may be less straightforward (e.g. since the associated controls are not available).
# III. BACKGROUND
In this section we brieï¬y formalize the learning problem, summarize the DDPG algorithm, and explain its relationship to several other Q-function based reinforcement learning (RL) algorithms.
The RL problem consists of an agent interacting with an environment in a sequential manner to maximize the expected sum of rewards. At time t the agent observes the state x_t of the system and produces a control u_t = π(x_t; θ) according to policy π with parameters θ. This leads the environment to transition to a new state x_{t+1} according to the dynamics x_{t+1} ∼ p(·|x_t, u_t), and the agent receives a reward r_t = r(x_t, u_t). The goal is to maximize the expected sum of discounted rewards J(θ) = E_{τ∼ρ_θ} [ Σ_t γ^{t−1} r(x_t, u_t) ], where ρ_θ is the distribution over trajectories τ = (x_0, u_0, x_1, u_1, ...) induced by the current policy: ρ(τ) = p(x_0) Π_{t>0} p(x_t | x_{t−1}, π(x_{t−1}; θ)).
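As a small concrete illustration of this objective, the discounted return of one sampled trajectory can be computed as below; the reward sequence in the comment is a made-up example, not data from the paper.

```python
def discounted_return(rewards, gamma=0.99):
    """Sum of gamma^t * r_t over one trajectory (discounting from the first step)."""
    return sum(gamma ** t * r for t, r in enumerate(rewards))

# e.g. discounted_return([0.0, 0.0, 1.0], gamma=0.9) == 0.81
```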
DPG [33] is a policy gradient algorithm for continuous action spaces that improves the deterministic policy function π via backpropagation of the action-value gradient from a learned approximation to the Q-function. Specifically, DPG maintains a parametric approximation Q(x_t, u_t; φ) to the action value function Q^π(x_t, u_t) associated with π, and φ is chosen to minimize
E_{(x_t, u_t, x_{t+1}) ∼ ρ̂} [ (Q(x_t, u_t; φ) − y_t)² ]   (1)
where y_t = r(x_t, u_t) + γ Q(x_{t+1}, π(x_{t+1})). ρ̂ is usually close to the marginal transition distribution induced by π but often not identical. For instance, during learning u_t may be chosen to be a noisy version of π(x_t; θ), e.g. u_t = π(x_t; θ) + ε where ε ∼ N(0, σ²), and ρ̂ is then the transition distribution induced by this noisy policy.
The policy parameters θ are then updated according to
Δθ ∝ E_{(x,u)∼ρ̂} [ (∂Q(x, u; φ)/∂u) (∂π(x; θ)/∂θ) ].   (2)
DDPG is an improvement of the original DPG algorithm adding experience replay and target networks: Experience is collected into a buffer and updates to θ and φ (eqs. 1, 2) are computed using mini-batch updates with random samples from this buffer. Furthermore, a second set of "target networks" is maintained with parameters θ′ and φ′. These are used to compute y_t in eqn. (1) and their parameters are slowly updated towards the current parameters θ, φ. Both measures significantly improve the stability of DDPG.
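To make the structure of these updates concrete, the sketch below performs one critic and one actor update with slowly tracking target networks, written in PyTorch-style Python. It is a minimal illustration under assumed interfaces (the actor, critic and optimizer objects and the batch layout), not the authors' implementation; gamma and tau are placeholder values.

```python
import torch
import torch.nn.functional as F

def ddpg_update(batch, actor, critic, target_actor, target_critic,
                actor_opt, critic_opt, gamma=0.99, tau=0.001):
    """One DDPG update: critic regression (eq. 1), actor ascent (eq. 2),
    and a slow update of the target networks towards the online ones."""
    x, u, r, x_next = batch  # mini-batch of transitions from the replay buffer

    # Critic: fit Q(x, u; phi) to the bootstrapped target y_t.
    with torch.no_grad():
        y = r + gamma * target_critic(x_next, target_actor(x_next))
    critic_loss = F.mse_loss(critic(x, u), y)
    critic_opt.zero_grad()
    critic_loss.backward()
    critic_opt.step()

    # Actor: follow the action-value gradient dQ/du * du/dtheta.
    actor_loss = -critic(x, actor(x)).mean()
    actor_opt.zero_grad()
    actor_loss.backward()
    actor_opt.step()

    # Target networks slowly track the online parameters.
    for net, target in ((critic, target_critic), (actor, target_actor)):
        for p, p_t in zip(net.parameters(), target.parameters()):
            p_t.data.mul_(1.0 - tau).add_(tau * p.data)
```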
DDPG bears a relation to several other recent model free RL algorithms: The NAF algorithm [7] which has recently been applied to a real-world robotics problem [5] can be viewed as a DDPG variant where the Q-function is quadratic in the action so that the optimal action can be easily recovered directly from the Q-function, making a separate representation of the policy unnecessary. DDPG and especially NAF are the continuous action counterparts of DQN [22], a Q-learning algorithm that recently re-popularized the use of experience replay and target networks to stabilize learning with powerful function approximators such as neural networks. DDPG, NAF, and DQN all interleave mini-batch updates of the Q-function (and the policy for DDPG) with data collection via interaction with the environment. These mini-batch based updates set DDPG and DQN apart from the otherwise closely related NFQ and NFQCA algorithms for discrete and continuous actions respectively. NFQ [29] and NFQCA [9] employ the same basic update as DDPG and DQN, however, they are batch algorithms that perform updates less frequently and fully re-ï¬t the Q-function and the policy network after every episode with several hundred iterations of gradient descent with Rprop [28] and using full-batch updates with the entire replay buffer. The aggressive training makes NFQCA data efï¬cient, but the full batch updates can become impractical with large networks, large observation spaces, or when the number of training episodes is large. Finally, DPG can be seen as the deterministic limit of a particular instance of the stochastic value gradients (SVG) family [11], which
also computes policy gradient via back-propagation of value gradients, but optimizes stochastic policies.
                                                      Discrete   Continuous
Mini-batch learning, target networks                  DQN        DDPG, NAF
Full-batch learning with Rprop, parameter resetting   NFQ        NFQCA
One appealing property of the above family of algorithms is that the use of a Q-function facilitates off-policy learning. This allows decoupling the collection of experience data from the updates of the policy and value networks, a desirable property given that experience is expensive to collect in a robotics setup. In this context, because neural network training is often slow, decoupling allows us to make many parameter update steps per step in the environment, ensuring that the networks are well ï¬t to the data that is currently available.
# IV. TASK AND EXPERIMENTAL SETUP
The full task that we consider in this paper is to use the arm to pick up one Lego Duplo brick from the table and stack it onto the remaining brick. This âcompositeâ task can be decomposed into several subtasks, including grasping and stacking. In our experiments we consider the full task as well as the two sub-tasks in isolation as shown in the table below:
                 Grasp                  StackInHand          Stack
Starting state   Both bricks on table   Brick 1 in gripper   Both bricks on table
Reward           Brick 1 above table    Bricks stacked       Bricks stacked
In every episode the arm starts in a random conï¬guration with the positioning of gripper and brick appropriate for the task of interest. We implement the experiments in a physically plausible simulation in MuJoCo [36] with the simulated arm being closely matched to a real-world Jaco arm1 setup in our lab. Episodes are terminated after 150 steps, with each step corresponding to 50ms of physical simulation time. This means that the agent has 7.5 seconds to perform the task. Un- less otherwise noted we give a reward of one upon successful completion of the task and zero otherwise.
The observation vector provided to the agent contains information about the angles and angular velocities of the 6 joints of the arm and 3 ï¬ngers of the gripper. In addition, we provide information about the position and orientation of the two bricks and relative distances of the two bricks to the pinch position of the gripper, i.e. roughly the position where the ï¬n- gertips would meet if the ï¬ngers are closed. The 9-dimensional continuous action directly sets the velocities of the arm and ï¬nger joints. In experiments not reported in this paper we have tried using an observation vector containing only the raw state of the brick in addition to the arm conï¬guration (i.e. without the vector between the end-effector and brick) and found that
1Jaco is a robotics arm developed by Kinova Robotics
this increased the number of environment interactions needed roughly by a factor of two to three.
The only hyper-parameter that we optimize for each experimental condition is the learning rate. For each condition we train and measure the performance of 10 agents with different random initial network parameters. After every 30 training episodes the agent is evaluated for 10 episodes. We used the mean performance at each evaluation phase as the performance measure presented in all plots. We found empirically that 10 episodes of evaluation gave a reasonable proxy for performance in the studied tasks. In the plots the line shows the mean performance for the set and the shaded regions correspond to the range between the worst and best performing agent in the set. In all plots the x-axis represents the number of environment transitions seen so far at an evaluation point (in millions) and the y-axis represents episode return.
A video of the full setup and examples of policies solving the component and full tasks can be found here: https://www.youtube.com/watch?v=8QnD8ZM0YCo.
V. ASYNCHRONOUS DPG WITH VARIABLE REPLAY STEPS
In this section we study two methods for extending the DDPG algorithm and ï¬nd that they can have signiï¬cant effect on data and computation efï¬ciency, in some cases making the difference between ï¬nding a solution to a task or not.
a) Multiple mini-batch replay steps: Deep neural net- works can require many steps of gradient descent to converge. In a supervised learning setting this affects purely computa- tion time. In reinforcement learning, however, neural network training is interleaved with the acquisition of interaction expe- rience, and the nature of the latter is affected by the state of the former â and vice versa â so the situation is more complicated. To gain a better understanding of this interaction we modiï¬ed the original DDPG algorithm as described in [20] to perform a ï¬xed but conï¬gurable number of mini-batch updates per step in the environment. In [20] one update was performed after each new interaction step.
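Schematically, this modification wraps a configurable inner loop of mini-batch updates around every environment step. The sketch below is only meant to show the changed update/interaction ratio; env, policy, buffer and ddpg_update stand for the usual components and are assumed interfaces, not part of any specific library.

```python
import numpy as np

def interaction_loop(env, policy, buffer, ddpg_update, num_steps,
                     replay_steps, batch_size=64, noise_scale=0.1):
    """Collect experience and perform `replay_steps` mini-batch updates
    after every environment step (replay_steps=1 recovers vanilla DDPG)."""
    x = env.reset()
    for _ in range(num_steps):
        u = policy(x)
        u = u + noise_scale * np.random.randn(*u.shape)  # exploration noise
        x_next, r, done = env.step(u)
        buffer.add(x, u, r, x_next)
        for _ in range(replay_steps):                    # the modification
            ddpg_update(buffer.sample(batch_size))
        x = env.reset() if done else x_next
```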
We refer to DDPG with a configurable number of update steps as DPG-R and tested the impact of this modification on the two primitive tasks Grasp and StackInHand. The results are shown in Fig. 2. It is evident that the number of update steps has a dramatic effect on the amount of experience data required for learning successful policies. After one million interactions the original version of DDPG with a single update step (blue traces) appears to have made no progress towards a successful policy for stacking, and only a small number of controllers have learned to grasp. Increasing the number of updates per interaction to 5 greatly improves the results (green traces), and with 40 updates (purple) the first successful policies for stacking and grasping are obtained after 200,000 and 300,000 interactions respectively (corresponding to 1,300 and 2,000 episodes). The dependence between update steps and convergence is task dependent and clearly not linear, but in both cases we continue to see a reduction in total environment interaction up to 40 update steps, the maximum used in the experiment.
One may speculate as to why changing the number of updates per environment step has such a pronounced effect. One hypothesis is that, loosely speaking and drawing an analogy to supervised learning, insufï¬cient training leads to underï¬tting of the policy and value network with respect to the already collected training data. Unlike in supervised learning, however, where the dataset is typically ï¬xed, the quality of the policy directly feeds back into the data acquisition process since the policy network is used for exploration, thus affecting the quality the data used in future iterations of network training.
We have observed in various experiments (not listed here) that other aspects of the network architecture and training process can have a similar effect on the extent of underï¬tting. Some examples include the type of non-linearities used in the network layers, the size of layers and the learning rate. It is important to note that one cannot replicate the effect of multiple replay steps simply by increasing the learning rate. In practice we ï¬nd that attempts to do so make training unstable.
Fig. 2: Mean episode return as a function of number of transitions seen (in millions) of DPG-R (single worker) on the Grasp (left) and StackInHand (right) task with 1 (blue), 5 (green), 10 (red), 20 (yellow) and 40 (purple) mini-batch updates per environment step
b) Asynchronous DPG: While increasing the number of update steps relative to the number of environment interactions greatly improves the data efï¬ciency of the algorithm it can also strongly increase the computation time. In the extreme case, in simulation, when the overall run time is dominated by the network updates it may scale linearly with the number of replay steps. In this setting it is desirable to be able to parallelize the update computations.
In a real robotics setup the overall run time is typically dominated by the collection of robot interactions. In this case it is desirable to be able to collect experience from multiple robots simultaneously (e.g. as in [39, 5]).
We therefore develop an asynchronous version of DPG that allows parallelization of training and environment interaction by combining multiple instances of a DPG-R actor and critic that each share their network parameters and can be configured to either share or have independent experience replay buffers. This is inspired by the A3C algorithm proposed in [23], and also analogous to [5, 39]. We found that this strategy is also an effective way to share parameters for DPG. That is, we employ asynchronous updates whereby each worker has its own copy
of the parameters and uses it for computing gradients which are then applied to a shared parameter instance without any synchronization. We use the Adam optimizer [15] with local non-shared ï¬rst-order statistics and a single shared instance of second-order statistics. The pseudo code of the asynchronous DPG-R is shown in algorithm box 1.
# Algorithm 1 (A)DPG-R algorithm
Initialize global shared critic and actor network parameters: θ^{Q''} and θ^{μ''}
Pseudo code for each learner thread:
  Initialize critic network Q(s, a | θ^Q) and actor μ(s | θ^μ) with weights θ^Q and θ^μ.
  Initialize target networks Q' and μ' with weights: θ^{Q'} ← θ^Q, θ^{μ'} ← θ^μ
  Initialize replay buffer R
  for episode = 1, M do
    Receive initial observation state s_1
    for t = 1, T do
      Select action a_t = μ(s_t | θ^μ) + N_t according to the current policy and exploration noise
      Perform action a_t, observe reward r_t and new state s_{t+1}
      Store transition (s_t, a_t, r_t, s_{t+1}) in R
      for update = 1, R do
        Sample a random minibatch of N transitions (s_i, a_i, r_i, s_{i+1}) from R
        Set y_i = r_i + γ Q'(s_{i+1}, μ'(s_{i+1} | θ^{μ'}) | θ^{Q'})
        Perform asynchronous update of the shared critic parameters by minimizing the loss:
          L = (1/N) Σ_i (y_i − Q(s_i, a_i | θ^Q))²
        Perform asynchronous update of the shared actor parameters using the sampled policy gradient:
          ∇_{θ^{μ''}} J ≈ (1/N) Σ_i ∇_a Q(s, a | θ^Q)|_{s=s_i, a=μ(s_i)} ∇_{θ^μ} μ(s | θ^μ)|_{s=s_i}
        Copy the shared parameters to the local ones: θ^Q ← θ^{Q''}, θ^μ ← θ^{μ''}
        Every S update steps, update the target networks: θ^{Q'} ← θ^Q, θ^{μ'} ← θ^μ
      end for
    end for
  end for
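A minimal way to realize the asynchronous, lock-free parameter sharing in Python is sketched below with worker threads operating on a shared numpy parameter vector; the thread count, learning rate and gradient routine are illustrative assumptions, and a real setup would interleave these updates with per-worker environment interaction as in Algorithm 1.

```python
import threading
import numpy as np

def run_async_workers(shared_params, compute_gradient, num_workers=16,
                      updates_per_worker=1000, lr=1e-4):
    """Each worker copies the shared parameters, computes a gradient on its
    own data, and applies it to the shared instance without synchronization
    (lock-free updates in the spirit of A3C)."""
    def worker():
        for _ in range(updates_per_worker):
            local = shared_params.copy()      # local snapshot of the parameters
            grad = compute_gradient(local)    # gradient from this worker's data
            shared_params[:] -= lr * grad     # unsynchronized shared update
    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```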
Figure 3 compares the performance of ADPG-R for different number of update steps and 16 workers (all workers perform- ing both data collection and computing updates). Similar to Fig. 2 we ï¬nd that increasing the ratio of update steps per environment steps improves data efï¬ciency, although the effect appears to be somewhat less pronounced than for DPG-R.
Figure 4 (top row) directly compares the single-worker and asynchronous version of DPG-R. In both cases we choose the best performing number of replay steps and learning rate. As we can see, the use of multiple workers does not affect overall
Fig. 3: Mean episode return as a function of number of transitions seen (in millions) of ADPG-R (16 workers) on the Grasp (left) and StackInHand (right) task. Different colored traces indicate number of replay step as in Fig. 2
data efficiency for StackInHand, but it is reduced roughly by half for Grasp, with the note that the single worker still hasn't quite converged.
Figure 4 (bottom row) plots the same data but as a function of environment steps per worker. This measure corresponds to the optimal wall clock efï¬ciency that we can achieve, under the assumption that communication time between workers is negligible compared to environment interaction and gradient computation (this usually holds up to a certain degree of parallelization). This theoretical wall clock time for running an experiment with 16 workers is about 16x lower for Stack- InHand and roughly 8x lower for Grasp.
Overall these results show that distributing neural network training and data collection across multiple computers and robots can be an extremely effective way of reducing the overall run time of experiments and thus making it feasible to run more challenging experiments. We make extensive use of asynchronous DPG for the remaining experiments.
Fig. 4: Figure with two panels: (a) Grasp; (b) StackInHand; 16 workers vs single worker in data (total for all workers) and âwallclockâ (per-worker) time in millions of transitions with best replay step and learning rate selection.
# VI. COMPOSITE SHAPING REWARDS
In the previous section we discussed how the ability of DDPG to exploit information that is available in the acquired interaction data affects learning speed. One important factor that determines what information is available from this data is the nature of the reward function. The reward function in the previous section was a "sparse" or "pure" reward, where a reward of 1 was given for states that correspond to successful task completion (brick 1 lifted above 3cm for Grasp; bricks stacked for Stack) and 0 otherwise. For this reward to be useful for learning it is of course necessary that the agent is able to enter this goal region in state space with whatever exploration strategy is chosen. This was indeed the case for the two subtasks in isolation, but it is highly unlikely for the full task: without further guidance naive random exploration is very unlikely to lead to a successful grasp and stack, as we also verify experimentally in Fig. 5.
One commonly used solution to this problem is to provide informative shaping rewards that allow a learning signal to be obtained even with simple exploration strategies, e.g. by embedding information about the value function in the reward function for every transition acquired from the environment. For instance, for a simple reaching problem with a robotic arm we could deï¬ne a shaping reward that takes into account the distance between the end-effector and the target.
While this is a convenient way of embedding prior knowledge about the solution, and is a widely and successfully used approach for simple problems, it comes with several caveats, especially for complex sequential or compositional tasks such as the one we are interested in here.
Firstly, while a suitable shaping reward may be easy to construct for simple problems, for more complex composite tasks, such as the one considered in this paper, a suitable reward function is often non-obvious and may require considerable effort and experimentation. Secondly, and related to the previous point, the use of a shaping reward typically alters the solution to the optimization problem.
The effect of this can be benign but especially when it comes to complex tasks a small mistake may lead to complete failure of learning as we will demonstrate below. Thirdly, in a robotics setup not all information that would be desirable to deï¬ne a good shaping reward may be easily available. For instance, in the manipulation problem considered in this paper determining the position of the Lego bricks requires extra instrumentation of the experimental setup.
In this section we propose and analyze several possible reward functions for our full Stack task, aiming to provide a recipe that can be applied to other tasks with similar compositional structure. Shaping rewards are typically deï¬ned based on some notion of distance from or progress towards a goal state. We attempt to transfer this idea to our compositional setup via, what we call, composite (shaping) rewards. These reward functions return an increasing reward as the agent com- pletes components of the full task. They are either piecewise constant or smoothly varying across different regions of the
Sparse reward components:
  Reach Brick 1 -- hypothetical pinch site position of the fingers is in a box around the first brick position -- reward 0.125
  Grasp Brick 1 -- the first brick is located at least 3cm above the table surface, which is only possible if the arm is holding the brick -- reward 0.25
  Stack Brick 1 -- bricks stacked -- reward 1.00
Smoothly varying reward components:
  Reaching to brick 1 -- distance of the pinch site to the first brick (non-linear, bounded) -- reward in [0, 0.125]
  Reaching to stack -- while grasped: distance of the first brick to the stacking site of the second brick (non-linear, bounded) -- reward in [0.25, 0.5]
TABLE I: Composite reward function
state space that correspond to completed subtasks. In the case of Stack we use the reward components described in table I. These reward components can be combined in different ways. We consider three different composite rewards in addition to the original sparse task reward:
Grasp shaping: Grasp brick 1 and Stack brick 1, i.e. the agent receives a reward of 0.25 when brick 1 has been grasped and a reward of 1.0 after completion of the full task.
Reach and grasp shaping: Reach brick 1, Grasp brick 1 and Stack brick 1, i.e. the agent receives a reward of 0.125 when being close to brick 1, a reward of 0.25 when brick 1 has been grasped, and a reward of 1.0 after completion of the full task.
Full composite shaping: the sparse reward components as before in combination with the distance-based smoothly varying components.
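One way to read this recipe is as an ordered list of (stage predicate, reward) pairs, where the agent receives the reward of the most advanced completed stage. The sketch below expresses that idea; the stacked, grasped and reached predicates are assumed to be given (they are made precise in the appendix).

```python
def composite_reward(stages, state):
    """Return the reward of the most advanced completed stage.

    `stages` lists (predicate, reward) pairs ordered from the most advanced
    subtask (stack) down to the least advanced (reach)."""
    for predicate, reward in stages:
        if predicate(state):
            return reward
    return 0.0

# Example for "Reach and grasp shaping", assuming the predicates exist:
# stages = [(stacked, 1.0), (grasped, 0.25), (reached, 0.125)]
# r = composite_reward(stages, state)
```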
Figure 5 shows the results of learning with the above reward functions (blue traces). The ï¬gure makes clear that learning with the sparse reward only does not succeed for the full task. Introducing an intermediate reward for grasping allows the agent to learn to grasp but learning is very slow. The time to successful grasping can be substantially reduced by giving a distance based reward component for reaching to the ï¬rst brick, but learning does not progress beyond grasping. Only with an additional intermediate reward component as in continuous reach, grasp, stack the full task can be solved.
Although the above reward functions are specific to the particular task, we expect that the idea of a composite reward function can be applied to many other tasks, thus allowing learning to succeed even for challenging problems. Nevertheless, great care must be taken when defining the reward function. We encountered several unexpected failure cases while designing the reward function components: e.g. reach and grasp components leading to a grasp unsuitable for stacking, the agent not stacking the bricks because it would stop receiving the grasping reward before it receives the reward for stacking, and the agent flipping the brick because it gets a grasping reward calculated with the wrong reference point on the brick. We show examples of these in the video: https://www.youtube.com/watch?v=8QnD8ZM0YCo.
# VII. LEARNING FROM INSTRUCTIVE STATES
In the previous section we have described a strategy for designing effective reward functions for complex composi- tional tasks which alleviate the burden of exploration. We have also pointed out, however, that designing shaping rewards can be error prone and may rely on privileged information. In this section we describe a different strategy for embedding prior knowledge into the training process and improving exploration that reduces the reliance on carefully designed reward functions.
Speciï¬cally we propose to let the distribution of states at which the learning agent is initialized at the beginning of an episode reï¬ect the compositional nature of the task: In our case, instead of initializing the agent always at the beginning of the full task with both bricks on the table we can, for instance, choose to initialize the agent occasionally with the brick already in its hand and thus prepared for stacking in the same way as when learning the subtask StackInHand in section V. Trajectories of policies solving the task will have to visit this region of space before stacking the bricks and we can thus think of this initialization strategy as initializing the agent closer to the goal.
More generally, we can choose to initialize episodes with states taken from anywhere along or close to successful tra- jectories. Suitable states can be either manually deï¬ned (as in section V), or they can be obtained from a human demonstrator or a previously trained agent that can partially solve the task. This can be seen as a form of apprenticeship learning in which we provide teacher information by inï¬uencing the state visitation distribution.
We perform experiments with two alternative methods for generating the starting states. The ï¬rst one uses manually deï¬ned initial states and amounts to the possibility discussed above: we initialize the learning agent in either the original starting states with both bricks located on the table or in states where the ï¬rst brick is already in the gripper as if the agent just performed a successful grasp and lifted the brick. These two sets of start states correspond to those used in section V. The second method for generating instructive starting states can also be used on a real robot provided a human demonstra- tor or a pre-trained policy are available. It aims at initializing the learning agent along solution trajectory states in a more ï¬ne-grained fashion. We sample a random number of steps for each episode between one and the expected number of steps required to solve the task from the original starting states and then run the demonstrator for this number of steps. The ï¬nal state of this process is then used as a starting state initialization for the learning agent which then acts in the environment for the remainder of the episode.
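The second scheme can be summarized in a few lines: sample a random number of steps, roll the demonstrator forward for that many steps from an ordinary start state, and hand the resulting state to the learner. The sketch below assumes a resettable environment and a demonstrator policy; both interfaces are placeholders.

```python
import random

def instructive_start_state(env, demonstrator, max_solve_steps):
    """Initialize an episode part-way along a demonstrator trajectory.

    `max_solve_steps` is the expected number of steps the demonstrator needs
    to solve the task from the original start state distribution."""
    x = env.reset()                         # original start state distribution
    n = random.randint(1, max_solve_steps)  # how far along the solution to go
    for _ in range(n):
        x, _, done = env.step(demonstrator(x))
        if done:
            break
    return x                                # the learner starts acting from here
```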
The results of these experiments are shown in Figure 5. It shows results for the four reward functions considered in the previous section when combined with the simple augmented start state distribution. While there is still no learning for the basic sparse reward case, results obtained with all other reward functions are improved. In particular, even for the second
(Figure 5 learning-curve panels: Stack - No shaping; Stack - Grasp shaping; Stack - Reach and Grasp shaping; and the full composite shaping.)
Fig. 5: Four panels with (a) no progress without extra shaping (b, c, d) different shaping strategies for the composite task with starting states with both bricks on the table (blue), manually deï¬ned initial states (green) and initial states continuously on solution trajectories (red). On all plots, x-axis is millions of transitions of total experience and y-axis is mean episode return. Policies with mean return over 100 robustly perform the full Stack from different starting states.
simplest reward function (Grasp shaping) we now obtain some controllers that can solve the full task. Learning with the full composite shaping reward is faster and more robust than without the use of instructive states.
The top left plot of Figure 5 (red trace) shows results for the case where the episode is initialized anywhere along trajectories from a pre-trained controller. We use this start state distribution in combination with the basic sparse reward for the overall case (Stack without shaping). Episodes were conï¬gured to be 50 steps, shorter than in the previous experiments, to be better suited to this setup with assisted exploration. During testing we still used episodes with 150 steps as before (so the traces are comparable). We can see a large improvement in performance in comparison to the two-state method variant even in the absence of any shaping rewards. We can learn a robust policy for all seeds within a total of 1 million environment transitions. This corresponds to less than 1 hour of interaction time on 16 simulated robots.
Overall these results suggest that an appropriate start state distribution does not only greatly speed up learning, it also allows simpler reward functions to be used. In our final experiment the simplest reward function, only indicating overall experimental success, was sufficient to solve the task. Considering the difficulties that can be associated with designing good shaping rewards this is an encouraging result.
The robustness of the policies that we can train to the starting state variation are also quite encouraging. Table II lists the success rate by task from 1000 trials. You can ï¬nd a video
                                    Grasp    StackInHand   Stack
Success rate (1000 random starts)   99.2%    98.2%         95.5%
TABLE II: Robustness of learned policies.
with trained policies performing the Grasp, StackInHand and Stack tasks from different initial states in the supplementary material.
# VIII. CONCLUSION
We have introduced two extensions to the DDPG algorithm which make it a powerful method for learning robust policies for complex continuous control tasks. Specifically, we have shown that by decoupling the frequency of network updates from the environment interaction we can substantially improve data-efficiency, in some cases making the difference between finding a solution and not. The asynchronous version of DDPG, which allows data collection and network training to be distributed over several computers and (simulated) robots, has provided us with a close to linear speed up in wall-clock time for 16 parallel workers.
In addition, we presented two methods that help to guide the learning process towards good solutions and thus reduce the pressure on exploration strategies and speed up learning. The ï¬rst, composite rewards, is a recipe for constructing effective reward functions for tasks that consist of a sequence of sub- tasks. The second, instructive starting states, can be seen as a lightweight form of apprenticeship learning that facilitates learning of long horizon tasks even with sparse rewards, a property of many real-world problems. Taken together, the algorithmic changes and exploration shaping strategies have allowed us to learn robust policies for the Stack task within a number of transitions that is feasible to collect in a real- robot system within a few days, or in signiï¬cantly less time if multiple robots were used for training.
It is of course a challenge to judge the transfer of results in simulation to the real world. We have taken care to design a physically realistic simulation, and in initial experiments, which we have performed both in simulation and on the physical robot, we generally ï¬nd a good correspondence of performance and learning speed between simulation and real world. This makes us optimistic that our performance numbers also hold when going to the real world. A second caveat of our simulated setup is that it currently uses information about the state of the environment, which although not impossible to obtain on a real robot, may require additional instrumentation of the experimental setup, e.g. to determine the position of the two bricks in the work space. To address this second issue we are currently focusing on end-to-end learning directly from raw visual information. Here, we have some ï¬rst results showing the feasibility of learning policies for grasping with a success rate of about 80% across different starting conditions. We view the algorithms and techniques presented here as an important step towards applying versatile deep reinforcement
learning methods for real-robot dexterous manipulation with perception.
# REFERENCES
[1] J Andrew Bagnell and Jeff G Schneider. Autonomous helicopter control using reinforcement learning policy In Robotics and Automation, 2001. search methods. Proceedings 2001 ICRA. IEEE International Conference on, volume 2, pages 1615â1620. IEEE, 2001.
[2] A. Boularias, J. Kober, and J. Peters. Relative entropy In JMLR Workshop inverse reinforcement learning. and Conference Proceedings Volume 15: AISTATS 2011, pages 182â189, Cambridge, MA, USA, April 2011. MIT Press.
[3] Marc Peter Deisenroth, Gerhard Neumann, Jan Peters, et al. A survey on policy search for robotics. Foundations and Trends in Robotics, 2(1-2):1â142, 2013.
[4] Chelsea Finn, Sergey Levine, and Pieter Abbeel. Guided cost learning: Deep inverse optimal control via policy optimization. In Proceedings of the 33nd International Conference on Machine Learning, ICML 2016, New York City, NY, USA, June 19-24, 2016, pages 49â58, 2016. URL http://jmlr.org/proceedings/papers/v48/ï¬nn16.html. [5] Shixiang Gu, Ethan Holly, Timothy Lillicrap, and Sergey Levine. Deep reinforcement learning for robotic manip- ulation. arXiv preprint arXiv:1610.00633, 2016.
[6] Shixiang Gu, Sergey Levine, Ilya Sutskever, and Andriy Mnih. Muprop: Unbiased backpropagation for stochastic neural networks. International Conference on Learning Representations (ICLR), 2016.
[7] Shixiang Gu, Tim Lillicrap, Ilya Sutskever, and Sergey Levine. Continuous deep q-learning with model-based In International Conference on Machine acceleration. Learning (ICML), 2016.
[8] Abhishek Gupta, Clemens Eppner, Sergey Levine, and Pieter Abbeel. Learning dexterous manipulation for a soft robotic hand from human demonstrations. In 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2016, Daejeon, South Korea, October 9-14, 2016, pages 3786â3793, 2016.
[9] Roland Hafner and Martin Riedmiller. Reinforcement learning in feedback control. Machine learning, 84(1-2): 137â169, 2011.
[10] Roland Hafner and Martin A. Riedmiller. Neural rein- forcement learning controllers for a real robot applica- tion. In 2007 IEEE International Conference on Robotics and Automation, ICRA 2007, 10-14 April 2007, Roma, Italy, pages 2098â2103, 2007.
[11] Nicolas Heess, Gregory Wayne, David Silver, Tim Lill- icrap, Tom Erez, and Yuval Tassa. Learning continuous In Ad- control policies by stochastic value gradients. vances in Neural Information Processing Systems (NIPS), pages 2926â2934, 2015. [12] K. J. Hunt, D. Sbarbaro, R.
Żbikowski, and P. J. Gawthrop. Neural networks for control systems: A
survey. Automatica, 28(6):1083â1112, November 1992. ISSN 0005-1098.
[13] M. Kalakrishnan, L. Righetti, P. Pastor, and S. Schaal. Learning force control policies for compliant manipula- tion. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2011), Sept. 25-30, San Francisco, CA, 2011. URL http://www-clmc.usc.edu/ publications/K/kalakrishnan-IROS2011.
[14] M. Kalakrishnan, P. Pastor, L. Righetti, and S. Schaal. Learning objective functions for manipulation. In IEEE International Conference on Robotics and Automation, 2013.
[15] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[16] Nate Kohl and Peter Stone. Policy gradient reinforcement learning for fast quadrupedal locomotion. In Proceedings of the IEEE International Conference on Robotics and Automation, May 2004.
[17] Sergey Levine and Pieter Abbeel. Learning neural net- work policies with guided policy search under unknown dynamics. In Advances in Neural Information Processing Systems (NIPS), pages 1071â1079, 2014.
[18] Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-end training of deep visuomotor policies. arXiv preprint arXiv:1504.00702, 2015.
[19] Sergey Levine, Peter Pastor, Alex Krizhevsky, and Deirdre Quillen. Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection. CoRR, abs/1603.02199, 2016.
[20] Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforce- International Conference on Learning ment learning. Representations (ICLR), 2016.
[21] Takamitsu Matsubara, Jun Morimoto, Jun Nakanishi, Masa-aki Sato, and Kenji Doya. Learning cpg- based biped locomotion with a policy gradient method. Robotics and Autonomous Systems, 54(11):911â920, 2006.
[22] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 2015. [23] Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy P Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In International Conference on Machine Learning (ICML), 2016. [24] K. Mülling, J. Kober, O. Kroemer, and J. Peters. Learning to select and generalize striking movements in robot table tennis. International Journal of Robotics Research, 32(3):263-279, 2013. URL http://www.ias.informatik.tu-darmstadt.de/uploads/Publications/Muelling IJRR 2013.pdf.
[25] P. Pastor, M. Kalakrishnan, S. Chitta, E. Theodorou, and
S. Schaal. Skill learning and task outcome prediction for manipulation. In IEEE International Conference on Robotics and Automation (ICRA), Shanghai, China, May 9-13, 2011.
[26] Jan Peters and Stefan Schaal. Policy gradient methods for robotics. In International Conference on Intelligent Robots and Systems (IROS), pages 2219â2225. IEEE, 2006.
[27] Lerrel Pinto and Abhinav Gupta. Supersizing self-supervision: Learning to grasp from 50k tries and 700 robot hours. CoRR, abs/1509.06825, 2015. URL http://arxiv.org/abs/1509.06825.
[28] M. Riedmiller and H. Braun. A direct adaptive method for faster backpropagation learning: The RPROP algo- In H. Ruspini, editor, Proceedings of the IEEE rithm. International Conference on Neural Networks (ICNN), pages 586 â 591, San Francisco, 1993.
[29] Martin A. Riedmiller. Neural ï¬tted Q iteration - ï¬rst experiences with a data efï¬cient neural reinforcement In Machine Learning: ECML 2005, learning method. 16th European Conference on Machine Learning, Porto, Portugal, October 3-7, 2005, Proceedings, pages 317â 328, 2005.
[30] Stefan Schaal. Dynamic Movement Primitives -A Frame- work for Motor Control in Humans and Humanoid Robotics, pages 261â280. Springer Tokyo, Tokyo, 2006. ISBN 978-4-431-31381-6. doi: 10.1007/4-431-31381-8 23. URL http://dx.doi.org/10.1007/4-431-31381-8 23.
[31] John Schulman, Sergey Levine, Pieter Abbeel, Michael I. Jordan, and Philipp Moritz. Trust region policy optimiza- tion. In International Conference on Machine Learning (ICML), pages 1889â1897, 2015.
[32] John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, and Pieter Abbeel. High-dimensional continuous control using generalized advantage estimation. Interna- tional Conference on Learning Representations (ICLR), 2016.
[33] David Silver, Guy Lever, Nicolas Heess, Thomas Degris, Daan Wierstra, and Martin Riedmiller. Deterministic policy gradient algorithms. In International Conference on Machine Learning (ICML), 2014.
[34] Richard S Sutton and Andrew G Barto. Reinforcement learning: An introduction, volume 1. MIT press Cam- bridge, 1998.
[35] Gerald Tesauro. Temporal difference learning and td- gammon. Commun. ACM, 38(3):58â68, 1995.
[36] Emanuel Todorov, Tom Erez, and Yuval Tassa. Mujoco: In 2012 A physics engine for model-based control. IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 5026â5033. IEEE, 2012.
[37] Herke van Hoof, Tucker Hermans, Gerhard Neumann, and Jan Peters. Learning robot in-hand manipulation with tactile features. In 15th IEEE-RAS International Conference on Humanoid Robots, Humanoids 2015, Seoul, South Korea, November 3-5, 2015, pages 121-127, 2015. [38] Paul J. Werbos. Neural networks for control, chapter A
Menu of Designs for Reinforcement Learning over Time, pages 67â95. 1990. ISBN 0-262-13261-3.
[39] Ali Yahya, Adrian Li, Mrinal Kalakrishnan, Yevgen Chebotar, and Sergey Levine. Collective robot rein- forcement learning with distributed asynchronous guided policy search. CoRR, abs/1610.00673, 2016. URL http://arxiv.org/abs/1610.00673.
# APPENDIX
A. Reward function
In this section we provide further details regarding the reward functions described in section VI. For our experiments we derived these from the state vector of the simulation, but they could also be obtained through instrumentation in hardware. The reward functions are deï¬ned in terms of the following quantities:
• b(1)_z : height of brick 1 above the table
• sB1_{x,y,z} : x, y, z positions of a site located roughly in the center of brick 1
• sB2_{x,y,z} : x, y, z positions of a site located just above brick 2, at the position where sB1 will be located when brick 1 is stacked on top of brick 2
• sP_{x,y,z} : x, y, z positions of the pinch site of the hand, roughly the position where the fingertips would meet if the fingers are closed.
1) Sparse reward components: Using the above we can deï¬ne the following conditions for the successful completion of subtasks:
a) Reach Brick 1: The pinch site of the ï¬ngers is within a virtual box around the ï¬rst brick position.
reach = (|sB1_x − sP_x| < Δ^reach_x) ∧ (|sB1_y − sP_y| < Δ^reach_y) ∧ (|sB1_z − sP_z| < Δ^reach_z),

where Δ^reach_{x,y,z} denote the half-lengths of the sides of the virtual box for reaching.
b) Grasp Brick 1: Brick 1 is located above the table surface by a threshold, θ, which is only possible if the arm has lifted the brick.

grasp = b(1)_z > θ
c) Stack: Brick 1 is stacked on brick 2. This is expressed as a box constraint on the displacement between brick 1 and brick 2 measured in the coordinate system of brick 2.
stack = (|C(2)_x (sB1 − sB2)| < Δ^stack_x) ∧ (|C(2)_y (sB1 − sB2)| < Δ^stack_y) ∧ (|C(2)_z (sB1 − sB2)| < Δ^stack_z),

where Δ^stack_{x,y,z} denote the half-lengths of the sides of the virtual box for stacking, and C(2) is the rotation matrix that projects a vector into the coordinate system of brick 2. This projection into the coordinate system of brick 2 is necessary since brick 2 is allowed to move freely. It ensures that the box constraint is considered relative to the pose of brick 2. While this criterion for a successful stack is quite complicated to express in terms of sites, it could be easily implemented in hardware e.g. via a contact sensor attached to brick 2.
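For concreteness, the three predicates can be written as a few lines of array code; the box half-lengths, the lift threshold and the rotation matrix C(2) are assumed to be supplied by the simulation (or by instrumentation on a real setup).

```python
import numpy as np

def reach(s_b1, s_p, delta_reach):
    """Pinch site lies inside a virtual box centred on brick 1."""
    return bool(np.all(np.abs(s_b1 - s_p) < delta_reach))

def grasp(b1_height, theta=0.03):
    """Brick 1 is lifted above the table by at least `theta` (3 cm here)."""
    return b1_height > theta

def stack(s_b1, s_b2, delta_stack, C2):
    """Displacement between the bricks, expressed in brick 2's frame,
    lies inside the stacking box."""
    d = C2 @ (s_b1 - s_b2)
    return bool(np.all(np.abs(d) < delta_stack))
```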
2) Shaping components: The full composite reward also includes two distance based shaping components that guide the hand to the brick 1 and then brick 1 to brick 2. These could be approximate and would be relatively simple to implement with a hardware visual system that can only roughly identify the centroid of an object. The shaping components of the reward are given as follows:
a) Reaching to brick 1:

r_S1(sB1, sP) = 1 − tanh²(w1 ‖sB1 − sP‖)
b) Reaching to brick 2 for stacking:
r_S2(sB1, sB2) = 1 − tanh²(w2 ‖sB1 − sB2‖).
3) Full reward: Using the above components the reward functions from section VI: Stack, Grasp shaping, Reach and grasp shaping, and Full composite shaping can be expressed as in equations (3, 4, 5, 6) below. These make use of the predicates
above to determine which subtasks have been completed and return a reward accordingly.
Abbreviating stack = stack(b(1)_z, sP, sB1, sB2), and similarly for grasp and reach:

Stack (sparse reward only):
r(b(1)_z, sP, sB1, sB2) = 1 if stack; 0 otherwise.   (3)

Grasp shaping:
r(b(1)_z, sP, sB1, sB2) = 1 if stack; 0.25 if ¬stack ∧ grasp; 0 otherwise.   (4)

Reach and grasp shaping:
r(b(1)_z, sP, sB1, sB2) = 1 if stack; 0.25 if ¬stack ∧ grasp; 0.125 if ¬(stack ∨ grasp) ∧ reach; 0 otherwise.   (5)

Full composite shaping:
r(b(1)_z, sP, sB1, sB2) = 1 if stack; 0.25 + 0.25 r_S2(sB1, sB2) if ¬stack ∧ grasp; 0.125 if ¬(stack ∨ grasp) ∧ reach; 0.125 r_S1(sB1, sP) otherwise.   (6)
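Putting the pieces together, equation (6) can be implemented by reusing the predicates sketched above together with the two smooth terms; the weights w1 and w2 are free parameters whose values are not given in the text.

```python
import numpy as np

def r_s1(s_b1, s_p, w1):
    """Smooth reaching term: 1 - tanh^2(w1 * ||s_b1 - s_p||)."""
    return 1.0 - np.tanh(w1 * np.linalg.norm(s_b1 - s_p)) ** 2

def r_s2(s_b1, s_b2, w2):
    """Smooth stacking term: 1 - tanh^2(w2 * ||s_b1 - s_b2||)."""
    return 1.0 - np.tanh(w2 * np.linalg.norm(s_b1 - s_b2)) ** 2

def full_composite_reward(b1_height, s_p, s_b1, s_b2,
                          delta_reach, delta_stack, C2, w1=1.0, w2=1.0):
    """Eq. (6): staged sparse rewards plus smooth distance-based terms.
    Assumes reach/grasp/stack from the earlier sketch are in scope."""
    if stack(s_b1, s_b2, delta_stack, C2):
        return 1.0
    if grasp(b1_height):
        return 0.25 + 0.25 * r_s2(s_b1, s_b2, w2)
    if reach(s_b1, s_p, delta_reach):
        return 0.125
    return 0.125 * r_s1(s_b1, s_p, w1)
```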
1704.01444 | Learning to Generate Reviews and Discovering Sentiment | We explore the properties of byte-level recurrent language models. When given
sufficient amounts of capacity, training data, and compute time, the
representations learned by these models include disentangled features
corresponding to high-level concepts. Specifically, we find a single unit which
performs sentiment analysis. These representations, learned in an unsupervised
manner, achieve state of the art on the binary subset of the Stanford Sentiment
Treebank. They are also very data efficient. When using only a handful of
labeled examples, our approach matches the performance of strong baselines
trained on full datasets. We also demonstrate the sentiment unit has a direct
influence on the generative process of the model. Simply fixing its value to be
positive or negative generates samples with the corresponding positive or
negative sentiment. | http://arxiv.org/pdf/1704.01444 | Alec Radford, Rafal Jozefowicz, Ilya Sutskever | cs.LG, cs.CL, cs.NE | null | null | cs.LG | 20170405 | 20170406
# Learning to Generate Reviews and Discovering Sentiment
# Alec Radford 1 Rafal Jozefowicz 1 Ilya Sutskever 1
Abstract We explore the properties of byte-level recur- rent language models. When given sufï¬cient amounts of capacity, training data, and compute time, the representations learned by these models include disentangled features corresponding to high-level concepts. Speciï¬cally, we ï¬nd a single unit which performs sentiment analysis. These representations, learned in an unsupervised man- ner, achieve state of the art on the binary subset of the Stanford Sentiment Treebank. They are also very data efï¬cient. When using only a handful of labeled examples, our approach matches the performance of strong baselines trained on full datasets. We also demonstrate the sentiment unit has a direct inï¬uence on the generative process of the model. Simply ï¬xing its value to be pos- itive or negative generates samples with the cor- responding positive or negative sentiment.
it is now commonplace to reuse these representations on a broad suite of related tasks - one of the most successful examples of transfer learning to date (Oquab et al., 2014).
There is also a long history of unsupervised representation learning (Olshausen & Field, 1997). Much of the early re- search into modern deep learning was developed and val- idated via this approach (Hinton & Salakhutdinov, 2006) (Huang et al., 2007) (Vincent et al., 2008) (Coates et al., 2010) (Le, 2013). Unsupervised learning is promising due to its ability to scale beyond only the subsets and domains of data that can be cleaned and labeled given resource, pri- vacy, or other constraints. This advantage is also its difï¬- culty. While supervised approaches have clear objectives that can be directly optimized, unsupervised approaches rely on proxy tasks such as reconstruction, density estima- tion, or generation, which do not directly encourage useful representations for speciï¬c tasks. As a result, much work has gone into designing objectives, priors, and architectures meant to encourage the learning of useful representations. We refer readers to Goodfellow et al. (2016) for a detailed review.
# 1. Introduction and Motivating Work
Representation learning (Bengio et al., 2013) plays a crit- ical role in many modern machine learning systems. Rep- resentations map raw data to more useful forms and the choice of representation is an important component of any application. Broadly speaking, there are two areas of re- search emphasizing different details of how to learn useful representations.
The supervised training of high-capacity models on large labeled datasets is critical to the recent success of deep learning techniques for a wide range of applications such as image classiï¬cation (Krizhevsky et al., 2012), speech recognition (Hinton et al., 2012), and machine transla- tion (Wu et al., 2016). Analysis of the task speciï¬c rep- resentations learned by these models reveals many fasci- Image classiï¬ers nating properties (Zhou et al., 2014). learn a broadly useful hierarchy of feature detectors re- representing raw pixels as edges, textures, and objects (Zeiler & Fergus, 2014). In the ï¬eld of computer vision,
1OpenAI, San Francisco, California, USA. Correspondence to: Alec Radford <alec@openai.com>.
Despite these difï¬culties, there are notable applications of unsupervised learning. Pre-trained word vectors are a vi- tal part of many modern NLP systems (Collobert et al., 2011). These representations, learned by modeling word co-occurrences, increase the data efï¬ciency and general- ization capability of NLP systems (Pennington et al., 2014) (Chen & Manning, 2014). Topic modelling can also dis- cover factors within a corpus of text which align to human interpretable concepts such as art or education (Blei et al., 2003).
How to learn representations of phrases, sentences, and documents is an open area of research. Inspired by the success of word vectors, Kiros et al. (2015) propose skip-thought vectors, a method of training a sentence encoder by predicting the preceding and following sentence. The representation learned by this objective performs competitively on a broad suite of evaluated tasks. More advanced training techniques such as layer normalization (Ba et al., 2016) further improve results. However, skip-thought vectors are still outperformed by supervised models which directly optimize the desired performance metric on a specific dataset. This is the case for both text classification
tasks, which measure whether a speciï¬c concept is well en- coded in a representation, and more general semantic sim- ilarity tasks. This occurs even when the datasets are rela- tively small by modern standards, often consisting of only a few thousand labeled examples.
In contrast to learning a generic representation on one large dataset and then evaluating on other tasks/datasets, Dai & Le (2015) proposed using similar unsupervised objec- tives such as sequence autoencoding and language model- ing to ï¬rst pretrain a model on a dataset and then ï¬netune it for a given task. This approach outperformed training the same model from random initialization and achieved state of the art on several text classiï¬cation datasets. Combin- ing language modelling with topic modelling and ï¬tting a small supervised feature extractor on top has also achieved strong results on in-domain document level sentiment anal- ysis (Dieng et al., 2016).
Considering this, we hypothesize two effects may be combining to result in the weaker performance of purely unsupervised approaches. Skip-thought vectors were trained on a corpus of books. But some of the classification tasks they are evaluated on, such as sentiment analysis of reviews of consumer goods, do not have much overlap with the text of novels. We propose this distributional issue, combined with the limited capacity of current models, results in representational underfitting. Current generic distributed sentence representations may be very lossy - good at capturing the gist, but poor with the precise semantic or syntactic details which are critical for applications.

The experimental and evaluation protocols may be underestimating the quality of unsupervised representation learning for sentences and documents due to certain seemingly insignificant design decisions. Hill et al. (2016) also raises concern about current evaluation tasks in their recent work which provides a thorough survey of architectures and objectives for learning unsupervised sentence representations - including the above mentioned skip-thoughts.

In this work, we test whether this is the case. We focus in on the task of sentiment analysis and attempt to learn an unsupervised representation that accurately contains this concept. Mikolov et al. (2013) showed that word-level recurrent language modelling supports the learning of useful word vectors and we are interested in pushing this line of work. As an approach, we consider the popular research benchmark of byte (character) level language modelling due to its further simplicity and generality. We are also interested in evaluating this approach as it is not immediately clear whether such a low-level training objective supports the learning of high-level representations. We train on a very large corpus picked to have a similar distribution as our task of interest. We also benchmark on a wider range of tasks to quantify the sensitivity of the learned representation to various degrees of out-of-domain data and tasks.

# 2. Dataset

Much previous work on language modeling has evaluated on relatively small but competitive datasets such as Penn Treebank (Marcus et al., 1993) and Hutter Prize Wikipedia (Hutter, 2006). As discussed in Jozefowicz et al. (2016) performance on these datasets is primarily dominated by regularization. Since we are interested in high-quality sentiment representations, we chose the Amazon product review dataset introduced in McAuley et al. (2015) as a training corpus. In de-duplicated form, this dataset contains over 82 million product reviews from May 1996 to July 2014 amounting to over 38 billion training bytes. Due to the size of the dataset, we first split it into 1000 shards containing equal numbers of reviews and set aside 1 shard for validation and 1 shard for test.

Figure 1. The mLSTM converges faster and achieves a better result within our time budget compared to a standard LSTM with the same hidden state size (validation and training bits per character plotted against the number of updates).
# 3. Model and Training Details
Many potential recurrent architectures and hyperparameter settings were considered in preliminary experiments on the dataset. Given the size of the dataset, searching the wide space of possible conï¬gurations is quite costly. To help alleviate this, we evaluated the generative performance of smaller candidate models after a single pass through the dataset. The model chosen for the large scale experiment is a single layer multiplicative LSTM (Krause et al., 2016) with 4096 units. We observed multiplicative LSTMs to converge faster than normal LSTMs for the hyperparam-
eter settings that were explored both in terms of data and wall-clock time. The model was trained for a single epoch on mini-batches of 128 subsequences of length 256 for a total of 1 million weight updates. States were initialized to zero at the beginning of each shard and persisted across updates to simulate full-backpropagation and allow for the forward propagation of information outside of a given sub- sequence. Adam (Kingma & Ba, 2014) was used to ac- celerate learning with an initial 5e-4 learning rate that was decayed linearly to zero over the course of training. Weight normalization (Salimans & Kingma, 2016) was applied to the LSTM parameters. Data-parallelism was used across 4 Pascal Titan X gpus to speed up training and increase effec- tive memory size. Training took approximately one month. The model is compact, containing approximately as many parameters as there are reviews in the training dataset. It also has a high ratio of compute to total parameters com- pared to other large scale language models due to operating at a byte level. The selected model reaches 1.12 bits per byte.
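For readers unfamiliar with the multiplicative LSTM of Krause et al. (2016), the recurrence can be sketched as follows. This is a minimal illustration, not the authors' implementation: the weight shapes, gate ordering, bias handling, and the omission of weight normalization are all simplifying assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mlstm_step(x, h_prev, c_prev, params):
    """One step of a multiplicative LSTM, sketched.

    x: input vector (e.g. a one-hot byte encoding), shape (d_in,)
    h_prev, c_prev: previous hidden and cell states, shape (d_h,)
    params: dict of weight matrices; the names are illustrative only.
    """
    # Multiplicative intermediate state: element-wise product of two projections.
    m = (params["W_mx"] @ x) * (params["W_mh"] @ h_prev)
    # Standard LSTM gates, but conditioned on m instead of h_prev.
    z = params["W_x"] @ x + params["W_m"] @ m + params["b"]
    i, f, o, u = np.split(z, 4)
    c = sigmoid(f) * c_prev + sigmoid(i) * np.tanh(u)
    h = sigmoid(o) * np.tanh(c)
    return h, c

d_in, d_h = 256, 8                      # toy sizes; the paper's model uses 4096 units
rng = np.random.default_rng(0)
params = {"W_mx": rng.normal(size=(d_h, d_in)) * 0.1,
          "W_mh": rng.normal(size=(d_h, d_h)) * 0.1,
          "W_x":  rng.normal(size=(4 * d_h, d_in)) * 0.1,
          "W_m":  rng.normal(size=(4 * d_h, d_h)) * 0.1,
          "b":    np.zeros(4 * d_h)}
x = np.zeros(d_in); x[65] = 1.0         # one-hot encoding of byte 65 ('A')
h, c = mlstm_step(x, np.zeros(d_h), np.zeros(d_h), params)
```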
Table 1. Small dataset classiï¬cation accuracies
METHOD            MR    CR    SUBJ  MPQA
NBSVM [49]        79.4  81.8  93.2  86.3
SKIPTHOUGHT [23]  77.3  81.8  92.6  87.9
SKIPTHOUGHT(LN)   79.5  83.1  93.7  89.3
SDAE [12]         74.6  78.0  90.8  86.9
CNN [21]          81.5  85.0  93.4  89.6
ADASENT [56]      83.1  86.3  95.5  93.3
BYTE MLSTM        86.9  91.4  94.6  88.5
# 4. Experimental Setup and Results
Our model processes text as a sequence of UTF-8 encoded bytes (Yergeau, 2003). For each byte, the model updates its hidden state and predicts a probability distribution over the next possible byte. The hidden state of the model serves as an online summary of the sequence which encodes all information the model has learned to preserve that is relevant to predicting the future bytes of the sequence. We are interested in understanding the properties of the learned encoding. The process of extracting a feature representation is outlined as follows:
Figure 2. Performance on the binary version of SST as a function of labeled training examples. The solid lines indicate the average of 100 runs while the shaded regions indicate the 10th and 90th percentiles. Previous results on the dataset are plotted as dashed lines with the numbers indicating the amount of examples required for logistic regression on the byte mLSTM representation to match their performance: RNTN (Socher et al., 2013), CNN (Kim, 2014), DMN (Kumar et al., 2015), LSTM (Wieting et al., 2015), NSE (Munkhdalai & Yu, 2016), CT-LSTM (Looks et al., 2017).
⢠Since newlines are used as review delimiters in the training dataset, all newline characters are replaced with spaces to avoid the model resetting state.
⢠Any leading whitespace is removed and replaced with a newline+space to simulate a start token. Any trailing whitespace is removed and replaced with a space to simulate an end token. The text is encoded as a UTF- 8 byte sequence.
⢠Model states are initialized to zeros. The model pro- cesses the sequence and the ï¬nal cell states of the mL- STM are used as a feature representation. Tanh is ap- plied to bound values between -1 and 1.
We follow the methodology established in Kiros et al. (2015) by training a logistic regression classifier on top of our model's representation on datasets for tasks including semantic relatedness, text classification, and paraphrase detection. For the details on these comparison experiments, we refer the reader to their work. One exception is that we use an L1 penalty for text classification results instead of L2 as we found this performed better in the very low data regime.
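Putting the extraction steps listed above together with this evaluation protocol, the intended flow looks roughly like the following sketch. The `mlstm` object and its `step` method stand in for the trained byte-level model (this is not a released API), and the regularization strength `C` is an arbitrary illustrative value that would be chosen on validation data in practice.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def extract_features(text, mlstm):
    """Map a review to the model's final cell state, following the steps above."""
    text = text.replace("\n", " ")            # newlines delimit reviews in training
    text = "\n " + text.strip() + " "         # simulate start/end tokens
    h = np.zeros(mlstm.hidden_size)
    c = np.zeros(mlstm.hidden_size)
    for byte in text.encode("utf-8"):         # process the text as UTF-8 bytes
        h, c = mlstm.step(byte, h, c)
    return np.tanh(c)                         # bound values to [-1, 1]

def evaluate(train_texts, y_train, test_texts, y_test, mlstm):
    X_train = np.stack([extract_features(t, mlstm) for t in train_texts])
    X_test = np.stack([extract_features(t, mlstm) for t in test_texts])
    # L1-penalized logistic regression on top of the frozen representation.
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
    clf.fit(X_train, y_train)
    return clf.score(X_test, y_test)
```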
# 4.1. Review Sentiment Analysis
Table 1 shows the results of our model on 4 standard text classiï¬cation datasets. The performance of our model is noticeably lopsided. On the MR (Pang & Lee, 2005) and
CR (Hu & Liu, 2004) sentiment analysis datasets we improve the state of the art by a significant margin. The MR and CR datasets are sentences extracted from Rotten Tomatoes, a movie review website, and Amazon product reviews (which almost certainly overlaps with our training corpus). This suggests that our model has learned a rich representation of text from a similar domain. On the other two datasets, SUBJ's subjectivity/objectivity detection (Pang & Lee, 2004) and MPQA's opinion polarity (Wiebe et al., 2005) our model has no noticeable advantage over other unsupervised representation learning approaches and is still outperformed by a supervised approach.
To better quantify the learned representation, we also test on a wider set of sentiment analysis datasets with different properties. The Stanford Sentiment Treebank (SST) (Socher et al., 2013) was created specifically to evaluate more complex compositional models of language. It is derived from the same base dataset as MR but was relabeled via Amazon Mechanical Turk and includes dense labeling of the phrases of parse trees computed for all sentences. For the binary subtask, this amounts to 76961 total labels compared to the 6920 sentence level labels. As a demonstration of the capability of unsupervised representation learning to simplify data collection and remove preprocessing steps, our reported results ignore these dense labels and computed parse trees, using only the raw text and sentence level labels.
The representation learned by our model achieves 91.8% significantly outperforming the state of the art of 90.2% by a 30 model ensemble (Looks et al., 2017). As visualized in Figure 2, our model is very data efficient. It matches the performance of baselines using as few as a dozen labeled examples and outperforms all previous results with only a few hundred labeled examples. This is under 10% of the total sentences in the dataset. Confusingly, despite a 16% relative error reduction on the binary subtask, it does not reach the state of the art of 53.6% on the fine-grained subtask, achieving 52.9%.
# 4.2. Sentiment Unit
Table 2. IMDB sentiment classiï¬cation
METHOD                                   ERROR
FULLUNLABELEDBOW (MAAS ET AL., 2011)     11.11%
NB-SVM TRIGRAM (MESNIL ET AL., 2014)      8.13%
SENTIMENT UNIT (OURS)                     7.70%
SA-LSTM (DAI & LE, 2015)                  7.24%
BYTE MLSTM (OURS)                         7.12%
TOPICRNN (DIENG ET AL., 2016)             6.24%
VIRTUAL ADV (MIYATO ET AL., 2016)         5.91%
Figure 3. Histogram of cell activation values for the sentiment unit on IMDB reviews (negative versus positive reviews).

We conducted further analysis to understand what representations our model learned and how they achieve the observed data efficiency. The benefit of an L1 penalty in the low data regime (see Figure 2) is a clue. L1 regularization is known to reduce sample complexity when there are many irrelevant features (Ng, 2004). This is likely to be the case for our model since it is trained as a language model and not as a supervised feature extractor. By inspecting the relative contributions of features on various datasets, we discovered a single unit within the mLSTM that directly corresponds to sentiment. In Figure 3 we show the histogram of the final activations of this unit after processing IMDB reviews (Maas et al., 2011) which shows a bimodal distribution with a clear separation between positive and negative reviews. In Figure 4 we visualize the activations of this unit on 6 randomly selected reviews from a set of 100 high contrast reviews which shows it acts as an online estimate of the local sentiment of the review. Fitting a threshold to this single unit achieves a test accuracy of 92.30% which outperforms a strong supervised result on the dataset, the 91.87% of NB-SVM trigram (Mesnil et al., 2014), but is still below the semi-supervised state of the art of 94.09% (Miyato et al., 2016). Using the full 4096 unit representation achieves 92.88%. This is an improvement of only 0.58% over the sentiment unit suggesting that almost all information the model retains that is relevant to sentiment analysis is represented in the very compact form of a single scalar. Table 2 has a full list of results on the IMDB dataset.
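A sketch of the single-unit classifier described above. The index of the sentiment unit (`unit_index`) is a placeholder; in practice it is identified by inspecting the logistic regression weights, and the threshold is fit on training data only.

```python
import numpy as np

def fit_threshold(values, labels):
    """Scan cutoffs on one feature and keep the most accurate one (train data only).

    Assumes the positive class tends to have larger activations; negate `values`
    first if the discovered unit is oriented the other way.
    """
    best_t, best_acc = 0.0, 0.0
    for t in np.unique(values):
        acc = np.mean((values > t) == labels)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def single_unit_accuracy(feats_train, y_train, feats_test, y_test, unit_index):
    """Classify using only the sentiment unit's activation and a fitted threshold."""
    t = fit_threshold(feats_train[:, unit_index], y_train.astype(bool))
    return np.mean((feats_test[:, unit_index] > t) == y_test.astype(bool))
```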
# 4.3. Capacity Ceiling
Encouraged by these results, we were curious how well the modelâs representation scales to larger datasets. We try our approach on the binary version of the Yelp Dataset
Figure 5. Performance on the binary version of the Yelp reviews dataset as a function of labeled training examples. The model's performance plateaus after about ten labeled examples and only slowly improves with additional data.
Table 3. Microsoft Paraphrase Corpus
METHOD                             ACC   F1
SKIPTHOUGHT (KIROS ET AL., 2015)   73.0  82.0
SDAE (HILL ET AL., 2016)           76.4  83.4
MTMETRICS [31]                     77.4  84.1
BYTE MLSTM                         75.0  82.8
Figure 4. Visualizing the value of the sentiment cell as it processes six randomly selected high contrast IMDB reviews. Red indicates negative sentiment while green indicates positive sentiment. Best seen in color.
Challenge in 2015 as introduced in Zhang et al. (2015). This dataset contains 598,000 examples which is an order of magnitude larger than any other datasets we tested on. When visualizing performance as a function of number of training examples in Figure 5, we observe a "capacity ceiling" where the test accuracy of our approach only improves by a little over 1% across a four order of magnitude increase in training data. Using the full dataset, we achieve 95.22% test accuracy. This is better than a BoW TFIDF baseline at 93.66% but slightly worse than the 95.64% of a linear classifier on top of the 500,000 most frequent n-grams up to length 5.
The observed capacity ceiling is an interesting phenomenon and stumbling point for scaling our unsupervised representations. We think a variety of factors are contributing to cause this. Since our model is trained only on Amazon reviews, it does not appear to be sensitive to concepts specific to other domains. For instance, Yelp reviews are of
Table 4. SICK semantic relatedness subtask
METHOD             r      ρ      MSE
SKIPTHOUGHT [23]   0.858  0.792  0.269
SKIPTHOUGHT(LN)    0.858  0.788  0.270
TREE-LSTM [47]     0.868  0.808  0.253
BYTE MLSTM         0.792  0.725  0.390
businesses, where details like hospitality, location, and at- mosphere are important. But these ideas are not present in reviews of products. Additionally, there is a notable drop in the relative performance of our approach transitioning from sentence to document datasets. This is likely due to our model working on the byte level which leads to it fo- cusing on the content of the last few sentences instead of the whole document. Finally, as the amount of labeled data increases, the performance of the simple linear model we train on top of our static representation will eventually satu- rate. Complex models explicitly trained for a task can con- tinue to improve and eventually outperform our approach with enough labeled data.
With this context, the observed results make a lot of sense.
Sentiment fixed to positive:
- Just what I was looking for. Nice fitted pants, exactly matched seam to color contrast with other pants I own. Highly recommended and also very happy!
- This product does what it is supposed to. I always keep three of these in my kitchen just in case ever I need a replacement cord. Great little item.
- Best hammock ever! Stays in place and holds it's shape. Comfy (I love the deep neon pictures on it), and looks so cute.
- Dixie is getting her Doolittle newsletter we'll see another new one coming out next year. Great stuff. And, here's the contents - information that we hardly know about or forget.
- I love this weapons look . Like I said beautiful !!! I recommend it to all. Would suggest this to many roleplayers , And I stronge to get them for every one I know. A must watch for any man who love Chess!

Sentiment fixed to negative:
- The package received was blank and has no barcode. A waste of time and money.
- Hard to put on the crib without some kind of embellishment. My guess is just like the screw kind of attachment I had.
- They didn't fit either. Straight high sticks at the end. On par with other buds I have. Lesson learned to avoid.
- great product but no seller. couldn't ascertain a cause. Broken product. I am a prolific consumer of this company all the time.
- Like the cover, Fits good. . However, an annoying rear piece like garbage should be out of this one. I bought this hoping it would help with a huge pull down my back & the black just doesn't stay. Scrap off everytime I use it.... Very disappointed.
Table 5. Random samples from the model generated when the value of sentiment hidden state is fixed to either -1 or 1 for all steps. The sentiment unit has a strong influence on the model's generative process.
On a small sentence level dataset of a known domain (the movie reviews of Stanford Sentiment Treebank) our model sets a new state of the art. But on a large, document level dataset of a different domain (the Yelp reviews) it is only competitive with standard baselines.
# 4.4. Other Tasks
Besides classification, we also evaluate on two other standard tasks: semantic relatedness and paraphrase detection. While our model performs competitively on Microsoft Research Paraphrase Corpus (Dolan et al., 2004) in Table 3, it performs poorly on the SICK semantic relatedness task (Marelli et al., 2014) in Table 4. It is likely that the form and content of the semantic relatedness task, which is built on top of descriptions of images and videos and contains sentences such as "A sea turtle is hunting for fish" is effectively out-of-domain for our model which has only been trained on the text of product reviews.
# 4.5. Generative Analysis
Although the focus of our analysis has been on the prop- erties of our modelâs representation, it is trained as a gen- erative model and we are also interested in its generative capabilities. Hu et al. (2017) and Dong et al. (2017) both designed conditional generative models to disentangle the content of text from various attributes like sentiment or
tense. We were curious whether a similar result could be achieved using the sentiment unit. In Table 5 we show that by simply setting the sentiment unit to be positive or neg- ative, the model generates corresponding positive or nega- tive reviews. While all sampled negative reviews contain sentences with negative sentiment, they sometimes contain sentences with positive sentiment as well. This might be reï¬ective of the bias of the training corpus which contains over 5x as many ï¬ve star reviews as one star reviews. Nev- ertheless, it is interesting to see that such a simple manipu- lation of the modelâs representation has a noticeable effect on its behavior. The samples are also high quality for a byte level language model and often include valid sentences.
# 5. Discussion and Future Work
It is an open question why our model recovers the con- cept of sentiment in such a precise, disentangled, inter- pretable, and manipulable way. It is possible that senti- ment as a conditioning feature has strong predictive capa- bility for language modelling. This is likely since senti- ment is such an important component of a review. Previous work analysing LSTM language models showed the exis- tence of interpretable units that indicate position within a line or presence inside a quotation (Karpathy et al., 2015). In many ways, the sentiment unit in this model is just a scaled up example of the same phenomena. The update equation of an LSTM could play a role. The element-wise
operation of its gates may encourage axis-aligned repre- sentations. Models such as word2vec have also been ob- served to have small subsets of dimensions strongly asso- ciated with speciï¬c tasks (Li et al., 2016).
Our work highlights the sensitivity of learned representa- tions to the data distribution they are trained on. The results make clear that it is unrealistic to expect a model trained on a corpus of books, where the two most common gen- res are Romance and Fantasy, to learn an encoding which preserves the exact sentiment of a review. Likewise, it is unrealistic to expect a model trained on Amazon product reviews to represent the precise semantic content of a cap- tion of an image or a video.
There are several promising directions for future work highlighted by our results. The observed performance plateau, even on relatively similar domains, suggests im- proving the representation model both in terms of architec- ture and size. Since our model operates at the byte-level, hierarchical/multi-timescale extensions could improve the quality of representations for longer documents. The sen- sitivity of learned representations to their training domain could be addressed by training on a wider mix of datasets with better coverage of target tasks. Finally, our work encourages further research into language modelling as it demonstrates that the standard language modelling objec- tive with no modiï¬cations is sufï¬cient to learn high-quality representations.
# References

Ba, Jimmy Lei, Kiros, Jamie Ryan, and Hinton, Geoffrey E. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.

Bengio, Yoshua, Courville, Aaron, and Vincent, Pascal. Representation learning: A review and new perspectives. IEEE transactions on pattern analysis and machine intelligence, 35(8):1798-1828, 2013.

Blei, David M, Ng, Andrew Y, and Jordan, Michael I. Latent dirichlet allocation. Journal of machine Learning research, 3(Jan):993-1022, 2003.

Chen, Danqi and Manning, Christopher D. A fast and accurate dependency parser using neural networks. In EMNLP, pp. 740-750, 2014.

Coates, Adam, Lee, Honglak, and Ng, Andrew Y. An analysis of single-layer networks in unsupervised feature learning. Ann Arbor, 1001(48109):2, 2010.

Collobert, Ronan, Weston, Jason, Bottou, Leon, Karlen, Michael, Kavukcuoglu, Koray, and Kuksa, Pavel. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12(Aug):2493-2537, 2011.

Dai, Andrew M and Le, Quoc V. Semi-supervised sequence learning. In Advances in Neural Information Processing Systems, pp. 3079-3087, 2015.

Dieng, Adji B, Wang, Chong, Gao, Jianfeng, and Paisley, John. Topicrnn: A recurrent neural network with long-range semantic dependency. arXiv preprint arXiv:1611.01702, 2016.

Dolan, Bill, Quirk, Chris, and Brockett, Chris. Unsupervised construction of large paraphrase corpora: Exploiting massively parallel news sources. In Proceedings of the 20th international conference on Computational Linguistics, pp. 350. Association for Computational Linguistics, 2004.

Dong, Li, Huang, Shaohan, Wei, Furu, Lapata, Mirella, Zhou, Ming, and Ke, Xu. Learning to generate product reviews from attributes. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, pp. 623-632. Association for Computational Linguistics, 2017.

Goodfellow, Ian, Bengio, Yoshua, and Courville, Aaron. Deep learning. 2016.

Hill, Felix, Cho, Kyunghyun, and Korhonen, Anna. Learning distributed representations of sentences from unlabelled data. arXiv preprint arXiv:1602.03483, 2016.
Hinton, Geoffrey, Deng, Li, Yu, Dong, Dahl, George E, Mohamed, Abdel-rahman, Jaitly, Navdeep, Senior, An- drew, Vanhoucke, Vincent, Nguyen, Patrick, Sainath, Tara N, et al. Deep neural networks for acoustic mod- eling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine, 29 (6):82â97, 2012.
Hinton, Geoffrey E and Salakhutdinov, Ruslan R. Reduc- ing the dimensionality of data with neural networks. sci- ence, 313(5786):504â507, 2006.
Hu, Minqing and Liu, Bing. Mining and summarizing In Proceedings of the tenth ACM customer reviews. SIGKDD international conference on Knowledge dis- covery and data mining, pp. 168â177. ACM, 2004.
Hu, Zhiting, Yang, Zichao, Liang, Xiaodan, Salakhutdinov, Ruslan, and Xing, Eric P. Controllable text generation. arXiv preprint arXiv:1703.00955, 2017.
Huang, Fu Jie, Boureau, Y-Lan, LeCun, Yann, et al. Un- supervised learning of invariant feature hierarchies with applications to object recognition. In Computer Vision and Pattern Recognition, 2007. CVPRâ07. IEEE Confer- ence on, pp. 1â8. IEEE, 2007.
Hutter, Marcus. The human knowledge compression con- test. 2006. URL http://prize. hutter1. net, 2006.
Jozefowicz, Rafal, Vinyals, Oriol, Schuster, Mike, Shazeer, Noam, and Wu, Yonghui. Exploring the limits of language modeling. arXiv preprint arXiv:1602.02410, 2016.
Madnani, Nitin, Tetreault, Joel, and Chodorow, Martin. Re- examining machine translation metrics for paraphrase identiï¬cation. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Com- putational Linguistics: Human Language Technologies, pp. 182â190. Association for Computational Linguistics, 2012.
Karpathy, Andrej, Johnson, Justin, and Fei-Fei, Li. Vi- sualizing and understanding recurrent networks. arXiv preprint arXiv:1506.02078, 2015.
Kim, Yoon. Convolutional neural networks for sentence classiï¬cation. arXiv preprint arXiv:1408.5882, 2014.
Kingma, Diederik and Ba, Jimmy. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Kiros, Ryan, Zhu, Yukun, Salakhutdinov, Ruslan R, Zemel, Richard, Urtasun, Raquel, Torralba, Antonio, and Fidler, Sanja. Skip-thought vectors. In Advances in neural in- formation processing systems, pp. 3294â3302, 2015.
Krause, Ben, Lu, Liang, Murray, Iain, and Renals, Steve. Multiplicative lstm for sequence modelling. arXiv preprint arXiv:1609.07959, 2016.
Marcus, Mitchell P, Marcinkiewicz, Mary Ann, and San- torini, Beatrice. Building a large annotated corpus of english: The penn treebank. Computational linguistics, 19(2):313â330, 1993.
Marelli, Marco, Bentivogli, Luisa, Baroni, Marco, Bernardi, Raffaella, Menini, Stefano, and Zamparelli, Roberto. Semeval-2014 task 1: Evaluation of com- positional distributional semantic models on full sen- tences through semantic relatedness and textual entail- ment. SemEval-2014, 2014.
McAuley, Julian, Pandey, Rahul, and Leskovec, Jure. Infer- ring networks of substitutable and complementary prod- ucts. In Proceedings of the 21th ACM SIGKDD Inter- national Conference on Knowledge Discovery and Data Mining, pp. 785â794. ACM, 2015.
Krizhevsky, Alex, Sutskever, Ilya, and Hinton, Geoffrey E. Imagenet classiï¬cation with deep convolutional neural networks. In Advances in neural information processing systems, pp. 1097â1105, 2012.
Mesnil, Gregoire, Mikolov, Tomas, Ranzato, Marc'Aurelio, and Bengio, Yoshua. Ensemble of generative and discriminative techniques for sentiment analysis of movie reviews. arXiv preprint arXiv:1412.5335, 2014.
Kumar, Ankit, Irsoy, Ozan, Su, Jonathan, Bradbury, James, English, Robert, Pierce, Brian, Ondruska, Peter, Gulra- jani, Ishaan, and Socher, Richard. Ask me anything: Dy- namic memory networks for natural language process- ing. CoRR, abs/1506.07285, 2015.
Mikolov, Tomas, Yih, Wen-tau, and Zweig, Geoffrey. Lin- guistic regularities in continuous space word representa- tions. 2013.
Le, Quoc V. Building high-level features using large scale unsupervised learning. In Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Confer- ence on, pp. 8595â8598. IEEE, 2013.
Miyato, Takeru, Dai, Andrew M, and Goodfellow, Ian. Ad- versarial training methods for semi-supervised text clas- siï¬cation. arXiv preprint arXiv:1605.07725, 2016.
Munkhdalai, Tsendsuren and Yu, Hong. Neural semantic encoders. arXiv preprint arXiv:1607.04315, 2016.
Li, Jiwei, Monroe, Will, and Jurafsky, Dan. Understanding neural networks through representation erasure. arXiv preprint arXiv:1612.08220, 2016.
Looks, Moshe, Herreshoff, Marcello, Hutchins, DeLesley, and Norvig, Peter. Deep learning with dynamic compu- tation graphs. arXiv preprint arXiv:1702.02181, 2017.
Maas, Andrew L, Daly, Raymond E, Pham, Peter T, Huang, Dan, Ng, Andrew Y, and Potts, Christopher. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Com- putational Linguistics: Human Language Technologies- Volume 1, pp. 142â150. Association for Computational Linguistics, 2011.
Ng, Andrew Y. Feature selection, l 1 vs. l 2 regularization, and rotational invariance. In Proceedings of the twenty- ï¬rst international conference on Machine learning, pp. 78. ACM, 2004.
Olshausen, Bruno A and Field, David J. Sparse coding with an overcomplete basis set: A strategy employed by v1? Vision research, 37(23):3311â3325, 1997.
Oquab, Maxime, Bottou, Leon, Laptev, Ivan, and Sivic, Josef. Learning and transferring mid-level image repre- sentations using convolutional neural networks. In Pro- ceedings of the IEEE conference on computer vision and pattern recognition, pp. 1717â1724, 2014.
Pang, Bo and Lee, Lillian. A sentimental education: Senti- ment analysis using subjectivity summarization based on minimum cuts. In Proceedings of the 42nd annual meet- ing on Association for Computational Linguistics, pp. 271. Association for Computational Linguistics, 2004.
Yergeau, Francois. Utf-8, a transformation format of iso 10646. 2003.
Pang, Bo and Lee, Lillian. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In Proceedings of the 43rd annual meeting on association for computational linguistics, pp. 115â 124. Association for Computational Linguistics, 2005.
Pennington, Jeffrey, Socher, Richard, and Manning, Christopher D. Glove: Global vectors for word representation. In EMNLP, volume 14, pp. 1532-1543, 2014.
Salimans, Tim and Kingma, Diederik P. Weight normaliza- tion: A simple reparameterization to accelerate training of deep neural networks. In Advances in Neural Infor- mation Processing Systems, pp. 901â901, 2016.
Socher, Richard, Perelygin, Alex, Wu, Jean Y, Chuang, Jason, Manning, Christopher D, Ng, Andrew Y, Potts, Christopher, et al. Recursive deep models for seman- tic compositionality over a sentiment treebank. Citeseer, 2013.
Zeiler, Matthew D and Fergus, Rob. Visualizing and under- In European confer- standing convolutional networks. ence on computer vision, pp. 818â833. Springer, 2014.
Zhang, Xiang, Zhao, Junbo, and LeCun, Yann. Character- level convolutional networks for text classiï¬cation. In Advances in neural information processing systems, pp. 649â657, 2015.
Zhao, Han, Lu, Zhengdong, and Poupart, Pascal. Self- adaptive hierarchical sentence model. arXiv preprint arXiv:1504.05070, 2015.
Zhou, Bolei, Khosla, Aditya, Lapedriza, Agata, Oliva, Aude, and Torralba, Antonio. Object detectors emerge in deep scene cnns. arXiv preprint arXiv:1412.6856, 2014.
Tai, Kai Sheng, Socher, Richard, and Manning, Christopher D. Improved semantic representations from tree-structured long short-term memory networks. arXiv preprint arXiv:1503.00075, 2015.
Vincent, Pascal, Larochelle, Hugo, Bengio, Yoshua, and Manzagol, Pierre-Antoine. Extracting and composing robust features with denoising autoencoders. In Proceed- ings of the 25th international conference on Machine learning, pp. 1096â1103. ACM, 2008.
Wang, Sida and Manning, Christopher D. Baselines and bigrams: Simple, good sentiment and topic classiï¬ca- In Proceedings of the 50th Annual Meeting of tion. the Association for Computational Linguistics: Short Papers-Volume 2, pp. 90â94. Association for Computa- tional Linguistics, 2012.
Wiebe, Janyce, Wilson, Theresa, and Cardie, Claire. An- notating expressions of opinions and emotions in lan- guage. Language resources and evaluation, 39(2):165â 210, 2005.
Wieting, John, Bansal, Mohit, Gimpel, Kevin, and Livescu, Karen. Towards universal paraphrastic sentence embed- dings. arXiv preprint arXiv:1511.08198, 2015.
Wu, Yonghui, Schuster, Mike, Chen, Zhifeng, Le, Quoc V, Norouzi, Mohammad, Macherey, Wolfgang, Krikun, Maxim, Cao, Yuan, Gao, Qin, Macherey, Klaus, et al. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144, 2016. | {
"id": "1612.08220"
} |
1704.00805 | On the Properties of the Softmax Function with Application in Game Theory and Reinforcement Learning | In this paper, we utilize results from convex analysis and monotone operator
theory to derive additional properties of the softmax function that have not
yet been covered in the existing literature. In particular, we show that the
softmax function is the monotone gradient map of the log-sum-exp function. By
exploiting this connection, we show that the inverse temperature parameter
determines the Lipschitz and co-coercivity properties of the softmax function.
We then demonstrate the usefulness of these properties through an application
in game-theoretic reinforcement learning. | http://arxiv.org/pdf/1704.00805 | Bolin Gao, Lacra Pavel | math.OC, cs.LG | 10 pages, 4 figures. Comments are welcome | null | math.OC | 20170403 | 20180821
# On the Properties of the Softmax Function with Application in Game Theory and Reinforcement Learning
Bolin Gao and Lacra Pavel
Abstract—In this paper, we utilize results from convex analysis and monotone operator theory to derive additional properties of the softmax function that have not yet been covered in the existing literature. In particular, we show that the softmax function is the monotone gradient map of the log-sum-exp function. By exploiting this connection, we show that the inverse temperature parameter λ determines the Lipschitz and co-coercivity properties of the softmax function. We then demonstrate the usefulness of these properties through an application in game-theoretic reinforcement learning.
# I. INTRODUCTION
The softmax function is one of the most well-known functions in science and engineering and has enjoyed widespread usage in fields such as game theory [1], [2], [3], reinforcement learning [4] and machine learning [5], [6]. From a game theory and reinforcement learning perspective, the softmax function maps the raw payoff or the score (or Q-value) associated with a payoff to a mixed strategy [1], [2], [4], whereas from the perspective of multi-class logistic regression, the softmax function maps a vector of logits (or feature variables) to a posterior probability distribution [5], [6]. The broader engineering applications involving the softmax function are numerous; interesting examples can be found in the fields of VLSI and neuromorphic computing, see [35], [36], [37], [39]. The term "softmax" is a portmanteau of "soft" and "argmax" [5]. The function first appeared in the work of Luce [12], although its coinage is mostly credited to Bridle [13]. Depending on the context in which the softmax function appears, it also goes by the name of Boltzmann distribution [1], [4], [34], Gibbs map [22], [46], logit map, logit choice rule, logit response function [1], [2], [3], [19], [14], [23], [57] or (smooth) perturbed best response function [44], [56]. The reader should take care in distinguishing the softmax function used in this paper from the log-sum-exp function, which is often also referred to as the "softmax" (since the log-sum-exp is a soft approximation of the vector-max function [7], [24]). There are many factors contributing to the wide-spread usage of the softmax function. In the context of reinforcement learning, the softmax function ensures a trade-off between exploitation and exploration, in that every strategy in an agent's possession has a chance of being explored. Unlike some other choice mechanisms such as ε-greedy [4], the usage of
softmax selection rule1 is favorably supported by experimental literature in game theory and reinforcement learning as a plausible model for modeling real-life decision-making. For instance, in [20], the authors noted that the behavior of monkeys during reinforcement learning experiments is consistent with the softmax selection rule. Furthermore, the input-output behavior of the softmax function has been compared to lateral inhibition in biological neural networks [5]. For additional discussions on the connections between softmax selection rule and the neurophysiology of decision-making, see [30], [31], [32], [33]. From the perspective of game theory, the softmax function characterizes the so-called "logit equilibrium", which accounts for incomplete information and random perturbation of the payoff during gameplay and has been noted for having better versatility in describing the outcomes of gameplay as compared to the Nash equilibrium [3], [14].
Fig. 1: High-level representation of a game-theoretic multi-agent reinforcement learning scheme with the softmax selection rule. In this learning scenario, the players each choose some strategy, play the game and receive real-valued payoffs. The players then use some learning rule to independently convert the payoffs into scores. Finally, each player uses the softmax to select the next strategy.
Despite the intuitions that researchers have acquired with respect to the usage of the softmax function, it is apparent that the understanding of its mathematical properties is still lacking. For instance, in the analysis of stateless multi-agent reinforcement learning schemes (Figure 1), when the action selection rule is taken as the softmax function, it is of interest which, if any, properties of softmax can allow us to conclude convergence of the learning algorithm towards a solution of the game (e.g., a Nash or logit equilibrium). Although the desired properties that can be used to conclude such convergence are fairly mundane, virtually no reference to these properties can be found within the existing body of literature. With regard to applications in the context of
B. Gao and L. Pavel are with the Department of Electrical and Computer Engineering, University of Toronto, Toronto, ON, M5S 3G4. bolin.gao@mail.utoronto.ca, pavel@ece.utoronto.ca
1In this paper, we refer to the softmax function interchangeably as the softmax operator, softmax map, softmax choice, softmax selection rule, or simply, the softmax.
reinforcement and machine learning, the adjustment of the temperature constant of the softmax function is still performed on a rule-of-thumb basis. It has also been brieï¬y speculated in [42] that proper adjustment of the temperature constant can be used for game-theoretic reinforcement learning algorithms to achieve higher expected payoff. Therefore, an adaptive mechanism for scheduling the temperature constant would be desirable for many applications. Clearly, these questions can only be afï¬rmatively answered by uncovering new properties of the softmax function.
The goal of this paper is to expand on the known mathe- matical properties of the softmax function and demonstrate how they can be utilized to conclude the convergence of learning algorithm in a simple application of game-theoretic reinforcement learning. For additional examples and more involved applications, see our related paper [21]. We perform our analysis and derive new properties by using tools from convex analysis [7], [24] and monotone operator theory [25], [26]. It has been known that stateless multi-agent reinforce- ment learning that utilizes the softmax selection rule has close connections with the ï¬eld of evolutionary game theory [9], [10], [22], [23], [54], [20], [58]. Therefore, throughout this paper, we motivate some of the results through insights from the ï¬eld of evolutionary game theory [15], [16], [17]. It is our hope that researchers across various disciplines can apply our results presented here to their domain-speciï¬c problems.
The organization of this paper is as follows. Section II intro- duces notation convention for the rest of the paper. Section III introduces the deï¬nition of the softmax function, its different representations as well as a brief survey of several of its known properties from the existing literature. Section IV provides the background to convex optimization and monotone operator theory. In Section V, we derive additional properties of the softmax function. Section VI provides an analysis of a stateless continuous-time score-based reinforcement learning scheme within a single-player game setup to illustrate the application of these properties. Section VII provides the conclusion and some open problems for further investigation.
# II. NOTATIONS
The notations used in this paper are as follows:

• The p-norm of a vector is denoted as ‖·‖_p, 1 ≤ p ≤ ∞.

• The n − 1 dimensional unit simplex is denoted by Δ^{n−1}, where Δ^{n−1} := {x ∈ R^n | ‖x‖_1 = 1, x_i ≥ 0}.

• The (relative) interior of Δ^{n−1} is denoted by int(Δ^{n−1}), where int(Δ^{n−1}) := {x ∈ R^n | ‖x‖_1 = 1, x_i > 0}.

• e_i ∈ R^n denotes the i-th canonical basis vector of R^n, e.g., e_i = [0, ..., 1, ..., 0]^T where 1 occupies the i-th position.

• The vector of ones is denoted as 1 := [1, ..., 1]^T and the vector of zeros is denoted as 0 := [0, ..., 0]^T.

• Matrices are denoted using bold capital letters such as A. In general, a vector in the unconstrained space R^n will be denoted using z, while a vector in the n − 1 dimensional unit simplex will be denoted using x. All logarithms are assumed to be base e.
# III. REVIEW OF THE SOFTMAX FUNCTION AND ITS KNOWN PROPERTIES
While the softmax function may take on different appear- ances depending on the application, its base model is that of a vector-valued function, whose individual component consists of an exponential evaluated at an element of a vector, which is normalized by the summation of the exponential of all the elements of that vector. In this section, we present several well- known and equivalent representations of the softmax function, and review some of its properties that are either immediate based on its deï¬nition or have been covered in the existing literature.
A. Representations of the Softmax function
The most well-known and widely-accepted version of the softmax function is as follows [5], [37], [40], [41], [43], [59].
Definition 1. The softmax function is given by σ : R^n → int(Δ^{n−1}),

\sigma(z) = \frac{1}{\sum_{j=1}^{n} \exp(\lambda z_j)} \begin{bmatrix} \exp(\lambda z_1) \\ \vdots \\ \exp(\lambda z_n) \end{bmatrix}, \quad \lambda > 0,   (1)

where λ is referred to as the inverse temperature constant.
Remark 1. The softmax function is commonly presented in the literature as the individual components of (1),
\sigma_i(z) = \frac{\exp(\lambda z_i)}{\sum_{j=1}^{n} \exp(\lambda z_j)}, \quad 1 \le i \le n.   (2)
When λ = 1, we refer to (1) as the standard softmax function. As λ → 0, the output of σ converges point-wise to the center of the simplex, i.e., a uniform probability distribution. On the other hand, as λ → ∞, the output of σ converges point-wise to e_j ∈ R^n, where j = argmax_{1≤i≤n} e_i^T z, provided that the difference between two or more components of z is not too small [23], [37]. We note that elsewhere in the literature, the reciprocal of λ is also commonly used.

Remark 2. In R^2, (2) reduces to the logistic function in terms of z_i − z_j,

\sigma_i(z) = \frac{\exp(\lambda z_i)}{\exp(\lambda z_i) + \exp(\lambda z_j)} = \frac{1}{1 + \exp(-\lambda(z_i - z_j))}, \quad j \ne i.   (3)

Furthermore, we note that (2) can be equivalently represented as,

\sigma_i(z) = \exp\Big(\lambda z_i - \log\big(\textstyle\sum_{j=1}^{n} \exp(\lambda z_j)\big)\Big).   (4)
While (4) is seldom used as a representation of the softmax function, the author noted that (4) represents an exponential family, which is the solution of the replicator dynamics of evolutionary game theory [2], [16], [17]. We will expand on the connections between the replicator dynamics and the softmax function in section V.
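As a concrete illustration of Definition 1 and the limiting behavior in λ discussed above, the following is a direct NumPy transcription of (1)-(2); it is a small sketch for illustration and is not tied to any particular library implementation.

```python
import numpy as np

def softmax(z, lam=1.0):
    """Softmax of (2): sigma_i(z) = exp(lam*z_i) / sum_j exp(lam*z_j)."""
    w = np.exp(lam * np.asarray(z, dtype=float))
    return w / w.sum()

z = np.array([1.0, 2.0, 0.5])
print(softmax(z, lam=1.0))    # a point in the interior of the simplex
print(softmax(z, lam=0.01))   # lam -> 0: approaches the uniform distribution
print(softmax(z, lam=50.0))   # lam -> inf: concentrates on the argmax (index 1)
```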
Fig. 2: Plots of the log-sum-exp, negative entropy and both components of softmax function over R^2 with λ = 1. The red curve on the negative entropy plot is the restriction of the negative entropy over the 1-dimensional simplex, Δ^1.
Another important representation of the softmax function can be derived by considering the "argmax function" under entropy regularization.2 Let z ∈ R^n, and consider the argmax of x^T z over the simplex,

M(z) = \operatorname*{argmax}_{x \in \Delta^{n-1}} \; x^T z.   (5)

When there is a unique largest element in the vector z, it is clear that M returns the basis vector corresponding to the entry of that element, that is, M(z) = e_j, where j = argmax_{1≤i≤n} e_i^T z. This solution corresponds to a vertex of the simplex. In general, however, (5) is set-valued; to see this, simply consider the case where two or more components of z are equal.

For many learning related applications, it is highly desirable for M(z) to be single-valued [22], [23], [41], [49], [50]. The most common approach to achieve this is by employing a so-called regularizer function ψ to (5), which yields the regularized argmax function:3

M(z) = \operatorname*{argmax}_{x \in \Delta^{n-1}} \; [x^T z - \psi(x)].   (6)

A common choice of the regularizer is the negative entropy function restricted to the simplex, which under the convention 0 log(0) = 0, is given by ψ : R^n → R ∪ {+∞},

\psi(x) = \begin{cases} \lambda^{-1} \sum_{j=1}^{n} x_j \log(x_j), & x \in \Delta^{n-1}, \\ +\infty, & x \notin \Delta^{n-1}, \end{cases} \quad \lambda > 0.   (7)

When λ = 1, we refer to (7) as the standard negative entropy function.

Since negative entropy is λ^{-1}-strongly convex4 in ‖·‖_1 over int(Δ^{n−1}) [48], by strong concavity of the argument of (6), it can be shown that by invoking the Karush-Kuhn-Tucker (KKT) conditions, the unique maximizer of (6) is the softmax function evaluated at z ∈ R^n, i.e.,

\operatorname*{argmax}_{x \in \Delta^{n-1}} \; \Big[x^T z - \lambda^{-1} \sum_{j=1}^{n} x_j \log(x_j)\Big] = \sigma(z).   (8)

It has been noted in [20], [39], [51], [52] that the argument of the left-hand side of (8),

x^T z - \lambda^{-1} \sum_{j=1}^{n} x_j \log(x_j),   (9)

represents the so called "free energy" in statistical thermodynamics. In light of this connection, from a game-theoretic perspective, the softmax function can be thought of as providing the mixed strategy with the maximum entropy which maximizes the payoff of a game [20].

It is also worth noting that the maximum of (9) over the simplex is by definition the Legendre-Fenchel transform of the negative entropy function [24, p. 102], also commonly referred to as the log-sum-exp function, which is given by lse : R^n → R,

\mathrm{lse}(z) := \lambda^{-1} \log\Big(\sum_{j=1}^{n} \exp(\lambda z_j)\Big), \quad \lambda > 0.   (10)

When λ = 1, we refer to (10) as the standard log-sum-exp function.

It is well-known that the log-sum-exp is an approximation to the vector-max function [7, p. 72], [24, p. 27], vecmax(z) := max{z_1, ..., z_n}. That is, for any z ∈ R^n, vecmax(z) ≤ lse(z) ≤ vecmax(z) + λ^{-1} log(n), which can be shown by considering exp(λ vecmax(z)) ≤ Σ_{j=1}^{n} exp(λ z_j) ≤ n exp(λ vecmax(z)).

Due to this reason, the log-sum-exp is sometimes referred to as the "softmax function" in optimization-oriented literature. We note that the dual or convex conjugate of the log-sum-exp function (10) is the negative entropy restricted to the simplex, given by (7) [7, p. 93][24, p. 482][52]. We illustrate the log-sum-exp function as well as the negative entropy and the softmax function in Figure 2. By Fenchel-Young inequality, the log-sum-exp function is bounded below by a linear function,

\mathrm{lse}(z) \ge x^T z - \psi(x), \quad \forall x \in \Delta^{n-1}, \; z \in \mathbb{R}^n.   (11)

2As pointed out in [5, p. 182], the softmax function is a soft approximation of the argmax function, z ↦ argmax_{x∈Δ^{n−1}} x^T z, not that of the "max" function.

3The regularizer is also referred to as an admissible deterministic perturbation [2, p. 189], penalty function [22], [23], smoothing function [44] or Bregman function [48]. For detailed construction of the regularizer, see [22], [23], [47].

4Recall that a function f is µ-strongly convex in ‖·‖_p if there exists µ > 0, s.t. f(θz + (1−θ)z') ≤ θf(z) + (1−θ)f(z') − (µ/2)θ(1−θ)‖z − z'‖_p^2 for all z, z' ∈ dom f and θ ∈ [0, 1]. f is µ-strongly concave if −f is µ-strongly convex.

Further consequences of the duality between the negative entropy and the log-sum-exp function as well as its role in game theory will not be explored at this time. Interested readers may refer to [38], [52] or any standard textbooks on convex analysis, for example, [7], [24], [28].
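As an aside on computation, the sandwich bound vecmax(z) ≤ lse(z) ≤ vecmax(z) + λ^{-1} log(n) above also suggests the standard numerically stable way of evaluating (10): factor out the vector max before exponentiating. A small NumPy sketch, for illustration only:

```python
import numpy as np

def lse(z, lam=1.0):
    """Log-sum-exp of (10), evaluated stably by factoring out the vector max."""
    z = lam * np.asarray(z, dtype=float)
    m = z.max()
    return (m + np.log(np.exp(z - m).sum())) / lam

z = np.array([1000.0, 1001.0, 999.0])
print(lse(z))                                         # finite; a naive sum overflows
print(max(z) <= lse(z) <= max(z) + np.log(len(z)))    # the sandwich bound, lam = 1
```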
Finally, we provide a probabilistic characterization of the softmax function. Let ε_i, i ∈ {1, ..., n} be independent and identically distributed random variables with a Gumbel distribution given by,

\Pr[\varepsilon_i \le c] = \exp(-\exp(-\lambda c - \gamma)),   (12)

where γ ≈ 0.57721 is the Euler-Mascheroni constant. It can be shown that for any vector z ∈ R^n [2, p. 194][19],

\Pr\Big[i = \operatorname*{argmax}_{1 \le j \le n} \; (z_j + \varepsilon_j)\Big] = \sigma_i(z).   (13)

In game theory terms, (13) represents the probability of choosing the pure strategy that maximizes the payoff or score z ∈ R^n, after the payoff or score has been perturbed by a stochastic perturbation.
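The characterization (12)-(13) is easy to check by simulation. In the sketch below the Gumbel noise is drawn with scale 1/λ; the location shift by the Euler-Mascheroni constant in (12) is common to every coordinate and therefore does not affect the argmax, so it is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
lam, n_samples = 1.0, 200000
z = np.array([0.5, 1.0, -0.3, 0.2])

def softmax(z, lam=1.0):
    w = np.exp(lam * z)
    return w / w.sum()

# i.i.d. Gumbel noise with scale 1/lam added to each coordinate of z.
eps = rng.gumbel(loc=0.0, scale=1.0 / lam, size=(n_samples, z.size))
counts = np.bincount(np.argmax(z + eps, axis=1), minlength=z.size)

print(counts / n_samples)   # empirical choice frequencies
print(softmax(z, lam))      # should match (13) up to Monte Carlo error
```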
B. Properties of the Softmax - State of the Art
We briefly comment on some properties of the softmax function that are either immediate or have been covered in the existing literature. First, σ maps the origin of R^n to the barycenter of Δ^{n−1}, that is, σ(0) = n^{-1}1. The softmax function σ is surjective but not injective, as it can easily be shown that for any z, z + c1 ∈ R^n, ∀c ∈ R, we have σ(z + c1) = σ(z). By definition, ‖σ(z)‖_1 = σ(z)^T 1 = 1, ∀z ∈ R^n.
In a recent paper, the authors of [43] noted that σ(P(z)) = Pσ(z), where P is any permutation matrix, and that the standard softmax function satisfies a type of "coordinate non-expansiveness" property, whereby given a vector z ∈ R^n with z_j ≥ z_i, then 0 ≤ σ_j(z) − σ_i(z) ≤ (z_j − z_i)/2. The last property can be derived by exploiting the properties of the hyperbolic tangent function. It was also noted that these properties of the softmax function bear similarities with the Euclidean projection onto Δ^{n−1} [43].
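Both of these elementary properties, the shift invariance σ(z + c1) = σ(z) noted above and the permutation equivariance σ(Pz) = Pσ(z), are easy to confirm numerically; a small sketch:

```python
import numpy as np

def softmax(z, lam=1.0):
    w = np.exp(lam * (z - z.max()))   # shifting by the max is harmless by invariance
    return w / w.sum()

rng = np.random.default_rng(1)
z, c = rng.normal(size=5), 3.7
P = np.eye(5)[rng.permutation(5)]     # a random permutation matrix

print(np.allclose(softmax(z + c), softmax(z)))       # sigma(z + c*1) = sigma(z)
print(np.allclose(softmax(P @ z), P @ softmax(z)))   # sigma(P z) = P sigma(z)
```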
In a direction that is tangential to the aim of this paper, the authors of [40] are interested in finding a bound on the softmax function. It can be shown that,

\sigma_i(z) = \frac{\exp(\lambda z_i)}{\sum_{j=1}^{n} \exp(\lambda z_j)} \;\ge\; \prod_{j \ne i} \frac{1}{1 + \exp(-\lambda(z_i - z_j))},   (14)
where (14) is referred to as the "one-vs-each" bound, which can be generalized to bounds on arbitrary probabilities [40]. From (3), we see that this inequality is tight for n = 2.
IV. REVIEW OF CONVEX OPTIMIZATION AND MONOTONE OPERATOR THEORY
In this section we review some of the definitions and results from convex optimization and monotone operator theory that will be used in the derivation of new properties of the softmax function. Since the following definitions are standard, readers who are familiar with these subjects can skip this section without any loss of continuity. Most of the proofs of the propositions in this section can be found in references such as [7], [24], [25], [26], [27], [28]. Throughout this section, we assume that R^n is equipped with the standard inner product ⟨z, z'⟩ := Σ_{i=1}^{n} z_i z'_i with the induced 2-norm ‖z‖_2 := √⟨z, z⟩. We assume the domain of f, dom f, is convex. C^1, C^2 denote the class of continuously-differentiable and twice continuously-differentiable functions, respectively.
Definition 2. A function f : dom f ⊆ R^n → R is convex if,

f(\theta z + (1-\theta) z') \le \theta f(z) + (1-\theta) f(z'),   (15)

for all z, z' ∈ dom f and θ ∈ [0, 1], and strictly convex if (15) holds strictly whenever z ≠ z' and θ ∈ (0, 1).

The convexity of a C^2 function f is easily determined through its Hessian ∇²f.

Lemma 1. Let f be C^2. Then f is convex if and only if dom f is convex and its Hessian is positive semidefinite, that is, for all z ∈ dom f and u ∈ R^n,

u^T \nabla^2 f(z)\, u \ge 0,   (16)

and strictly convex if ∇²f(z) is positive definite for all z ∈ dom f.
Next, we introduce the concept of a monotone operator and its related properties. A monotone operator is usually taken as a set-valued relation, however, it is also natural for the deï¬nitions related to a monotone operator to be directly applied to single-valued maps [26].
Definition 3. ([26, p. 154]) An operator (or mapping) F : D ⊆ R^n → R^n is said to be:

• pseudo monotone on D if,

F(z')^T (z - z') \ge 0 \;\Longrightarrow\; F(z)^T (z - z') \ge 0, \quad \forall z, z' \in D.   (17)

• pseudo monotone plus on D if it is pseudo monotone on D and,

F(z')^T (z - z') \ge 0 \text{ and } F(z)^T (z - z') = 0 \;\Longrightarrow\; F(z) = F(z'), \quad \forall z, z' \in D.   (18)

• monotone on D if,

(F(z) - F(z'))^T (z - z') \ge 0, \quad \forall z, z' \in D.   (19)

• monotone plus on D if it is monotone on D and,

(F(z) - F(z'))^T (z - z') = 0 \;\Longrightarrow\; F(z) = F(z'), \quad \forall z, z' \in D.   (20)

• strictly monotone on D if,

(F(z) - F(z'))^T (z - z') > 0, \quad \forall z, z' \in D, \; z \ne z'.   (21)
Clearly, strictly monotone implies monotone plus, which in turn implies monotone, pseudo monotone plus and pseudo monotone. By definition, every strictly monotone operator is an injection. We refer to an operator F as being (strictly) anti-monotone if −F is (strictly) monotone. The following proposition provides a natural connection between C^1, convex functions and monotone gradient maps.
Lemma 2. A C^1 function f is convex if and only if

(\nabla f(z) - \nabla f(z'))^T (z - z') \ge 0, \quad \forall z, z' \in \operatorname{dom} f,   (22)

and strictly convex if and only if,

(\nabla f(z) - \nabla f(z'))^T (z - z') > 0, \quad \forall z, z' \in \operatorname{dom} f, \; z \ne z'.   (23)
Next, we introduce the notions of Lipschitz continuity and co-coercivity, and show that the two concepts are related through the gradient of a convex function.
Definition 4. An operator (or mapping) F : D ⊆ R^n → R^n is said to be

• Lipschitz (or L-Lipschitz) if there exists an L > 0 such that,

\|F(z) - F(z')\|_2 \le L \|z - z'\|_2, \quad \forall z, z' \in D.   (24)

If L = 1 in (24), then F is referred to as nonexpansive. Otherwise, if L ∈ (0, 1), then F is referred to as contractive.

• co-coercive (or 1/L-co-coercive) if there exists an L > 0 such that,

(F(z) - F(z'))^T (z - z') \ge \frac{1}{L} \|F(z) - F(z')\|_2^2, \quad \forall z, z' \in D.   (25)

If L = 1 in (25), then F is referred to as firmly nonexpansive.

By the Cauchy-Schwarz inequality, every 1/L-co-coercive operator is L-Lipschitz; in particular, every firmly nonexpansive operator is nonexpansive. However, the reverse need not be true, for example f(z) = −z is nonexpansive but not firmly nonexpansive. Fortunately, the Baillon-Haddad theorem ([27, p. 40], Theorem 3.13) provides the condition for when an L-Lipschitz operator is also 1/L-co-coercive.

Theorem 1. (Baillon-Haddad theorem) Let f : dom f ⊆ R^n → R be a C^1, convex function on dom f and such that ∇f is L-Lipschitz continuous for some L > 0; then ∇f is 1/L-co-coercive.
Finally, we introduce the notion of maximal monotonicity. Let H : R^n → 2^{R^n} be a set-valued map, where 2^{R^n} denotes the power set of R^n. Let the graph of H be given by gra H := {(u, v) ∈ R^n × R^n | v ∈ H(u)}. The set-valued map H is said to be monotone if (u − u′)^⊤(v − v′) ≥ 0, ∀v ∈ H(u), v′ ∈ H(u′).
Definition 5. ([25, p. 297]) Let H : R^n → 2^{R^n} be monotone. Then H is maximal monotone if there exists no monotone operator G : R^n → 2^{R^n} such that gra G properly contains gra H, i.e., for every (u, v) ∈ R^n × R^n,

(u, v) ∈ gra H ⟺ (∀(u′, v′) ∈ gra H) (u − u′)^⊤(v − v′) ≥ 0. (26)
By Zornâs Lemma, every monotone operator can be ex- tended to a maximal monotone operator [24, p. 535], [25, p. 297]. For the scope of this paper, we are interested when a single-valued map is maximal monotone. The following proposition provides a simple characterization of this result [24, p. 535].
Lemma 3. If a continuous mapping F : R^n → R^n is monotone, it is maximal monotone. In particular, every differentiable monotone mapping is maximal monotone.
V. DERIVATION OF PROPERTIES OF SOFTMAX FUNCTION
In this section we derive several properties of the softmax function using tools from convex analysis and monotone op- erator theory introduced in the previous section. We begin by establishing the connection between the log-sum-exp function and the softmax function.
It has long been known that the softmax function is the gradient map of a convex potential function [37], however, the fact that its potential function is the log-sum-exp function (i.e., (10)) is rarely discussed.5 We make this connection clear with the following proposition.
Proposition 1. The softmax function is the gradient of the log-sum-exp function, that is, σ(z) = ∇ lse(z).
Proof. Evaluating the partial derivative of lse at each component, we have ∂ lse(z)/∂z_i = exp(λz_i) / Σ_{j=1}^n exp(λz_j) = σ_i(z). Assembling the partial derivatives into the gradient, we have,

∇ lse(z) = [∂ lse(z)/∂z_1, …, ∂ lse(z)/∂z_n]^⊤ = (1 / Σ_{j=1}^n exp(λz_j)) [exp(λz_1), …, exp(λz_n)]^⊤ = σ(z).
Next, we calculate the Hessian of the log-sum-exp function (and hence the Jacobian of the softmax function).
Proposition 2. The Jacobian of the softmax function and Hessian of the log-sum-exp function is given by:
J[σ(z)] = ∇² lse(z) = λ(diag(σ(z)) − σ(z)σ(z)^⊤), (27)
where (27) is a symmetric positive semideï¬nite matrix and satisï¬es J[Ï(z)]1 = 0, that is, 1 is the eigenvector associated with the zero eigenvalue of J[Ï(z)].
5Although not explicitly stated, this relationship could also be found in [7, p. 93] and various other sources.
Proof. The diagonal entries of ∇² lse are given by,

∂² lse(z)/∂z_i² = λ (exp(λz_i) Σ_{j=1}^n exp(λz_j) − exp(λz_i)²) / (Σ_{j=1}^n exp(λz_j))²,

and the off-diagonal entries of ∇² lse are given by the mixed partials,

∂² lse(z)/∂z_k ∂z_i = −λ exp(λz_k) exp(λz_i) / (Σ_{j=1}^n exp(λz_j))².
Assembling the partial derivatives, we obtain the Hessian of lse and the Jacobian of Ï:
J[σ(z)] = ∇² lse(z) = λ(diag(σ(z)) − σ(z)σ(z)^⊤). (28)
The symmetry of J[Ï(z)] comes from the symmetric struc- ture of the diagonal and outer product terms. The positive semi-deï¬niteness of J[Ï(z)] follows from an application of the Cauchy-Schwarz inequality [7, p. 74]. It can be shown through direct computation that J[Ï(z)]1 = 0 or alternatively refer to [2, p. 213].
Remark 3. This result was previously noted in references such as [37], [38] and can be found in [2, p. 195], [7, p. 74]. As a trivial consequence of Proposition 2, we can write the individual components of J[σ(z)] as,

J_ij[σ(z)] = λσ_i(z)(δ_ij − σ_j(z)), (29)
where δij is the Kronecker delta function. This representation is preferred for machine learning related applications and is loosely referred to as the âderivative of the softmaxâ [11].
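As an illustrative numerical check of Propositions 1 and 2 (a minimal sketch of ours, not part of the original text; it assumes lse(z) = (1/λ) log Σ_j exp(λz_j), consistent with Proposition 1, and the test values are arbitrary):

```python
import numpy as np

def lse(z, lam=1.0):
    # log-sum-exp with inverse temperature lambda
    return np.log(np.sum(np.exp(lam * z))) / lam

def softmax(z, lam=1.0):
    e = np.exp(lam * (z - np.max(z)))  # max-shift for numerical stability
    return e / np.sum(e)

def softmax_jacobian(z, lam=1.0):
    # J[sigma(z)] = lam * (diag(sigma) - sigma sigma^T), cf. (27)/(29)
    s = softmax(z, lam)
    return lam * (np.diag(s) - np.outer(s, s))

rng = np.random.default_rng(0)
n, lam, eps = 5, 2.0, 1e-6
z = rng.normal(size=n)

# Proposition 1: grad lse(z) = sigma(z), checked by central finite differences.
grad_fd = np.array([(lse(z + eps * np.eye(n)[i], lam) - lse(z - eps * np.eye(n)[i], lam)) / (2 * eps)
                    for i in range(n)])
assert np.allclose(grad_fd, softmax(z, lam), atol=1e-5)

# Proposition 2: J is symmetric, positive semidefinite, and J 1 = 0.
J = softmax_jacobian(z, lam)
assert np.allclose(J, J.T)
assert np.all(np.linalg.eigvalsh(J) >= -1e-12)
assert np.allclose(J @ np.ones(n), 0.0)
```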
Remark 4. Using the Jacobian of the softmax function given in (27), we provide the following important observation that connects the ï¬eld of evolutionary game theory with convex analysis and monotone operator theory. Let x = Ï(z), then we have,
∇² lse(z)|_{x=σ(z)} = λ(diag(x) − xx^⊤). (30)

We note that this is precisely the matrix term appearing in the replicator dynamics [2, p. 229], [45], that is,

ẋ = ∇² lse(z)|_{x=σ(z)} u = λ(diag(x) − xx^⊤)u, (31)

where x ∈ ∆^{n−1} is a mixed strategy and u ∈ R^n is a payoff vector. We note that the matrix term was referred to as the replicator operator in [56]. To the best of our knowledge, the implications of this connection have not been discussed in the evolutionary game theory community.
Lemma 4. The log-sum-exp function is C², convex and not strictly convex on R^n.

The convexity of the log-sum-exp function is well-known [7] and follows from Proposition 2. To show that log-sum-exp is not strictly convex, take z and z + c1, where z ∈ R^n, c ∈ R; then,
lse(z + c1) = lse(z) + c. (32)
Thus, lse is afï¬ne along the line given by z+c1, which implies that the log-sum-exp function is not strictly convex. This result is also noted in [24, p. 48].
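A two-line numerical illustration of (32) and its consequence for σ (our sketch; λ, c, and z are arbitrary choices, and it again assumes lse(z) = (1/λ) log Σ_j exp(λz_j)):

```python
import numpy as np

lam, c = 2.0, 3.0
z = np.array([0.1, 0.4, -0.7])
lse = lambda v: np.log(np.exp(lam * v).sum()) / lam
softmax = lambda v: np.exp(lam * v) / np.exp(lam * v).sum()

assert np.isclose(lse(z + c), lse(z) + c)        # lse is affine along z + c*1, cf. (32)
assert np.allclose(softmax(z + c), softmax(z))   # hence sigma is constant along that line (not injective)
```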
Proposition 3. The softmax function is monotone, that is,

(σ(z) − σ(z′))^⊤(z − z′) ≥ 0, ∀z, z′ ∈ R^n, (33)

and not strictly monotone on R^n.
Proof. Monotonicity of Ï follows directly from the convexity of the log-sum-exp function. Since the log-sum-exp function is not strictly convex on Rn, therefore by Lemma 2, Ï fails to be strictly monotone. Alternatively, since every strictly monotone operator is injective, therefore Ï is not strictly monotone on Rn.
The monotonicity of Ï allows us to state a stronger result.
Corollary 1. The softmax function is a maximal monotone operator, that is, there exists no monotone operator such that its graph properly contains the graph of the softmax function.
Proof. This directly follows from Ï being a continuous, mono- tone map, see Lemma 3.
Next, we show that under appropriate conditions, the softmax function is a contraction with respect to ‖·‖₂.
Lemma 5. ([8, p. 58], Theorem 2.1.6) A C², convex function f : R^n → R has a Lipschitz continuous gradient with Lipschitz constant L > 0 if for all z, v ∈ R^n,

0 ≤ v^⊤ ∇²f(z) v ≤ L‖v‖₂². (34)
Proposition 4. The softmax function is L-Lipschitz with respect to ‖·‖₂ with L = λ, that is, for all z, z′ ∈ R^n,

‖σ(z) − σ(z′)‖₂ ≤ λ‖z − z′‖₂, (35)

where λ is the inverse temperature constant.
Proof. Given the Hessian of lse in Proposition 2, we have for all z, v ∈ R^n,

v^⊤ ∇² lse(z) v = λ(Σ_{i=1}^n v_i² σ_i(z) − (Σ_{i=1}^n v_i σ_i(z))²). (36)

Since the second term on the right hand side of (36) is nonnegative, therefore,

v^⊤ ∇² lse(z) v ≤ λ Σ_{i=1}^n v_i² σ_i(z) ≤ λ sup{σ_i(z)} Σ_{i=1}^n v_i²
⟹ v^⊤ ∇² lse(z) v ≤ λ‖v‖₂², (37)

where sup{σ_i(z) | i ∈ {1, …, n}, z ∈ R^n} = 1. By Lemma 4, ∇² lse(z) is positive semidefinite. Hence, using Lemma 1 and (37), we have,

0 ≤ v^⊤ ∇² lse(z) v ≤ λ‖v‖₂². (38)

By Lemma 5, σ is Lipschitz with L = λ.
We note that Proposition 4 can also be established by using Theorem 4.2.1. in [28, p. 240], which resorts to using duality between the negative entropy and the log-sum-exp function.
As a minor consequence of Proposition 4, by the Cauchy-Schwarz inequality, we have,

(σ(z) − σ(z′))^⊤(z − z′) ≤ λ‖z − z′‖₂². (39)

Corollary 2. The softmax function is 1/λ-co-coercive with respect to ‖·‖₂, that is, for all z, z′ ∈ R^n,

(σ(z) − σ(z′))^⊤(z − z′) ≥ (1/λ)‖σ(z) − σ(z′)‖₂², (40)
where λ is the inverse temperature constant.
Proof. Follows directly from Baillon - Haddad Theorem, see Theorem 1.
Proposition 4 and Corollary 2 show that the inverse temperature constant λ is crucial in determining the Lipschitz and co-coercive properties of the softmax function. We summarize these properties with respect to ‖·‖₂ in the following corollary.
Corollary 3. The softmax function is λ-Lipschitz and 1/λ-co-coercive for any λ > 0; in particular, it is
• nonexpansive and firmly nonexpansive for λ = 1,
• contractive for λ ∈ (0, 1),
where λ is the inverse temperature constant.
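The bounds in Proposition 4 and Corollary 2 can be spot-checked numerically; the following sketch (ours; the dimension, sample count, and tolerances are arbitrary) tests both inequalities for several values of λ:

```python
import numpy as np

def softmax(z, lam):
    e = np.exp(lam * (z - np.max(z)))
    return e / e.sum()

rng = np.random.default_rng(1)
for lam in (0.5, 1.0, 4.0):
    for _ in range(1000):
        z, zp = rng.normal(size=6), rng.normal(size=6)
        ds, dz = softmax(z, lam) - softmax(zp, lam), z - zp
        # Proposition 4: ||sigma(z) - sigma(z')||_2 <= lam * ||z - z'||_2
        assert np.linalg.norm(ds) <= lam * np.linalg.norm(dz) + 1e-12
        # Corollary 2: (sigma(z) - sigma(z'))^T (z - z') >= (1/lam) * ||sigma(z) - sigma(z')||_2^2
        assert ds @ dz >= np.linalg.norm(ds) ** 2 / lam - 1e-12
```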
Finally, we state an additional consequence of Ï being a Lipschitz, monotone operator with a symmetric Jacobian matrix over all of Rn.
Corollary 4. The softmax function is monotone plus, pseudo monotone plus and pseudo monotone on Rn.
Proof. This follows from the chain of implications in [26, p. 164].
VI. APPLICATION IN GAME-THEORETIC REINFORCEMENT LEARNING
Fig. 3: Feedback representation of the exponentially-discounted rein- forcement learning scheme (EXP-D-RL).
In this section we demonstrate an application of these new properties of the softmax function in the context of stateless continuous-time reinforcement learning in finite games. For clarity and ease of notation, we perform our analysis in a single-player setup. For extensions to N-player games, higher-order extensions, and additional simulations, refer to our related paper [21]. For other related work in this direction, see [22], [23], [46].
Consider a game G with a single player. We note that this type of game is also known as "play against nature" and is identifiable with single-population matching in population games [2]. The player is equipped with an action set A = {1, …, n} and a continuous payoff function U : ∆^{n−1} → R^n with components U_i. A mixed strategy profile is given by x = [x_1, …, x_n]^⊤ ∈ ∆^{n−1}. The player's expected payoff of using x is given by,
Σ_{i∈A} x_i U_i(x) = x^⊤ U(x),

where u = U(x) = [U_1(x), …, U_n(x)]^⊤ ∈ R^n is referred to as the payoff vector at x.
Starting at t = 0, we assume that the player repeatedly interacts with the game and aggregates his raw payoff u = U(x) ∈ R^n via the learning rule,
z_i(t) = e^{−t} z_i(0) + ∫_0^t e^{s−t} u_i(s) ds, ∀i ∈ A, (41)

where u_i = U_i(x) ∈ R is the payoff to the i-th strategy and z_i ∈ R is the score variable associated with the i-th strategy. This form of aggregation as given by (41) is known as the exponentially-discounted learning rule, under which the player allocates exponentially more weight to recent observations of the payoff [22], [23].
Taking the time derivative of (41) yields the score dynamics,
ż_i = u_i − z_i, ∀i ∈ A. (42)

We refer to (42) as the exponentially-discounted score dynamics, a set of differential equations whose solutions capture the evolution of the player's scores over time. This form of score dynamics was investigated in [20], [22], [54], [57]. Since U_i(x) is continuous over a compact domain, there exists some constant M > 0 such that max_{i∈A} |U_i(x)| ≤ M for all x ∈ ∆^{n−1}. Then it can be shown using standard arguments that Ω = {z ∈ R^n | ‖z‖₂ ≤ √M} is a compact, positively invariant set (solutions remain in Ω for all time).
We can express (42) using stacked-vector notation as,

ż = u − z, (43)

where z = [z_1, …, z_n]^⊤. Suppose that the score variable z is mapped to the strategy x via the softmax selection rule, i.e., x = σ(z); then the payoff vector can be written as u = U(x) = U(σ(z)). Expressing the composition of the softmax selection rule with the payoff vector as (U ∘ σ)(z) := U(σ(z)), we can also write (43) as,

ż = (U ∘ σ)(z) − z. (44)

The overall exponentially-discounted reinforcement learning scheme (EXP-D-RL) can be represented as the closed-loop feedback system in Figure 3, where I_n is the n × n identity matrix, 1/(s + 1) is the transfer function of (42) from u_i to z_i, s ∈ C, and ⊗ is the Kronecker product. The closed-loop system is equivalently represented by,

ż = u − z, u = U(x), x = σ(z). (45)
From (44), we see that the equilibria of the overall closed-loop system (45) are the fixed points of the map z ↦ (U ∘ σ)(z). This fixed-point condition can be restated as,

ż = 0 ⟺ z* = u*, u* = U(x*), x* = σ(z*). (46)

The existence of the fixed point is guaranteed by Brouwer's Fixed Point Theorem provided that U ∘ σ is a continuous function with bounded range [57]. Since z* = u*, the fixed point z* is mapped through σ to a logit equilibrium [14], [22].
Proposition 5. x* = σ(z*) = σ(u*) is the logit equilibrium of the game G.
Hence, the convergence of the solution of the score dynamics z(t) towards the fixed point of U ∘ σ implies convergence of the induced strategy x(t) = σ(z(t)) towards a logit equilibrium point x* of the game. In the following, we provide different assumptions on the payoff function U, or on the composition U ∘ σ of the payoff function with the softmax operator, under which the induced strategy converges. For background on dynamical systems and Lyapunov theory, see [55]. This analysis was inspired by [57].
We first use the co-coercive property of the softmax function to provide convergence conditions for the exponentially-discounted score dynamics (43) in a general class of games. Consider the exponentially-discounted reinforcement learning scheme as depicted in Figure 3. We proceed by imposing the following assumption on the payoff of the game.
Assumption 1. The payoff U is anti-monotone, that is, for all x, x′ ∈ ∆^{n−1},

(x − x′)^⊤(U(x) − U(x′)) ≤ 0. (47)
Theorem 2. Let G be a game with the player's learning scheme given by EXP-D-RL (45) (Figure 3). Assume there are a finite number of isolated fixed points z* of U ∘ σ; then under Assumption 1, the player's score z(t) converges to a rest point z*. Moreover, x(t) = σ(z(t)) converges to a logit equilibrium x* = σ(z*) of G.
Proof. First, recall that solutions z(t) of (44) remain bounded and Ω = {z ∈ R^n | ‖z‖₂ ≤ √M} is a compact, positively invariant set. Let z* be a rest point, z* = u* = U(σ(z*)), x* = σ(z*).
Next, consider the Lyapunov function given by the Bregman divergence generated by the log-sum-exp function (10),
V_{z*}(z) = lse(z) − lse(z*) − ∇lse(z*)^⊤(z − z*). (48)

Recall that by Lemma 4, lse is convex and by Proposition 1, ∇lse(z) = σ(z). By convexity of lse, V_{z*}(z) ≥ 0, ∀z ∈ R^n. Using ‖σ(z)‖₁ = σ(z)^⊤1 = 1 and lse(z + 1c) = lse(z) + c, it can be shown that V_{z*}(z* + 1c) = 0, ∀c ∈ R, so V_{z*}(·) is positive semidefinite, but not positive definite.
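A small numerical illustration of the Bregman-divergence Lyapunov function (48) (our sketch; z*, λ, and the sample count are arbitrary choices):

```python
import numpy as np

lam = 1.0
lse = lambda v: np.log(np.exp(lam * v).sum()) / lam
softmax = lambda v: np.exp(lam * v) / np.exp(lam * v).sum()

def V(z, z_star):
    # Bregman divergence generated by lse, cf. (48)
    return lse(z) - lse(z_star) - softmax(z_star) @ (z - z_star)

rng = np.random.default_rng(2)
z_star = rng.normal(size=4)
assert all(V(rng.normal(size=4), z_star) >= -1e-12 for _ in range(1000))  # V >= 0 by convexity of lse
assert np.isclose(V(z_star + 5.0 * np.ones(4), z_star), 0.0)              # V vanishes along z* + c*1
```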
Taking the time derivative of V_{z*}(z) along the solution of (44) yields,

V̇_{z*}(z) = ∇V_{z*}(z)^⊤ ż
= (σ(z) − σ(z*))^⊤(−z + u)
= (σ(z) − σ(z*))^⊤(−z + z* − z* + u)
= −(σ(z) − σ(z*))^⊤(z − z*) + (σ(z) − σ(z*))^⊤(u − u*).

By Corollary 2, σ is co-coercive, therefore,

V̇_{z*}(z) ≤ −(1/λ)‖σ(z) − σ(z*)‖₂² + (σ(z) − σ(z*))^⊤(u − u*).

Since u = U(σ(z)), u* = U(σ(z*)), x = σ(z), and x* = σ(z*), (47) implies that V̇_{z*}(z) ≤ −(1/λ)‖σ(z) − σ(z*)‖₂², thus V̇_{z*}(z) ≤ 0, ∀z ∈ R^n, and V̇_{z*}(z) = 0 for all z ∈ E := {z ∈ Ω | σ(z) = σ(z*)}. On E the dynamics of (44) reduce to,

ż = U(σ(z*)) − z = z* − z.

Therefore z(t) → z* as t → ∞, for any z(0) ∈ E. Thus, no other solution except z* can stay forever in E, and the largest invariant subset M ⊆ E consists only of equilibria. Since (44) has a finite number of isolated equilibria z*, by LaSalle's invariance principle [55], it follows that for any z(0) ∈ Ω, z(t) converges to one of them. By continuity of σ, x(t) converges to x* = σ(z*) as t → ∞. For an alternative proof using Barbalat's lemma, see [21].
Fig. 4: Convergence of the induced strategy x(t) towards the logit equilibrium of the standard RPS game. The red curve shows the evolution of the strategy in the interior of the simplex.
Example 1. We note that Assumption 1 is equivalent to the game G being a stable game [2, p. 79], [18]. The representative
game from the class of stable games is the standard Rock-Paper-Scissors (RPS) game given by the payoff matrix,
A =
 [  0  −1   1 ]
 [  1   0  −1 ]
 [ −1   1   0 ],  (49)
which generates the payoff vector U(x) = Ax. We present a simulation of the standard RPS game under the exponentially-discounted score dynamics (43) with λ = 1. The resulting induced strategy x(t) is shown in Figure 4, which by Theorem 2 (which uses the co-coercivity property of the softmax function) is guaranteed to converge to the logit equilibrium of the RPS game, given by x* = [1/3, 1/3, 1/3]^⊤. In this game, the logit equilibrium coincides with the Nash equilibrium.
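For concreteness, a forward-Euler simulation of this example can be written in a few lines (our sketch; the step size, horizon, and initial score are arbitrary choices, and Figure 4 was not generated with this code):

```python
import numpy as np

# Standard RPS payoff matrix (49); payoff vector U(x) = A x.
A = np.array([[0., -1., 1.],
              [1., 0., -1.],
              [-1., 1., 0.]])
lam = 1.0

def softmax(z, lam):
    e = np.exp(lam * (z - np.max(z)))
    return e / e.sum()

# Forward-Euler integration of the score dynamics (43): zdot = U(sigma(z)) - z.
dt, T = 0.01, 60.0
z = np.array([1.0, -0.5, 0.2])   # arbitrary initial score
for _ in range(int(T / dt)):
    x = softmax(z, lam)
    z = z + dt * (A @ x - z)

print(softmax(z, lam))   # approaches the logit equilibrium [1/3, 1/3, 1/3]
```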
Next, we rely on a slightly modiï¬ed result in [57] to show that the Lipschitzness of the softmax function can be directly used to conclude the convergence of the score dynamics (43) for certain classes of games.
Assumption 2. U ∘ σ is ‖·‖_∞-contractive, that is, there exists a constant L ∈ (0, 1) such that for all score variables z, z′ ∈ R^n,

‖(U ∘ σ)(z) − (U ∘ σ)(z′)‖_∞ ≤ L‖z − z′‖_∞. (50)
Proposition 6. (Theorem 4, [57]) Under Assumption 2, the unique fixed point z* of U ∘ σ is globally asymptotically stable for (44). Moreover, x(t) = σ(z(t)) converges to the logit equilibrium x* = σ(z*) of the game G.
The above proposition essentially states that the conver- gence of the exponentially-discounted score dynamics (44) relies on, individually, the Lipschitzness of the softmax func- tion Ï and the gameâs payoff vector U . We illustrate this dependency using the following example.
Example 2. By equivalence of norms, σ is ‖·‖_∞-contractive if nλ < 1. Then for any game where the payoff vector U is a ‖·‖_∞-contraction, U ∘ σ is a ‖·‖_∞-contraction, and Proposition 6 implies that the induced strategy x(t) = σ(z(t)) converges to the logit equilibrium x* ∈ ∆^{n−1}.
# VII. CONCLUSION AND OPEN PROBLEMS
In this paper we have presented a thorough analysis of the softmax function using tools from convex analysis and monotone operator theory. We have shown that the softmax function is the monotone gradient map of the log-sum-exp function and that the inverse temperature parameter λ deter- mines the Lipschitz and co-coercivity properties of the softmax function. These properties allow for convenient constructions of convergence guarantees for score dynamics in general classes of games (see [21]). We note that the structure of the reinforcement learning scheme is similar to those that arises in bandit and online learning (such as the Follow- the-Regularized-Leader (FTRL) and mirror descent algorithm [49]). We hope that researchers could adapt our results pre- sented here and apply them to their domain-speciï¬c problems.
Finally, for many applications in reinforcement learning, it is desirable to use a generalized version of the softmax function given by,
σ_i(z) = exp(λ_i z_i) / Σ_{j=1}^n exp(λ_j z_j), ∀ 1 ≤ i ≤ n. (51)
Here, each strategy i is associated with an inverse temperature constant λi > 0, which can be adjusted independently to improve an agentâs learning performance. The relationship between the individual parameters λi with the convergence properties of score dynamics under the choice rule given by (51) has been investigated in [57] but is not yet fully characterized at this point. It is of interest to extend the results presented in this paper for generalized versions of the softmax function [60] or adopt a monotone operator theoretic approach to analyze alternative forms of choice maps [61].
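A direct implementation of (51) reads as follows (our sketch; the test values of z and λ_i are arbitrary):

```python
import numpy as np

def generalized_softmax(z, lam):
    # sigma_i(z) = exp(lam_i * z_i) / sum_j exp(lam_j * z_j), one inverse temperature per strategy, cf. (51)
    w = np.exp(np.asarray(lam) * np.asarray(z))
    return w / w.sum()

z = np.array([1.0, 0.5, -0.2])
print(generalized_softmax(z, lam=[1.0, 1.0, 1.0]))   # ordinary softmax with lam = 1
print(generalized_softmax(z, lam=[0.1, 5.0, 1.0]))   # per-strategy temperatures reweight the choice
```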
# REFERENCES
[1] H. Young and S. Zamir, Handbook of Game Theory, Volume 4, 1st ed. Amsterdam: Elsevier, North-Holland, 2015.
[2] W. H. Sandholm, Population Games and Evolutionary Dynamics. Cam- bridge, MA, USA: MIT Press, 2010.
[3] J. Goeree, C. Holt and T. Palfrey, Quantal response equilibrium: A Stochastic Theory of Games. Princeton University Press, 2016.
[4] R. Sutton and A. Barto, Reinforcement Learning: An Introduction. Cambridge, MA, USA: MIT Press, 1998.
[5] I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning. Cambridge, MA, USA: MIT Press, 2016.
[6] C. M. Bishop, Pattern Recognition and Machine Learning. Secaucus, NJ, USA: Springer, 2006.
[7] S. Boyd and L. Vandenberghe, Convex optimization, 1st ed. Cambridge, UK: Cambridge University Press, 2004.
[8] Y. Nesterov, Introductory Lectures on Convex Optimization: A Basic Course. Norwell, MA: Kluwer, 2004.
[9] D. Bloembergen, K. Tuyls, D. Hennes, and M. Kaisers, âEvolutionary dynamics of multi-agent learning: A surveyâ, J. Artif. Intell. Res., vol. 53, no. 1, pp. 659-697, May 2015.
[10] G. Weiss, Multiagent Systems, 2nd ed. Cambridge, MA, USA: MIT Press, 2013.
[11] E. Alpaydin, Introduction to Machine Learning, 3rd ed. The MIT Press, 2014, p. 264.
[12] R. Luce, Individual Choice Behavior: A Theoretical Analysis. NY, Wiley, 1959.
[13] J. Bridle, âProbabilistic Interpretation of Feedforward Classiï¬cation Network Outputs, with Relationships to Statistical Pattern Recognitionâ, Neurocomputing: Algorithms, Architectures and Applications, F. Soulie and J. Herault, eds., pp. 227-236, 1990.
[14] R. McKelvey and T. Palfrey, Quantal response equilibria for normal form games, 1st ed. Pasadena, Calif.: Division of the Humanities and Social Sciences, California Institute of Technology, 1994.
[15] J. Smith and G. Price, âThe Logic of Animal Conï¬ictâ, Nature, vol. 246, no. 5427, pp. 15-18, 1973.
[16] J. Hofbauer, K. Sigmund, âEvolutionary Games and Population Dynam- icsâ, Cambridge University Press, 1998.
[17] J. Weibull, â Evolutionary Game Theoryâ. MIT Press, Cambridge, 1995. [18] J. Hofbauer and W. H. Sandholm, âStable games and their dynamicsâ, Journal of Economic Theory, vol. 144, no. 4, pp. 1665-1693, 2009. [19] J. Hofbauer and E. Hopkins, âLearning in perturbed asymmetric gamesâ,
Games Economic Behav., vol. 52, pp. 133-152, 2005.
[20] A. Kianercy and A. Galstyan, âDynamics of Boltzmann Q-learning in two-player two-action gamesâ, Phys. Rev. E, vol. 85, no. 4, pp. 1145- 1154, 2012.
[21] B. Gao and L. Pavel, âOn Passivity, Reinforcement Learning and Higher- Order Learning in Multi-Agent Finite Gamesâ, arXiv:1808.04464 [cs, math], Aug. 2018.
[22] P. Coucheney, B. Gaujal and P. Mertikopoulos, âPenalty-Regulated Dynamics and Robust Learning Procedures in Gamesâ, Mathematics of Operations Research, vol. 40, no. 3, pp. 611-633, 2015.
[23] P. Mertikopoulos and W. Sandholm, âLearning in Games via Reinforce- ment and Regularizationâ, Mathematics of Operations Research, vol. 41, no. 4, pp. 1297-1324, 2016.
[24] R. T. Rockafellar and R. J.-B. Wets, Variational Analysis. Berlin: Springer-Verlag, 1998.
[25] H. Bauschke and P. Combettes, Convex analysis and monotone operator theory in Hilbert spaces, 1st ed. New York: Springer, 2011.
[26] F. Facchinei and J.-S. Pang, Finite-dimensional Variational Inequalities and Complementarity Problems. Vol. I, Springer Series in Operations Research, Springer-Verlag, New York, 2003.
[27] J. Peypouquet. Convex optimization in normed spaces: theory, methods and examples. Springer, 2015.
[28] J. B. Hiriart-Urruty and C. Lemar´echal: Fundamentals of Convex Anal- ysis. SpringerVerlag, Berlin 2001.
[29] J. Baillon and G. Haddad, âQuelques propri´et´es des op´erateurs angle- born´es etn-cycliquement monotonesâ, Israel Journal of Mathematics, vol. 26, no. 2, pp. 137-150, 1977.
[30] N. Daw, J. OâDoherty, P. Dayan, B. Seymour and R. Dolan, âCortical substrates for exploratory decisions in humansâ, Nature, vol. 441, no. 7095, pp. 876-879, 2006.
[31] D. Lee, âNeuroeconomics: Best to go with what you know?â, Nature, vol. 441, no. 7095, pp. 822-823, 2006.
[32] J. D. Cohen, S. M. McClure, and A. J. Yu, âShould I stay or should I go? How the human brain manages the trade-off between exploitation and explorationâ, Philosph. Trans. Roy. Soc. B: Bio. Sci., vol. 362, no. 1481, pp. 933-942, 2007.
[33] P. Bossaerts and C. Murawski, âFrom behavioural economics to neuroe- conomics to decision neuroscience: the ascent of biology in research on human decision makingâ, Current Opinion in Behavioral Sciences, vol. 5, pp. 37-42, 2015.
[34] D. Koulouriotis and A. Xanthopoulos, âReinforcement learning and evolutionary algorithms for non-stationary multi-armed bandit problemsâ, Applied Mathematics and Computation, vol. 196, no. 2, pp. 913-922, 2008.
[35] R. Zunino, P. Gastaldo, âAnalog implementation of the softmax func- tionâ, In IEEE International Symposium on Circuits and Systems, vol 2, pp II-117, 2002.
[36] A. L. Yuille and D. Geiger, âWinner-Take-All Mechanismsâ, In The Handbook of Brain Theory and Neural Networks, Ed. M. Arbib, MIT Press, 1995.
[37] I. M. Elfadel and J. L. Wyatt Jr., âThe softmax nonlinearity: Derivation using statistical mechanics and useful properties as a multiterminal analog circuit elementâ, In Advances in Neural Information Processing Systems 6, J. Cowan, G. Tesauro, and C. L. Giles, Eds. San Mateo, CA: Morgan Kaufmann, 1994, pp. 882-887.
[38] I. M. Elfadel, âConvex Potentials and their Conjugates in Analog Mean- Field Optimizationâ, Neural Computation, vol. 7, no. 5, pp. 1079-1104, 1995.
[39] T. Genewein and D. A. Braun, âBio-inspired feedback-circuit implemen- tation of discrete, free energy optimizing, winner-take-all computationsâ, Biological, vol. 110, no. 2, pp. 135-150, Jun. 2016.
[40] T. Michalis, âOne-vs-each approximation to softmax for scalable esti- mation of probabilitiesâ, In Advances in Neural Information Processing Systems 29, pp. 4161-4169. 2016.
[41] P. Reverdy and N. Leonard, âParameter Estimation in Softmax Decision- Making Models With Linear Objective Functionsâ, IEEE Transactions on Automation Science and Engineering, vol. 13, no. 1, pp. 54-67, 2016. [42] M. Kaisers, K. Tuyls, F. Thuijsman, S. Parsons, âAn evolutionary model of multi-agent learning with a varying exploration rate (Short Paper)â, Proc. of 8th Int. Conf. on Autonomous Agents and Multiagent Systems (AA-MAS 2009), Decker, Sichman, Sierra and Castelfranchi (eds.), Budapest, Hungary, pp. 1255-1256., 2009.
[43] A. Martins and R. F. Astudillo. âFrom softmax to sparsemax: A sparse model of attention and multi-label classiï¬cationâ, arXiv:1602.02068 [cs.CL], Feb. 2016.
[44] D. Leslie and E. Collins, âIndividual Q-Learning in Normal Form Gamesâ, SIAM Journal on Control and Optimization, vol. 44, no. 2, pp. 495-514, 2005.
[45] W. Sandholm, E. Dokumacı and R. Lahkar, âThe projection dynamic and the replicator dynamicâ, Games and Economic Behavior, vol. 64, no. 2, pp. 666-683, 2008.
[46] R. Laraki and P. Mertikopoulos, âHigher order game dynamicsâ, Journal of Economic Theory, vol. 148, no. 6, pp. 2666-2695, 2013.
[47] F. Alvarez, J. Bolte and O. Brahic, âHessian Riemannian Gradient Flows in Convex Programmingâ, SIAM Journal on Control and Optimization, vol. 43, no. 2, pp. 477-501, 2004.
[48] A. Beck and M. Teboulle, âMirror descent and nonlinear projected sub- gradient methods for convex optimizationâ, Operations Research Letters, vol. 31, no. 3, pp. 167-175, 2003.
[49] S. Shalev-Shwartz, âOnline Learning and Online Convex Optimizationâ, Foundations and Trends in Machine Learning, vol. 4, no. 2, pp. 107-194, 2011.
[50] E. Hazan, âIntroduction to Online Convex Optimizationâ, Foundations and Trends in Optimization, vol. 2, no. 3-4, pp. 157-325, 2016.
[51] A. Rangarajan, âSelf-annealing and self-annihilation: unifying determin- istic annealing and relaxation labelingâ, Pattern Recognition, vol. 33, no. 4, pp. 635-649, 2000.
[52] C. Shen and H. Li, "On the dual formulation of boosting algorithms", IEEE Trans. Pattern Anal. Mach. Intell., Feb. 25, 2010, 10.1109/TPAMI.2010.47.
[53] M. Harper, âThe replicator equation as an inference dynamicâ, arXiv:0911.1763 [math.DS], May. 2010.
[54] Y. Sato and J. Crutchï¬eld, âCoupled Replicator Equations for the dynamics of learning in multiagent systemsâ, Physical Review E, vol. 67, no. 1, 2003.
[55] H. K. Khalil, Nonlinear Systems, 3rd ed., Upper Siddle River, NJ: Prentice-Hall, 2002.
[56] E. Hopkins, âTwo Competing Models of How People Learn in Gamesâ, Econometrica, vol. 70, no. 6, pp. 2141-2166, 2002.
[57] R. Cominetti, E. Melo and S. Sorin, âA payoff-based learning procedure and its application to trafï¬c gamesâ, Games and Economic Behavior, vol. 70, no. 1, pp. 71-83, 2010.
[58] K. Tuyls, K. Verbeeck and T. Lenaerts, âA selection-mutation model for Q-learning in multi-agent systemsâ, in Proc. of the 2nd Int. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS), pp. 693-700, 2003.
[59] M. Tokic and G. Palm, "Value-difference based exploration: Adaptive control between ε-greedy and softmax", in KI 2011: Advances in Artificial Intelligence, vol. 7006. Heidelberg, Germany: Springer, 2011, pp. 335-346.
[60] P. Mertikopoulos, E. V. Belmega, and A. L. Moustakas, âMatrix expo- nential learning: Distributed optimization in MIMO systemsâ, in ISITâ12: Proceedings of the 2012 IEEE International Symposium on Information Theory, 2012, pp. 3028-3032.
[61] K. Asadi and M. L. Littman, âAn Alternative Softmax Operator for Reinforcement Learningâ, arXiv:1612.05628 [cs, stat], Dec. 2016. | {
"id": "1612.05628"
} |
1704.00648 | Soft-to-Hard Vector Quantization for End-to-End Learning Compressible Representations | We present a new approach to learn compressible representations in deep
architectures with an end-to-end training strategy. Our method is based on a
soft (continuous) relaxation of quantization and entropy, which we anneal to
their discrete counterparts throughout training. We showcase this method for
two challenging applications: Image compression and neural network compression.
While these tasks have typically been approached with different methods, our
soft-to-hard quantization approach gives results competitive with the
state-of-the-art for both. | http://arxiv.org/pdf/1704.00648 | Eirikur Agustsson, Fabian Mentzer, Michael Tschannen, Lukas Cavigelli, Radu Timofte, Luca Benini, Luc Van Gool | cs.LG, cs.CV | null | null | cs.LG | 20170403 | 20170608 |
# Soft-to-Hard Vector Quantization for End-to-End Learning Compressible Representations
Eirikur Agustsson ETH Zurich aeirikur@vision.ee.ethz.ch
Fabian Mentzer ETH Zurich mentzerf@student.ee.ethz.ch
# Michael Tschannen ETH Zurich michaelt@nari.ee.ethz.ch
Lukas Cavigelli ETH Zurich cavigelli@iis.ee.ethz.ch
Radu Timofte ETH Zurich timofter@vision.ee.ethz.ch
# Luc Van Gool KU Leuven ETH Zurich vangool@vision.ee.ethz.ch
Luca Benini ETH Zurich benini@iis.ee.ethz.ch
# Abstract
We present a new approach to learn compressible representations in deep archi- tectures with an end-to-end training strategy. Our method is based on a soft (continuous) relaxation of quantization and entropy, which we anneal to their discrete counterparts throughout training. We showcase this method for two chal- lenging applications: Image compression and neural network compression. While these tasks have typically been approached with different methods, our soft-to-hard quantization approach gives results competitive with the state-of-the-art for both.
# Introduction
In recent years, deep neural networks (DNNs) have led to many breakthrough results in machine learning and computer vision [20, 28, 9], and are now widely deployed in industry. Modern DNN models often have millions or tens of millions of parameters, leading to highly redundant structures, both in the intermediate feature representations they generate and in the model itself. Although overparametrization of DNN models can have a favorable effect on training, in practice it is often desirable to compress DNN models for inference, e.g., when deploying them on mobile or embedded devices with limited memory. The ability to learn compressible feature representations, on the other hand, has a large potential for the development of (data-adaptive) compression algorithms for various data types such as images, audio, video, and text, for all of which various DNN architectures are now available.
DNN model compression and lossy image compression using DNNs have both independently attracted a lot of attention lately. In order to compress a set of continuous model parameters or features, we need to approximate each parameter or feature by one representative from a set of quantization levels (or vectors, in the multi-dimensional case), each associated with a symbol, and then store the assignments (symbols) of the parameters or features, as well as the quantization levels. Representing each parameter of a DNN model or each feature in a feature representation by the corresponding quantization level will come at the cost of a distortion D, i.e., a loss in performance (e.g., in classiï¬cation accuracy for a classiï¬cation DNN with quantized model parameters, or in reconstruction error in the context of autoencoders with quantized intermediate feature representations). The rate R, i.e., the entropy of the symbol stream, determines the cost of encoding the model or features in a bitstream.
To learn a compressible DNN model or feature representation we need to minimize D + βR, where β > 0 controls the rate-distortion trade-off. Including the entropy into the learning cost function can be seen as adding a regularizer that promotes a compressible representation of the network or feature representation. However, two major challenges arise when minimizing D + βR for DNNs: i) coping with the non-differentiability (due to quantization operations) of the cost function D + βR, and ii) obtaining an accurate and differentiable estimate of the entropy (i.e., R). To tackle i), various methods have been proposed. Among the most popular ones are stochastic approximations [39, 19, 6, 32, 4] and rounding with a smooth derivative approximation [15, 30]. To address ii) a common approach is to assume the symbol stream to be i.i.d. and to model the marginal symbol distribution with a parametric model, such as a Gaussian mixture model [30, 34], a piecewise linear model [4], or a Bernoulli distribution [33] (in the case of binary symbols).
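To make the trade-off concrete, the following minimal sketch (ours, not the paper's implementation; all names and toy values are illustrative) estimates the rate R as the entropy of an empirical symbol histogram and forms D + βR:

```python
import numpy as np

def rate_distortion_objective(distortion, symbols, L, beta):
    # Estimate R (bits per symbol) from the empirical histogram of hard symbol assignments.
    counts = np.bincount(symbols, minlength=L).astype(float)
    p = counts / counts.sum()
    entropy_bits = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return distortion + beta * entropy_bits

symbols = np.random.default_rng(0).integers(0, 8, size=10_000)   # toy symbol stream, L = 8
print(rate_distortion_objective(distortion=0.05, symbols=symbols, L=8, beta=0.1))
```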
In this paper, we propose a uniï¬ed end-to-end learning frame- work for learning compressible representations, jointly op- timizing the model parameters, the quantization levels, and the entropy of the resulting symbol stream to compress ei- ther a subset of feature representations in the network or the model itself (see inset ï¬gure). We address both challenges i) and ii) above with methods that are novel in the context DNN model and feature compression. Our main contributions are:
DNN model compression
[Inset figure: a network F = F_K ∘ … ∘ F_1 with z = [w_1, w_2, …, w_K] for DNN model compression and z = x^(b) for data compression; z is the vector to be compressed.]
We provide the ï¬rst uniï¬ed view on end-to-end learned compression of feature representations and DNN models. These two problems have been studied largely independently in the literature so far.
Our method is simple and intuitively appealing, relying on soft assignments of a given scalar or vector to be quantized to quantization levels. A parameter controls the âhardnessâ of the assignments and allows to gradually transition from soft to hard assignments during training. In contrast to rounding-based or stochastic quantization schemes, our coding scheme is directly differentiable, thus trainable end-to-end.
Our method does not force the network to adapt to speciï¬c (given) quantization outputs (e.g., integers) but learns the quantization levels jointly with the weights, enabling application to a wider set of problems. In particular, we explore vector quantization for the ï¬rst time in the context of learned compression and demonstrate its beneï¬ts over scalar quantization.
Unlike essentially all previous works, we make no assumption on the marginal distribution of the features or model parameters to be quantized by relying on a histogram of the assignment probabilities rather than the parametric models commonly used in the literature.
We apply our method to DNN model compression for a 32-layer ResNet model [13] and full- resolution image compression using a variant of the compressive autoencoder proposed recently in [30]. In both cases, we obtain performance competitive with the state-of-the-art, while making fewer model assumptions and signiï¬cantly simplifying the training procedure compared to the original works [30, 5].
The remainder of the paper is organized as follows. Section 2 reviews related work, before our soft-to-hard vector quantization method is introduced in Section 3. Then we apply it to a compres- sive autoencoder for image compression and to ResNet for DNN compression in Section 4 and 5, respectively. Section 6 concludes the paper.
# 2 Related Work
There has been a surge of interest in DNN models for full-resolution image compression, most notably [32, 33, 3, 4, 30], all of which outperform JPEG [35] and some even JPEG 2000 [29] The pioneering work [32, 33] showed that progressive image compression can be learned with convolutional recurrent neural networks (RNNs), employing a stochastic quantization method during training. [3, 30] both rely on convolutional autoencoder architectures. These works are discussed in more detail in Section 4.
In the context of DNN model compression, the line of works [12, 11, 5] adopts a multi-step procedure in which the weights of a pretrained DNN are ï¬rst pruned and the remaining parameters are quantized using a k-means like algorithm, the DNN is then retrained, and ï¬nally the quantized DNN model is encoded using entropy coding. A notable different approach is taken by [34], where the DNN
compression task is tackled using the minimum description length principle, which has a solid information-theoretic foundation.
It is worth noting that many recent works target quantization of the DNN model parameters and possibly the feature representation to speed up DNN evaluation on hardware with low-precision arithmetic, see, e.g., [15, 23, 38, 43]. However, most of these works do not speciï¬cally train the DNN such that the quantized parameters are compressible in an information-theoretic sense.
Gradually moving from an easy (convex or differentiable) problem to the actual harder problem during optimization, as done in our soft-to-hard quantization framework, has been studied in various contexts and falls under the umbrella of continuation methods (see [2] for an overview). Formally related but motivated from a probabilistic perspective are deterministic annealing methods for maximum entropy clustering/vector quantization, see, e.g., [24, 42]. Arguably most related to our approach is [41], which also employs continuation for nearest neighbor assignments, but in the context of learning a supervised prototype classiï¬er. To the best of our knowledge, continuation methods have not been employed before in an end-to-end learning framework for neural network-based image compression or DNN compression.
# 3 Proposed Soft-to-Hard Vector Quantization
# 3.1 Problem Formulation
Preliminaries and Notations. We consider the standard model for DNNs, where we have an architecture F : Rd1 RdK+1 composed of K layers F = FK F1, where layer Fi maps Rdi , wK] as the â parameters of the network and we denote the intermediate layer outputs of the network as x(0) := x(i) := Fi(x(iâ1)), such that F (x) = x(K) and x(i) is the feature vector produced x by layer Fi.
x1, = { , ( X The parameters of the network are learned w.r.t. training data X RdK+1, by minimizing a real-valued loss y1, { , yN = Y be decomposed as a sum over the training data plus a regularization term, · · · } â L Y Rd1 and labels · · · ; F ). Typically, the loss can , xN } â
1x LIND F) = MFO), ¥e) + AR(W), () Na
where ¢( F(x), y) is the sample loss, \ > 0 sets the regularization strength, and R(W) is a regularizer (e.g., R(W) = >; ||w:l|? for ly regularization). In this case, the parameters of the network can be learned using stochastic gradient descent over mini-batches. Assuming that the data 1â, Y on which the network is trained is drawn from some distribution Px y, the loss can be thought of as an estimator of the expected loss E[¢(F(X), Y) + \R(W)]. In the context of image classification, R® would correspond to the input image space and R¢*+# to the classification probabilities, and @ would be the categorical cross entropy.
We say that the deep architecture is an autoencoder when the network maps back into the input space, with the goal of reproducing the input. In this case, dj = dy 4 and F(x) is trained to approximate x, e.g., with a mean squared error loss ((F(x), y) = ||F(x) â y||?. Autoencoders typically condense the dimensionality of the input into some smaller dimensionality inside the network, i.e., the layer with the smallest output dimension, x) © RY | has dy < d1, which we refer to as the âbottleneckâ. Compressible representations. We say that a weight parameter w; or a feature xâ) has a compress- ible representation if it can be serialized to a binary stream using few bits. For DNN compression, we want the entire network parameters W to be compressible. For image compression via an autoencoder, we just need the features in the bottleneck, xâ), to be compressible.
â
Suppose we want to compress a feature representation z autoencoder) given an input x. Assuming that the data X will be a sample from a continuous random variable Z. , Y Rd in our network (e.g., x(b) of an â is drawn from some distribution PX,Y, z
To store z with a ï¬nite number of bits, we need to map it to a discrete space. Speciï¬cally, we map z to a sequence of m symbols using a (symbol) encoder E : Rd [L]m, where each symbol is an index ranging from 1 to L, i.e., [L] := . The reconstruction of z is then produced by a } Rd. Since z is (symbol) decoder D : [L]m
â
3
a sample from Z, the symbol stream E(z) is drawn from the discrete probability distribution PE(Z). Thus, given the encoder E, according to Shannonâs source coding theorem [7], the correct metric for compressibility is the entropy of E(Z):
H(E(Z)) = â eâ[L]m P (E(Z) = e) log(P (E(Z) = e)). (2)
Our generic goal is hence to optimize the rate distortion trade-off between the expected loss and the entropy of E(Z):
nin, Ex.v[(P),Y) + AR(W)] + BH(E(2)), @)
where ËF is the architecture where z has been replaced with Ëz, and β > 0 controls the trade-off between compressibility of z and the distortion it imposes on ËF .
However, we cannot optimize (3) directly. First, we do not know the distribution of X and Y. Second, the distribution of Z depends in a complex manner on the network parameters W and the distribution of X. Third, the encoder E is a discrete mapping and thus not differentiable. For our ï¬rst approximation we consider the sample entropy instead of H(E(Z)). That is, given the data and X [L]m some ï¬xed network parameters W, we can estimate the probabilities P (E(Z) = e) for e â Lm. If z is the via a histogram. For this estimate to be accurate, we however would need bottleneck of an autoencoder, this would correspond to trying to learn a single histogram for the entire discretized data space. We relax this by assuming the entries of E(Z) are i.i.d. such that we can instead compute the histogram over the L distinct values. More precisely, we assume that for e = (e1, l=1 pel , where pj is the histogram estimate
â [N ], el(zi) = j
el(zi) | l [m], i pj := |{ , â â mN }| (4)
where we denote the entries of E(z) = (e1(z), data point xi (3.1) into (2), , em(z)) and zi is the output feature z for training . We then obtain an estimate of the entropy of Z by substituting the approximation · · · â X
m m L H(E(Z))x- SO (i1».)} log (i1>. =âm > pj log pj = mH(p), (5) ee(L]⢠\I=1 I=1 j=l
where the ï¬rst (exact) equality is due to [7], Thm. 2.6.6, and H(p) := entropy for the (i.i.d., by assumption) components of E(Z) 1. â j=1 pj log pj is the sample
We now can simplify the ideal objective of @). by replacing the expected loss with the sample mean over ¢ and the entropy using the sample entropy H(p), obtaining N =>
N => e(F(x,). ys) + ARCW) + BmH(p). ) Na
We note that so far we have assumed that z is a feature output in F , i.e., z = x(k) for some k [K]. However, the above treatment would stay the same if z is the concatenation of multiple feature outputs. One can also obtain a separate sample entropy term for separate feature outputs and add them to the objective in (6).
In case z is composed of one or more parameter vectors, such as in DNN compression where z = W, z and Ëz cease to be random variables, since W is a parameter of the model. That is, opposed to the that produces another source ËZ which we want to be compressible, case where we have a source we want the discretization of a single parameter vector W to be compressible. This is analogous to compressing a single document, instead of learning a model that can compress a stream of documents. In this case, (3) is not the appropriate objective, but our simpliï¬ed objective in (6) remains appropriate. This is because a standard technique in compression is to build a statistical model of the (ï¬nite) data, which has a small sample entropy. The only difference is that now the histogram probabilities in (4) , i.e., N = 1 and zi = W in (4), and they count towards are taken over W instead of the dataset storage as well as the encoder E and decoder D.
1In fact, from [7], Thm. 2.6.6, it follows that if the histogram estimates pj are exact, (5) is an upper bound for the true H(E(Z)) (i.e., without the i.i.d. assumption).
4
Challenges. Eq. (6) gives us a uniï¬ed objective that can well describe the trade-off between com- pressible representations in a deep architecture and the original training objective of the architecture.
However, the problem of ï¬nding a good encoder E, a corresponding decoder D, and parameters W that minimize the objective remains. First, we need to impose a form for the encoder and decoder, and second we need an approach that can optimize (6) w.r.t. the parameters W. Independently of the choice of E, (6) is challenging since E is a mapping to a ï¬nite set and, therefore, not differentiable. This implies that neither H(p) is differentiable nor ËF is differentiable w.r.t. the parameters of z and layers that feed into z. For example, if ËF is an autoencoder and z = x(b), the output of the network will not be differentiable w.r.t. w1,
· ·
· ·
These challenges motivate the design decisions of our soft-to-hard annealing approach, described in the next section.
# 3.2 Our Method
Encoder and decoder form. For the encoder E : Rd vectors = into a matrix Z = [¯z(1), nearest neighbor in points in Rd/m, which we partition into the Voronoi tessellation over the centers Rd then simply constructs ËZ D : [L]m â picking the corresponding centers ËZ = [ce1 , · · · into Rd. We will interchangeably write Ëz = D(E(z)) and ËZ = D(E(Z)). The idea is then to relax E and D into continuous mappings via soft assignments instead of the hard nearest neighbor assignment of E.
R?/â to ||z â
Soft assignments. We deï¬ne the soft assignment of ¯z
as 2])
softmax(âo[||z â e1||?,..., ||z â ex ||7]) eRâ, (7) an at is the standard softmax operator, such that ¢(Z) has
e1||?,...,
(2) :=
, yL)j := where softmax(y1, positive entries and
||4(Z)||;
= 1. We denote the j-th entry of (Z) with @;(Z) and note that
oe 1 ifj = arg min,,.;,)||Z â c;|| l ,(Z) = Je (L) J ooo (2) {i otherwise
such that ËÏ(¯z) := limÏââ Ï(¯z) converges to a one-hot encoding of the nearest center to ¯z in therefore refer to ËÏ(¯z) as the hard assignment of ¯z to the soft assignment Ï(¯z).
Using soft assignment, we deï¬ne the soft quantization of ¯z as
L Q@) = )> ¢j4i(@) = C4), j=l
where we write the centers as a matrix C = [c1, assignment is taken with ËQ(¯z) := limÏââ ËQ(¯z) = ce(¯z), where e(¯z) is the center in Therefore, we can now write:
ËZ = D(E(Z)) = [ ËQ(¯z(1)), , ËQ(¯z(m))] = C[ ËÏ(¯z(1)), , ËÏ(¯z(m))].
· ·
· ·
Now, instead of computing ËZ via hard nearest neighbor assignments, we can approximate it with a smooth relaxation ËZ := C[Ï(¯z(1)), , Ï(¯z(m))] by using the soft assignments instead of the hard assignments. Denoting the corresponding vector form by Ëz, this gives us a differentiable approximation ËF of the quantized architecture ËF , by replacing Ëz in the network with Ëz. Entropy estimation. Using the soft assignments, we can similarly deï¬ne a soft histogram, by summing up the partial assignments to each center instead of counting as in (4):
m N 9 = oil), i=1 [=1
5
This gives us a valid probability mass function q = (q1, to p = (p1, , qL), which is differentiable but converges · · · , pL) as Ï
. â â
· ·
We can now deï¬ne the âsoft entropyâ as the cross entropy between p and q:
L A(¢) = H(p,q) = â 9 pj log q3 = H(p) + Dex (pla) j=l
where Dxz(p||¢) = 22; P;log(pj/qj) denotes the KullbackâLeibler divergence. Since Dxx(p||q) > 0, this establishes H(@) as an upper bound for H(p), where equality is obtained when p = q.
We have therefore obtained a differentiable âsoft entropyâ loss (w.r.t. q), which is an upper bound on the sample entropy H(p). Hence, we can indirectly minimize H(p) by minimizing ËH(Ï), treating the histogram probabilities of p as constants for gradient computation. However, we note that while qj is additive over the training data and the symbol sequence, log(qj) is not. This prevents the use of mini-batch gradient descent on ËH(Ï), which can be an issue for large scale learning problems. In this case, we can instead re-deï¬ne the soft entropy ËH(Ï) as H(q, p). As before, ËH(Ï) H(p) , but ËH(Ï) ceases to be an upper bound for H(p). The beneï¬t is that now ËH(Ï) can be as Ï â â decomposed as
L Nomb 4 f a( A() = H(qa.p) =- Ya logpj =-S> OS â hil log p;, (8) j=l i=1 l=1 j=l
# l=1 and the components l
such that we get an additive loss over the samples xi
[m].
[m]. such that we get an additive loss over the samples xi
# â X
â
Soft-to-hard deterministic annealing. Our soft assignment scheme gives us differentiable ap- proximations ËF and ËH(Ï) of the discretized network ËF and the sample entropy H(p), respectively. However, our objective is to learn network parameters W that minimize (6) when using the encoder and decoder with hard assignments, such that we obtain a compressible symbol stream E(z) which we can compress using, e.g., arithmetic coding [40].
To this end, we anneal Ï from some initial value Ï0 to inï¬nity during training, such that the soft approximation gradually becomes a better approximation of the ï¬nal hard quantization we will use. Choosing the annealing schedule is crucial as annealing too slowly may allow the network to invert the soft assignments (resulting in large weights), and annealing too fast leads to vanishing gradients too early, thereby preventing learning. In practice, one can either parametrize Ï as a function of the iteration, or tie it to an auxiliary target such as the difference between the network losses incurred by soft quantization and hard quantization (see Section 4 for details).
For a simple initialization of Ï0 and the centers ¯z(l) i i { | using SGD.
, we can sample the centers from the set
# C
by minimizing the cluster energy
{20 i ⬠[N],l ⬠[m]} and then cluster Z by minimizing the cluster energy }7,- 2 ||z â Q(z)||? using SGD.
Z ËQ(¯z)
# Image Compression
We now show how we can use our framework to realize a simple image compression system. For the architecture, we use a variant of the convolutional autoencoder proposed recently in [30] (see Appendix A.1 for details). We note that while we use the architecture of [30], we train it using our soft-to-hard entropy minimization method, which differs signiï¬cantly from their approach, see below.
Our goal is to learn a compressible representation of the features in the bottleneck of the autoencoder. Because we do not expect the features from different bottleneck channels to be identically distributed, we model each channelâs distribution with a different histogram and entropy loss, adding each entropy term to the total loss using the same β parameter. To encode a channel into symbols, we separate the channel matrix into a sequence of pw ph-dimensional patches. These patches (vectorized) form the Rd/mÃm, where m = d/(pwph), such that Z contains m (pwph)-dimensional points. columns of Z Having ph or pw greater than one allows symbols to capture local correlations in the bottleneck, which is desirable since we model the symbols as i.i.d. random variables for entropy coding. At test time, the symbol encoder E then determines the symbols in the channel by performing a nearest Rpwph , resulting in ËZ, as described above. During neighbor assignment over a set of L centers training we instead use the soft quantized ËZ, also w.r.t. the centers
# C
6
:=
0.20bpp / 0.91 / 0.69 / 23.88dB SHA (ours) 0.20bpp / 0.90 / 0.67 / 24.19dB BPG 0.20bpp / 0.88 / 0.63 / 23.01dB JPEG 2000 0.22bpp / 0.77 / 0.48 / 19.77dB JPEG
Figure 1: Top: MS-SSIM as a function of rate for SHA (Ours), BPG, JPEG 2000, JPEG, for each data set. Bottom: A visual example from the Kodak data set along with rate / MS-SSIM / SSIM / PSNR.
We trained different models using Adam [17], see Appendix A.2. Our training set is composed similarly to that described in [3]. We used a subset of 90,000 images from ImageNET [8], which 128 pixels, with a batch size of 15. we downsampled by a factor 0.7 and trained on crops of 128 To estimate the probability distribution p for optimizing (8), we maintain a histogram over 5,000 images, which we update every 10 iterations with the images from the current batch. Details about other hyperparameters can be found in Appendix A.2.
The training of our autoencoder network takes place in two stages, where we move from an identity function in the bottleneck to hard quantization. In the first stage, we train the autoencoder without any quantization. Similar to we gradually unfreeze the channels in the bottleneck during training (this gives a slight improvement over learning all channels jointly from the start). This yields an efficient weight initialization and enables us to then initialize op and C as described above. In the second stage, we minimize 6). jointly learning network weights and quantization levels. We anneal a by letting the gap between soft and hard quantization error go to zero as the number of iterations t goes to infinity. Let eg = ||F'(x) âx||2 be the soft error, e7 = || F(x) âxl|? be the hard error. With gap(t) = en âes we can denote the error between the actual the desired gap with eg(t) = gap(t) â T/(T +t) gap(0), such that the gap is halved after T iterations. We update o according to o(t + 1) = o(t) + Ke ec(t), where o(t) denotes o at iteration ¢. Fig.|3}in Appendix|A.4|shows the evolution of the gap, soft and hard loss as sigma grows during training. We observed that both vector quantization and entropy loss lead to higher compression rates at a given reconstruction MSE compared to scalar quantization and training without entropy loss, respectively (see Appendix[A.3]for details).
Evaluation. To evaluate the image compression performance of our Soft-to-Hard Autoencoder (SHA) method we use four datasets, namely Kodak [1], B100 [31], Urban100 [14], ImageNET100 (100 randomly selected images from ImageNET [25]) and three standard quality measures, namely peak signal-to-noise ratio (PSNR), structural similarity index (SSIM) [37], and multi-scale SSIM (MS-SSIM), see Appendix A.5 for details. We compare our SHA with the standard JPEG, JPEG 2000, and BPG [10], focusing on compression rates < 1 bits per pixel (bpp) (i.e., the regime where traditional integral transform-based compression algorithms are most challenged). As shown in Fig. 1, for high compression rates (< 0.4 bpp), our SHA outperforms JPEG and JPEG 2000 in terms of MS-SSIM and is competitive with BPG. A similar trend can be observed for SSIM (see Fig. 4 in Appendix A.6 for plots of SSIM and PSNR as a function of bpp). SHA performs best on ImageNET100 and is most challenged on Kodak when compared with JPEG 2000. Visually, SHA-compressed images have fewer artifacts than those compressed by JPEG 2000 (see Fig. 1, and Appendix A.7).
Related methods and discussion. JPEG 2000 [29] uses wavelet-based transformations and adap- tive EBCOT coding. BPG [10], based on a subset of the HEVC video compression standard, is the
7
ACC COMP. METHOD [%] RATIO ORIGINAL MODEL 1.00 92.6 PRUNING + FT. + INDEX CODING + H. CODING [12] 92.6 4.52 92.6 18.25 PRUNING + FT. + K-MEANS + FT. + I.C. + H.C. [11] PRUNING + FT. + HESSIAN-WEIGHTED K-MEANS + FT. + I.C. + H.C. 92.7 20.51 92.7 22.17 PRUNING + FT. + UNIFORM QUANTIZATION + FT. + I.C. + H.C. 92.7 21.01 PRUNING + FT. + ITERATIVE ECSQ + FT. + I.C. + H.C. SOFT-TO-HARD ANNEALING + FT. + H. CODING (OURS) 92.1 19.15 SOFT-TO-HARD ANNEALING + FT. + A. CODING (OURS) 92.1 20.15
Table 1: Accuracies and compression factors for different DNN compression techniques, using a 32-layer ResNet on CIFAR-10. FT. denotes ï¬ne-tuning, IC. denotes index coding and H.C. and A.C. denote Huffman and arithmetic coding, respectively. The pruning based results are from [5].
current state-of-the art for image compression. It uses context-adaptive binary arithmetic coding (CABAC) [21].
Theis et al. [30] rounding to integers SHA (ours) vector quantization grad. of soft relaxation grad. of identity mapping Quantization Backpropagation Entropy estimation (soft) histogram Training material Operating points Gaussian scale mixtures high quality Flickr images ensemble ImageNET single model
The recent works of [30, 4] also showed competitive perfor- mance with JPEG 2000. While we use the architecture of [30], there are stark differences be- tween the works, summarized in the inset table. The work of [4] build a deep model using multiple generalized divisive normaliza- tion (GDN) layers and their inverses (IGDN), which are specialized layers designed to capture local joint statistics of natural images. Furthermore, they model marginals for entropy estimation using linear splines and also use CABAC[21] coding. Concurrent to our work, the method of [16] builds on the architecture proposed in [33], and shows that impressive performance in terms of the MS-SSIM metric can be obtained by incorporating it into the optimization (instead of just minimizing the MSE).
In contrast to the domain-specific techniques adopted by these state-of-the-art methods, our framework for learning compressible representations can realize a competitive image compression system, using only a convolutional autoencoder and simple entropy coding.
# 5 DNN Compression
For DNN compression, we investigate the ResNet [13] architecture for image classification. We adopt the same setting as [5] and consider a 32-layer architecture trained for CIFAR-10 [18]. As in [5], our goal is to learn a compressible representation for all 464,154 trainable parameters of the model.
We concatenate the parameters into a vector W ∈ R^464,154 and employ scalar quantization (m = d), such that Z^T = z = W. We started from the pre-trained original model, which obtains a 92.6% accuracy on the test set. We implemented the entropy minimization by using L = 75 centers and chose β = 0.1 such that the converged entropy would give a compression factor of about 20, i.e., about 32/20 = 1.6 bits per weight. The training was performed with the same learning parameters as the original model was trained with (SGD with momentum 0.9). The annealing schedule used was a simple exponential one, σ(t + 1) = 1.001 σ(t) with σ(0) = 0.4. After 4 epochs of training, when σ(t) has increased by a factor 20, we switched to hard assignments and continued fine-tuning at a 10 times lower learning rate.² Adhering to the benchmark of [5, 12, 11], we obtain the compression factor by dividing the bit cost of storing the uncompressed weights as floats (464,154 × 32 bits) by the total encoding cost of the compressed weights (i.e., L × 32 bits for the centers plus the size of the compressed index stream).
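For bookkeeping, a small sketch of the compression-factor computation described above (ours, not the paper's code); `index_stream_bits` is a placeholder for the length of the entropy-coded index stream.

```python
def dnn_compression_factor(num_weights=464_154, num_centers=75, index_stream_bits=0):
    """Uncompressed float32 cost divided by (codebook cost + coded index stream cost)."""
    uncompressed_bits = num_weights * 32   # weights stored as 32-bit floats
    codebook_bits = num_centers * 32       # L centers, 32 bits each
    return uncompressed_bits / (codebook_bits + index_stream_bits)

# e.g. an entropy of about 1.6 bits per weight would give roughly a 20x factor:
factor = dnn_compression_factor(index_stream_bits=int(464_154 * 1.6))
```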
Our compressible model achieves a comparable test accuracy of 92.1% while compressing the DNN by a factor of 19.15 with Huffman coding and 20.15 with arithmetic coding. Table 1 compares our results with state-of-the-art approaches reported by [5]. We note that while the top methods from the literature also achieve accuracies above 92% and compression factors above 20, they employ a considerable amount of hand-designed steps, such as pruning, retraining, various types of weight clustering, special encoding of the sparse weight matrices into an index-difference based format, and then finally entropy coding. In contrast, we directly minimize the entropy of the weights in the training, obtaining a highly compressible representation using standard entropy coding.
2 We switch to hard assignments since we can get large gradients for weights that are equally close to two centers as Q̃ converges to hard nearest neighbor assignments. One could also employ simple gradient clipping.
In Fig. 5 in Appendix A.8, we show how the sample entropy H(p) decays and the index histograms develop during training, as the network learns to condense most of the weights to a couple of centers when optimizing (6). In contrast, the methods of [12, 11, 5] manually impose 0 as the most frequent center by pruning 80% of the network weights. We note that the recent work of [34] also manages to tackle the problem in a single training procedure, using the minimum description length principle. In contrast to our framework, they take a Bayesian perspective and rely on a parametric assumption on the symbol distribution.
# 6 Conclusions
In this paper we proposed a unified framework for end-to-end learning of compressed representations for deep architectures. By training with a soft-to-hard annealing scheme, gradually transferring from a soft relaxation of the sample entropy and network discretization process to the actual non-differentiable quantization process, we manage to optimize the rate distortion trade-off between the original network loss and the entropy. Our framework can elegantly capture diverse compression tasks, obtaining results competitive with state-of-the-art for both image compression as well as DNN compression. The simplicity of our approach opens up various directions for future work, since our framework can be easily adapted for other tasks where a compressible representation is desired.
# References
[1] Kodak PhotoCD dataset. http://r0k.us/graphics/kodak/, 1999. [2] Eugene L Allgower and Kurt Georg. Numerical continuation methods: an introduction,
volume 13. Springer Science & Business Media, 2012.
[3] Johannes Ballé, Valero Laparra, and Eero P Simoncelli. End-to-end optimization of nonlinear transform codes for perceptual quality. arXiv preprint arXiv:1607.05006, 2016.
[4] Johannes Ballé, Valero Laparra, and Eero P Simoncelli. End-to-end optimized image compres- sion. arXiv preprint arXiv:1611.01704, 2016.
[5] Yoojin Choi, Mostafa El-Khamy, and Jungwon Lee. Towards the limit of network quantization. arXiv preprint arXiv:1612.01543, 2016.
[6] Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. Binaryconnect: Training deep neural networks with binary weights during propagations. In Advances in Neural Information Processing Systems, pages 3123â3131, 2015.
[7] Thomas M Cover and Joy A Thomas. Elements of information theory. John Wiley & Sons, 2012.
[8] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A Large-Scale Hierarchical Image Database. In CVPR09, 2009.
[9] Andre Esteva, Brett Kuprel, Roberto A Novoa, Justin Ko, Susan M Swetter, Helen M Blau, and Sebastian Thrun. Dermatologist-level classiï¬cation of skin cancer with deep neural networks. Nature, 542(7639):115â118, 2017.
[10] Bellard Fabrice. BPG Image format. https://bellard.org/bpg/, 2014. [11] Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural net- works with pruning, trained quantization and huffman coding. arXiv preprint arXiv:1510.00149, 2015.
[12] Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efï¬cient neural network. In Advances in Neural Information Processing Systems, pages 1135â1143, 2015.
[13] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016.
[14] Jia-Bin Huang, Abhishek Singh, and Narendra Ahuja. Single image super-resolution from transformed self-exemplars. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5197â5206, 2015.
[15] Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Quan- tized neural networks: Training neural networks with low precision weights and activations. arXiv preprint arXiv:1609.07061, 2016.
[16] Nick Johnston, Damien Vincent, David Minnen, Michele Covell, Saurabh Singh, Troy Chinen, Sung Jin Hwang, Joel Shor, and George Toderici. Improved lossy image compression with priming and spatially adaptive bit rates for recurrent networks. arXiv preprint arXiv:1703.10114, 2017.
[17] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014.
[18] Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. 2009.
[19] Alex Krizhevsky and Geoffrey E Hinton. Using very deep autoencoders for content-based image retrieval. In ESANN, 2011.
[20] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classiï¬cation with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097â1105, 2012.
[21] Detlev Marpe, Heiko Schwarz, and Thomas Wiegand. Context-based adaptive binary arithmetic coding in the h. 264/avc video compression standard. IEEE Transactions on circuits and systems for video technology, 13(7):620â636, 2003.
[22] D. Martin, C. Fowlkes, D. Tal, and J. Malik. A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In Proc. Intâl Conf. Computer Vision, volume 2, pages 416â423, July 2001.
[23] Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. Xnor-net: Imagenet classiï¬cation using binary convolutional neural networks. In European Conference on Computer Vision, pages 525â542. Springer, 2016.
[24] Kenneth Rose, Eitan Gurewitz, and Geoffrey C Fox. Vector quantization by deterministic annealing. IEEE Transactions on Information theory, 38(4):1249â1257, 1992.
[25] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211â252, 2015.
[26] Wenzhe Shi, Jose Caballero, Ferenc Huszár, Johannes Totz, Andrew P Aitken, Rob Bishop, Daniel Rueckert, and Zehan Wang. Real-time single image and video super-resolution using an efï¬cient sub-pixel convolutional neural network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1874â1883, 2016.
[27] Wenzhe Shi, Jose Caballero, Lucas Theis, Ferenc Huszar, Andrew Aitken, Christian Ledig, and Zehan Wang. Is the deconvolution layer the same as a convolutional layer? arXiv preprint arXiv:1609.07009, 2016.
[28] David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driess- che, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mas- tering the game of go with deep neural networks and tree search. Nature, 529(7587):484â489, 2016.
[29] David S. Taubman and Michael W. Marcellin. JPEG 2000: Image Compression Fundamentals, Standards and Practice. Kluwer Academic Publishers, Norwell, MA, USA, 2001.
[30] Lucas Theis, Wenzhe Shi, Andrew Cunningham, and Ferenc Huszar. Lossy image compression with compressive autoencoders. In ICLR 2017, 2017.
[31] Radu Timofte, Vincent De Smet, and Luc Van Gool. A+: Adjusted Anchored Neighborhood Regression for Fast Super-Resolution, pages 111â126. Springer International Publishing, Cham, 2015.
[32] George Toderici, Sean M OâMalley, Sung Jin Hwang, Damien Vincent, David Minnen, Shumeet Baluja, Michele Covell, and Rahul Sukthankar. Variable rate image compression with recurrent neural networks. arXiv preprint arXiv:1511.06085, 2015.
[33] George Toderici, Damien Vincent, Nick Johnston, Sung Jin Hwang, David Minnen, Joel Shor, and Michele Covell. Full resolution image compression with recurrent neural networks. arXiv preprint arXiv:1608.05148, 2016.
[34] Karen Ullrich, Edward Meeds, and Max Welling. Soft weight-sharing for neural network compression. arXiv preprint arXiv:1702.04008, 2017.
[35] Gregory K Wallace. The JPEG still picture compression standard. IEEE transactions on consumer electronics, 38(1):xviiiâxxxiv, 1992.
[36] Z. Wang, E. P. Simoncelli, and A. C. Bovik. Multiscale structural similarity for image quality assessment. In Asilomar Conference on Signals, Systems Computers, 2003, volume 2, pages 1398â1402 Vol.2, Nov 2003.
[37] Zhou Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4):600â612, April 2004.
[38] Wei Wen, Chunpeng Wu, Yandan Wang, Yiran Chen, and Hai Li. Learning structured sparsity in deep neural networks. In Advances in Neural Information Processing Systems, pages 2074â2082, 2016.
[39] Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforce- ment learning. Machine learning, 8(3-4):229â256, 1992.
[40] Ian H. Witten, Radford M. Neal, and John G. Cleary. Arithmetic coding for data compression. Commun. ACM, 30(6):520â540, June 1987.
[41] Paul Wohlhart, Martin Kostinger, Michael Donoser, Peter M. Roth, and Horst Bischof. Optimiz- ing 1-nearest prototype classiï¬ers. In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), June 2013.
[42] Eyal Yair, Kenneth Zeger, and Allen Gersho. Competitive learning and soft competition for vector quantizer design. IEEE transactions on Signal Processing, 40(2):294â309, 1992. [43] Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, and Yurong Chen. Incremental network quanti- zation: Towards lossless cnns with low-precision weights. arXiv preprint arXiv:1702.03044, 2017.
# A Image Compression Details
# A.1 Architecture
We rely on a variant of the compressive autoencoder proposed recently in [30], using convolutional neural networks for the image encoder and image decoder.3 The first two convolutional layers in the image encoder each downsample the input image by a factor 2 and collectively increase the number of channels from 3 to 128. This is followed by three residual blocks, each with 128 filters. Another convolutional layer then downsamples again by a factor 2 and decreases the number of channels to c, where c is a hyperparameter ([30] use 64 and 96 channels). For a w × h-dimensional input image, the output of the image encoder is the w/8 × h/8 × c-dimensional bottleneck tensor.

The image decoder then mirrors the image encoder, using upsampling instead of downsampling, and deconvolutions instead of convolutions, mapping the bottleneck tensor into a w × h-dimensional output image. In contrast to the "subpixel" layers [26, 27] used in [30], we use standard deconvolutions for simplicity.
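A rough PyTorch-style sketch of the image encoder as we read the description above (not the authors' implementation); kernel sizes, padding, and the exact channel progression of the first two layers are assumptions.

```python
import torch.nn as nn

class ResBlock(nn.Module):
    """Simple residual block with 128 filters (kernel size chosen by us)."""
    def __init__(self, ch=128):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1))

    def forward(self, x):
        return x + self.body(x)

def image_encoder(c):
    """Two stride-2 convs (3 -> 128 channels), three residual blocks, one stride-2 conv to c channels.
    Maps a w x h image to a w/8 x h/8 x c bottleneck tensor."""
    return nn.Sequential(
        nn.Conv2d(3, 64, 5, stride=2, padding=2), nn.ReLU(inplace=True),
        nn.Conv2d(64, 128, 5, stride=2, padding=2), nn.ReLU(inplace=True),
        ResBlock(), ResBlock(), ResBlock(),
        nn.Conv2d(128, c, 5, stride=2, padding=2))
```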
# A.2 Hyperparameters
We do vector quantization to L = 1000 centers, using (pw, ph) = (2, 2), i.e., m = d/(2 · 2). We trained different combinations of β and c to explore different rate-distortion tradeoffs (measuring distortion in MSE). As β controls to which extent the network minimizes entropy, β directly controls bpp (see top left plot in Fig. 3). We evaluated all pairs (c, β) with c ∈ {8, 16, 32, 48} and selected 5 representative pairs (models) with average bpps roughly corresponding to uniformly spread points in the interval [0.1, 0.8] bpp. This defines a "quality index" for our model family, analogous to the JPEG quality factor.
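For illustration, a small numpy sketch of how a bottleneck tensor could be split into (pw, ph) = (2, 2) patch vectors, giving m = d/4 columns of Z; the channel-last layout is our assumption.

```python
import numpy as np

def to_patch_vectors(z, ph=2, pw=2):
    """Turn an (H, W, C) bottleneck tensor into d/(ph*pw) vectors of dimension ph*pw."""
    H, W, C = z.shape
    blocks = z.reshape(H // ph, ph, W // pw, pw, C).transpose(0, 2, 4, 1, 3)
    return blocks.reshape(-1, ph * pw)  # each row is quantized to one of the L = 1000 centers

z = np.random.randn(16, 16, 32)   # e.g. a 16 x 16 x 32 bottleneck (d = 8192)
Z = to_patch_vectors(z)           # shape (2048, 4), i.e. m = d/4 vectors
```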
We experimented with the other training parameters on a setup with c = 32, which we chose as follows. In the first stage we train for 250k iterations using a learning rate of 1e−4. In the second stage, we use an annealing schedule with T = 50k, K_G = 100, over 800k iterations using a learning rate of 1e−5. In both stages, we use a weak l2 regularizer over all learnable parameters, with λ = 1e−12.
# A.3 Effect of Vector Quantization and Entropy Loss
Figure 2: PSNR on ImageNET100 as a function of the rate for 2 × 2-dimensional centers (Vector), for 1 × 1-dimensional centers (Scalar), and for 2 × 2-dimensional centers without entropy loss (β = 0). JPEG is included for reference.
To investigate the effect of vector quantization, we trained models as described in Section 4, but instead of using vector quantization, we set L = 6 and quantized to 1 × 1-dimensional (scalar) centers, i.e., (ph, pw) = (1, 1), m = d. Again, we chose 5 representative pairs (c, β). We chose L = 6 to get approximately the same number of unique symbol assignments as for 2 × 2 centers with L = 1000.

To investigate the effect of the entropy loss, we trained models using 2 × 2 centers for c ∈ {8, 16, 32, 48} (as described above), but used β = 0.

Fig. 2 shows how both vector quantization and entropy loss lead to higher compression rates at a given reconstruction MSE compared to scalar quantization and training without entropy loss, respectively.
3We note that the image encoder (decoder) refers to the left (right) part of the autoencoder, which encodes (decodes) the data to (from) the bottleneck (not to be confused with the symbol encoder (decoder) in Section 3).
# A.4 Effect of Annealing
Figure 3: Entropy loss for three β values, soft and hard PSNR, as well as gap(t) and σ as a function of the iteration t.
# A.5 Data Sets and Quality Measure Details
Kodak [1] is the most frequently employed dataset for analyzing image compression performance in recent years. It contains 24 color 768 × 512 images covering a variety of subjects, locations and lighting conditions.

B100 [31] is a set of 100 content diverse color 481 × 321 test images from the Berkeley Segmentation Dataset [22].
Urban100 [14] has 100 color images selected from Flickr with labels such as urban, city, architecture, and structure. The images are larger than those from B100 or Kodak, in that the longer side of an image is always bigger than 992 pixels. Both B100 and Urban100 are commonly used to evaluate image super-resolution methods.
ImageNET100 contains 100 images randomly selected by us from ImageNET [25], also downsam- pled and cropped, see above.
Quality measures. PSNR (peak signal-to-noise ratio) is a standard measure in direct monotonous relation with the mean square error (MSE) computed between two signals. SSIM [37] and its multi-scale variant MS-SSIM [36] are structural similarity indices proposed to measure the similarity of two images. They correlate better with human perception than PSNR.
We compute quantitative similarity scores between each compressed image and the corresponding uncompressed image and average them over whole datasets of images. For comparison with JPEG we used libjpeg4, for JPEG 2000 we used the Kakadu implementation5, subtracting in both cases the size of the header from the file size to compute the compression rate. For comparison with BPG we used the reference implementation6 and used the value reported in the picture_data_length header field as file size.
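For reference, a minimal sketch of how PSNR can be derived from the MSE between an uncompressed image and its reconstruction (8-bit peak value assumed):

```python
import numpy as np

def psnr(reference, reconstruction, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images of the same shape."""
    mse = np.mean((reference.astype(np.float64) - reconstruction.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```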
4http://libjpeg.sourceforge.net/ 5http://kakadusoftware.com/ 6https://bellard.org/bpg/
# A.6 Image Compression Performance
Figure 4: Average MS-SSIM, SSIM, and PSNR as a function of the rate for the ImageNET100, Urban100, B100 and Kodak datasets.
# A.7 Image Compression Visual Examples
An online supplementary of visual examples is available at http://www.vision.ee.ethz.ch/ ~aeirikur/compression/visuals2.pdf, showing the output of compressing the ï¬rst four images of each of the four datasets with our method, BPG, JPEG, and JPEG 2000, at low bitrates.
# A.8 DNN Compression: Entropy and Histogram Evolution
Figure 5: We show how the sample entropy H(p) decays during training, due to the entropy loss term in (6), and corresponding index histograms at three time instants. Top left: Evolution of the sample entropy H(p). Top right: the histogram for the entropy H = 4.07 at t = 216. Bottom left and right: the corresponding sample histogram when H(p) reaches 2.90 bits per weight at t = 475 and the final histogram for H(p) = 1.58 bits per weight at t = 520.
1704.00109 | Snapshot Ensembles: Train 1, get M for free | Ensembles of neural networks are known to be much more robust and accurate
than individual networks. However, training multiple deep networks for model
averaging is computationally expensive. In this paper, we propose a method to
obtain the seemingly contradictory goal of ensembling multiple neural networks
at no additional training cost. We achieve this goal by training a single
neural network, converging to several local minima along its optimization path
and saving the model parameters. To obtain repeated rapid convergence, we
leverage recent work on cyclic learning rate schedules. The resulting
technique, which we refer to as Snapshot Ensembling, is simple, yet
surprisingly effective. We show in a series of experiments that our approach is
compatible with diverse network architectures and learning tasks. It
consistently yields lower error rates than state-of-the-art single models at no
additional training cost, and compares favorably with traditional network
ensembles. On CIFAR-10 and CIFAR-100 our DenseNet Snapshot Ensembles obtain
error rates of 3.4% and 17.4% respectively. | http://arxiv.org/pdf/1704.00109 | Gao Huang, Yixuan Li, Geoff Pleiss, Zhuang Liu, John E. Hopcroft, Kilian Q. Weinberger | cs.LG | null | null | cs.LG | 20170401 | 20170401 | 7 1 0 2
r p A 1 ] G L . s c [
1 v 9 0 1 0 0 . 4 0 7 1 : v i X r a
Published as a conference paper at ICLR 2017
# SNAPSHOT ENSEMBLES: TRAIN 1, GET M FOR FREE
Gao Huang*, Yixuan Li*, Geoff Pleiss Cornell University {gh349, yl2363}@cornell.edu, geoff@cs.cornell.edu
# Zhuang Liu Tsinghua University liuzhuangthu@gmail.com
# John E. Hopcroft, Kilian Q. Weinberger Cornell University jeh@cs.cornell.edu, kqw4@cornell.edu
# ABSTRACT
Ensembles of neural networks are known to be much more robust and accurate than individual networks. However, training multiple deep networks for model averaging is computationally expensive. In this paper, we propose a method to obtain the seemingly contradictory goal of ensembling multiple neural networks at no additional training cost. We achieve this goal by training a single neural net- work, converging to several local minima along its optimization path and saving the model parameters. To obtain repeated rapid convergence, we leverage recent work on cyclic learning rate schedules. The resulting technique, which we refer to as Snapshot Ensembling, is simple, yet surprisingly effective. We show in a series of experiments that our approach is compatible with diverse network architectures and learning tasks. It consistently yields lower error rates than state-of-the-art single models at no additional training cost, and compares favorably with tradi- tional network ensembles. On CIFAR-10 and CIFAR-100 our DenseNet Snapshot Ensembles obtain error rates of 3.4% and 17.4% respectively.
# 1 INTRODUCTION
Stochastic Gradient Descent (SGD) (Bottou, 2010) and its accelerated variants (Kingma & Ba, 2014; Duchi et al., 2011) have become the de-facto approaches for optimizing deep neural networks. The popularity of SGD can be attributed to its ability to avoid and even escape spurious saddle-points and local minima (Dauphin et al., 2014). Although avoiding these spurious solutions is generally considered positive, in this paper we argue that these local minima contain useful information that may in fact improve model performance.
Although deep networks typically never converge to a global minimum, there is a notion of âgoodâ and âbadâ local minima with respect to generalization. Keskar et al. (2016) argue that local minima with ï¬at basins tend to generalize better. SGD tends to avoid sharper local minima because gradients are computed from small mini-batches and are therefore inexact (Keskar et al., 2016). If the learning- rate is sufï¬ciently large, the intrinsic random motion across gradient steps prevents the optimizer from reaching any of the sharp basins along its optimization path. However, if the learning rate is small, the model tends to converge into the closest local minimum. These two very different behaviors of SGD are typically exploited in different phases of optimization (He et al., 2016a). Initially the learning rate is kept high to move into the general vicinity of a ï¬at local minimum. Once this search has reached a stage in which no further progress is made, the learning rate is dropped (once or twice), triggering a descent, and ultimately convergence, to the ï¬nal local minimum.
It is well established (Kawaguchi, 2016) that the number of possible local minima grows expo- nentially with the number of parametersâof which modern neural networks can have millions. It is therefore not surprising that two identical architectures optimized with different initializations or minibatch orderings will converge to different solutions. Although different local minima often have very similar error rates, the corresponding neural networks tend to make different mistakes. This
*Authors contribute equally.
Figure 1: Left: Illustration of SGD optimization with a typical learning rate schedule. The model converges to a minimum at the end of training. Right: Illustration of Snapshot Ensembling. The model undergoes several learning rate annealing cycles, converging to and escaping from multiple local minima. We take a snapshot at each minimum for test-time ensembling.
diversity can be exploited through ensembling, in which multiple neural networks are trained from different initializations and then combined with majority voting or averaging (Caruana et al., 2004). Ensembling often leads to drastic reductions in error rates. In fact, most high proï¬le competitions, e.g. Imagenet (Deng et al., 2009) or Kaggle1, are won by ensembles of deep learning architectures.
Despite its obvious advantages, the use of ensembling for deep networks is not nearly as wide- spread as it is for other algorithms. One likely reason for this lack of adaptation may be the cost of learning multiple neural networks. Training deep networks can last for weeks, even on high performance hardware with GPU acceleration. As the training cost for ensembles increases linearly, ensembles can quickly becomes uneconomical for most researchers without access to industrial scale computational resources.
In this paper we focus on the seemingly-contradictory goal of learning an ensemble of multiple neural networks without incurring any additional training costs. We achieve this goal with a training method that is simple and straight-forward to implement. Our approach leverages the non-convex nature of neural networks and the ability of SGD to converge to and escape from local minima on demand. Instead of training M neural networks independently from scratch, we let SGD converge M times to local minima along its optimization path. Each time the model converges, we save the weights and add the corresponding network to our ensemble. We then restart the optimization with a large learning rate to escape the current local minimum. More speciï¬cally, we adopt the cycling procedure suggested by Loshchilov & Hutter (2016), in which the learning rate is abruptly raised and then quickly lowered to follow a cosine function. Because our ï¬nal ensemble consists of snapshots of the optimization path, we refer to our approach as Snapshot Ensembling. Figure 1 presents a high-level overview of this method.
In contrast to traditional ensembles, the training time for the entire ensemble is identical to the time required to train a single traditional model. During testing time, one can evaluate and average the last (and therefore most accurate) m out of M models. Our approach is naturally compatible with other methods to improve the accuracy, such as data augmentation, stochastic depth (Huang et al., 2016b), or batch normalization (Ioffe & Szegedy, 2015). In fact, Snapshot Ensembles can even be ensembled, if for example parallel resources are available during training. In this case, an ensemble of K Snapshot Ensembles yields K Ã M models at K times the training cost.
We evaluate the efï¬cacy of Snapshot Ensembles on three state-of-the-art deep learning architectures for object recognition: ResNet (He et al., 2016b), Wide-ResNet (Zagoruyko & Komodakis, 2016), and DenseNet (Huang et al., 2016a). We show across four different data sets that Snapshot Ensem- bles almost always reduce error without increasing training costs. For example, on CIFAR-10 and CIFAR-100, Snapshot Ensembles obtains error rates of 3.44% and 17.41% respectively.
1www.kaggle.com
# 2 RELATED WORK
Neural network ensembles have been widely studied and applied in machine learning (Hansen & Salamon, 1990; Krogh et al., 1995). However, most of these prior studies focus on improving the generalization performance, while few of them address the cost of training ensembles.
As an alternative to traditional ensembles, so-called âimplicitâ ensembles have high efï¬ciency dur- ing both training and testing (Srivastava et al., 2014; Wan et al., 2013; Huang et al., 2016b; Singh et al., 2016; Krueger et al., 2016). The Dropout (Srivastava et al., 2014) technique creates an en- semble out of a single model by âdroppingâ â or zeroing â random sets of hidden nodes during each mini-batch. At test time, no nodes are dropped, and each node is scaled by the probability of surviving during training. Srivastava et al. claim that Dropout reduces overï¬tting by preventing the co-adaptation of nodes. An alternative explanation is that this mechanism creates an exponential number of networks with shared weights during training, which are then implicitly ensembled at test time. DropConnect (Wan et al., 2013) uses a similar trick to create ensembles at test time by dropping connections (weights) during training instead of nodes. The recently proposed Stochastic Depth technique (Huang et al., 2016b) randomly drops layers during training to create an implicit ensemble of networks with varying depth at test time. Finally, Swapout (Singh et al., 2016) is a stochastic training method that generalizes Dropout and Stochastic Depth. From the perspective of model ensembling, Swapout creates diversiï¬ed network structures for model averaging. Our pro- posed method similarly trains only a single model; however, the resulting ensemble is âexplicitâ in that the models do not share weights. Furthermore, our method can be used in conjunction with any of these implicit ensembling techniques.
Several recent publications focus on reducing the test time cost of ensembles, by transferring the âknowledgeâ of cumbersome ensembles into a single model (Bucilu et al., 2006; Hinton et al., 2015). Hinton et al. (2015) propose to use an ensemble of multiple networks as the target of a single (smaller) network. Our proposed method is complementary to these works as we aim to reduce the training cost of ensembles rather than the test-time cost.
Perhaps most similar to our work is that of Swann & Allinson (1998) and Xie et al. (2013), who introduce the hori- explore creating ensembles from slices of the learning trajectory. Xie et al. zontal and vertical ensembling method, which combines the output of networks within a range of training epochs. More recently, Jean et al. (2014) and Sennrich et al. (2016) show improvement by ensembling the intermediate stages of model training. Laine & Aila (2016) propose a temporal ensembling method for semi-supervised learning, which achieves consensus among models trained with different regularization and augmentation conditions for better generalization performance. Fi- nally, Moghimi et al. (2016) show that boosting can be applied to convolutional neural networks to create strong ensembles. Our work differs from these prior works in that we force the model to visit multiple local minima, and we take snapshots only when the model reaches a minimum. We believe this key insight allows us to leverage more power from our ensembles.
Our work is inspired by the recent ï¬ndings of Loshchilov & Hutter (2016) and Smith (2016), who show that cyclic learning rates can be effective for training convolutional neural networks. The au- thors show that each cycle produces models which are (almost) competitive to those learned with traditional learning rate schedules while requiring a fraction of training iterations. Although model performance temporarily suffers when the learning rate cycle is restarted, the performance eventu- ally surpasses the previous cycle after annealing the learning rate. The authors suggest that cycling perturbs the parameters of a converged model, which allows the model to ï¬nd a better local mini- mum. We build upon these recent ï¬ndings by (1) showing that there is signiï¬cant diversity in the local minima visited during each cycle and (2) exploiting this diversity using ensembles. We are not concerned with speeding up or improving the training of a single model; rather, our goal is to extract an ensemble of classiï¬ers while following the optimization path of the ï¬nal model.
# 3 SNAPSHOT ENSEMBLING
Snapshot Ensembling produces an ensemble of accurate and diverse models from a single training process. At the heart of Snapshot Ensembling is an optimization process which visits several local minima before converging to a ï¬nal solution. We take model snapshots at these various minima, and average their predictions at test time.
Ensembles work best if the individual models (1) have low test error and (2) do not overlap in the set of examples they misclassify. Along most of the optimization path, the weight assignments of a neural network tend not to correspond to low test error. In fact, it is commonly observed that the validation error drops significantly only after the learning rate has been reduced, which is typically done after several hundred epochs. Our approach is inspired by the observation that training neural networks for fewer epochs and dropping the learning rate earlier has minor impact on the final test error (Loshchilov & Hutter, 2016). This seems to suggest that local minima along the optimization path become promising (in terms of generalization error) after only a few epochs.

Cyclic Cosine Annealing. To converge to multiple local minima, we follow a cyclic annealing schedule as proposed by Loshchilov & Hutter (2016). We lower the learning rate at a very fast pace, encouraging the model to converge towards its first local minimum after as few as 50 epochs. The optimization is then continued at a larger learning rate, which perturbs the model and dislodges it from the minimum. We repeat this process several times to obtain multiple convergences. Formally, the learning rate α has the form:

α(t) = f(mod(t − 1, ⌈T/M⌉)),   (1)

where t is the iteration number, T is the total number of training iterations, and f is a monotonically decreasing function. In other words, we split the training process into M cycles, each of which starts with a large learning rate, which is annealed to a smaller learning rate. The large learning rate α = f(0) provides the model enough energy to escape from a critical point, while the small learning rate α = f(⌈T/M⌉) drives the model to a well behaved local minimum. In our experiments, we set f to be the shifted cosine function proposed by Loshchilov & Hutter (2016):

α(t) = (α0/2) (cos(π · mod(t − 1, ⌈T/M⌉)/⌈T/M⌉) + 1),   (2)

where α0 is the initial learning rate. Intuitively, this function anneals the learning rate from its initial value α0 to f(⌈T/M⌉) ≈ 0 over the course of a cycle. Following Loshchilov & Hutter (2016), we update the learning rate at each iteration rather than at every epoch. This improves the convergence of short cycles, even when a large initial learning rate is used.

Figure 2: Training loss of 100-layer DenseNet on CIFAR-10 using standard learning rate (blue) and M = 6 cosine annealing cycles (red). The intermediate models, denoted by the dotted lines, form an ensemble at the end of training.

Snapshot Ensembling. Figure 2 depicts the training process using cyclic and traditional learning rate schedules. At the end of each training cycle, it is apparent that the model reaches a local minimum with respect to the training loss. Thus, before raising the learning rate, we take a "snapshot" of the model weights (indicated as vertical dashed black lines). After training M cycles, we have M model snapshots, f1 ... fM, each of which will be used in the final ensemble. It is important to highlight that the total training time of the M snapshots is the same as training a model with a standard schedule (indicated in blue). In some cases, the standard learning rate schedule achieves lower training loss than the cyclic schedule; however, as we will show in the next section, the benefits of ensembling outweigh this difference.

Ensembling at Test Time. The ensemble prediction at test time is the average of the last m (m ≤ M) models' softmax outputs. Let x be a test sample and let h_i(x) be the softmax score of snapshot i. The output of the ensemble is a simple average of the last m models: h_Ensemble = (1/m) Σ_{i=0}^{m−1} h_{M−i}(x). We always ensemble the last m models, as these models tend to have the lowest test error.
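As a concrete reading of Eq. (2) and the test-time averaging above, a minimal sketch (ours; variable names are illustrative):

```python
import numpy as np

def snapshot_lr(t, T, M, alpha0):
    """Learning rate at iteration t (1-indexed) for M cosine annealing cycles over T iterations."""
    cycle_len = int(np.ceil(T / M))
    return alpha0 / 2.0 * (np.cos(np.pi * ((t - 1) % cycle_len) / cycle_len) + 1.0)

def snapshot_ensemble_predict(snapshot_softmax_fns, x, m):
    """Average the softmax outputs of the last m snapshots h_{M-m+1}, ..., h_M."""
    return np.mean([h(x) for h in snapshot_softmax_fns[-m:]], axis=0)
```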
| Method | C10 | C100 | SVHN | Tiny ImageNet |
| --- | --- | --- | --- | --- |
| ResNet-110 | | | | |
| Single model | 5.52 | 28.02 | 1.96 | 46.50 |
| NoCycle Snapshot Ensemble | 5.49 | 26.97 | 1.78 | 43.69 |
| SingleCycle Ensembles | 6.66 | 24.54 | 1.74 | 42.60 |
| Snapshot Ensemble (α0 = 0.1) | 5.73 | 25.55 | 1.63 | 40.54 |
| Snapshot Ensemble (α0 = 0.2) | 5.32 | 24.19 | 1.66 | 39.40 |
| Wide-ResNet-32 | | | | |
| Single model | 5.43 | 23.55 | 1.90 | 39.63 |
| Dropout | 4.68 | 22.82 | 1.81 | 36.58 |
| NoCycle Snapshot Ensemble | 5.18 | 22.81 | 1.81 | 38.64 |
| SingleCycle Ensembles | 5.95 | 21.38 | 1.65 | 35.53 |
| Snapshot Ensemble (α0 = 0.1) | 4.41 | 21.26 | 1.64 | 35.45 |
| Snapshot Ensemble (α0 = 0.2) | 4.73 | 21.56 | 1.51 | 32.90 |
| DenseNet-40 | | | | |
| Single model | 5.24† | 24.42† | 1.77 | 39.09 |
| Dropout | 6.08 | 25.79 | 1.79† | 39.68 |
| NoCycle Snapshot Ensemble | 5.20 | 24.63 | 1.80 | 38.51 |
| SingleCycle Ensembles | 5.43 | 22.51 | 1.87 | 38.00 |
| Snapshot Ensemble (α0 = 0.1) | 4.99 | 23.34 | 1.64 | 37.25 |
| Snapshot Ensemble (α0 = 0.2) | 4.84 | 21.93 | 1.73 | 36.61 |
| DenseNet-100 | | | | |
| Single model | 3.74† | 19.25† | - | - |
| Dropout | 3.65 | 18.77 | - | - |
| NoCycle Snapshot Ensemble | 3.80 | 19.30 | - | - |
| SingleCycle Ensembles | 4.52 | 18.38 | - | - |
| Snapshot Ensemble (α0 = 0.1) | 3.57 | 18.12 | - | - |
| Snapshot Ensemble (α0 = 0.2) | 3.44 | 17.41 | - | - |
Table 1: Error rates (%) on CIFAR-10 and CIFAR-100 datasets. All methods in the same group are trained for the same number of iterations. Results of our method are colored in blue, and the best result for each network/dataset pair is bolded. † indicates numbers which we take directly from Huang et al. (2016a).
# 4 EXPERIMENTS
We demonstrate the effectiveness of Snapshot Ensembles on several benchmark datasets, comparing with competitive baselines. We run all experiments with Torch 7 (Collobert et al., 2011)2.
4.1 DATASETS
CIFAR. The two CIFAR datasets (Krizhevsky & Hinton, 2009) consist of colored natural images sized at 32Ã32 pixels. CIFAR-10 (C10) and CIFAR-100 (C100) images are drawn from 10 and 100 classes, respectively. For each dataset, there are 50,000 training images and 10,000 images reserved for testing. We use a standard data augmentation scheme (Lin et al., 2013; Romero et al., 2014; Lee et al., 2015; Springenberg et al., 2014; Srivastava et al., 2015; Huang et al., 2016b; Larsson et al., 2016), in which the images are zero-padded with 4 pixels on each side, randomly cropped to produce 32Ã32 images, and horizontally mirrored with probability 0.5. SVHN. The Street View House Numbers (SVHN) dataset (Netzer et al., 2011) contains 32 à 32 colored digit images from Google Street View, with one class for each digit. There are 73,257 images in the training set and 26,032 images in the test set. Following common practice (Sermanet et al., 2012; Goodfellow et al., 2013; Huang et al., 2016a), we withhold 6,000 training images for validation, and train on the remaining images without data augmentation. Tiny ImageNet. The Tiny ImageNet dataset3 consists of a subset of ImageNet images (Deng et al., 2009). There are 200 classes, each of which has 500 training images and 50 validation images. Each image is resized to 64 à 64 and augmented with random crops, horizontal mirroring, and RGB intensity scaling (Krizhevsky et al., 2012). ImageNet. The ILSVRC 2012 classiï¬cation dataset (Deng et al., 2009) consists of 1000 images classes, with a total of 1.2 million training images and 50,000 validation images. We adopt the same
2Code to reproduce results is available at https://github.com/gaohuang/SnapshotEnsemble 3https://tiny-imagenet.herokuapp.com
Figure 3: DenseNet-100 Snapshot Ensemble performance on CIFAR-10 and CIFAR-100 with restart learning rate α0 = 0.1 (left two) and α0 = 0.2 (right two). Each ensemble is trained with M = 6 annealing cycles (50 epochs per each).
data augmentation scheme as in (He et al., 2016a; Huang et al., 2016a) and apply a 224 × 224 center crop to images at test time.
4.2 TRAINING SETTING
Architectures. We test several state-of-the-art architectures, including residual networks (ResNet) (He et al., 2016a), Wide ResNet (Zagoruyko & Komodakis, 2016) and DenseNet (Huang et al., 2016a). For ResNet, we use the original 110-layer network introduced by He et al. (2016a). Wide-ResNet is a 32-layer ResNet with 4 times as many convolutional features per layer as a stan- dard ResNet. For DenseNet, our large model follows the same setup as (Huang et al., 2016a), with depth L = 100 and growth rate k = 24. In addition, we also evaluate our method on a small DenseNet, with depth L = 40 and k = 12. To adapt all these networks to Tiny ImageNet, we add a stride of 2 to the ï¬rst layer of the models, which downsamples the images to 32 à 32. For ImageNet, we test the 50-layer ResNet proposed in (He et al., 2016a). We use a mini batch size of 64.4 Baselines. Snapshot Ensembles incur the training cost of a single model; therefore, we compare with baselines that require the same amount of training. First, we compare against a Single Model trained with a standard learning rate schedule, dropping the learning rate from 0.1 to 0.01 halfway through training, and then to 0.001 when training is at 75%. Additionally, to compare against implicit ensembling methods, we test against a single model trained with Dropout. This baseline uses the same learning rate as above, and drops nodes during training with a probability of 0.2.
We then test the Snapshot Ensemble algorithm trained with the cyclic cosine learning rate as de- scribed in (2). We test models with the max learning rate α0 set to 0.1 and 0.2. In both cases, we divide the training process into learning rate cycles. Model snapshots are taken after each learn- ing rate cycle. Additionally, we train a Snapshot Ensemble with a non-cyclic learning rate schedule. This NoCycle Snapshot Ensemble, which uses the same schedule as the Single Model and Dropout baselines, is meant to highlight the impact of cyclic learning rates for our method. To accurately compare with the cyclic Snapshot Ensembles, we take the same number of snapshots equally spaced throughout the training process. Finally, we compare against SingleCycle Ensembles, a Snapshot Ensemble variant in which the network is re-initialized at the beginning of every cosine learning rate cycle, rather than using the parameters from the previous optimization cycle. This baseline es- sentially creates a traditional ensemble, yet each network only has 1/M of the typical training time. This variant is meant to highlight the tradeoff between model diversity and model convergence. Though SingleCycle Ensembles should in theory explore more of the parameter space, the models do not beneï¬t from the optimization of previous cycles. Training Budget. On CIFAR datasets, the training budget is B = 300 epochs for DenseNet-40 and DenseNet-100, and B = 200 for ResNet and Wide ResNet models. Snapshot variants are trained with M = 6 cycles of B/M = 50 epochs for DenseNets, and M = 5 cycles of B/M = 40 epochs for ResNets/Wide ResNets. SVHN models are trained with a budget of B = 40 epochs (5 cycles of 8 epochs). For Tiny ImageNet, we use a training budget of B = 150 (6 cycles of 25 epochs). Finally, ImageNet is trained with a budget of B = 90 epochs, and we trained 2 Snapshot variants: one with M = 2 cycles and one with M = 3.
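Given a budget of B epochs and M cycles, snapshots are simply taken at the end of each cycle; a small sketch of that bookkeeping (ours):

```python
def snapshot_epochs(budget_epochs, num_cycles):
    """Epochs (1-indexed) at which snapshots are taken: the end of each annealing cycle."""
    cycle_len = budget_epochs // num_cycles
    return [cycle_len * (i + 1) for i in range(num_cycles)]

# DenseNet on CIFAR: B = 300 epochs, M = 6 cycles -> snapshots at epochs 50, 100, ..., 300
assert snapshot_epochs(300, 6) == [50, 100, 150, 200, 250, 300]
```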
4Exceptions: ResNet-110 and Wide-ResNet are trained with batch size 128 on Tiny ImageNet. The Ima- geNet model is trained with batch size 256.
# 4.3 SNAPSHOT ENSEMBLE RESULTS
Accuracy. The main results are summarized in Table 1. In most cases, Snapshot ensem- bles achieve lower error than any of the base- line methods. Most notably, Snapshot Ensem- bles yield an error rate of 17.41% on CIFAR- 100 using large DenseNets, far outperforming the record of 19.25% under the same training cost and architecture (Huang et al., 2016a). Our method has the most success on CIFAR-100 and Tiny ImageNet, which is likely due to the complexity of these datasets. The softmax outputs for these datasets are high dimensional due to the large number of classes, making it unlikely that any two models make the same predictions. Snap- shot Ensembling is also capable of improving the competitive baselines for CIFAR-10 and SVHN as well, reducing error by 1% and 0.4% respectively with the Wide ResNet architecture.
| Method | Val. Error (%) |
| --- | --- |
| Single model | 24.01 |
| Snapshot Ensemble (M = 2) | 23.33 |
| Snapshot Ensemble (M = 3) | 23.96 |
The NoCycle Snapshot Ensemble generally has little effect on performance, and in some instances even increases the test error. This highlights the need for a cyclic learning rate for useful ensembling. The SingleCycle Ensemble has similarly mixed performance. In some cases, e.g., DenseNet-40 on CIFAR-100, the SingleCycle Ensemble is competitive with Snapshot Ensembles. However, as the model size increases to 100 layers, it does not perform as well. This is because it is difï¬cult to train a large model from scratch in only a few epochs. These results demonstrate that Snapshot Ensembles tend to work best when utilizing information from previous cycles. Effectively, Snapshot Ensembles strike a balance between model diversity and optimization.
Table 2 shows Snapshot Ensemble results on ImageNet. The Snapshot Ensemble with M = 2 achieves 23.33% validation error, outperforming the single model baseline with 24.01% validation error. It appears that 2 cycles is the optimal choice for the ImageNet dataset. Provided with the limited total training budget B = 90 epochs, we hypothesize that allocating fewer than B/2 = 45 epochs per training cycle is insufï¬cient for the model to converge on such a large dataset. Ensemble Size. In some applications, it may be beneï¬cial to vary the size of the ensemble dynamically at test time depending on available resources. Figure 3 displays the performance of DenseNet-40 on the CIFAR-100 dataset as the effective ensemble size, m, is varied. Each en- semble consists of snapshots from later cycles, as these snapshots have received the most training and therefore have likely converged to bet- ter minima. Although ensembling more models generally gives better performance, we observe signiï¬cant drops in error when the second and third models are added to the ensemble. In most cases, an ensemble of two models outperforms the baseline model. Restart Learning Rate. The effect of the restart learning rate can be observed in Figure 3. The left two plots show performance when using a restart learning rate of α0 = 0.1 at the beginning of each cycle, and the right two plots show α0 = 0.2. In most cases, ensembles with the larger restart learning rate perform better, presumably because the strong perturbation in between cycles increases the diversity of local minima. Varying Number of Cycles. Given a ï¬xed training budget, there is a trade-off between the number of learning rate cycles and their length. Therefore, we investigate how the number of cycles M affects the ensemble performance, given a ï¬xed training budget. We train a 40-layer DenseNet on the CIFAR-100 dataset with an initial learning rate of α0 = 0.2. We ï¬x the total training budget B = 300 epochs, and vary the value of M â {2, 4, 6, 8, 10}. As shown in Table 3, our method is relatively robust with respect to different values of M . At the extremes, M = 2 and M = 10, we ï¬nd a slight degradation in performance, as the cycles are either too few or too short. In practice, we ï¬nd that setting M to be 4 â¼ 8 works reasonably well. Varying Training Budget. The left and middle panels of Figure 4 show the performance of Snap- shot Ensembles and SingleCycle Ensembles as a function of training budget (where the number of cycles is ï¬xed at M = 6). We train a 40-layer DenseNet on CIFAR-10 and CIFAR-100, with an ini- tial learning rate of α0 = 0.1, varying the total number of training epochs from 60 to 300. We observe
| M | 2 | 4 | 6 | 8 | 10 |
| --- | --- | --- | --- | --- | --- |
| Test Error (%) | 22.92 | 22.07 | 21.93 | 21.89 | 22.16 |
Figure 4: Snapshot Ensembles under different training budgets on (Left) CIFAR-10 and (Middle) CIFAR-100. Right: Comparison of Snapshot Ensembles with true ensembles.
Figure 5: Interpolations in parameter space between the final model (sixth snapshot) and all intermediate snapshots. λ = 0 represents an intermediate snapshot model, while λ = 1 represents the final model. Left: A Snapshot Ensemble, with cosine annealing cycles (α0 = 0.2 every B/M = 50 epochs). Right: A NoCycle Snapshot Ensemble (two learning rate drops, snapshots every 50 epochs).
that both Snapshot Ensembles and SingleCycle Ensembles become more accurate as training bud- get increases. However, we note that as training budget decreases, Snapshot Ensembles still yield competitive results, while the performance of the SingleCycle Ensembles degrades rapidly. These results highlight the improvements that Snapshot Ensembles obtain when the budget is low. If the budget is high, then the SingleCycle baseline approaches true ensembles and outperforms Snapshot ensembles eventually. Comparison with True Ensembles. We compare Snapshot Ensembles with the traditional ensem- bling method. The right panel of Figure 4 shows the test error rates of DenseNet-40 on CIFAR-100. The true ensemble method averages models that are trained with 300 full epochs, each with differ- ent weight initializations. Given the same number of models at test time, the error rate of the true ensemble can be seen as a lower bound of our method. Our method achieves performance that is comparable with ensembling of 2 independent models, but with the training cost of one model.
# 4.4 DIVERSITY OF MODEL ENSEMBLES
Parameter Space. We hypothesize that the cyclic learning rate schedule creates snapshots which are not only accurate but also diverse with respect to model predictions. We qualitatively measure this diversity by visualizing the local minima that models converge to. To do so, we linearly interpolate snapshot models, as described by Goodfellow et al. (2014). Let J(θ) be the test error of a model using parameters θ. Given θ1 and θ2 — the parameters from models 1 and 2 respectively — we can compute the loss for a convex combination of model parameters: J(λθ1 + (1 − λ)θ2), where λ is a mixing coefficient. Setting λ to 1 results in parameters that are entirely θ1, while setting λ to 0 gives the parameters θ2. By sweeping the values of λ, we can examine a linear slice of the parameter space. Two models that converge to a similar minimum will have smooth parameter interpolations, whereas models that converge to different minima will likely have a non-convex interpolation, with a spike in error when λ is between 0 and 1.
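A minimal sketch of the linear interpolation described above; `test_error_fn` is a placeholder for a routine that loads a flattened parameter vector into the network and evaluates J.

```python
import numpy as np

def interpolation_curve(theta1, theta2, test_error_fn, num_points=21):
    """Evaluate J(lambda * theta1 + (1 - lambda) * theta2) for lambda swept over [0, 1]."""
    lambdas = np.linspace(0.0, 1.0, num_points)
    errors = [test_error_fn(lam * theta1 + (1.0 - lam) * theta2) for lam in lambdas]
    return lambdas, errors
```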
Figure 5 displays interpolations between the ï¬nal model of DenseNet-40 (sixth snapshot) and all intermediate snapshots. The left two plots show Snapshot Ensemble models trained with a cyclic learning rate, while the right two plots show NoCycle Snapshot models. λ = 0 represents a model which is entirely snapshot parameters, while λ = 1 represents a model which is entirely the param- eters of the ï¬nal model. From this ï¬gure, it is clear that there are differences between cyclic and
non-cyclic learning rate schedules. Firstly, all of the cyclic snapshots achieve roughly the same error as the final cyclical model, as the error is similar for λ = 0 and λ = 1. Additionally, it appears that most snapshots do not lie in the same minimum as the final model. Thus the snapshots are likely to misclassify different samples. Conversely, the first three snapshots achieve much higher error than the final model. This can be observed by the sharp minima around λ = 1, which suggests that mixing in any amount of the snapshot parameters will worsen performance. While the final two snapshots achieve low error, the figure suggests that they lie in the same minimum as the final model, and therefore likely add limited diversity to the ensemble.

Activation space. To further explore the diversity of models, we compute the pairwise correlation of softmax outputs for every pair of snapshots. Figure 6 displays the average correlation for both cyclic snapshots and non-cyclical snapshots. Firstly, there are large correlations between the last 3 snapshots of the non-cyclic training schedule (right). These snapshots are taken after dropping the learning rate, suggesting that each snapshot has converged to the same minimum. Though there is more diversity amongst the earlier snapshots, these snapshots have much higher error rates and are therefore not ideal for ensembling. Conversely, there is less correlation between all cyclic snapshots (left). Because all snapshots have similar accuracy (as can be seen in Figure 5), these differences in predictions can be exploited to create effective ensembles.
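One plausible way to compute the pairwise correlations just described, as a sketch (ours):

```python
import numpy as np

def pairwise_softmax_correlation(softmax_outputs):
    """Pearson correlation between flattened softmax outputs of every pair of snapshots.

    softmax_outputs: list of arrays, each of shape (num_test_samples, num_classes).
    Returns an (M, M) correlation matrix.
    """
    flat = np.stack([s.reshape(-1) for s in softmax_outputs])  # (M, N * C)
    return np.corrcoef(flat)
```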
# 5 DISCUSSION
We introduce Snapshot Ensembling, a simple method to obtain ensembles of neural networks with- out any additional training cost. Our method exploits the ability of SGD to converge to and escape from local minima as the learning rate is lowered, which allows the model to visit several weight assignments that lead to increasingly accurate predictions over the course of training. We harness this power with the cyclical learning rate schedule proposed by Loshchilov & Hutter (2016), saving model snapshots at each point of convergence. We show in several experiments that all snapshots are accurate, yet produce different predictions from one another, and therefore are well suited for test-time ensembles. Ensembles of these snapshots signiï¬cantly improve the state-of-the-art on CIFAR-10, CIFAR-100 and SVHN. Future work will explore combining Snapshot Ensembles with traditional ensembles. In particular, we will investigate how to balance growing an ensemble with new models (with random initializations) and reï¬ning existing models with further training cycles under a ï¬xed training budget.
# ACKNOWLEDGEMENTS
We thank Ilya Loshchilov and Frank Hutter for their insightful comments on the cyclic cosine-shaped learning rate. The authors are supported in part by grants III-1618134, III-1526012, and IIS-1149882 from the National Science Foundation, US Army Research Office award W911NF-14-1-0477, and the Bill and Melinda Gates Foundation.
# REFERENCES
Léon Bottou. Large-scale machine learning with stochastic gradient descent. In COMPSTAT. 2010.

Cristian Buciluǎ, Rich Caruana, and Alexandru Niculescu-Mizil. Model compression. In KDD, 2006.
Rich Caruana, Alexandru Niculescu-Mizil, Geoff Crew, and Alex Ksikes. Ensemble selection from libraries of models. In ICML, 2004.
Ronan Collobert, Koray Kavukcuoglu, and Clément Farabet. Torch7: A matlab-like environment for machine learning. In BigLearn, NIPS Workshop, 2011.

Yann N Dauphin, Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, Surya Ganguli, and Yoshua Bengio. Identifying and attacking the saddle point problem in high-dimensional non-convex optimization. In NIPS, 2014.

Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In CVPR, 2009.

John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121–2159, 2011.

Ian J Goodfellow, David Warde-Farley, Mehdi Mirza, Aaron Courville, and Yoshua Bengio. Maxout networks. In ICML, 2013.

Ian J Goodfellow, Oriol Vinyals, and Andrew M Saxe. Qualitatively characterizing neural network optimization problems. arXiv preprint arXiv:1412.6544, 2014.

Lars Kai Hansen and Peter Salamon. Neural network ensembles. IEEE Transactions on Pattern Analysis and Machine Intelligence, 12:993–1001, 1990.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016a.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. In ECCV, 2016b.

Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015.

Gao Huang, Zhuang Liu, and Kilian Q Weinberger. Densely connected convolutional networks. arXiv preprint arXiv:1608.06993, 2016a.

Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, and Kilian Weinberger. Deep networks with stochastic depth. In ECCV, 2016b.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, 2015.

Sébastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. On using very large target vocabulary for neural machine translation. arXiv preprint arXiv:1412.2007, 2014.

Kenji Kawaguchi. Deep learning without poor local minima. arXiv preprint arXiv:1605.07110, 2016.

Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping Tak Peter Tang. On large-batch training for deep learning: Generalization gap and sharp minima. arXiv preprint arXiv:1609.04836, 2016.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. 2009.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, 2012.

Anders Krogh, Jesper Vedelsby, et al. Neural network ensembles, cross validation, and active learning. In NIPS, volume 7, 1995.

David Krueger, Tegan Maharaj, János Kramár, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Hugo Larochelle, Aaron Courville, et al. Zoneout: Regularizing rnns by randomly preserving hidden activations. arXiv preprint arXiv:1606.01305, 2016.
Samuli Laine and Timo Aila. Temporal ensembling for semi-supervised learning. arXiv preprint arXiv:1610.02242, 2016.

Gustav Larsson, Michael Maire, and Gregory Shakhnarovich. Fractalnet: Ultra-deep neural networks without residuals. arXiv preprint arXiv:1605.07648, 2016.

Chen-Yu Lee, Saining Xie, Patrick Gallagher, Zhengyou Zhang, and Zhuowen Tu. Deeply-supervised nets. In AISTATS, 2015.

Min Lin, Qiang Chen, and Shuicheng Yan. Network in network. arXiv preprint arXiv:1312.4400, 2013.

Ilya Loshchilov and Frank Hutter. SGDR: Stochastic gradient descent with restarts. arXiv preprint arXiv:1608.03983, 2016.

Mohammad Moghimi, Mohammad Saberian, Jian Yang, Li-Jia Li, Nuno Vasconcelos, and Serge Belongie. Boosted convolutional neural networks. 2016.

Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y Ng. Reading digits in natural images with unsupervised feature learning. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning, 2011.

Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, and Yoshua Bengio. Fitnets: Hints for thin deep nets. arXiv preprint arXiv:1412.6550, 2014.

Rico Sennrich, Barry Haddow, and Alexandra Birch. Edinburgh neural machine translation systems for wmt 16. arXiv preprint arXiv:1606.02891, 2016.

Pierre Sermanet, Soumith Chintala, and Yann LeCun. Convolutional neural networks applied to house numbers digit classification. In ICPR, 2012.

Saurabh Singh, Derek Hoiem, and David Forsyth. Swapout: Learning an ensemble of deep architectures. arXiv preprint arXiv:1605.06465, 2016.

Leslie N. Smith. No more pesky learning rate guessing games. CoRR, abs/1506.01186, 2016. URL http://arxiv.org/abs/1506.01186.

Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, and Martin Riedmiller. Striving for simplicity: The all convolutional net. arXiv preprint arXiv:1412.6806, 2014.

Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929–1958, 2014.

Rupesh Kumar Srivastava, Klaus Greff, and Jürgen Schmidhuber. Highway networks. arXiv preprint arXiv:1505.00387, 2015.

A Swann and N Allinson. Fast committee learning: Preliminary results. Electronics Letters, 34(14):1408–1410, 1998.

Li Wan, Matthew Zeiler, Sixin Zhang, Yann L Cun, and Rob Fergus. Regularization of neural networks using dropconnect. In ICML, 2013.

Jingjing Xie, Bing Xu, and Zhang Chuang. Horizontal and vertical ensemble with deep representation for classification. arXiv preprint arXiv:1306.2759, 2013.

Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. arXiv preprint arXiv:1605.07146, 2016.
# SUPPLEMENTARY
# A. Single model and Snapshot Ensemble performance over time
In Figures 7-9, we compare the test error of Snapshot Ensembles with the error of individual model snapshots. The blue curve shows the test error of a single model snapshot using a cyclic cosine learning rate. The green curve shows the test error when ensembling model snapshots over time. (Note that, unlike Figure 3, we construct these ensembles beginning with the earliest snapshots.) As a reference, the red dashed line in each panel represents the test error of a single model trained for 300 epochs using a standard learning rate schedule. Without Snapshot Ensembles, in about half of the cases, the test error of the final model using a cyclic learning rate (the right-most point in the blue curve) is no better than using a standard learning rate schedule.

One can observe that under almost all settings, complete Snapshot Ensembles (the right-most points of the green curves) outperform the single model baselines. In many cases, ensembles of just 2 or 3 model snapshots are able to match the performance of the single model trained with a standard learning rate. Not surprisingly, the ensembles of model snapshots consistently outperform any of their members, yielding a smooth curve of test error over time.
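A minimal sketch of how such test-time ensembles can be formed, assuming a list `snapshots` ordered from earliest to latest: average the softmax outputs of the first m snapshots and take the arg max. This is an illustration rather than the authors' evaluation code.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def ensemble_error(snapshots, m, loader, device="cuda"):
    """Test error (%) of the ensemble formed from the first m snapshots."""
    wrong, total = 0, 0
    models = [s.eval().to(device) for s in snapshots[:m]]
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        probs = torch.stack([F.softmax(model(x), dim=1) for model in models]).mean(dim=0)
        wrong += (probs.argmax(dim=1) != y).sum().item()
        total += y.size(0)
    return 100.0 * wrong / total
```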
[Figure 7 plots omitted: test error (%) vs. number of snapshots for ResNet-110 on C10, C100, SVHN and Tiny ImageNet, and Wide-ResNet-32 on C10, each at α0 = 0.1 and α0 = 0.2; curves compare single model snapshots, Snapshot Ensembles, and a single model trained with a standard learning rate.]

Figure 7: Single model and Snapshot Ensemble performance over time (part 1).
[Figure 8 plots omitted: test error (%) vs. number of snapshots for Wide-ResNet-32 on C100, SVHN and Tiny ImageNet, and DenseNet-40 on C10 and C100, each at α0 = 0.1 and α0 = 0.2; curves compare single model snapshots, Snapshot Ensembles, and a single model trained with a standard learning rate.]

Figure 8: Single model and Snapshot Ensemble performance over time (part 2).
[Figure 9 plots omitted: test error (%) vs. number of snapshots for DenseNet-40 on SVHN and Tiny ImageNet, and DenseNet-100 on C10 and C100, each at α0 = 0.1 and α0 = 0.2; curves compare single model snapshots, Snapshot Ensembles, and a single model trained with a standard learning rate.]
Figure 9: Single model and Snapshot Ensemble performance over time (part 3).
| {
"id": "1503.02531"
} |
1704.00051 | Reading Wikipedia to Answer Open-Domain Questions | This paper proposes to tackle open-domain question answering using Wikipedia
as the unique knowledge source: the answer to any factoid question is a text
span in a Wikipedia article. This task of machine reading at scale combines the
challenges of document retrieval (finding the relevant articles) with that of
machine comprehension of text (identifying the answer spans from those
articles). Our approach combines a search component based on bigram hashing and
TF-IDF matching with a multi-layer recurrent neural network model trained to
detect answers in Wikipedia paragraphs. Our experiments on multiple existing QA
datasets indicate that (1) both modules are highly competitive with respect to
existing counterparts and (2) multitask learning using distant supervision on
their combination is an effective complete system on this challenging task. | http://arxiv.org/pdf/1704.00051 | Danqi Chen, Adam Fisch, Jason Weston, Antoine Bordes | cs.CL | ACL2017, 10 pages | null | cs.CL | 20170331 | 20170428 |
# Reading Wikipedia to Answer Open-Domain Questions
# Danqi Chen∗ Computer Science Stanford University Stanford, CA 94305, USA danqi@cs.stanford.edu
Adam Fisch, Jason Weston & Antoine Bordes Facebook AI Research 770 Broadway New York, NY 10003, USA {afisch,jase,abordes}@fb.com
# Abstract
This paper proposes to tackle open-domain question answering using Wikipedia as the unique knowledge source: the answer to any factoid question is a text span in a Wikipedia article. This task of machine reading at scale combines the challenges of document retrieval (finding the relevant articles) with that of machine comprehension of text (identifying the answer spans from those articles). Our approach combines a search component based on bigram hashing and TF-IDF matching with a multi-layer recurrent neural network model trained to detect answers in Wikipedia paragraphs. Our experiments on multiple existing QA datasets indicate that (1) both modules are highly competitive with respect to existing counterparts and (2) multitask learning using distant supervision on their combination is an effective complete system on this challenging task.
# Introduction
This paper considers the problem of answering factoid questions in an open-domain setting using Wikipedia as the unique knowledge source, such as one does when looking for answers in an encyclopedia. Wikipedia is a constantly evolving source of detailed information that could facilitate intelligent machines, if they are able to leverage its power. Unlike knowledge bases (KBs) such as Freebase (Bollacker et al., 2008) or DBPedia (Auer et al., 2007), which are easier for computers to process but too sparsely populated for open-domain question answering (Miller et al.,

∗ Most of this work was done while DC was with Facebook AI Research.

2016), Wikipedia contains up-to-date knowledge that humans are interested in. It is designed, however, for humans, not machines, to read.

Using Wikipedia articles as the knowledge source causes the task of question answering (QA) to combine the challenges of both large-scale open-domain QA and of machine comprehension of text. In order to answer any question, one must first retrieve the few relevant articles among more than 5 million items, and then scan them carefully to identify the answer. We term this setting machine reading at scale (MRS). Our work treats Wikipedia as a collection of articles and does not rely on its internal graph structure. As a result, our approach is generic and could be switched to other collections of documents, books, or even daily updated newspapers.

Large-scale QA systems like IBM's DeepQA (Ferrucci et al., 2010) rely on multiple sources to answer: besides Wikipedia, it is also paired with KBs, dictionaries, and even news articles, books, etc. As a result, such systems heavily rely on information redundancy among the sources to answer correctly. Having a single knowledge source forces the model to be very precise while searching for an answer as the evidence might appear only once. This challenge thus encourages research in the ability of a machine to read, a key motivation for the machine comprehension subfield and the creation of datasets such as SQuAD (Rajpurkar et al., 2016), CNN/Daily Mail (Hermann et al., 2015) and CBT (Hill et al., 2016).
Those machine comprehension resources typically assume that a short piece of relevant text is already identified and given to the model, which is not realistic for building an open-domain QA system. In sharp contrast, methods that use KBs or information retrieval over documents have to employ search as an integral part of the solution. Instead MRS is focused on simultaneously maintaining the challenge of machine comprehension, which requires the deep understanding of text, while keeping the realistic constraint of searching over a large open resource.

In this paper, we show how multiple existing QA datasets can be used to evaluate MRS by requiring an open-domain system to perform well on all of them at once. We develop DrQA, a strong system for question answering from Wikipedia composed of: (1) Document Retriever, a module using bigram hashing and TF-IDF matching designed to, given a question, efficiently return a subset of relevant articles and (2) Document Reader, a multi-layer recurrent neural network machine comprehension model trained to detect answer spans in those few returned documents. Figure 1 gives an illustration of DrQA.

Our experiments show that Document Retriever outperforms the built-in Wikipedia search engine and that Document Reader reaches state-of-the-art results on the very competitive SQuAD benchmark (Rajpurkar et al., 2016). Finally, our full system is evaluated using multiple benchmarks. In particular, we show that performance is improved across all datasets through the use of multitask learning and distant supervision compared to single task training.
# 2 Related Work
Open-domain QA was originally defined as finding answers in collections of unstructured documents, following the setting of the annual TREC competitions1. With the development of KBs, many recent innovations have occurred in the context of QA from KBs with the creation of resources like WebQuestions (Berant et al., 2013) and SimpleQuestions (Bordes et al., 2015) based on the Freebase KB (Bollacker et al., 2008), or on automatically extracted KBs, e.g., OpenIE triples and NELL (Fader et al., 2014). However, KBs have inherent limitations (incompleteness, fixed schemas) that motivated researchers to return to the original setting of answering from raw text.

A second motivation to cast a fresh look at this problem is that of machine comprehension of text, i.e., answering questions after reading a short text or story. That subfield has made considerable progress recently thanks to new deep learning architectures like attention-based and memory-
# 1http://trec.nist.gov/data/qamain.html
augmented neural networks (Bahdanau et al., 2015; Weston et al., 2015; Graves et al., 2014) and release of new training and evaluation datasets like QuizBowl (Iyyer et al., 2014), CNN/Daily Mail based on news articles (Hermann et al., 2015), CBT based on children books (Hill et al., 2016), or SQuAD (Rajpurkar et al., 2016) and WikiReading (Hewlett et al., 2016), both based on Wikipedia. An objective of this paper is to test how such new methods can perform in an open-domain QA framework.
QA using Wikipedia as a resource has been explored previously. Ryu et al. (2014) perform open-domain QA using a Wikipedia-based knowledge model. They combine article content with multiple other answer matching modules based on different types of semi-structured knowledge such as infoboxes, article structure, category structure, and definitions. Similarly, Ahn et al. (2004) also combine Wikipedia as a text resource with other resources, in this case with information retrieval over other documents. Buscaldi and Rosso (2006) also mine knowledge from Wikipedia for QA. Instead of using it as a resource for seeking answers to questions, they focus on validating answers returned by their QA system, and use Wikipedia categories for determining a set of patterns that should fit with the expected answer. In our work, we consider the comprehension of text only, and use Wikipedia text documents as the sole resource in order to emphasize the task of machine reading at scale, as described in the introduction.
There are a number of highly developed full pipeline QA approaches using either the Web, as does QuASE (Sun et al., 2015), or Wikipedia as a resource, as do Microsoft's AskMSR (Brill et al., 2002), IBM's DeepQA (Ferrucci et al., 2010) and YodaQA (Baudiš, 2015; Baudiš and Šedivý, 2015), the latter of which is open source and hence reproducible for comparison purposes. AskMSR is a search-engine based QA system that relies on "data redundancy rather than sophisticated linguistic analyses of either questions or candidate answers", i.e., it does not focus on machine comprehension, as we do. DeepQA is a very sophisticated system that relies on both unstructured information including text documents as well as structured data such as KBs, databases and ontologies to generate candidate answers or vote over evidence. YodaQA is an open source system modeled after DeepQA, similarly combining websites,
[Figure 1 diagram omitted: an input question (e.g., "How many of Warsaw's inhabitants spoke Polish in 1933?") from an open-domain QA dataset (SQuAD, TREC, WebQuestions, WikiMovies) is passed to the Document Retriever, which selects Wikipedia articles, and then to the Document Reader, which extracts the answer span (833,500).]
Figure 1: An overview of our question answering system DrQA.
information extraction, databases and Wikipedia in particular. Our comprehension task is made more challenging by only using a single resource. Comparing against these methods provides a useful datapoint for an "upper bound" benchmark on performance.
Multitask learning (Caruana, 1998) and task transfer have a rich history in machine learning (e.g., using ImageNet in the computer vision community (Huh et al., 2016)), as well as in NLP in particular (Collobert and Weston, 2008). Several works have attempted to combine multiple QA training datasets via multitask learning to (i) achieve improvement across the datasets via task transfer; and (ii) provide a single general system capable of asking different kinds of questions due to the inevitably different data distributions across the source datasets. Fader et al. (2014) used WebQuestions, TREC and WikiAnswers with four KBs as knowledge sources and reported improvement on the latter two datasets through multitask learning. Bordes et al. (2015) combined WebQuestions and SimpleQuestions using distant supervision with Freebase as the KB to give slight improvements on both datasets, although poor performance was reported when training on only one dataset and testing on the other, showing that task transfer is indeed a challenging subject; see also (Kadlec et al., 2016) for a similar conclusion. Our work follows similar themes, but in the setting of having to retrieve and then read text documents,
rather than using a KB, with positive results.
# 3 Our System: DrQA
In the following we describe our system DrQA for MRS which consists of two components: (1) the Document Retriever module for finding relevant articles and (2) a machine comprehension model, Document Reader, for extracting answers from a single document or a small collection of documents.
# 3.1 Document Retriever
Following classical QA systems, we use an efficient (non-machine learning) document retrieval system to first narrow our search space and focus on reading only articles that are likely to be relevant. A simple inverted index lookup followed by term vector model scoring performs quite well on this task for many question types, compared to the built-in ElasticSearch based Wikipedia Search API (Gormley and Tong, 2015). Articles and questions are compared as TF-IDF weighted bag-of-word vectors. We further improve our system by taking local word order into account with n-gram features. Our best performing system uses bigram counts while preserving speed and memory efficiency by using the hashing of (Weinberger et al., 2009) to map the bigrams to 2^24 bins with an unsigned murmur3 hash.
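A rough sketch of such a hashed unigram/bigram TF-IDF matcher is shown below. It is not the released DrQA retriever: the tokenization, the exact TF-IDF weighting, and the use of Python's built-in hash() as a stand-in for the unsigned murmur3 hash are all simplifications, and names like NUM_BINS are ours.

```python
from collections import Counter
import math

NUM_BINS = 2 ** 24   # the paper hashes bigrams into 2^24 bins

def hashed_ngram_counts(tokens):
    """Unigram + bigram counts, with each n-gram hashed into a fixed number of bins."""
    grams = list(tokens) + [" ".join(pair) for pair in zip(tokens, tokens[1:])]
    return Counter(hash(g) % NUM_BINS for g in grams)   # hash() stands in for murmur3

def tfidf_match(q_counts, d_counts, doc_freq, num_docs):
    """Simplified TF-IDF weighted dot product between hashed question and document vectors."""
    score = 0.0
    for h, q_tf in q_counts.items():
        if h in d_counts:
            df = doc_freq.get(h, 0)
            idf = math.log((num_docs - df + 0.5) / (df + 0.5))
            score += q_tf * d_counts[h] * idf * idf
    return score

def retrieve(question_tokens, docs, doc_freq, num_docs, k=5):
    """Return the names of the k best-scoring documents (DrQA returns 5 articles)."""
    q_counts = hashed_ngram_counts(question_tokens)
    scored = sorted(
        ((tfidf_match(q_counts, hashed_ngram_counts(toks), doc_freq, num_docs), name)
         for name, toks in docs.items()),
        reverse=True)
    return [name for _, name in scored[:k]]
```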
We use Document Retriever as the first part of our full model, by setting it to return 5 Wikipedia
articles given any question. Those articles are then processed by Document Reader.
# 3.2 Document Reader
Our Document Reader model is inspired by the recent success of neural network models on machine comprehension tasks, in a similar spirit to the AttentiveReader described in (Hermann et al., 2015; Chen et al., 2016).

Given a question q consisting of l tokens {q1, . . . , ql} and a document or a small set of documents of n paragraphs where a single paragraph p consists of m tokens {p1, . . . , pm}, we develop an RNN model that we apply to each paragraph in turn and then finally aggregate the predicted answers. Our method works as follows:

Paragraph encoding We first represent all tokens pi in a paragraph p as a sequence of feature vectors p̃i ∈ R^d and pass them as the input to a recurrent neural network and thus obtain:

{p1, . . . , pm} = RNN({p̃1, . . . , p̃m}),

where pi is expected to encode useful context information around token pi. Specifically, we choose to use a multi-layer bidirectional long short-term memory network (LSTM), and take pi as the concatenation of each layer's hidden units in the end.

The feature vector p̃i is comprised of the following parts:
⢠Word embeddings: femb(pi) = E(pi). We use the 300-dimensional Glove word em- beddings trained from 840B Web crawl data (Pennington et al., 2014). We keep most of the pre-trained word embeddings ï¬xed and only ï¬ne-tune the 1000 most frequent ques- tion words because the representations of some key words such as what, how, which, many could be crucial for QA systems.
⢠Exact match: fexact match(pi) = I(pi â q). We use three simple binary features, indicat- ing whether pi can be exactly matched to one question word in q, either in its original, low- ercase or lemma form. These simple features turn out to be extremely helpful, as we will show in Section 5.
⢠Token features:
# ftoken(pi) = (POS(pi), NER(pi), TF(pi)).
We also add a few manual features which re- ï¬ect some properties of token pi in its con- text, which include its part-of-speech (POS) and named entity recognition (NER) tags and its (normalized) term frequency (TF).
• Aligned question embedding: Following (Lee et al., 2016) and other recent works, the last part we incorporate is an aligned question embedding falign(pi) = Σj ai,j E(qj), where the attention score ai,j captures the similarity between pi and each question word qj. Specifically, ai,j is computed by the dot products between nonlinear mappings of word embeddings:

ai,j = exp(α(E(pi)) · α(E(qj))) / Σj' exp(α(E(pi)) · α(E(qj'))),

and α(·) is a single dense layer with ReLU nonlinearity. Compared to the exact match features, these features add soft alignments between similar but non-identical words (e.g., car and vehicle).
Question encoding The question encoding is simpler, as we only apply another recurrent neural network on top of the word embeddings of qi and combine the resulting hidden units into one single vector: {q1, . . . , ql} → q. We compute q = Σj bj qj where bj encodes the importance of each question word:

bj = exp(w · qj) / Σj' exp(w · qj'),
and w is a weight vector to learn.
Prediction At the paragraph level, the goal is to predict the span of tokens that is most likely the correct answer. We take the paragraph vectors {p1, . . . , pm} and the question vector q as input, and simply train two classifiers independently for predicting the two ends of the span. Concretely, we use a bilinear term to capture the similarity between pi and q and compute the probabilities of each token being start and end as:

Pstart(i) ∝ exp(pi Ws q),    Pend(i) ∝ exp(pi We q).

During prediction, we choose the best span from token i to token i' such that i ≤ i' ≤ i + 15 and Pstart(i) × Pend(i') is maximized. To make scores compatible across paragraphs in one or several retrieved documents, we use the unnormalized exponential and take argmax over all considered paragraph spans for our final prediction.
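For concreteness, span decoding under this constraint can be sketched as follows for a single paragraph; variable names are illustrative and the batching/aggregation over paragraphs is omitted.

```python
import torch

def decode_span(start_scores, end_scores, max_len=15):
    """start_scores, end_scores: 1-D tensors of length m holding the bilinear scores.

    Returns (best_start, best_end, best_score); the score is the product of the
    unnormalized exponentials, so it stays comparable across paragraphs."""
    start_exp = torch.exp(start_scores)
    end_exp = torch.exp(end_scores)
    best = (0, 0, float("-inf"))
    m = start_scores.size(0)
    for i in range(m):
        j_max = min(i + max_len, m - 1)
        scores = start_exp[i] * end_exp[i:j_max + 1]      # candidate ends i .. i + max_len
        j_rel = int(torch.argmax(scores))
        if scores[j_rel].item() > best[2]:
            best = (i, i + j_rel, scores[j_rel].item())
    return best
```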
# 4 Data
Our work relies on three types of data: (1) Wikipedia that serves as our knowledge source for finding answers, (2) the SQuAD dataset which is our main resource to train Document Reader and (3) three more QA datasets (CuratedTREC, WebQuestions and WikiMovies) that in addition to SQuAD, are used to test the open-domain QA abilities of our full system, and to evaluate the ability of our model to learn from multitask learning and distant supervision. Statistics of the datasets are given in Table 2.
# 4.1 Wikipedia (Knowledge Source)

We use the 2016-12-21 dump2 of English Wikipedia for all of our full-scale experiments as the knowledge source used to answer questions. For each page, only the plain text is extracted and all structured data sections such as lists and figures are stripped.3 After discarding internal disambiguation, list, index, and outline pages, we retain 5,075,182 articles consisting of 9,008,962 unique uncased token types.
# 4.2 SQuAD
The Stanford Question Answering Dataset (SQuAD) (Rajpurkar et al., 2016) is a dataset for machine comprehension based on Wikipedia. The dataset contains 87k examples for training and 10k for development, with a large hidden test set which can only be accessed by the SQuAD creators. Each example is composed of a paragraph extracted from a Wikipedia article and an associated human-generated question. The answer is always a span from this paragraph and a model is given credit if its predicted answer matches it. Two evaluation metrics are used: exact string match (EM) and F1 score, which measures the weighted average of precision and recall at the token level.
In the following, we use SQuAD for training and evaluating our Document Reader for the standard machine comprehension task given the rel-

2 https://dumps.wikimedia.org/enwiki/latest

3 We use the WikiExtractor script: https://github.com/attardi/wikiextractor.

evant paragraph as defined in (Rajpurkar et al., 2016). For the task of evaluating open-domain question answering over Wikipedia, we use the SQuAD development set QA pairs only, and we ask systems to uncover the correct answer spans without having access to the associated paragraphs. That is, a model is required to answer a question given the whole of Wikipedia as a resource; it is not given the relevant paragraph as in the standard SQuAD setting.
# 4.3 Open-domain QA Evaluation Resources
SQuAD is one of the largest general purpose QA datasets currently available. SQuAD questions have been collected via a process involving showing a paragraph to each human annotator and asking them to write a question. As a result, their distribution is quite specific. We hence propose to train and evaluate our system on other datasets developed for open-domain QA that have been constructed in different ways (not necessarily in the context of answering from Wikipedia).
CuratedTREC This dataset is based on the benchmarks from the TREC QA tasks that have been curated by Baudiš and Šedivý (2015). We use the large version, which contains a total of 2,180 questions extracted from the datasets from TREC 1999, 2000, 2001 and 2002.4
WebQuestions Introduced in (Berant et al., 2013), this dataset is built to answer questions from the Freebase KB. It was created by crawling questions through the Google Suggest API, and then obtaining answers using Amazon Mechanical Turk. We convert each answer to text by using entity names so that the dataset does not reference Freebase IDs and is purely made of plain text question-answer pairs.
WikiMovies This dataset, introduced in (Miller et al., 2016), contains 96k question-answer pairs in the domain of movies. Originally created from the OMDb and MovieLens databases, the examples are built such that they can also be answered by using a subset of Wikipedia as the knowledge source (the title and the first section of articles from the movie domain).
4 This dataset is available at https://github.com/brmson/dataset-factoid-curated.
Dataset: SQuAD. Q: How many provinces did the Ottoman empire contain in the 17th century? A: 32. Article: Ottoman Empire. Paragraph: ... At the beginning of the 17th century the empire contained 32 provinces and numerous vassal states. Some of these were later absorbed into the Ottoman Empire, while others were granted various types of autonomy during the course of centuries.

Dataset: CuratedTREC. Q: What U.S. state's motto is "Live free or Die"? A: New Hampshire. Article: Live Free or Die. Paragraph: "Live Free or Die" is the official motto of the U.S. state of New Hampshire, adopted by the state in 1945. It is possibly the best-known of all state mottos, partly because it conveys an assertive independence historically found in American political philosophy and partly because of its contrast to the milder sentiments found in other state mottos.

Dataset: WebQuestions. Q: What part of the atom did Chadwick discover? A: neutron. Article: Atom. Paragraph: ... The atomic mass of these isotopes varied by integer amounts, called the whole number rule. The explanation for these different isotopes awaited the discovery of the neutron, an uncharged particle with a mass similar to the proton, by the physicist James Chadwick in 1932. ...

Dataset: WikiMovies. Q: Who wrote the film Gigli? A: Martin Brest. Article: Gigli. Paragraph: Gigli is a 2003 American romantic comedy film written and directed by Martin Brest and starring Ben Affleck, Jennifer Lopez, Justin Bartha, Al Pacino, Christopher Walken, and Lainie Kazan.

Table 1: Example training data from each QA dataset. In each case we show an associated paragraph where distant supervision (DS) correctly identified the answer within it, which is highlighted.
| Dataset | Train (Plain) | Train (DS) | Test |
|---|---|---|---|
| SQuAD | 87,599 | 71,231 | 10,570† |
| CuratedTREC | 1,486∗ | 3,464 | 694 |
| WebQuestions | 3,778∗ | 4,602 | 2,032 |
| WikiMovies | 96,185∗ | 36,301 | 9,952 |

Table 2: Number of questions for each dataset used in this paper. DS: distantly supervised training data. ∗: These training sets are not used as is because no paragraph is associated with each question. †: Corresponds to SQuAD development set.

| Dataset | Wiki Search | Doc. Retriever (plain) | Doc. Retriever (+bigrams) |
|---|---|---|---|
| SQuAD | 62.7 | 76.1 | 77.8 |
| CuratedTREC | 81.0 | 85.2 | 86.0 |
| WebQuestions | 73.7 | 75.5 | 74.4 |
| WikiMovies | 61.7 | 54.4 | 70.3 |

Table 3: Document retrieval results. % of questions for which the answer segment appears in one of the top 5 pages returned by the method.
# 4.4 Distantly Supervised Data
All the QA datasets presented above contain training portions, but CuratedTREC, WebQuestions and WikiMovies only contain question-answer pairs, and not an associated document or paragraph as in SQuAD, and hence cannot be used for training Document Reader directly. Following previous work on distant supervision (DS) for relation extraction (Mintz et al., 2009), we use a procedure to automatically associate paragraphs to such training examples, and then add these examples to our training set.
We use the following process for each question-answer pair to build our training set. First, we
run Document Retriever on the question to retrieve the top 5 Wikipedia articles. All paragraphs from those articles without an exact match of the known answer are directly discarded. All paragraphs shorter than 25 or longer than 1500 characters are also filtered out. If any named entities are detected in the question, we remove any paragraph that does not contain them at all. For every remaining paragraph in each retrieved page, we score all positions that match an answer using unigram and bigram overlap between the question and a 20 token window, keeping up to the top 5 paragraphs with the highest overlaps. If there is no paragraph with non-zero overlap, the example is discarded; otherwise we add each found pair to our DS training dataset. Some examples are shown in Table 1 and data statistics are given in Table 2.
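A simplified sketch of this distant-supervision procedure is given below. The article, paragraph, and question objects and their attributes (e.g., paragraph.answer_positions) are hypothetical stand-ins for whatever data structures one uses; the thresholds follow the description in the text.

```python
def ngram_overlap(question_tokens, window_tokens):
    """Unigram + bigram overlap between the question and a token window."""
    def grams(toks):
        return set(toks) | set(zip(toks, toks[1:]))
    return len(grams(question_tokens) & grams(window_tokens))

def build_ds_examples(question_tokens, answer, question_entities, retrieved_articles):
    """retrieved_articles: top-5 articles from Document Retriever; each article holds
    paragraph objects with .text, .tokens and .answer_positions(answer) (hypothetical API)."""
    candidates = []
    for article in retrieved_articles:
        for paragraph in article.paragraphs:
            if answer not in paragraph.text:                    # exact answer match required
                continue
            if not (25 <= len(paragraph.text) <= 1500):         # character-length filter
                continue
            if question_entities and not any(e in paragraph.text for e in question_entities):
                continue
            best = 0
            for pos in paragraph.answer_positions(answer):      # score each answer position
                window = paragraph.tokens[max(0, pos - 10):pos + 10]   # ~20-token window
                best = max(best, ngram_overlap(question_tokens, window))
            if best > 0:
                candidates.append((best, paragraph))
    candidates.sort(key=lambda c: c[0], reverse=True)
    return [(p, answer) for _, p in candidates[:5]]             # keep up to 5 paragraphs
```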
Note that we can also generate additional DS data for SQuAD by trying to find mentions of the answers not just in the paragraph provided, but also from other pages or the same page that the given paragraph was in. We observe that around half of the DS examples come from pages outside of the articles used in SQuAD.
# 5 Experiments
This section first presents evaluations of our Document Retriever and Document Reader modules separately, and then describes tests of their combination, DrQA, for open-domain QA on the full Wikipedia.
# 5.1 Finding Relevant Articles
We first examine the performance of our Document Retriever module on all the QA datasets. Table 3 compares the performance of the two approaches described in Section 3.1 with that of the Wikipedia Search Engine5 for the task of finding articles that contain the answer given a question. Specifically, we compute the ratio of questions for which the text span of any of their associated answers appear in at least one of the top 5 relevant pages returned by each system. Results on all datasets indicate that our simple approach outperforms Wikipedia Search, especially with bigram hashing. We also compare doing retrieval with Okapi BM25 or by using cosine distance in the word embeddings space (by encoding questions and articles as bag-of-embeddings), both of which we find performed worse.
# 5.2 Reader Evaluation on SQuAD
Next we evaluate our Document Reader component on the standard SQuAD evaluation (Rajpurkar et al., 2016).

Implementation details We use 3-layer bidirectional LSTMs with h = 128 hidden units for both paragraph and question encoding. We apply the Stanford CoreNLP toolkit (Manning et al., 2014) for tokenization and also generating lemma, part-of-speech, and named entity tags.

5 We use the Wikipedia Search API https://www.mediawiki.org/wiki/API:Search.

Lastly, all the training examples are sorted by the length of paragraph and divided into mini-batches of 32 examples each. We use Adamax for optimization as described in (Kingma and Ba, 2014). Dropout with p = 0.3 is applied to word embeddings and all the hidden units of LSTMs.
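As a rough illustration of these hyper-parameters, a paragraph encoder could be configured as below. This sketch returns only the top LSTM layer's outputs (the paper concatenates the hidden units of all layers) and applies dropout between layers rather than to every hidden unit, so it only approximates the described setup; the class name is ours.

```python
import torch.nn as nn

class ParagraphEncoder(nn.Module):
    """3-layer bidirectional LSTM over per-token feature vectors."""
    def __init__(self, input_dim, hidden_size=128, num_layers=3, dropout=0.3):
        super().__init__()
        self.drop = nn.Dropout(dropout)              # dropout on the input features
        self.rnn = nn.LSTM(input_dim, hidden_size, num_layers,
                           bidirectional=True, batch_first=True, dropout=dropout)

    def forward(self, feats):                        # feats: (batch, m, input_dim)
        out, _ = self.rnn(self.drop(feats))          # out: (batch, m, 2 * hidden_size)
        return out
```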
Result and analysis Table 4 presents our evaluation results on both development and test sets. SQuAD has been a very competitive machine comprehension benchmark since its creation and we only list the best-performing systems in the table. Our system (single model) can achieve 70.0% exact match and 79.0% F1 scores on the test set, which surpasses all the published results and can match the top performance on the SQuAD leaderboard at the time of writing. Additionally, we think that our model is conceptually simpler than most of the existing systems. We conducted an ablation analysis on the feature vector of paragraph tokens. As shown in Table 5 all the features contribute to the performance of our final system. Without the aligned question embedding feature (only word embedding and a few manual features), our system is still able to achieve F1 over 77%. More interestingly, if we remove both faligned and fexact match, the performance drops dramatically, so we conclude that both features play a similar but complementary role in the feature representation related to the paraphrased nature of a question vs. the context around an answer.
# 5.3 Full Wikipedia Question Answering
Finally, we assess the performance of our full system DrQA for answering open-domain questions using the four datasets introduced in Section 4. We compare three versions of DrQA which evaluate the impact of using distant supervision and multitask learning across the training sources provided to Document Reader (Document Retriever remains the same for each case):
⢠SQuAD: A single Document Reader model is trained on the SQuAD training set only and used on all evaluation sets.
⢠Fine-tune (DS): A Document Reader model is pre-trained on SQuAD and then ï¬ne-tuned for each dataset independently using its dis- tant supervision (DS) training set.
⢠Multitask (DS): A single Document Reader model is jointly trained on the SQuAD train- ing set and all the DS sources.
For the full Wikipedia setting we use a stream- lined model that does not use the CoreNLP parsed ftoken features or lemmas for fexact match. We
| Method | Dev EM | Dev F1 | Test EM | Test F1 |
|---|---|---|---|---|
| Dynamic Coattention Networks (Xiong et al., 2016) | 65.4 | 75.6 | 66.2 | 75.9 |
| Multi-Perspective Matching (Wang et al., 2016)† | 66.1 | 75.8 | 65.5 | 75.1 |
| BiDAF (Seo et al., 2016) | 67.7 | 77.3 | 68.0 | 77.3 |
| R-net† | n/a | n/a | 71.3 | 79.7 |
| DrQA (Our model, Document Reader Only) | 69.5 | 78.8 | 70.0 | 79.0 |

Table 4: Evaluation results on the SQuAD dataset (single model only). †: Test results reflect the SQuAD leaderboard (https://stanford-qa.com) as of Feb 6, 2017.
| Features | F1 |
|---|---|
| Full | 78.8 |
| No ftoken | 78.0 (-0.8) |
| No fexact match | 77.3 (-1.5) |
| No faligned | 77.3 (-1.5) |
| No faligned and fexact match | 59.4 (-19.4) |

Table 5: Feature ablation analysis of the paragraph representations of our Document Reader. Results are reported on the SQuAD development set.
find that while these help for more exact paragraph reading in SQuAD, they don't improve results in the full setting. Additionally, WebQuestions and WikiMovies provide a list of candidate answers (e.g., 1.6 million Freebase entity strings for WebQuestions) and we restrict the answer span to be in this list during prediction.
Results Table 6 presents the results. Despite the difficulty of the task compared to machine comprehension (where you are given the right paragraph) and unconstrained QA (using redundant resources), DrQA still provides reasonable performance across all four datasets.
We compare to an unconstrained QA system using redundant resources (not just Wikipedia), YodaQA (Baudiš, 2015), giving results which were previously reported on CuratedTREC and WebQuestions. Despite the increased difficulty of our task, it is reassuring that our performance is not too far behind on CuratedTREC (31.3 vs. 25.4). The gap is slightly bigger on WebQuestions, likely because this dataset was created from the specific structure of Freebase which YodaQA uses directly. DrQA's performance on SQuAD compared to its Document Reader component on machine comprehension in Table 4 shows a large drop (from 69.5 to 27.1) as we now are given Wikipedia to read, not a single paragraph. Given the correct document (but not the paragraph) we can achieve 49.4, indicating many false positives come from highly topical sentences. This is despite the fact that the Document Retriever works relatively well (77.8% of the time retrieving the answer, see Table 3). It is worth noting that a large part of the drop comes from the nature of the SQuAD questions. They were written with a specific paragraph in mind, thus their language can be ambiguous when the context is removed. Additional resources other than SQuAD, specifically designed for MRS, might be needed to go further.
We are interested in a single, full system that can answer any question using Wikipedia. The single model trained only on SQuAD is outperformed on all four of the datasets by the multitask model that uses distant supervision. However performance when training on SQuAD alone is not far behind, indicating that task transfer is occurring. The majority of the improvement from SQuAD to Multitask (DS) however is likely not from task transfer as fine-tuning on each dataset alone using DS also gives improvements, showing that it is the introduction of extra data in the same domain that helps. Nevertheless, the best single model that we can find is our overall goal, and that is the Multitask (DS) system.
# 6 Conclusion
We studied the task of machine reading at scale, by using Wikipedia as the unique knowledge source for open-domain QA. Our results indicate that MRS is a key challenging task for researchers to focus on. Machine comprehension systems alone cannot solve the overall task. Our method integrates search, distant supervision, and multitask learning to provide an effective complete system. Evaluating the individual components as well as the full system across multiple benchmarks showed the efficacy of our approach.
| Dataset | YodaQA | DrQA (SQuAD) | DrQA (+Fine-tune (DS)) | DrQA (+Multitask (DS)) |
|---|---|---|---|---|
| SQuAD (All Wikipedia) | n/a | 27.1 | 28.4 | 29.8 |
| CuratedTREC | 31.3 | 19.7 | 25.7 | 25.4 |
| WebQuestions | 39.8 | 11.8 | 19.5 | 20.7 |
| WikiMovies | n/a | 24.5 | 34.3 | 36.5 |

Table 6: Full Wikipedia results. Top-1 exact-match accuracy (in %, using SQuAD eval script). +Fine-tune (DS): Document Reader models trained on SQuAD and fine-tuned on each DS training set independently. +Multitask (DS): Document Reader single model trained on SQuAD and all the distant supervision (DS) training sets jointly. YodaQA results are extracted from https://github.com/brmson/yodaqa/wiki/Benchmarks and use additional resources such as Freebase and DBpedia, see Section 2.
Future work should aim to improve over our DrQA system. Two obvious angles of attack are: (i) incorporate the fact that Document Reader aggregates over multiple paragraphs and documents directly in the training, as it currently trains on paragraphs independently; and (ii) perform end-to-end training across the Document Retriever and Document Reader pipeline, rather than independent systems.
# Acknowledgments
The authors thank Pranav Rajpurkar for testing Document Reader on the test set of SQuAD.
# References
Antoine Bordes, Nicolas Usunier, Sumit Chopra, and Jason Weston. 2015. Large-scale simple question answering with memory networks. arXiv preprint arXiv:1506.02075 .
Eric Brill, Susan Dumais, and Michele Banko. 2002. An analysis of the AskMSR question-answering system. In Empirical Methods in Natural Language Processing (EMNLP). pages 257–264.

David Ahn, Valentin Jijkoun, Gilad Mishne, Karin Müller, Maarten de Rijke, and Stefan Schlobach. 2004. Using wikipedia at the trec qa track. In Proceedings of TREC 2004.

Davide Buscaldi and Paolo Rosso. 2006. Mining knowledge from Wikipedia for the question answering task. In International Conference on Language Resources and Evaluation (LREC). pages 727–730.

Sören Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak, and Zachary Ives. 2007. Dbpedia: A nucleus for a web of open data. In The semantic web, Springer, pages 722–735.

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In International Conference on Learning Representations (ICLR).

Petr Baudiš. 2015. YodaQA: a modular question answering system pipeline. In POSTER 2015 - 19th International Student Conference on Electrical Engineering. pages 1156–1165.

Petr Baudiš and Jan Šedivý. 2015. Modeling of the question answering task in the YodaQA system. In International Conference of the Cross-Language Evaluation Forum for European Languages. Springer, pages 222–228.

Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on freebase from question-answer pairs. In Empirical Methods in Natural Language Processing (EMNLP). pages 1533–1544.

Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD international conference on Management of data. ACM, pages 1247–1250.

Rich Caruana. 1998. Multitask learning. In Learning to learn, Springer, pages 95–133.

Danqi Chen, Jason Bolton, and Christopher D Manning. 2016. A thorough examination of the CNN/Daily Mail reading comprehension task. In Association for Computational Linguistics (ACL).

Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: deep neural networks with multitask learning. In International Conference on Machine Learning (ICML).

Anthony Fader, Luke Zettlemoyer, and Oren Etzioni. 2014. Open question answering over curated and extracted knowledge bases. In ACM SIGKDD international conference on Knowledge discovery and data mining. pages 1156–1165.

David Ferrucci, Eric Brown, Jennifer Chu-Carroll, James Fan, David Gondek, Aditya A Kalyanpur, Adam Lally, J William Murdock, Eric Nyberg, John Prager, et al. 2010. Building Watson: An overview of the DeepQA project. AI magazine 31(3):59–79.

Clinton Gormley and Zachary Tong. 2015. Elasticsearch: The Definitive Guide. O'Reilly Media, Inc.
Alex Graves, Greg Wayne, and Ivo Danihelka. arXiv preprint 2014. Neural turing machines. arXiv:1410.5401 .
Karl Moritz Hermann, Tomáš Kočiský, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems (NIPS).

Daniel Hewlett, Alexandre Lacoste, Llion Jones, Illia Polosukhin, Andrew Fandrianto, Jay Han, Matthew Kelcey, and David Berthelot. 2016. Wikireading: A novel large-scale language understanding task over Wikipedia. In Association for Computational Linguistics (ACL). pages 1535–1545.

Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. 2016. The Goldilocks Principle: Reading children's books with explicit memory representations. In International Conference on Learning Representations (ICLR).
Minyoung Huh, Pulkit Agrawal, and Alexei A Efros. 2016. What makes ImageNet good for transfer learning? arXiv preprint arXiv:1608.08614 .
Mohit Iyyer, Jordan L Boyd-Graber, Leonardo Max Batista Claudino, Richard Socher, and Hal Daumé III. 2014. A neural network for factoid question answering over paragraphs. In Empirical Methods in Natural Language Processing (EMNLP). pages 633–644.

Rudolf Kadlec, Ondrej Bajgar, and Jan Kleindienst. 2016. From particular to general: A preliminary case study of transfer learning in reading comprehension. Machine Intelligence Workshop, NIPS.
Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 .
Kenton Lee, Tom Kwiatkowski, Ankur Parikh, and Dipanjan Das. 2016. Learning recurrent span representations for extractive question answering. arXiv preprint arXiv:1611.01436.

Christopher D Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Association for Computational Linguistics (ACL). pages 55–60.

Alexander H. Miller, Adam Fisch, Jesse Dodge, Amir-Hossein Karimi, Antoine Bordes, and Jason Weston. 2016. Key-value memory networks for directly reading documents. In Empirical Methods in Natural Language Processing (EMNLP). pages 1400–1409.

Mike Mintz, Steven Bills, Rion Snow, and Daniel Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In Association for Computational Linguistics and International Joint Conference on Natural Language Processing (ACL/IJCNLP). pages 1003–1011.

Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP). pages 1532–1543.

Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Empirical Methods in Natural Language Processing (EMNLP).

Pum-Mo Ryu, Myung-Gil Jang, and Hyun-Ki Kim. 2014. Open domain question answering using Wikipedia-based knowledge model. Information Processing & Management 50(5):683–692.

Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2016. Bidirectional attention flow for machine comprehension. arXiv preprint arXiv:1611.01603.

Huan Sun, Hao Ma, Wen-tau Yih, Chen-Tse Tsai, Jingjing Liu, and Ming-Wei Chang. 2015. Open domain question answering via semantic enrichment. In Proceedings of the 24th International Conference on World Wide Web. ACM, pages 1045–1055.

Zhiguo Wang, Haitao Mi, Wael Hamza, and Radu Florian. 2016. Multi-perspective context matching for machine comprehension. arXiv preprint arXiv:1612.04211.

Kilian Weinberger, Anirban Dasgupta, John Langford, Alex Smola, and Josh Attenberg. 2009. Feature hashing for large scale multitask learning. In International Conference on Machine Learning (ICML). pages 1113–1120.
Jason Weston, Sumit Chopra, and Antoine Bordes. 2015. Memory networks. In International Confer- ence on Learning Representations (ICLR).
Caiming Xiong, Victor Zhong, and Richard Socher. 2016. Dynamic coattention networks for question answering. arXiv preprint arXiv:1611.01604 . | {
"id": "1608.08614"
} |
1703.09844 | Multi-Scale Dense Networks for Resource Efficient Image Classification | In this paper we investigate image classification with computational resource
limits at test time. Two such settings are: 1. anytime classification, where
the network's prediction for a test example is progressively updated,
facilitating the output of a prediction at any time; and 2. budgeted batch
classification, where a fixed amount of computation is available to classify a
set of examples that can be spent unevenly across "easier" and "harder" inputs.
In contrast to most prior work, such as the popular Viola and Jones algorithm,
our approach is based on convolutional neural networks. We train multiple
classifiers with varying resource demands, which we adaptively apply during
test time. To maximally re-use computation between the classifiers, we
incorporate them as early-exits into a single deep convolutional neural network
and inter-connect them with dense connectivity. To facilitate high quality
classification early on, we use a two-dimensional multi-scale network
architecture that maintains coarse and fine level features all-throughout the
network. Experiments on three image-classification tasks demonstrate that our
framework substantially improves the existing state-of-the-art in both
settings. | http://arxiv.org/pdf/1703.09844 | Gao Huang, Danlu Chen, Tianhong Li, Felix Wu, Laurens van der Maaten, Kilian Q. Weinberger | cs.LG | null | null | cs.LG | 20170329 | 20180607 |
Published as a conference paper at ICLR 2018
MULTI-SCALE DENSE NETWORKS FOR RESOURCE EFFICIENT IMAGE CLASSIFICATION
Gao Huang (Cornell University), Danlu Chen (Fudan University), Tianhong Li (Tsinghua University), Felix Wu (Cornell University), Laurens van der Maaten (Facebook AI Research), Kilian Weinberger (Cornell University)
# ABSTRACT
In this paper we investigate image classification with computational resource limits at test time. Two such settings are: 1. anytime classification, where the network's prediction for a test example is progressively updated, facilitating the output of a prediction at any time; and 2. budgeted batch classification, where a fixed amount of computation is available to classify a set of examples that can be spent unevenly across "easier" and "harder" inputs. In contrast to most prior work, such as the popular Viola and Jones algorithm, our approach is based on convolutional neural networks. We train multiple classifiers with varying resource demands, which we adaptively apply during test time. To maximally re-use computation between the classifiers, we incorporate them as early-exits into a single deep convolutional neural network and inter-connect them with dense connectivity. To facilitate high quality classification early on, we use a two-dimensional multi-scale network architecture that maintains coarse and fine level features all-throughout the network. Experiments on three image-classification tasks demonstrate that our framework substantially improves the existing state-of-the-art in both settings.
# INTRODUCTION
Recent years have witnessed a surge in demand for applications of visual object recognition, for instance, in self-driving cars (Bojarski et al., 2016) and content-based image search (Wan et al., 2014). This demand has in part been fueled through the promise generated by the astonishing progress of convolutional networks (CNNs) on visual object recognition benchmark competition datasets, such as ILSVRC (Deng et al., 2009) and COCO (Lin et al., 2014), where state-of-the-art models may have even surpassed human-level performance (He et al., 2015; 2016).
However, the requirements of such competitions differ from real-world applications, which tend to incentivize resource-hungry models with high computational demands at inference time. For example, the COCO 2016 competition was won by a large ensemble of computationally intensive CNNs1, a model likely far too computationally expensive for any resource-aware application. Although much smaller models would also obtain decent error, very large, computationally intensive models seem necessary to correctly classify the hard examples that make up the bulk of the remaining misclassifications of modern algorithms. To illustrate this point, Figure 1 shows two images of horses. The left image depicts a horse in canonical pose and is easy to classify, whereas the right image is taken from a rare viewpoint and is likely in the tail of the data distribution. Computationally intensive models are needed to classify such tail examples correctly, but are wasteful when applied to canonical images such as the left one.
In real-world applications, computation directly translates into power consumption, which should be minimized for environmental and economical reasons, and is a scarce commodity on mobile
1http://image-net.org/challenges/talks/2016/GRMI-COCO-slidedeck.pdf
devices. This begs the question: why do we choose between either wasting computational resources by applying an unnecessarily computationally expensive model to easy images, or making mistakes by using an efficient model that fails to recognize difficult images? Ideally, our systems should automatically use small networks when test images are easy or computational resources limited, and use big networks when test images are hard or computation is abundant.
Such systems would be beneficial in at least two settings with computational constraints at test-time: anytime prediction, where the network can be forced to output a prediction at any given point in time; and budgeted batch classification, where a fixed computational budget is shared across a large set of examples which can be spent unevenly across "easy" and "hard" examples. A practical use-case of anytime prediction is in mobile apps on Android devices: in 2015, there existed 24,093 distinct Android devices2, each with its own distinct computational limitations. It is infeasible to train a different network that processes video frame-by-frame at a fixed framerate for each of these devices. Instead, you would like to train a single network that maximizes accuracy on all these devices, within the computational constraints of that device. The budgeted batch classification setting is ubiquitous in large-scale machine learning applications. Search engines, social media companies, on-line advertising agencies, all must process large volumes of data on limited hardware resources. For example, as of 2010, Google Image Search had over 10 Billion images indexed3, which has likely grown to over 1 Trillion since. Even if a new model to process these images is only 1/10s slower per image, this additional cost would add 3170 years of CPU time. In the budgeted batch classification setting, companies can improve the average accuracy by reducing the amount of computation spent on "easy" cases to save up computation for "hard" cases.
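The CPU-time figure follows from a back-of-the-envelope calculation, assuming roughly one trillion indexed images and about $3.15 \times 10^{7}$ seconds per year:

$$10^{12}\ \text{images} \times 0.1\,\text{s/image} = 10^{11}\,\text{s} \approx \frac{10^{11}}{3.15 \times 10^{7}\,\text{s/year}} \approx 3{,}170\ \text{years}.$$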
Motivated by prior work in computer vision on resource-efficient recognition (Viola & Jones, 2001), we aim to develop CNNs that "slice" the computation and process these slices one-by-one, stopping the evaluation once the CPU time is depleted or the classification is sufficiently certain (through "early exits"). Unfortunately, the architecture of CNNs is inherently at odds with the introduction of early exits. CNNs learn the data representation and the classifier jointly, which leads to two problems with early exits: 1. The features in the last layer are extracted directly to be used by the classifier, whereas earlier features are not. The inherent dilemma is that different kinds of features need to be extracted depending on how many layers are left until the classification. 2. The features in different layers of the network may have different scale. Typically, the first layers of a deep net operate on a fine scale (to extract low-level features), whereas later layers transition (through pooling or strided convolution) to coarse scales that allow global context to enter the classifier. Both scales are needed but happen at different places in the network.
We propose a novel network architecture that addresses both of these problems through careful design changes, allowing for resource-efï¬cient image classiï¬cation. Our network uses a cascade of intermediate classiï¬ers throughout the network. The ï¬rst problem, of classiï¬ers altering the internal representation, is addressed through the introduction of dense connectivity (Huang et al., 2017). By connecting all layers to all classiï¬ers, features are no longer dominated by the most imminent early- exit and the trade-off between early or later classiï¬cation can be performed elegantly as part of the loss function. The second problem, the lack of coarse-scale features in early layers, is addressed by adopting a multi-scale network structure. At each layer we produce features of all scales (ï¬ne-to- coarse), which facilitates good classiï¬cation early on but also extracts low-level features that only become useful after several more layers of processing. Our network architecture is illustrated in Figure 2, and we refer to it as Multi-Scale DenseNet (MSDNet).
We evaluate MSDNets on three image-classification datasets. In the anytime classification setting, we show that it is possible to output a prediction at any time while maintaining high accuracies throughout. In the budgeted batch classification setting we show that MSDNets can be effectively used to adapt the amount of computation to the difficulty of the example to be classified, which allows us to reduce the computational requirements of our models drastically whilst performing on par with state-of-the-art CNNs in terms of overall classification accuracy. To our knowledge this is the first deep learning architecture of its kind that allows dynamic resource adaptation with a single model and obtains competitive results throughout.
2Source: https://opensignal.com/reports/2015/08/android-fragmentation/ 3https://en.wikipedia.org/wiki/Google_Images
Figure 2: Illustration of the first four layers of an MSDNet with three scales. The horizontal direction corresponds to the layer direction (depth) of the network. The vertical direction corresponds to the scale of the feature maps. Horizontal arrows indicate a regular convolution operation, whereas diagonal and vertical arrows indicate a strided convolution operation. Classifiers only operate on feature maps at the coarsest scale. Connections across more than one layer are not drawn explicitly: they are implicit through recursive concatenations.
# 2 RELATED WORK
We brieï¬y review related prior work on computation-efï¬cient networks, memory-efï¬cient networks, and resource-sensitive machine learning, from which our network architecture draws inspiration.
Computation-efï¬cient networks. Most prior work on (convolutional) networks that are computa- tionally efï¬cient at test time focuses on reducing model size after training. In particular, many stud- ies propose to prune weights (LeCun et al., 1989; Hassibi et al., 1993; Li et al., 2017) or quantize weights (Hubara et al., 2016; Rastegari et al., 2016) during or after training. These approaches are generally effective because deep networks often have a substantial number of redundant weights that can be pruned or quantized without sacriï¬cing (and sometimes even improving) performance. Prior work also studies approaches that directly learn compact models with less parameter redundancy. For example, the knowledge-distillation method (Bucilua et al., 2006; Hinton et al., 2014) trains small student networks to reproduce the output of a much larger teacher network or ensemble. Our work differs from those approaches in that we train a single model that trades off computation for accuracy at test time without any re-training or ï¬netuning. Indeed, weight pruning and knowledge distillation can be used in combination with our approach, and may lead to further improvements.
Resource-efficient machine learning. Various prior studies explore computationally efficient variants of traditional machine-learning models (Viola & Jones, 2001; Grubb & Bagnell, 2012; Karayev et al., 2014; Trapeznikov & Saligrama, 2013; Xu et al., 2012; 2013; Nan et al., 2015; Wang et al., 2015). Most of these studies focus on how to incorporate the computational requirements of computing particular features in the training of machine-learning models such as (gradient-boosted) decision trees. Whilst our study is certainly inspired by these results, the architecture we explore differs substantially: most prior work exploits characteristics of machine-learning models (such as decision trees) that do not apply to deep networks. Our work is possibly most closely related to recent work on FractalNets (Larsson et al., 2017), which can perform anytime prediction by progressively evaluating subnetworks of the full network. FractalNets differ from our work in that they are not explicitly optimized for computation efficiency and consequently our experiments show that MSDNets substantially outperform FractalNets. Our dynamic evaluation strategy for reducing batch computational cost is closely related to the adaptive computation time approach (Graves, 2016; Figurnov et al., 2016), and the recently proposed method of adaptively evaluating neural networks (Bolukbasi et al., 2017). Different from these works, our method adopts a specially designed network with multiple classifiers, which are jointly optimized during training and can directly output confidence scores to control the evaluation process for each test example. The adaptive computation time method (Graves, 2016) and its extension (Figurnov et al., 2016) also perform adaptive evaluation on test examples to save batch computational cost, but focus on skipping units rather than layers. In (Odena et al., 2017), a "composer" model is trained to construct the evaluation network from a set of sub-modules for each test example. By contrast, our work uses a single CNN with multiple intermediate classifiers that is trained end-to-end. The Feedback Networks (Zamir et al., 2016) enable early predictions by making predictions in a recurrent fashion, which heavily shares parameters among classifiers, but is less efficient in sharing computation.
Related network architectures. Our network architecture borrows elements from neural fabrics (Saxena & Verbeek, 2016) and others (Zhou et al., 2015; Jacobsen et al., 2017; Ke et al., 2016)
Figure 3: Relative accuracy of the intermediate classifier (left) and the final classifier (right) when introducing a single intermediate classifier at different layers in a ResNet, DenseNet and MSDNet. All experiments were performed on the CIFAR-100 dataset. Higher is better.
to rapidly construct a low-resolution feature map that is amenable to classiï¬cation, whilst also maintaining feature maps of higher resolution that are essential for obtaining high classiï¬cation accuracy. Our design differs from the neural fabrics (Saxena & Verbeek, 2016) substantially in that MSDNets have a reduced number of scales and no sparse channel connectivity or up-sampling paths. MSDNets are at least one order of magnitude more efï¬cient and typically more accurate â for example, an MSDNet with less than 1 million parameters obtains a test error below 7.0% on CIFAR-10 (Krizhevsky & Hinton, 2009), whereas Saxena & Verbeek (2016) report 7.43% with over 20 million parameters. We use the same feature-concatenation approach as DenseNets (Huang et al., 2017), which allows us to bypass features optimized for early classiï¬ers in later layers of the network. Our architecture is related to deeply supervised networks (Lee et al., 2015) in that it incorporates classiï¬ers at multiple layers throughout the network. In contrast to all these prior architectures, our network is speciï¬cally designed to operate in resource-aware settings.
# 3 PROBLEM SETUP
We consider two settings that impose computational constraints at prediction time.
Anytime prediction. In the anytime prediction setting (Grubb & Bagnell, 2012), there is a finite computational budget B > 0 available for each test example x. The computational budget is nondeterministic, and varies per test instance. It is determined by the occurrence of an event that requires the model to output a prediction immediately. We assume that the budget is drawn from some joint distribution P(x, B). In some applications P(B) may be independent of P(x) and can be estimated. For example, if the event is governed by a Poisson process, P(B) is an exponential distribution. We denote the loss of a model f(x) that has to produce a prediction for instance x within budget B by L(f(x), B). The goal of an anytime learner is to minimize the expected loss under the budget distribution: L(f) = E_{P(x,B)}[L(f(x), B)]. Here, L(·) denotes a suitable loss function. As is common in the empirical risk minimization framework, the expectation under P(x, B) may be estimated by an average over samples from P(x, B).
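For illustration, the expectation over P(x, B) can be approximated by sampling; the sketch below assumes hypothetical model_loss and budget_sampler callables and is not part of the original method.

```python
import random

def expected_anytime_loss(model_loss, examples, budget_sampler, n_draws=1000):
    """Monte Carlo estimate of L(f) = E_{P(x,B)}[L(f(x), B)].

    model_loss(x, B): loss of the prediction the anytime model can emit for
        example x when interrupted at budget B (hypothetical callable).
    budget_sampler(): draws a budget B, e.g. from an exponential distribution
        if prediction requests follow a Poisson process.
    """
    total = 0.0
    for _ in range(n_draws):
        x = random.choice(examples)   # assumes x and B are drawn independently
        B = budget_sampler()
        total += model_loss(x, B)
    return total / n_draws
```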
Budgeted batch classification. In the budgeted batch classification setting, the model needs to classify a set of examples D_test = {x_1, . . . , x_M} within a finite computational budget B > 0 that is known in advance. The learner aims to minimize the loss across all examples in D_test, L(f(D_test)), within a cumulative cost bounded by B, where L(·) denotes a suitable loss function. It can potentially do so by spending less than B/M computation on classifying an "easy" example whilst using more than B/M computation on classifying a "hard" example. Therefore, the budget B considered here is a soft constraint when we have a large batch of testing samples.
# 4 MULTI-SCALE DENSE CONVOLUTIONAL NETWORKS
A straightforward solution to the two problems introduced in Section 3 is to train multiple networks of increasing capacity, and sequentially evaluate them at test time (as in Bolukbasi et al. (2017)). In the anytime setting the evaluation can be stopped at any point and the most recent prediction is returned. In the batch setting, the evaluation is stopped prematurely the moment a network classiï¬es
the test sample with sufï¬cient conï¬dence. When the resources are so limited that the execution is terminated after the ï¬rst network, this approach is optimal because the ï¬rst network is trained for exactly this computational budget without compromises. However, in both settings, this scenario is rare. In the more common scenario where some test samples can require more processing time than others the approach is far from optimal because previously learned features are never re-used across the different networks.
An alternative solution is to build a deep network with a cascade of classifiers operating on the features of internal layers: in such a network features computed for an earlier classifier can be re-used by later classifiers. However, naively attaching intermediate early-exit classifiers to a state-of-the-art deep network leads to poor performance.
There are two reasons why intermediate early-exit classifiers hurt the performance of deep neural networks: early classifiers lack coarse-level features and classifiers throughout interfere with the feature generation process. In this section we investigate these effects empirically (see Figure 3) and, in response to our findings, propose the MSDNet architecture illustrated in Figure 2.

Problem: The lack of coarse-level features. Traditional neural networks learn features of fine scale in early layers and coarse scale in later layers (through repeated convolution, pooling, and strided convolution). Coarse scale features in the final layers are important to classify the content of the whole image into a single class. Early layers lack coarse-level features and early-exit classifiers attached to these layers will likely yield unsatisfactorily high error rates. To illustrate this point, we attached4 intermediate classifiers to varying layers of a ResNet (He et al., 2016) and a DenseNet (Huang et al., 2017) on the CIFAR-100 dataset (Krizhevsky & Hinton, 2009). The blue and red dashed lines in the left plot of Figure 3 show the relative accuracies of these classifiers. All three curves give rise to a clear trend: the accuracy of a classifier is highly correlated with its position within the network. Particularly in the case of the ResNet (blue line), one can observe a visible "staircase" pattern, with big improvements after the 2nd and 4th classifiers, located right after pooling layers.
Solution: Multi-scale feature maps. To address this issue, MSDNets maintain a feature repre- sentation at multiple scales throughout the network, and all the classiï¬ers only use the coarse-level features. The feature maps at a particular layer5 and scale are computed by concatenating the re- sults of one or two convolutions: 1. the result of a regular convolution applied on the same-scale features from the previous layer (horizontal connections) and, if possible, 2. the result of a strided convolution applied on the ï¬ner-scale feature map from the previous layer (diagonal connections). The horizontal connections preserve and progress high-resolution information, which facilitates the construction of high-quality coarse features in later layers. The vertical connections produce coarse features throughout that are amenable to classiï¬cation. The dashed black line in Figure 3 shows that MSDNets substantially increase the accuracy of early classiï¬ers. Problem: Early classiï¬ers interfere with later classiï¬ers. The right plot of Figure 3 shows the accuracies of the ï¬nal classiï¬er as a function of the location of a single intermediate classiï¬er, relative to the accuracy of a network without intermediate classiï¬ers. The results show that the introduction of an intermediate classiï¬er harms the ï¬nal ResNet classiï¬er (blue line), reducing its accuracy by up to 7%. We postulate that this accuracy degradation in the ResNet may be caused by the intermediate classiï¬er inï¬uencing the early features to be optimized for the short-term and not for the ï¬nal layers. This improves the accuracy of the immediate classiï¬er but collapses information required to generate high quality features in later layers. This effect becomes more pronounced when the ï¬rst classiï¬er is attached to an earlier layer.
Solution: Dense connectivity. By contrast, the DenseNet (red line) suffers much less from this effect. Dense connectivity (Huang et al., 2017) connects each layer with all subsequent layers and allows later layers to bypass features optimized for the short-term, to maintain the high accuracy of the ï¬nal classiï¬er. If an earlier layer collapses information to generate short-term features, the lost information can be recovered through the direct connection to its preceding layer. The ï¬nal classiï¬erâs performance becomes (more or less) independent of the location of the intermediate
4We select six evenly spaced locations for each of the networks to introduce the intermediate classiï¬er. Both the ResNet and DenseNet have three resolution blocks; each block offers two tentative locations for the intermediate classiï¬er. The loss of the intermediate and ï¬nal classiï¬ers are equally weighted.
5Here, we use the term âlayerâ to refer to a column in Figure 2.
Figure 4: The output x_l^s of layer l at the s-th scale in an MSDNet. Herein, [. . .] denotes the concatenation operator, h̃(·) a regular convolution transformation, and h(·) a strided convolution. Note that the outputs of h̃ and h have the same feature map size; their outputs are concatenated along the channel dimension.
classiï¬er. As far as we know, this is the ï¬rst paper that discovers that dense connectivity is an important element to early-exit classiï¬ers in deep networks, and we make it an integral design choice in MSDNets.
4.1
THE MSDNET ARCHITECTURE
The MSDNet architecture is illustrated in Figure 2. We present its main components below. Addi- tional details on the architecture are presented in Appendix A.
First layer. The first layer (l = 1) is unique as it includes vertical connections in Figure 2. Its main purpose is to "seed" representations on all S scales. One could view its vertical layout as a miniature "S-layer" convolutional network (S = 3 in Figure 2). Let us denote the output feature maps at layer l and scale s as x_l^s and the original input image as x_0^1. Feature maps at coarser scales are obtained via down-sampling. The output x_1^s of the first layer is formally given in the top row of Figure 4.

Subsequent layers. Following the dense connectivity pattern of Huang et al. (2017), the output feature maps x_l^s produced at subsequent layers, l > 1, and scales, s, are a concatenation of transformed feature maps from all previous feature maps of scale s and s - 1 (if s > 1). Formally, the l-th layer of our network outputs a set of features at S scales {x_l^1, . . . , x_l^S}, given in the last row of Figure 4.
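To make the two cases above concrete, the following PyTorch-style sketch computes one layer's multi-scale outputs. The regular_convs and strided_convs modules stand in for the transformations of Figure 4, the bookkeeping of dense connectivity is left to the caller, and none of this is code from the authors.

```python
import torch
import torch.nn as nn

def msd_layer_outputs(prev_feats, regular_convs, strided_convs):
    """Compute the outputs {x_l^1, ..., x_l^S} of one MSDNet layer (l > 1).

    prev_feats: list of S tensors, the concatenated features at scale s from all
        previous layers (the caller keeps appending each new layer's outputs to
        these running concatenations to realize dense connectivity).
    regular_convs[s]: same-scale transformation (horizontal connection).
    strided_convs[s]: strided transformation applied to the finer scale s-1
        (diagonal connection); unused for the finest scale s = 0.
    """
    outputs = []
    for s, feats in enumerate(prev_feats):
        parts = [regular_convs[s](feats)]
        if s > 0:  # coarser scales also receive a strided conv from scale s-1
            parts.append(strided_convs[s](prev_feats[s - 1]))
        outputs.append(torch.cat(parts, dim=1))  # concatenate along channels
    return outputs
```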
Classifiers. The classifiers in MSDNets also follow the dense connectivity pattern within the coarsest scale, S, i.e., the classifier at layer l uses all the features [x_1^S, . . . , x_l^S]. Each classifier consists of two convolutional layers, followed by one average pooling layer and one linear layer. In practice, we only attach classifiers to some of the intermediate layers, and we let f_k(·) denote the k-th classifier. During testing in the anytime setting we propagate the input through the network until the budget is exhausted and output the most recent prediction. In the batch budget setting at test time, an example traverses the network and exits after classifier f_k if its prediction confidence (we use the maximum value of the softmax probability as a confidence measure) exceeds a pre-determined threshold θ_k. Before training, we compute the computational cost, C_k, required to process the network up to the k-th classifier. We denote by 0 < q < 1 a fixed exit probability that a sample that reaches a classifier will obtain a classification with sufficient confidence to exit. We assume that q is constant across all layers, which allows us to compute the probability that a sample exits at classifier k as: q_k = z(1 - q)^(k-1) q, where z is a normalizing constant that ensures that Σ_k p(q_k) = 1. At test time, we need to ensure that the overall cost of classifying all samples in D_test does not exceed our budget B (in expectation). This gives rise to the constraint |D_test| Σ_k q_k C_k ≤ B. We can solve this constraint for q and determine the thresholds θ_k on a validation set in such a way that approximately |D_test| q_k validation samples exit at the k-th classifier.

Loss functions. During training we use cross entropy loss functions L(f_k) for all classifiers and minimize a weighted cumulative loss: (1/|D|) Σ_{(x,y)∈D} Σ_k w_k L(f_k). Herein, D denotes the training set and w_k > 0 the weight of the k-th classifier. If the budget distribution P(B) is known, we can use the weights w_k to incorporate our prior knowledge about the budget B in the learning. Empirically, we find that using the same weight for all loss functions (i.e., setting ∀k: w_k = 1) works well in practice.
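The budget-allocation arithmetic described above can be sketched as follows; the bisection search over q and the threshold selection on sorted validation confidences are illustrative choices consistent with the description, not necessarily the authors' exact procedure.

```python
import numpy as np

def exit_probabilities(q, K):
    """q_k proportional to (1 - q)^(k-1) * q, normalized to sum to one."""
    qk = np.array([(1 - q) ** (k - 1) * q for k in range(1, K + 1)])
    return qk / qk.sum()

def solve_exit_prob(costs, budget_per_sample, tol=1e-6):
    """Find q so that the expected cost sum_k q_k * C_k matches the per-sample budget."""
    costs = np.asarray(costs, dtype=float)
    lo, hi = tol, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        expected_cost = (exit_probabilities(mid, len(costs)) * costs).sum()
        if expected_cost > budget_per_sample:
            lo = mid   # exiting too late on average -> exit earlier (larger q)
        else:
            hi = mid
    return (lo + hi) / 2

def thresholds_from_validation(val_confidences, qk):
    """Pick theta_k so that roughly |D_val| * q_k samples exit at classifier k.

    val_confidences: array of shape (num_val, K) holding the max softmax
    probability of each validation sample at each classifier.
    """
    thresholds, remaining = [], np.arange(len(val_confidences))
    for k in range(val_confidences.shape[1]):
        n_exit = min(int(round(qk[k] * len(val_confidences))), len(remaining))
        conf = np.sort(val_confidences[remaining, k])[::-1]
        theta = conf[n_exit - 1] if n_exit > 0 else np.inf
        thresholds.append(theta)
        remaining = remaining[val_confidences[remaining, k] < theta]
    return thresholds
```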
Network reduction and lazy evaluation. There are two straightforward ways to further reduce the computational requirements of MSDNets. First, it is inefï¬cient to maintain all the ï¬ner scales until
the last layer of the network. One simple strategy to reduce the size of the network is by splitting it into S blocks along the depth dimension, and only keeping the coarsest (S - i + 1) scales in the i-th block (a schematic layout of this structure is shown in Figure 9). This reduces computational cost for both training and testing. Every time a scale is removed from the network, we add a transition layer between the two blocks that merges the concatenated features using a 1 × 1 convolution and cuts the number of channels in half before feeding the fine-scale features into the coarser scale via a strided convolution (this is similar to the DenseNet-BC architecture of Huang et al. (2017)). Second, since a classifier at layer l only uses features from the coarsest scale, the finer feature maps in layer l (and some of the finer feature maps in the previous S - 2 layers) do not influence the prediction of that classifier. Therefore, we group the computation in "diagonal blocks" such that we only propagate the example along paths that are required for the evaluation of the next classifier. This minimizes unnecessary computations when we need to stop because the computational budget is exhausted. We call this strategy lazy evaluation.
# 5 EXPERIMENTS
We evaluate the effectiveness of our approach on three image classification datasets, i.e., the CIFAR-10, CIFAR-100 (Krizhevsky & Hinton, 2009) and ILSVRC 2012 (ImageNet; Deng et al. (2009)) datasets. Code to reproduce all results is available at https://anonymous-url. Details on architectural configurations of MSDNets are described in Appendix A.

Datasets. The two CIFAR datasets contain 50,000 training and 10,000 test images of 32 × 32 pixels; we hold out 5,000 training images as a validation set. The datasets comprise 10 and 100 classes, respectively. We follow He et al. (2016) and apply standard data-augmentation techniques to the training images: images are zero-padded with 4 pixels on each side, and then randomly cropped to produce 32 × 32 images. Images are flipped horizontally with probability 0.5, and normalized by subtracting channel means and dividing by channel standard deviations. The ImageNet dataset comprises 1,000 classes, with a total of 1.2 million training images and 50,000 validation images. We hold out 50,000 images from the training set to estimate the confidence threshold for classifiers in MSDNet. We adopt the data augmentation scheme of He et al. (2016) at training time; at test time, we classify a 224 × 224 center crop of images that were resized to 256 × 256.

Training Details. We train all models using the framework of Gross & Wilber (2016). On the two CIFAR datasets, all models (including all baselines) are trained using stochastic gradient descent (SGD) with mini-batch size 64. We use Nesterov momentum with a momentum weight of 0.9 without dampening, and a weight decay of 10^-4. All models are trained for 300 epochs, with an initial learning rate of 0.1, which is divided by a factor 10 after 150 and 225 epochs. We apply the same optimization scheme to the ImageNet dataset, except that we increase the mini-batch size to 256, and all the models are trained for 90 epochs with learning rate drops after 30 and 60 epochs.
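A minimal sketch of the stated CIFAR optimization schedule using standard PyTorch components is given below; the model is a stand-in module and the inner training loop is omitted.

```python
import torch
import torch.nn as nn

model = nn.Linear(8, 8)  # stand-in for an MSDNet; any nn.Module works here

# CIFAR schedule from the text: SGD with mini-batch size 64, Nesterov momentum
# 0.9 (no dampening), weight decay 1e-4, 300 epochs, initial lr 0.1 divided by
# 10 after epochs 150 and 225.
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9,
                            nesterov=True, weight_decay=1e-4)
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[150, 225], gamma=0.1)

for epoch in range(300):
    # ... one epoch of training with mini-batch size 64 goes here ...
    scheduler.step()
```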
5.1 ANYTIME PREDICTION
In the anytime prediction setting, the model maintains a progressively updated distribution over classes, and it can be forced to output its most up-to-date prediction at an arbitrary time.
Baselines. There exist several baseline approaches for anytime prediction: FractalNets (Larsson et al., 2017), deeply supervised networks (Lee et al., 2015), and ensembles of deep networks of varying or identical sizes. FractalNets allow for multiple evaluation paths during inference time, which vary in computation time. In the anytime setting, paths are evaluated in order of increasing computation. In our result figures, we replicate the FractalNet results reported in the original paper (Larsson et al., 2017) for reference. Deeply supervised networks introduce multiple early-exit classifiers throughout a network, which are applied on the features of the particular layer they are attached to. Instead of using the original model proposed in Lee et al. (2015), we use the more competitive ResNet and DenseNet architectures (referred to as DenseNet-BC in Huang et al. (2017)) as the base networks in our experiments with deeply supervised networks. We refer to these as ResNetMC and DenseNetMC, where MC stands for multiple classifiers. Both networks require about 1.3 × 10^8 FLOPs when fully evaluated; the detailed network configurations are presented in the supplementary material. In addition, we include ensembles of ResNets and DenseNets of varying or identical sizes. At test time, the networks are evaluated sequentially (in ascending order of network size) to obtain predictions for the test data. All predictions are averaged over the evaluated classifiers. On
Figure 5: Accuracy (top-1) of anytime prediction models as a function of computational budget on the ImageNet (left) and CIFAR-100 (right) datasets. Higher is better.
ImageNet, we compare MSDNet against a highly competitive ensemble of ResNets and DenseNets, with depth varying from 10 layers to 50 layers, and 36 layers to 121 layers, respectively.
Anytime prediction results are presented in Figure 5. The left plot shows the top-1 classification accuracy on the ImageNet validation set. Here, for all budgets in our evaluation, the accuracy of MSDNet substantially outperforms the ResNets and DenseNets ensemble. In particular, at the lower end of the evaluated budget range, MSDNet achieves up to ~8% higher accuracy.
We evaluate more baselines on CIFAR-100 (and CIFAR-10; see supplementary materials). We observe that MSDNet substantially outperforms ResNetsMC and DenseNetsMC at any computational budget within our range. This is due to the fact that after just a few layers, MSDNets have produced low-resolution feature maps that are much more suitable for classiï¬cation than the high-resolution feature maps in the early layers of ResNets or DenseNets. MSDNet also outperforms the other baselines for nearly all computational budgets, although it performs on par with ensembles when the budget is very small. In the extremely low-budget regime, ensembles have an advantage because their predictions are performed by the ï¬rst (small) network, which is optimized exclusively for the low budget. However, the accuracy of ensembles does not increase nearly as fast when the budget is increased. The MSDNet outperforms the ensemble as soon as the latter needs to evaluate a second model: unlike MSDNets, this forces the ensemble to repeat the computation of similar low-level features repeatedly. Ensemble accuracies saturate rapidly when all networks are shallow.
5.2 BUDGETED BATCH CLASSIFICATION
In the budgeted batch classification setting, the predictive model receives a batch of M instances and a computational budget B for classifying all M instances. In this setting, we use dynamic evaluation: we perform early-exiting of "easy" examples at early classifiers whilst propagating "hard" examples through the entire network, using the procedure described in Section 4.
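A minimal sketch of this dynamic-evaluation loop is shown below; it assumes classifiers is a list of callables returning class-probability vectors for progressively deeper exits (sharing computation in a real implementation), and thresholds are the values θ_k determined on the validation set.

```python
def dynamic_evaluate(x, classifiers, thresholds):
    """Early-exit inference: return the first sufficiently confident prediction.

    classifiers[k](x) is assumed to return the softmax probabilities of the
    k-th exit; in a real MSDNet the k-th exit reuses the features already
    computed for earlier exits.
    """
    probs = None
    for classify, theta in zip(classifiers, thresholds):
        probs = classify(x)
        if probs.max() >= theta:   # confident enough -> exit early
            break
    return probs.argmax()          # otherwise fall through to the final classifier
```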
Baselines. On ImageNet, we compare the dynamically evaluated MSDNet with five ResNets (He et al., 2016) and five DenseNets (Huang et al., 2017), AlexNet (Krizhevsky et al., 2012), and GoogLeNet (Szegedy et al., 2015); see the supplementary material for details. We also evaluate an ensemble of the five ResNets that uses exactly the same dynamic-evaluation procedure as MSDNets at test time: "easy" images are only propagated through the smallest ResNet-10, whereas "hard" images are classified by all five ResNet models (predictions are averaged across all evaluated networks in the ensemble). We classify batches of M = 128 images.
On CIFAR-100, we compare MSDNet with several highly competitive baselines, including ResNets (He et al., 2016), DenseNets (Huang et al., 2017) of varying sizes, Stochastic Depth Net- works (Huang et al., 2016), Wide ResNets (Zagoruyko & Komodakis, 2016) and FractalNets (Lars- son et al., 2017). We also compare MSDNet to the ResNetMC and DenseNetMC models that were used in Section 5.1, using dynamic evaluation at test time. We denote these baselines as ResNetMC / DenseNetMC with early-exits. To prevent the result plots from becoming too cluttered, we present CIFAR-100 results with dynamically evaluated ensembles in the supplementary material. We clas- sify batches of M = 256 images at test time.
Budgeted batch classification results on ImageNet are shown in the left panel of Figure 7. We trained three MSDNets with different depths, each of which covers a different range of computational budgets.
Figure 7: Accuracy (top-1) of budgeted batch classiï¬cation models as a function of average computational budget per image the on ImageNet (left) and CIFAR-100 (right) datasets. Higher is better.
We plot the performance of each MSDNet as a gray curve; we select the best model for each budget based on its accuracy on the validation set, and plot the corresponding accuracy as a black curve. The plot shows that the predictions of MSDNets with dynamic evaluation are substantially more accurate than those of ResNets and DenseNets that use the same amount of computation. For instance, with an average budget of 1.7 × 10^9 FLOPs, MSDNet achieves a top-1 accuracy that is ~6% higher than that achieved by a ResNet with the same number of FLOPs. Compared to the computationally efficient DenseNets, MSDNet uses several times fewer FLOPs to achieve the same classification accuracy. Moreover, MSDNet with dynamic evaluation allows for very precise tuning of the computational budget that is consumed, which is not possible with individual ResNet or DenseNet models. The ensemble of ResNets or DenseNets with dynamic evaluation performs on par with or worse than their individual counterparts (but they do allow for setting the computational budget very precisely).
The right panel of Figure 7 shows our results on CIFAR-100. The results show that MSDNets consistently outperform all baselines across all budgets. Notably, MSDNet performs on par with a 110-layer ResNet using only 1/10th of the computational budget and it is up to 5 times more efficient than DenseNets, Stochastic Depth Networks, Wide ResNets, and FractalNets. Similar to results in the anytime-prediction setting, MSDNet substantially outperforms ResNetsMC and DenseNetsMC with multiple intermediate classifiers, which provides further evidence that the coarse features in the MSDNet are important for high performance in earlier layers.
Visualization. To illustrate the ability of our approach to reduce the computational requirements for classifying "easy" examples, we show twelve randomly sampled test images from two ImageNet classes in Figure 6. The top row shows "easy" examples that were correctly classified and exited by the first classifier. The bottom row shows "hard" examples that would have been incorrectly classified by the first classifier but were passed on because its uncertainty was too high. The figure suggests that early classifiers recognize prototypical class examples, whereas the last classifier recognizes non-typical images.
Figure 6: Sampled images from the ImageNet classes Red wine and Volcano. Top row: images exited from the first classifier of an MSDNet with correct prediction; Bottom row: images that failed to be correctly classified at the first classifier but were correctly predicted and exited at the last layer.
5.3 MORE COMPUTATIONALLY EFFICIENT DENSENETS
Here, we discuss an interesting ï¬nding during our exploration of the MSDNet architecture. We found that following the DenseNet structure to design our network, i.e., by keeping the number of output channels (or growth rate) the same at all scales, did not lead to optimal results in terms of the accuracy-speed trade-off. The main reason for this is that compared to network architectures like ResNets, the DenseNet structure tends to apply more ï¬lters on the high-resolution feature maps in the network. This helps to reduce the number of parameters in the model, but at the same time, it greatly increases the computational cost. We tried to modify DenseNets by doubling the growth rate
Figure 8: Test accuracy of DenseNet* on CIFAR-100 under the anytime learning setting (left) and the budgeted batch setting (right).
after each transition layer, so that more ï¬lters are applied to low-resolution feature maps. It turns out that the resulting network, which we denote as DenseNet*, signiï¬cantly outperform the original DenseNet in terms of computational efï¬ciency.
We experimented with DenseNet* in our two settings with test time budget constraints. The left panel of Figure 8 shows the anytime prediction performance of an ensemble of DenseNets* of vary- ing depths. It outperforms the ensemble of original DenseNets of varying depth by a large margin, but is still slightly worse than MSDNets. In the budgeted batch budget setting, DenseNet* also leads to signiï¬cantly higher accuracy over its counterpart under all budgets, but is still substantially outperformed by MSDNets.
# 6 CONCLUSION
We presented the MSDNet, a novel convolutional network architecture, optimized to incorporate CPU budgets at test-time. Our design is based on two high-level design principles, to generate and maintain coarse level features throughout the network and to inter-connect the layers with dense connectivity. The former allows us to introduce intermediate classiï¬ers even at early layers and the latter ensures that these classiï¬ers do not interfere with each other. The ï¬nal design is a two dimensional array of horizontal and vertical layers, which decouples depth and feature coarseness. Whereas in traditional convolutional networks features only become coarser with increasing depth, the MSDNet generates features of all resolutions from the ï¬rst layer on and maintains them through- out. The result is an architecture with an unprecedented range of efï¬ciency. A single network can outperform all competitive baselines on an impressive range of computational budgets ranging from highly limited CPU constraints to almost unconstrained settings.
As future work we plan to investigate the use of resource-aware deep architectures beyond object classiï¬cation, e.g. image segmentation (Long et al., 2015). Further, we intend to explore approaches that combine MSDNets with model compression (Chen et al., 2015; Han et al., 2015), spatially adaptive computation (Figurnov et al., 2016) and more efï¬cient convolution operations (Chollet, 2016; Howard et al., 2017) to further improve computational efï¬ciency.
ACKNOWLEDGMENTS
The authors are supported in part by grants from the National Science Foundation ( III-1525919, IIS-1550179, IIS-1618134, S&AS 1724282, and CCF-1740822), the Ofï¬ce of Naval Research DOD (N00014-17-1-2175), and the Bill and Melinda Gates Foundation. We are also thankful for generous support by SAP America Inc.
# REFERENCES
Mariusz Bojarski, Davide Del Testa, Daniel Dworakowski, Bernhard Firner, Beat Flepp, Prasoon Goyal, Lawrence D Jackel, Mathew Monfort, Urs Muller, Jiakai Zhang, et al. End to end learning for self-driving cars. arXiv preprint arXiv:1604.07316, 2016.
Tolga Bolukbasi, Joseph Wang, Ofer Dekel, and Venkatesh Saligrama. Adaptive neural networks for fast test-time prediction. arXiv preprint arXiv:1702.07811, 2017.
Cristian Bucilua, Rich Caruana, and Alexandru Niculescu-Mizil. Model compression. In ACM SIGKDD, pp. 535â541. ACM, 2006.
Wenlin Chen, James T Wilson, Stephen Tyree, Kilian Q Weinberger, and Yixin Chen. Compressing neural networks with the hashing trick. In ICML, pp. 2285â2294, 2015.
François Chollet. Xception: Deep learning with depthwise separable convolutions. arXiv preprint arXiv:1610.02357, 2016.
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In CVPR, pp. 248â255, 2009.
Michael Figurnov, Maxwell D Collins, Yukun Zhu, Li Zhang, Jonathan Huang, Dmitry Vetrov, and Ruslan Salakhutdinov. Spatially adaptive computation time for residual networks. arXiv preprint arXiv:1612.02297, 2016.
Alex Graves. Adaptive computation time for recurrent neural networks. arXiv preprint arXiv:1603.08983, 2016.
Sam Gross and Michael Wilber. Training and investigating residual nets. 2016. URL http: //torch.ch/blog/2016/02/04/resnets.html.
Alexander Grubb and Drew Bagnell. Speedboost: Anytime prediction with uniform near-optimality. In AISTATS, volume 15, pp. 458â466, 2012.
Song Han, Huizi Mao, and William J. Dally. Deep compression: Compressing deep neural network with pruning, trained quantization and huffman coding. CoRR, abs/1510.00149, 2015.
Babak Hassibi, David G Stork, and Gregory J Wolff. Optimal brain surgeon and general network pruning. In IJCNN, pp. 293â299, 1993.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectiï¬ers: Surpassing human-level performance on imagenet classiï¬cation. In ICCV, pp. 1026â1034, 2015.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recog- nition. In CVPR, pp. 770â778, 2016.
Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. In NIPS Deep Learning Workshop, 2014.
Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. Mobilenets: Efï¬cient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017.
Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, and Kilian Q Weinberger. Deep networks with stochastic depth. In ECCV, pp. 646â661. Springer, 2016.
Gao Huang, Zhuang Liu, Kilian Q Weinberger, and Laurens van der Maaten. Densely connected convolutional networks. In CVPR, 2017.
Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Binarized neural networks. In NIPS, pp. 4107â4115, 2016.
Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, pp. 770â778, 2015.
Jörn-Henrik Jacobsen, Edouard Oyallon, Stéphane Mallat, and Arnold WM Smeulders. Multiscale hierarchical convolutional networks. arXiv preprint arXiv:1703.04140, 2017.
Sergey Karayev, Mario Fritz, and Trevor Darrell. Anytime recognition of objects and scenes. In CVPR, pp. 572â579, 2014.
Tsung-Wei Ke, Michael Maire, and Stella X. Yu. Neural multigrid. CoRR, abs/1611.07661, 2016. URL http://arxiv.org/abs/1611.07661.
Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. Tech Report, 2009.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classiï¬cation with deep convo- lutional neural networks. In NIPS, pp. 1097â1105, 2012.
Gustav Larsson, Michael Maire, and Gregory Shakhnarovich. Fractalnet: Ultra-deep neural net- works without residuals. In ICLR, 2017.
Yann LeCun, John S Denker, Sara A Solla, Richard E Howard, and Lawrence D Jackel. Optimal brain damage. In NIPS, volume 2, pp. 598â605, 1989.
Chen-Yu Lee, Saining Xie, Patrick W Gallagher, Zhengyou Zhang, and Zhuowen Tu. Deeply- supervised nets. In AISTATS, volume 2, pp. 5, 2015.
Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, and Hans Peter Graf. Pruning ï¬lters for efï¬cient convnets. In ICLR, 2017.
Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In ECCV, pp. 740–755. Springer, 2014.
Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In CVPR, pp. 3431â3440, 2015.
Feng Nan, Joseph Wang, and Venkatesh Saligrama. Feature-budgeted random forest. In ICML, pp. 1983â1991, 2015.
Augustus Odena, Dieterich Lawson, and Christopher Olah. Changing model behavior at test-time using reinforcement learning. arXiv preprint arXiv:1702.07780, 2017.
Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. Xnor-net: Imagenet classiï¬cation using binary convolutional neural networks. In ECCV, pp. 525â542. Springer, 2016.
Shreyas Saxena and Jakob Verbeek. Convolutional neural fabrics. In NIPS, pp. 4053â4061, 2016.
Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Du- mitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In CVPR, pp. 1â9, 2015.
Kirill Trapeznikov and Venkatesh Saligrama. Supervised sequential classiï¬cation under budget constraints. In AI-STATS, pp. 581â589, 2013.
Paul Viola and Michael Jones. Robust real-time object detection. International Journal of Computer Vision, 4(34–47), 2001.
Ji Wan, Dayong Wang, Steven Chu Hong Hoi, Pengcheng Wu, Jianke Zhu, Yongdong Zhang, and Jintao Li. Deep learning for content-based image retrieval: A comprehensive study. In ACM Multimedia, pp. 157–166, 2014.
Joseph Wang, Kirill Trapeznikov, and Venkatesh Saligrama. Efï¬cient learning by directed acyclic graph for resource constrained prediction. In NIPS, pp. 2152â2160. 2015.
Zhixiang Xu, Olivier Chapelle, and Kilian Q. Weinberger. The greedy miser: Learning under test- time budgets. In ICML, pp. 1175â1182, 2012.
Zhixiang Xu, Matt Kusner, Minmin Chen, and Kilian Q. Weinberger. Cost-sensitive tree of classi- ï¬ers. In ICML, volume 28, pp. 133â141, 2013.
Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. In BMVC, 2016.
A. R. Zamir, T.-L. Wu, L. Sun, W. Shen, B. E. Shi, J. Malik, and S. Savarese. Feedback Networks. ArXiv e-prints, December 2016.
Yisu Zhou, Xiaolin Hu, and Bo Zhang. Interlinked convolutional neural networks for face parsing. In International Symposium on Neural Networks, pp. 222â231. Springer, 2015.
# A DETAILS OF MSDNET ARCHITECTURE AND BASELINE NETWORKS
We use MSDNet with three scales on the CIFAR datasets, and the network reduction method introduced in Section 4.1 is applied. Figure 9 gives an illustration of the reduced network. The convolutional layer functions in the first layer denote a sequence of 3 × 3 convolutions (Conv), batch normalization (BN; Ioffe & Szegedy (2015)), and rectified linear unit (ReLU) activation. In the computation of the coarser-scale feature maps of the first layer, down-sampling is performed by applying convolutions using strides that are powers of two. For subsequent feature layers, the transformations h̃ and h are defined following the design in DenseNets (Huang et al., 2017): Conv(1 × 1)-BN-ReLU-Conv(3 × 3)-BN-ReLU. We set the number of output channels of the three scales to 6, 12, and 24, respectively. Each classifier has two down-sampling convolutional layers with 128-dimensional 3 × 3 filters, followed by a 2 × 2 average pooling layer and a linear layer.
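A PyTorch-style reading of this classifier description might look as follows; the use of stride-2 convolutions for down-sampling and the BN/ReLU layers are assumptions not spelled out in the text, and nn.LazyLinear (which infers its input size) requires a recent PyTorch version.

```python
import torch.nn as nn

def make_classifier(in_channels, num_classes, hidden=128):
    """Two down-sampling conv layers, a 2x2 average pooling layer, a linear layer."""
    return nn.Sequential(
        nn.Conv2d(in_channels, hidden, kernel_size=3, stride=2, padding=1),
        nn.BatchNorm2d(hidden), nn.ReLU(inplace=True),
        nn.Conv2d(hidden, hidden, kernel_size=3, stride=2, padding=1),
        nn.BatchNorm2d(hidden), nn.ReLU(inplace=True),
        nn.AvgPool2d(kernel_size=2),
        nn.Flatten(),
        nn.LazyLinear(num_classes),   # infers the flattened feature size
    )
```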
The MSDNet used for ImageNet has four scales, respectively producing 16, 32, 64, and 64 feature maps at each layer. The network reduction is also applied to reduce computational cost. The original images are first transformed by a 7 × 7 convolution and a 3 × 3 max pooling (both with stride 2) before entering the first layer of MSDNets. The classifiers have the same structure as those used for the CIFAR datasets, except that the number of output channels of each convolutional layer is set to be equal to the number of its input channels.
Figure 9: Illustration of an MSDNet with network reduction. The network has S = 3 scales, and it is divided into three blocks, which maintain a decreasing number of scales. A transition layer is placed between two contiguous blocks.

Network architecture for anytime prediction. The MSDNet used in our anytime-prediction experiments has 24 layers (each layer corresponds to a column in Fig. 1 of the main paper), using the reduced network with transition layers as described in Section 4. The classifiers operate on the output of the (2 × i + 1)-th layers, with i = 1, . . . , 11. On ImageNet, we use MSDNets with four scales, and the i-th classifier operates on the (k × i + 3)-th layer (with i = 1, . . . , 5), where k = 4, 6 and 7. For simplicity, the losses of all the classifiers are weighted equally during training.
Network architecture for budgeted batch setting. The MSDNets used here for the two CIFAR datasets have depths ranging from 10 to 36 layers, using the reduced network with transition layers as described in Section 4. The k-th classifier is attached to the (Σ_{i=1}^{k} i)-th layer. The MSDNets used for ImageNet are the same as those described for the anytime learning setting.

ResNetMC and DenseNetMC. The ResNetMC has 62 layers, with 10 residual blocks at each spatial resolution (for three resolutions): we train early-exit classifiers on the output of the 4th and 8th residual blocks at each resolution, producing a total of 6 intermediate classifiers (plus the final classification layer). The DenseNetMC consists of 52 layers with three dense blocks and each of them has 16 layers. The six intermediate classifiers are attached to the 6th and 12th layer in each block, also with dense connections to all previous layers in that block.
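The stated placement rules are easy to express programmatically; the small helper below simply reproduces the indices described above and is included only as an illustration.

```python
def anytime_classifier_layers(step=2, num_classifiers=11):
    """CIFAR anytime setting: classifiers at the (2*i + 1)-th layers."""
    return [step * i + 1 for i in range(1, num_classifiers + 1)]

def budgeted_batch_classifier_layers(num_classifiers):
    """Budgeted batch setting: the k-th classifier sits at layer 1 + 2 + ... + k."""
    return [k * (k + 1) // 2 for k in range(1, num_classifiers + 1)]

print(anytime_classifier_layers())            # [3, 5, 7, ..., 23]
print(budgeted_batch_classifier_layers(8))    # [1, 3, 6, 10, 15, 21, 28, 36]
```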
# B ADDITIONAL RESULTS
B.1 ABLATION STUDY
We perform additional experiments to shed light on the contributions of the three main components of MSDNet, viz., multi-scale feature maps, dense connectivity, and intermediate classiï¬ers.
We start from an MSDNet with six intermediate classifiers and remove the three main components one at a time. To make our comparisons fair, we keep the computational costs of the full networks similar, at around 3.0 × 10^8 FLOPs, by adapting the network width, i.e., the number of output channels at each layer. After removing all the three components in an MSDNet, we obtain a regular VGG-like convolutional network. We show the classification accuracy of all classifiers in a model in the left panel of Figure 10. Several observations can be made: 1. the dense connectivity is crucial for the performance of MSDNet and removing it hurts the overall accuracy drastically (orange vs. black curve); 2. removing multi-scale convolution hurts the accuracy only in the lower budget regions, which is consistent with our motivation that the multi-scale design introduces discriminative features early on; 3. the final canonical CNN (star) performs similarly to MSDNet under the specific budget that matches its evaluation cost exactly, but it is unsuited for varying budget constraints. The final CNN performs substantially better at its particular budget region than the model without dense connectivity (orange curve). This suggests that dense connectivity is particularly important in combination with multiple classifiers.
B.2 RESULTS ON CIFAR-10
For the CIFAR-10 dataset, we use the same MSDNets and baseline models as we used for CIFAR- 100, except that the networks used here have a 10-way fully connected layer at the end. The results under the anytime learning setting and the batch computational budget setting are shown in the left and right panel of Figure 11, respectively. Similar to what we have observed from the results on CIFAR-100 and ImageNet, MSDNets outperform all the baselines by a signiï¬cant margin in both settings. As in the experiments presented in the main paper, ResNet and DenseNet models with multiple intermediate classiï¬ers perform relatively poorly.
Figure 11: Classiï¬cation accuracies on the CIFAR-10 dataset in the anytime-prediction setting (left) and the budgeted batch setting (right).
14 | {
"id": "1702.07780"
} |
1703.10135 | Tacotron: Towards End-to-End Speech Synthesis | A text-to-speech synthesis system typically consists of multiple stages, such
as a text analysis frontend, an acoustic model and an audio synthesis module.
Building these components often requires extensive domain expertise and may
contain brittle design choices. In this paper, we present Tacotron, an
end-to-end generative text-to-speech model that synthesizes speech directly
from characters. Given <text, audio> pairs, the model can be trained completely
from scratch with random initialization. We present several key techniques to
make the sequence-to-sequence framework perform well for this challenging task.
Tacotron achieves a 3.82 subjective 5-scale mean opinion score on US English,
outperforming a production parametric system in terms of naturalness. In
addition, since Tacotron generates speech at the frame level, it's
substantially faster than sample-level autoregressive methods. | http://arxiv.org/pdf/1703.10135 | Yuxuan Wang, RJ Skerry-Ryan, Daisy Stanton, Yonghui Wu, Ron J. Weiss, Navdeep Jaitly, Zongheng Yang, Ying Xiao, Zhifeng Chen, Samy Bengio, Quoc Le, Yannis Agiomyrgiannakis, Rob Clark, Rif A. Saurous | cs.CL, cs.LG, cs.SD | Submitted to Interspeech 2017. v2 changed paper title to be
consistent with our conference submission (no content change other than typo
fixes) | null | cs.CL | 20170329 | 20170406 |
# TACOTRON: TOWARDS END-TO-END SPEECH SYNTHESIS
Yuxuan Wang*, RJ Skerry-Ryan*, Daisy Stanton, Yonghui Wu, Ron J. Weiss†, Navdeep Jaitly,

Zongheng Yang, Ying Xiao*, Zhifeng Chen, Samy Bengio†, Quoc Le, Yannis Agiomyrgiannakis,

# Rob Clark, Rif A. Saurous*
Google, Inc. {yxwang,rjryan,rif}@google.com
# ABSTRACT
A text-to-speech synthesis system typically consists of multiple stages, such as a text analysis frontend, an acoustic model and an audio synthesis module. Building these components often requires extensive domain expertise and may contain brittle design choices. In this paper, we present Tacotron, an end-to-end generative text-to-speech model that synthesizes speech directly from characters. Given <text, audio> pairs, the model can be trained completely from scratch with random initialization. We present several key techniques to make the sequence-to-sequence framework perform well for this challenging task. Tacotron achieves a 3.82 subjective 5-scale mean opinion score on US English, outperforming a production parametric system in terms of naturalness. In addition, since Tacotron generates speech at the frame level, it's substantially faster than sample-level autoregressive methods.
# INTRODUCTION
Modern text-to-speech (TTS) pipelines are complex (Taylor, 2009). For example, it is common for statistical parametric TTS to have a text frontend extracting various linguistic features, a duration model, an acoustic feature prediction model and a complex signal-processing-based vocoder (Zen et al., 2009; Agiomyrgiannakis, 2015). These components are based on extensive domain expertise and are laborious to design. They are also trained independently, so errors from each component may compound. The complexity of modern TTS designs thus leads to substantial engineering efforts when building a new system.
There are thus many advantages of an integrated end-to-end TTS system that can be trained on <text, audio> pairs with minimal human annotation. First, such a system alleviates the need for laborious feature engineering, which may involve heuristics and brittle design choices. Second, it more easily allows for rich conditioning on various attributes, such as speaker or language, or high-level features like sentiment. This is because conditioning can occur at the very beginning of the model rather than only on certain components. Similarly, adaptation to new data might also be easier. Finally, a single model is likely to be more robust than a multi-stage model where each componentâs errors can compound. These advantages imply that an end-to-end model could allow us to train on huge amounts of rich, expressive yet often noisy data found in the real world.
TTS is a large-scale inverse problem: a highly compressed source (text) is "decompressed" into audio. Since the same text can correspond to different pronunciations or speaking styles, this is a particularly difficult learning task for an end-to-end model: it must cope with large variations at the signal level for a given input. Moreover, unlike end-to-end speech recognition (Chan et al., 2016)
*These authors really like tacos. †These authors would prefer sushi.
Figure 1: Model architecture. The model takes characters as input and outputs the corresponding raw spectrogram, which is then fed to the Griffin-Lim reconstruction algorithm to synthesize speech.
or machine translation (Wu et al., 2016), TTS outputs are continuous, and output sequences are usually much longer than those of the input. These attributes cause prediction errors to accumulate quickly. In this paper, we propose Tacotron, an end-to-end generative TTS model based on the sequence-to-sequence (seq2seq) (Sutskever et al., 2014) with attention paradigm (Bahdanau et al., 2014). Our model takes characters as input and outputs raw spectrogram, using several techniques to improve the capability of a vanilla seq2seq model. Given <text, audio> pairs, Tacotron can be trained completely from scratch with random initialization. It does not require phoneme-level alignment, so it can easily scale to using large amounts of acoustic data with transcripts. With a simple waveform synthesis technique, Tacotron produces a 3.82 mean opinion score (MOS) on a US English eval set, outperforming a production parametric system in terms of naturalness1.
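Because the model emits a linear-scale magnitude spectrogram rather than a waveform, a phase-reconstruction step is needed to obtain audio. The sketch below uses librosa's Griffin-Lim implementation as a stand-in for the simple waveform synthesis technique referred to above; the hop length and iteration count are placeholders, not the paper's settings.

```python
import librosa

def spectrogram_to_wave(magnitude, hop_length=300, n_iter=60):
    """Invert a predicted linear-scale magnitude spectrogram with Griffin-Lim.

    magnitude: non-negative array of shape (1 + n_fft // 2, num_frames);
    librosa infers n_fft from the number of frequency bins.
    """
    return librosa.griffinlim(magnitude, n_iter=n_iter, hop_length=hop_length)
```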
# 2 RELATED WORK
WaveNet (van den Oord et al., 2016) is a powerful generative model of audio. It works well for TTS, but is slow due to its sample-level autoregressive nature. It also requires conditioning on linguistic features from an existing TTS frontend, and thus is not end-to-end: it only replaces the vocoder and acoustic model. Another recently-developed neural model is DeepVoice (Arik et al., 2017), which replaces every component in a typical TTS pipeline by a corresponding neural network. However, each component is independently trained, and it's nontrivial to change the system to train in an end-to-end fashion.
To our knowledge, Wang et al. (2016) is the earliest work touching end-to-end TTS using seq2seq with attention. However, it requires a pre-trained hidden Markov model (HMM) aligner to help the seq2seq model learn the alignment. It's hard to tell how much alignment is learned by the seq2seq per se. Second, a few tricks are used to get the model trained, which the authors note hurts prosody. Third, it predicts vocoder parameters hence needs a vocoder. Furthermore, the model is trained on phoneme inputs and the experimental results seem to be somewhat limited.
Char2Wav (Sotelo et al., 2017) is an independently-developed end-to-end model that can be trained on characters. However, Char2Wav still predicts vocoder parameters before using a SampleRNN neural vocoder (Mehri et al., 2016), whereas Tacotron directly predicts raw spectrogram. Also, their seq2seq and SampleRNN models need to be separately pre-trained, but our model can be trained
1Sound demos can be found at https://google.github.io/tacotron
from scratch. Finally, we made several key modifications to the vanilla seq2seq paradigm. As shown later, a vanilla seq2seq model does not work well for character-level inputs.
# 3 MODEL ARCHITECTURE
The backbone of Tacotron is a seq2seq model with attention (Bahdanau et al., 2014; Vinyals et al., 2015). Figure 1 depicts the model, which includes an encoder, an attention-based decoder, and a post-processing net. At a high-level, our model takes characters as input and produces spectrogram frames, which are then converted to waveforms. We describe these components below.
[Figure 2 diagram: Conv1D bank → max-pool along time (stride=1) → Conv1D projections → residual connection → highway layers → bidirectional RNN.]
Figure 2: The CBHG (1-D convolution bank + highway network + bidirectional GRU) module adapted from Lee et al. (2016).
# 3.1 CBHG MODULE
We first describe a building block dubbed CBHG, illustrated in Figure 2. CBHG consists of a bank of 1-D convolutional filters, followed by highway networks (Srivastava et al., 2015) and a bidirectional gated recurrent unit (GRU) (Chung et al., 2014) recurrent neural net (RNN). CBHG is a powerful module for extracting representations from sequences. The input sequence is first convolved with K sets of 1-D convolutional filters, where the k-th set contains Ck filters of width k (i.e. k = 1, 2, . . . , K). These filters explicitly model local and contextual information (akin to modeling unigrams, bigrams, up to K-grams). The convolution outputs are stacked together and further max pooled along time to increase local invariances. Note that we use a stride of 1 to preserve the original time resolution. We further pass the processed sequence to a few fixed-width 1-D convolutions, whose outputs are added with the original input sequence via residual connections (He et al., 2016). Batch normalization (Ioffe & Szegedy, 2015) is used for all convolutional layers. The convolution outputs are fed into a multi-layer highway network to extract high-level features. Finally, we stack a bidirectional GRU RNN on top to extract sequential features from both forward and backward context. CBHG is inspired from work in machine translation (Lee et al., 2016), where the main differences from Lee et al. (2016) include using non-causal convolutions, batch normalization, residual connections, and stride=1 max pooling. We found that these modifications improved generalization.
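To make the description concrete, the sketch below outlines CBHG in PyTorch. It is a minimal, hedged reconstruction from the text above (conv bank of widths 1..K, stride-1 max pooling, two width-3 projections with a residual connection, a 4-layer highway stack, and a bidirectional GRU), with layer sizes following Table 1; the padding/trimming details, the single batch-norm on the concatenated bank output, and the class interface are our assumptions, not the reference implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class Highway(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.h = nn.Linear(dim, dim)   # candidate transform
        self.t = nn.Linear(dim, dim)   # transform gate

    def forward(self, x):
        gate = torch.sigmoid(self.t(x))
        return gate * F.relu(self.h(x)) + (1.0 - gate) * x


class CBHG(nn.Module):
    # Defaults mirror the encoder CBHG row of Table 1 (K=16, 128 channels).
    def __init__(self, in_dim=128, K=16, bank_channels=128, proj_dims=(128, 128)):
        super().__init__()
        # Bank of 1-D convolutions with widths 1..K (unigram- to K-gram-like filters).
        self.bank = nn.ModuleList(
            [nn.Conv1d(in_dim, bank_channels, kernel_size=k, padding=k // 2)
             for k in range(1, K + 1)])
        self.bank_bn = nn.BatchNorm1d(K * bank_channels)
        # Max pooling along time with stride 1 preserves the time resolution.
        self.pool = nn.MaxPool1d(kernel_size=2, stride=1, padding=1)
        # Two fixed-width (width-3) projections; the second has a linear activation.
        self.proj1 = nn.Conv1d(K * bank_channels, proj_dims[0], 3, padding=1)
        self.proj1_bn = nn.BatchNorm1d(proj_dims[0])
        self.proj2 = nn.Conv1d(proj_dims[0], proj_dims[1], 3, padding=1)
        self.proj2_bn = nn.BatchNorm1d(proj_dims[1])
        self.highways = nn.ModuleList([Highway(proj_dims[1]) for _ in range(4)])
        self.gru = nn.GRU(proj_dims[1], 128, batch_first=True, bidirectional=True)

    def forward(self, x):                      # x: (batch, time, in_dim)
        T = x.size(1)
        y = x.transpose(1, 2)                  # (batch, in_dim, time) for Conv1d
        y = torch.cat([F.relu(conv(y))[:, :, :T] for conv in self.bank], dim=1)
        y = self.pool(self.bank_bn(y))[:, :, :T]
        y = F.relu(self.proj1_bn(self.proj1(y)))
        y = self.proj2_bn(self.proj2(y))
        y = y.transpose(1, 2) + x              # residual; assumes proj_dims[1] == in_dim
        for hw in self.highways:
            y = hw(y)
        out, _ = self.gru(y)                   # (batch, time, 256): 128 cells per direction
        return out
```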
3.2 ENCODER
The goal of the encoder is to extract robust sequential representations of text. The input to the encoder is a character sequence, where each character is represented as a one-hot vector and em-
3
Table 1: Hyper-parameters and network architectures. "conv-k-c-ReLU" denotes 1-D convolution with width k and c output channels with ReLU activation. FC stands for fully-connected.
Spectral analysis            pre-emphasis: 0.97; frame length: 50 ms; frame shift: 12.5 ms; window type: Hann
Character embedding          256-D
Encoder CBHG                 Conv1D bank: K=16, conv-k-128-ReLU
                             Max pooling: stride=1, width=2
                             Conv1D projections: conv-3-128-ReLU → conv-3-128-Linear
                             Highway net: 4 layers of FC-128-ReLU
                             Bidirectional GRU: 128 cells
Encoder pre-net              FC-256-ReLU → Dropout(0.5) → FC-128-ReLU → Dropout(0.5)
Decoder pre-net              FC-256-ReLU → Dropout(0.5) → FC-128-ReLU → Dropout(0.5)
Decoder RNN                  2-layer residual GRU (256 cells)
Attention RNN                1-layer GRU (256 cells)
Post-processing net CBHG     Conv1D bank: K=8, conv-k-128-ReLU
                             Max pooling: stride=1, width=2
                             Conv1D projections: conv-3-256-ReLU → conv-3-80-Linear
                             Highway net: 4 layers of FC-128-ReLU
                             Bidirectional GRU: 128 cells
Reduction factor (r)         2
bedded into a continuous vector. We then apply a set of non-linear transformations, collectively called a "pre-net", to each embedding. We use a bottleneck layer with dropout as the pre-net in this work, which helps convergence and improves generalization. A CBHG module transforms the pre-net outputs into the final encoder representation used by the attention module. We found that this CBHG-based encoder not only reduces overfitting, but also makes fewer mispronunciations than a standard multi-layer RNN encoder (see our linked page of audio samples).
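One possible reading of the pre-net rows in Table 1 is sketched below; the input widths are assumptions based on where the pre-net is used (256-D character embeddings for the encoder, 80-band mel frames fed back to the decoder), and the helper name is ours.

```python
# Minimal sketch of the bottleneck "pre-net" (hyper-parameters from Table 1).
import torch.nn as nn

def prenet(in_dim, sizes=(256, 128), dropout=0.5):
    layers, prev = [], in_dim
    for size in sizes:
        layers += [nn.Linear(prev, size), nn.ReLU(), nn.Dropout(dropout)]
        prev = size
    return nn.Sequential(*layers)

encoder_prenet = prenet(256)   # applied to 256-D character embeddings
decoder_prenet = prenet(80)    # applied to the last predicted 80-band mel frame
```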
# 3.3 DECODER
We use a content-based tanh attention decoder (see e.g. Vinyals et al. (2015)), where a stateful recurrent layer produces the attention query at each decoder time step. We concatenate the context vector and the attention RNN cell output to form the input to the decoder RNNs. We use a stack of GRUs with vertical residual connections (Wu et al., 2016) for the decoder. We found the residual connections speed up convergence. The decoder target is an important design choice. While we could directly predict raw spectrogram, it's a highly redundant representation for the purpose of learning alignment between speech signal and text (which is really the motivation of using seq2seq for this task). Because of this redundancy, we use a different target for seq2seq decoding and waveform synthesis. The seq2seq target can be highly compressed as long as it provides sufficient intelligibility and prosody information for an inversion process, which could be fixed or trained. We use 80-band mel-scale spectrogram as the target, though fewer bands or more concise targets such as cepstrum could be used. We use a post-processing network (discussed below) to convert from the seq2seq target to waveform.
We use a simple fully-connected output layer to predict the decoder targets. An important trick we discovered was predicting multiple, non-overlapping output frames at each decoder step. Predicting r frames at once divides the total number of decoder steps by r, which reduces model size, training time, and inference time. More importantly, we found this trick to substantially increase convergence speed, as measured by a much faster (and more stable) alignment learned from attention. This is likely because neighboring speech frames are correlated and each character usually corresponds to multiple frames. Emitting one frame at a time forces the model to attend to the same input token for multiple timesteps; emitting multiple frames allows the attention to move forward early in training. A similar trick is also used in Zen et al. (2016) but mainly to speed up inference.
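The reduction-factor trick amounts to reshaping the frame sequence so that each decoder step is responsible for r consecutive frames. A small NumPy sketch of the target grouping is shown below; the shapes and zero-padding policy are illustrative assumptions rather than the authors' exact data pipeline.

```python
# Grouping an 80-band mel spectrogram into non-overlapping r-frame decoder targets.
import numpy as np

def group_frames(mel, r=2):
    """mel: (T, 80) array of mel frames -> (ceil(T/r), 80*r) decoder targets."""
    T, n_mels = mel.shape
    pad = (-T) % r                               # zero-pad so T divides evenly by r
    mel = np.pad(mel, ((0, pad), (0, 0)))
    return mel.reshape(-1, n_mels * r)           # r frames emitted per decoder step

mel = np.random.rand(123, 80).astype(np.float32)
targets = group_frames(mel, r=2)                 # (62, 160): 62 decoder steps instead of 123
```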
The ï¬rst decoder step is conditioned on an all-zero frame, which represents a <GO> frame. In inference, at decoder step t, the last frame of the r predictions is fed as input to the decoder at step t + 1. Note that feeding the last prediction is an ad-hoc choice here â we could use all r predictions. During training, we always feed every r-th ground truth frame to the decoder. The input frame is passed to a pre-net as is done in the encoder. Since we do not use techniques such as scheduled sampling (Bengio et al., 2015) (we found it to hurt audio quality), the dropout in the pre-net is critical for the model to generalize, as it provides a noise source to resolve the multiple modalities in the output distribution.
# 3.4 POST-PROCESSING NET AND WAVEFORM SYNTHESIS
As mentioned above, the post-processing netâs task is to convert the seq2seq target to a target that can be synthesized into waveforms. Since we use Grifï¬n-Lim as the synthesizer, the post-processing net learns to predict spectral magnitude sampled on a linear-frequency scale. Another motivation of the post-processing net is that it can see the full decoded sequence. In contrast to seq2seq, which always runs from left to right, it has both forward and backward information to correct the prediction error for each individual frame. In this work, we use a CBHG module for the post-processing net, though a simpler architecture likely works as well. The concept of a post-processing network is highly general. It could be used to predict alternative targets such as vocoder parameters, or as a WaveNet-like neural vocoder (van den Oord et al., 2016; Mehri et al., 2016; Arik et al., 2017) that synthesizes waveform samples directly.
We use the Griffin-Lim algorithm (Griffin & Lim, 1984) to synthesize waveform from the predicted spectrogram. We found that raising the predicted magnitudes by a power of 1.2 before feeding to Griffin-Lim reduces artifacts, likely due to its harmonic enhancement effect. We observed that Griffin-Lim converges after 50 iterations (in fact, about 30 iterations seems to be enough), which is reasonably fast. We implemented Griffin-Lim in TensorFlow (Abadi et al., 2016) hence it's also part of the model. While Griffin-Lim is differentiable (it does not have trainable weights), we do not impose any loss on it in this work. We emphasize that our choice of Griffin-Lim is for simplicity; while it already yields strong results, developing a fast and high-quality trainable spectrogram to waveform inverter is ongoing work.
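A minimal NumPy/librosa sketch of this synthesis step is given below, including the ×1.2 magnitude raising. The STFT settings mirror Section 4 at 24 kHz (50 ms window, 12.5 ms hop), but the function name and signature are our assumptions; this is not the paper's TensorFlow implementation.

```python
# Griffin-Lim waveform synthesis from a predicted linear-scale magnitude spectrogram.
import numpy as np
import librosa

def griffin_lim(magnitude, n_fft=2048, hop=300, win=1200, n_iters=50, power=1.2):
    """magnitude: (1 + n_fft // 2, frames) linear-scale magnitudes."""
    mag = magnitude ** power                      # raising magnitudes reduces artifacts
    angles = np.exp(2j * np.pi * np.random.rand(*mag.shape))   # random initial phase
    for _ in range(n_iters):                      # ~50 iterations suffice in practice
        wav = librosa.istft(mag * angles, hop_length=hop, win_length=win)
        angles = np.exp(1j * np.angle(
            librosa.stft(wav, n_fft=n_fft, hop_length=hop, win_length=win)))
    return librosa.istft(mag * angles, hop_length=hop, win_length=win)
```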
# 4 MODEL DETAILS
Table 1 lists the hyper-parameters and network architectures. We use log magnitude spectrogram with Hann windowing, 50 ms frame length, 12.5 ms frame shift, and 2048-point Fourier transform. We also found pre-emphasis (0.97) to be helpful. We use 24 kHz sampling rate for all experiments.
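At 24 kHz these analysis parameters correspond to 1200-sample frames and a 300-sample shift. A hedged librosa sketch of how the post-net (linear) and seq2seq (mel) targets could be extracted is shown below; the paper does not specify its normalisation or clipping, so none is applied here.

```python
# Extracting linear- and mel-scale log-magnitude targets with the Section 4 settings.
import numpy as np
import librosa

SR, N_FFT, HOP, WIN = 24000, 2048, 300, 1200      # 24 kHz, 12.5 ms shift, 50 ms frames

def analysis_targets(wav, preemphasis=0.97, n_mels=80):
    wav = np.append(wav[0], wav[1:] - preemphasis * wav[:-1])      # pre-emphasis (0.97)
    spec = np.abs(librosa.stft(wav, n_fft=N_FFT, hop_length=HOP,
                               win_length=WIN, window="hann"))
    mel = librosa.feature.melspectrogram(S=spec, sr=SR, n_mels=n_mels)
    return np.log(spec + 1e-5), np.log(mel + 1e-5)   # post-net target, seq2seq target
```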
We use r = 2 (output layer reduction factor) for the MOS results in this paper, though larger r values (e.g. r = 5) also work well. We use the Adam optimizer with learning rate decay, which starts from 0.001 and is reduced to 0.0005, 0.0003, and 0.0001 after 500K, 1M and 2M global steps, respectively. We use a simple ℓ1 loss for both seq2seq decoder (mel-scale spectrogram) and post-processing net (linear-scale spectrogram). The two losses have equal weights.
We train using a batch size of 32, where all sequences are padded to a max length. It's a common practice to train sequence models with a loss mask, which masks loss on zero-padded frames. However, we found that models trained this way don't know when to stop emitting outputs, causing repeated sounds towards the end. One simple trick to get around this problem is to also reconstruct the zero-padded frames.
# 5 EXPERIMENTS
We train Tacotron on an internal North American English dataset, which contains about 24.6 hours of speech data spoken by a professional female speaker. The phrases are text normalized, e.g. â16â is converted to âsixteenâ.
[Attention alignment plots: encoder states vs. decoder timesteps.]
(a) Vanilla seq2seq + scheduled sampling
(b) GRU encoder
(c) Tacotron (proposed)
Figure 3: Attention alignments on a test phrase. The decoder length in Tacotron is shorter due to the use of the output reduction factor r=5.
5.1 ABLATION ANALYSIS
We conduct a few ablation studies to understand the key components in our model. As is common for generative models, itâs hard to compare models based on objective metrics, which often do not correlate well with perception (Theis et al., 2015). We mainly rely on visual comparisons instead. We strongly encourage readers to listen to the provided samples.
First, we compare with a vanilla seq2seq model. Both the encoder and decoder use 2 layers of residual RNNs, where each layer has 256 GRU cells (we tried LSTM and got similar results). No pre-net or post-processing net is used, and the decoder directly predicts linear-scale log magnitude spectrogram. We found that scheduled sampling (sampling rate 0.5) is required for this model to learn alignments and generalize. We show the learned attention alignment in Figure 3. Figure 3(a) reveals that the vanilla seq2seq learns a poor alignment. One problem is that attention tends to
[Predicted spectrograms: DFT bin vs. frame.]
(a) Without post-processing net
(b) With post-processing net
Figure 4: Predicted spectrograms with and without using the post-processing net.
get stuck for many frames before moving forward, which causes bad speech intelligibility in the synthesized signal. The naturalness and overall duration are destroyed as a result. In contrast, our model learns a clean and smooth alignment, as shown in Figure 3(c).
Second, we compare with a model with the CBHG encoder replaced by a 2-layer residual GRU encoder. The rest of the model, including the encoder pre-net, remain exactly the same. Comparing Figure 3(b) and 3(c), we can see that the alignment from the GRU encoder is noisier. Listening to synthesized signals, we found that noisy alignment often leads to mispronunciations. The CBHG encoder reduces overï¬tting and generalizes well to long and complex phrases.
Figures 4(a) and 4(b) demonstrate the beneï¬t of using the post-processing net. We trained a model without the post-processing net while keeping all the other components untouched (except that the decoder RNN predicts linear-scale spectrogram). With more contextual information, the prediction from the post-processing net contains better resolved harmonics (e.g. higher harmonics between bins 100 and 400) and high frequency formant structure, which reduces synthesis artifacts.
# 5.2 MEAN OPINION SCORE TESTS
We conduct mean opinion score tests, where the subjects were asked to rate the naturalness of the stimuli in a 5-point Likert scale score. The MOS tests were crowdsourced from native speakers.
100 unseen phrases were used for the tests and each phrase received 8 ratings. When computing MOS, we only include ratings where headphones were used. We compare our model with a para- metric (based on LSTM (Zen et al., 2016)) and a concatenative system (Gonzalvo et al., 2016), both of which are in production. As shown in Table 2, Tacotron achieves an MOS of 3.82, which outperforms the parametric system. Given the strong baselines and the artifacts introduced by the Grifï¬n-Lim synthesis, this represents a very promising result.
# Table 2: 5-scale mean opinion score evaluation.
                  mean opinion score
Tacotron          3.82 ± 0.085
Parametric        3.69 ± 0.109
Concatenative     4.09 ± 0.119
# 6 DISCUSSIONS
We have proposed Tacotron, an integrated end-to-end generative TTS model that takes a character sequence as input and outputs the corresponding spectrogram. With a very simple waveform syn- thesis module, it achieves a 3.82 MOS score on US English, outperforming a production parametric system in terms of naturalness. Tacotron is frame-based, so the inference is substantially faster than sample-level autoregressive methods. Unlike previous work, Tacotron does not need hand- engineered linguistic features or complex components such as an HMM aligner. It can be trained from scratch with random initialization. We perform simple text normalization, though recent ad- vancements in learned text normalization (Sproat & Jaitly, 2016) may render this unnecessary in the future.
We have yet to investigate many aspects of our model; many early design decisions have gone unchanged. Our output layer, attention module, loss function, and Grifï¬n-Lim-based waveform synthesizer are all ripe for improvement. For example, itâs well known that Grifï¬n-Lim outputs may have audible artifacts. We are currently working on fast and high-quality neural-network-based spectrogram inversion.
ACKNOWLEDGMENTS
The authors would like to thank Heiga Zen and Ziang Xie for constructive discussions and feedback.
# REFERENCES
Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.
Yannis Agiomyrgiannakis. Vocaine the vocoder and applications in speech synthesis. In Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on, pp. 4230â 4234. IEEE, 2015.
Sercan Arik, Mike Chrzanowski, Adam Coates, Gregory Diamos, Andrew Gibiansky, Yongguo Kang, Xian Li, John Miller, Jonathan Raiman, Shubho Sengupta, and Mohammad Shoeybi. Deep voice: Real-time neural text-to-speech. arXiv preprint arXiv:1702.07825, 2017.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.
Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. Scheduled sampling for sequence prediction with recurrent neural networks. In Advances in Neural Information Processing Sys- tems, pp. 1171â1179, 2015.
William Chan, Navdeep Jaitly, Quoc Le, and Oriol Vinyals. Listen, attend and spell: A neural network for large vocabulary conversational speech recognition. In Acoustics, Speech and Signal Processing (ICASSP), 2016 IEEE International Conference on, pp. 4960â4964. IEEE, 2016.
Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555, 2014.
Xavi Gonzalvo, Siamak Tazari, Chun-an Chan, Markus Becker, Alexander Gutkin, and Hanna Silen. Recent advances in Google real-time HMM-driven unit selection synthesizer. In Proc. Interspeech, pp. 2238–2242, 2016.
Daniel Griffin and Jae Lim. Signal estimation from modified short-time Fourier transform. IEEE Transactions on Acoustics, Speech, and Signal Processing, 32(2):236–243, 1984.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recog- nition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770â778, 2016.
Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. Proceedings of the 3rd International Conference on Learning Representations (ICLR), 2015.
Jason Lee, Kyunghyun Cho, and Thomas Hofmann. Fully character-level neural machine translation without explicit segmentation. arXiv preprint arXiv:1610.03017, 2016.
Soroush Mehri, Kundan Kumar, Ishaan Gulrajani, Rithesh Kumar, Shubham Jain, Jose Sotelo, Aaron Courville, and Yoshua Bengio. SampleRNN: An unconditional end-to-end neural audio generation model. arXiv preprint arXiv:1612.07837, 2016.
Jose Sotelo, Soroush Mehri, Kundan Kumar, João Felipe Santos, Kyle Kastner, Aaron Courville, and Yoshua Bengio. Char2Wav: End-to-end speech synthesis. In ICLR 2017 workshop submission, 2017.
Richard Sproat and Navdeep Jaitly. RNN approaches to text normalization: A challenge. arXiv preprint arXiv:1611.00068, 2016.
Rupesh Kumar Srivastava, Klaus Greff, and Jürgen Schmidhuber. Highway networks. arXiv preprint arXiv:1505.00387, 2015.
Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pp. 3104â3112, 2014.
Paul Taylor. Text-to-speech synthesis. Cambridge university press, 2009.
Lucas Theis, Aäron van den Oord, and Matthias Bethge. A note on the evaluation of generative models. arXiv preprint arXiv:1511.01844, 2015.
A¨aron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu. WaveNet: A generative model for raw audio. arXiv preprint arXiv:1609.03499, 2016.
Oriol Vinyals, Łukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey Hinton. Grammar as a foreign language. In Advances in Neural Information Processing Systems, pp. 2773–2781, 2015.
Wenfu Wang, Shuang Xu, and Bo Xu. First step towards end-to-end parametric TTS synthesis: Generating spectral parameters with neural attention. In Proceedings Interspeech, pp. 2243â2247, 2016.
Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. Googleâs neural machine trans- arXiv preprint lation system: Bridging the gap between human and machine translation. arXiv:1609.08144, 2016.
Heiga Zen, Keiichi Tokuda, and Alan W Black. Statistical parametric speech synthesis. Speech Communication, 51(11):1039â1064, 2009.
Heiga Zen, Yannis Agiomyrgiannakis, Niels Egberts, Fergus Henderson, and Przemysław Szczepaniak. Fast, compact, and high quality LSTM-RNN based statistical parametric speech synthesizers for mobile devices. Proceedings Interspeech, 2016.
10 | {
"id": "1502.03167"
} |
1703.10069 | Multiagent Bidirectionally-Coordinated Nets: Emergence of Human-level Coordination in Learning to Play StarCraft Combat Games | Many artificial intelligence (AI) applications often require multiple
intelligent agents to work in a collaborative effort. Efficient learning for
intra-agent communication and coordination is an indispensable step towards
general AI. In this paper, we take StarCraft combat game as a case study, where
the task is to coordinate multiple agents as a team to defeat their enemies. To
maintain a scalable yet effective communication protocol, we introduce a
Multiagent Bidirectionally-Coordinated Network (BiCNet ['bIknet]) with a
vectorised extension of actor-critic formulation. We show that BiCNet can
handle different types of combats with arbitrary numbers of AI agents for both
sides. Our analysis demonstrates that without any supervisions such as human
demonstrations or labelled data, BiCNet could learn various types of advanced
coordination strategies that have been commonly used by experienced game
players. In our experiments, we evaluate our approach against multiple
baselines under different scenarios; it shows state-of-the-art performance, and
possesses potential values for large-scale real-world applications. | http://arxiv.org/pdf/1703.10069 | Peng Peng, Ying Wen, Yaodong Yang, Quan Yuan, Zhenkun Tang, Haitao Long, Jun Wang | cs.AI, cs.LG | 10 pages, 10 figures. Previously as title: "Multiagent
Bidirectionally-Coordinated Nets for Learning to Play StarCraft Combat
Games", Mar 2017 | null | cs.AI | 20170329 | 20170914 |
arXiv:1703.10069v4 [cs.AI] 14 Sep 2017
# Multiagent Bidirectionally-Coordinated Nets
# Emergence of Human-level Coordination in Learning to Play StarCraft Combat Games*
# Peng Peng, Ying Wen, Yaodong Yang, Yuan Quan, Zhenkun Tang, Haitao Long, Jun Wang
University College London, Alibaba Group
# Abstract
Many artificial intelligence (AI) applications often require multiple intelligent agents to work in a collaborative effort. Efficient learning for intra-agent communication and coordination is an indispensable step towards general AI. In this paper, we take StarCraft combat game as a case study, where the task is to coordinate multiple agents as a team to defeat their enemies. To maintain a scalable yet effective communication protocol, we introduce a Multiagent Bidirectionally-Coordinated Network (BiCNet ['bIknet]) with a vectorised extension of actor-critic formulation. We show that BiCNet can handle different types of combats with arbitrary numbers of AI agents for both sides. Our analysis demonstrates that without any supervisions such as human demonstrations or labelled data, BiCNet could learn various types of advanced coordination strategies that have been commonly used by experienced game players. In our experiments, we evaluate our approach against multiple baselines under different scenarios; it shows state-of-the-art performance, and possesses potential values for large-scale real-world applications.
# Introduction
The last decade has witnessed massive progresses in the field of Artificial Intelligence (AI). With supervision from la- belled data, machines have, to some extent, exceeded human- level perception on visual recognitions and speech recogni- tions, while fed with feedback reward, single AI units (aka agents) defeat humans in various games including Atari video games (Mnih et al. 2015), Go game (Silver et al. 2016), and card game (Brown and Sandholm 2017).
Yet, true human intelligence embraces social and collective wisdom which lays an essential foundation for reaching the grand goal of Artificial General Intelligence (AGI) (Goertzel and Pennachin 2007). As demonstrated by crowd sourcing, aggregating efforts collectively from the public would solve the problem that otherwise is unthinkable by a single person. Even social animals like a brood of well-organised ants could accomplish challenging tasks such as hunting, building a kingdom, and even waging a war, although each ant by itself is weak and limited. Interestingly, in the coming era of algo- rithmic economy, AI agents with a certain rudimentary level of artificial collective intelligence start to emerge from mul- tiple domains. Typical examples include the trading robots
*Previously as title: "Multiagent Bidirectionally-Coordinated Nets for Learning to Play StarCraft Combat Games", Mar 2017.
gaming on the stock markets (Deboeck 1994), ad bidding agents competing with each other over online advertising exchanges (Wang, Zhang, and Yuan 2017), and e-commerce collaborative filtering recommenders predicting user inter- ests through the wisdom of the crowd (Schafer, Konstan, and Riedl 1999).
We thus believe a next grand challenge of AGI is to answer how multiple AI agents could learn human-level collaborations, or competitions, from their experiences with the environment where both their incentives and economic constraints co-exist. As deep reinforcement learning (DRL) flourishes (Mnih et al. 2015; Silver et al. 2016), researchers start to shed light on tackling multiagent problems (Schmidhuber 1996) with the enhanced learning capabilities, e.g., (Sukhbaatar, Fergus, and others 2016; Mordatch and Abbeel 2017).
In this paper, we leverage a real-time strategy game, StarCraft¹, as the use case to explore the learning of intelligent collaborative behaviours among multiple agents. Particularly, we focus on StarCraft micromanagement tasks (Synnaeve et al. 2016), where each player controls their own units (with different functions to collaborate) to destroy the opponent's army in combats under different terrain conditions. Such a game is considered one of the most difficult games for computers, with more possible states than the Go game (Synnaeve et al. 2016). The learning of this large-scale multiagent system faces a major challenge that the parameter space grows exponentially with the increasing number of agents involved. As such, the behaviours of the agents can become so sophisticated that any joint learner method (Sukhbaatar, Fergus, and others 2016) would be inefficient and unable to deal with the changing number of agents in the game.
We formulate multiagent learning for StarCraft combat tasks as a zero-sum Stochastic Game. Agents communicate through our proposed bidirectionally-coordinated net (BiCNet), while the learning is done using a multiagent actor-critic framework. In addition, we also introduce parameter sharing to solve the scalability issue. We observe that BiCNet can automatically learn various optimal strategies for coordinating agents, similar to what experienced human players would adopt in playing the StarCraft game, ranging from trivial move without collision to a basic tactic hit and run to sophisticated cover attack, and focus fire without overkill. We have conducted our experiments by testing over a set of combat tasks with different levels of difficulties. Our method
†The first two authors have equal contributions. Correspondence to Jun Wang, jun.wang@cs.ucl.ac.uk.
¹Trademark of Blizzard Entertainment™.
outperforms state-of-the-art methods and shows its potential usage in a wide range of multiagent tasks in real-world applications.

# Related Work

The studies on interaction and collaboration in multiagent settings have a long history (Littman 1994; Schmidhuber 1996). Although limited to toy examples in the beginning, reinforcement learning, as a means, has long been applied to multiagent systems in order to learn optimal collaboration policies. One of the key components in multiagent RL is to learn a communication protocol among agents. With deep learning, representative solutions include the differentiable inter-agent learning (DIAL) (Foerster et al. 2016) and the CommNet (Sukhbaatar, Fergus, and others 2016), both of which are end-to-end trainable by back-propagation.

DIAL (Foerster et al. 2016) was introduced in partially observable settings where messages passing between agents are allowed. The agent is also named as an independent learner. The idea of learning independent agents can also be found in (Lauer and Riedmiller 2000; Kapetanakis and Kudenko 2002; Lauer and Riedmiller 2004; Foerster et al. 2016). In DIAL, each agent consists of a recurrent neural network that outputs the individual agent's Q-value and a message to transfer at each time step. The generated messages are then transferred to other agents and used as their inputs in the next time step. The received messages are embedded with the agent's current observations and last action as the representation of the global information. Communication between independent agents is one way to mitigate the notorious non-stationarity issue in multiagent settings, as the gradients will at least flow among the agents; however, researchers are still looking for better solutions for complex environments such as StarCraft.

By contrast, CommNet (Sukhbaatar, Fergus, and others 2016) is designed for joint action learners in fully observable settings. Unlike DIAL, CommNet proposes a single network in the multiagent setting, passing the averaged message over the agent modules between layers. However, as the communication network is fully symmetric and embedded in the original network, it lacks the ability to handle heterogeneous agent types. Also, it is a single network for all agents, and therefore its scalability is unclear. In this paper, we solve these issues by creating a dedicated bi-directional communication channel using recurrent neural networks (Schuster and Paliwal 1997). As such, heterogeneous agents can be created with a different set of parameters and output actions. The bi-directional nature means that the communication is not entirely symmetric, and the different priority among agents would help solve any possible tie between multiple optimal joint actions (Busoniu, Babuska, and De Schutter 2008; Spaan et al. 2002).

Multiagent systems have been explored on specific StarCraft games. Google DeepMind released a game interface based on StarCraft II and claimed that it is hard to make significant progress on the full game even with the state-of-the-art RL algorithms (Vinyals et al. 2017). Usunier et al. presented a heuristic exploration technique for learning deterministic policies in micromanagement tasks. Both Synnaeve et al. and Usunier et al. focused on a greedy MDP approach, i.e., the action of an agent is dependent explicitly on the actions of other agents.
In our paper, the dependency of agents is rather modelled over hidden layers by making use of bidirectional RNN (Schuster and Paliwal 1997).

[Figure 1 diagram: (a) multiagent policy networks and (b) multiagent Q networks, with a bidirectional communication layer connecting the per-agent modules.]

Figure 1: Bidirectionally-Coordinated Net (BiCNet). As a means of communication, bi-directional recurrent networks have been used to connect each individual agent's policy and Q networks. The learning is done by multiagent deterministic actor-critic as derived in the text.

A significant benefit over the greedy solution is that, while keeping simple, the communication happens in the latent space so that high-level information can be passed between agents; meanwhile, the gradient updates from all the agents can be efficiently propagated through the entire network.

Recently, Foerster et al. has attempted to solve the non-stationarity problem in multiagent learning by improving the replay buffer, and tested the DIAL model in a way that all agents are fully decentralised. The COMA model (Foerster et al. 2017a) was then proposed to tackle the challenge of multiagent credit assignment; through the introduction of the counterfactual reward, the idea of training multiagent systems with a centralised critic and decentralised actors was further reinforced. At the same time, the framework of centralised learning and decentralised execution was also adopted by MADDPG (Lowe et al. 2017) in some simpler, non-StarCraft cases. By contrast, our BiCNet makes use of memory to form a communication channel among agents where the parameter space of communication is independent of the number of agents.

# Multiagent Bidirectionally-Coordinated Nets

# StarCraft Combat as Stochastic Games

The StarCraft combat games, a.k.a. the micromanagement tasks, refer to the low-level, short-term control of the army members during a combat against the enemy members. For each combat, the agents on one side are fully cooperative, and they compete with the opponents; therefore,
each combat can be considered as a zero-sum competitive game between $N$ agents and $M$ enemies. We formulate it as a zero-sum Stochastic Game (SG) (Owen 1995), i.e., a dynamic game in a multiple-state situation played by multiple agents. An SG can be described by a tuple $(\mathcal{S}, \{\mathcal{A}_i\}_{i=1}^{N}, \{\mathcal{B}_j\}_{j=1}^{M}, T, \{R_i\}_{i=1}^{N})$. Let $\mathcal{S}$ denote the state space of the current game, shared among all the agents. The initial state $s_1$ follows $s_1 \sim p_1(s)$. We define the action space of the controlled agent $i$ as $\mathcal{A}_i$, and the action space of the enemy $j$ as $\mathcal{B}_j$. $T: \mathcal{S} \times \mathcal{A}^N \times \mathcal{B}^M \to \mathcal{S}$ stands for the deterministic transition function of the environment, and $R_i: \mathcal{S} \times \mathcal{A}^N \times \mathcal{B}^M \to \mathbb{R}$ represents the reward function of each agent $i$ for $i \in [1, N]$.

In order to maintain a flexible framework that could allow an arbitrary number of agents, we consider that the agents, either the controlled or the enemies, share the same state space $\mathcal{S}$ of the current game; and within each camp, agents are homogeneous², thus having the same action spaces $\mathcal{A}$ and $\mathcal{B}$ respectively. That is, for each agent $i \in [1, N]$ and enemy $j \in [1, M]$, $\mathcal{A}_i = \mathcal{A}$ and $\mathcal{B}_j = \mathcal{B}$. As the discrete action space is intractably large, we consider continuous control outputs, e.g., attack angle and distance.
In defining the reward function, we introduce a time- variant global reward based on the difference of the heath level between two consecutive time steps:
$$r(s, \mathbf{a}, \mathbf{b}) = \frac{1}{M}\sum_{j=N+1}^{N+M} \Delta R_j^{t}(s, \mathbf{a}, \mathbf{b}) \;-\; \frac{1}{N}\sum_{i=1}^{N} \Delta R_i^{t}(s, \mathbf{a}, \mathbf{b}), \qquad (1)$$
where for simplicity we drop the subscript $t$ in the global reward $r(s, \mathbf{a}, \mathbf{b})$. For a given time step $t$ with state $s$, the controlled agents take actions $\mathbf{a} \in \mathcal{A}^N$, the opponents take actions $\mathbf{b} \in \mathcal{B}^M$, and $\Delta R_i^{t}(\cdot) = R_i^{t-1}(s, \mathbf{a}, \mathbf{b}) - R_i^{t}(s, \mathbf{a}, \mathbf{b})$ represents the reduced health level for agent $i$. Note that Eq. (1) is presented from the aspect of the controlled agents; the enemy's global reward is the exact opposite, making the sum of rewards from both camps equal zero. As the health level is non-increasing over time, Eq. (1) gives a positive reward at time step $t$ if the decrease of the enemies' health levels exceeds that of ours.
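A small sketch of how the global reward in Eq. (1) could be computed from per-unit health readings is given below; the health arrays are a hypothetical stand-in for the game state, not an actual StarCraft API.

```python
# Global reward of Eq. (1): mean health drop of the M enemies minus mean health drop
# of the N controlled agents between two consecutive time steps.
import numpy as np

def global_reward(prev_hp, curr_hp, n_agents):
    """prev_hp, curr_hp: length-(N+M) arrays ordered as [agents..., enemies...]."""
    drop = prev_hp - curr_hp                 # Delta R^t per unit, non-negative
    enemy_term = drop[n_agents:].mean()      # (1/M) sum over enemies
    agent_term = drop[:n_agents].mean()      # (1/N) sum over controlled agents
    return enemy_term - agent_term
```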
With the defined global reward $r(s, \mathbf{a}, \mathbf{b})$, the controlled agents jointly take actions $\mathbf{a}$ in state $s$ when the enemies take joint actions $\mathbf{b}$. The agents' objective is to learn a policy that maximises the expected sum of discounted rewards, i.e., $\mathbb{E}[\sum_{n=0}^{\infty} \lambda^{n} r_{t+n}]$, where $0 < \lambda < 1$ is the discount factor. Conversely, the enemies' joint policy is to minimise the expected sum. Correspondingly, we have the following Minimax game:
$$Q^{*}(s, \mathbf{a}, \mathbf{b}) = r(s, \mathbf{a}, \mathbf{b}) + \lambda \max_{\theta}\min_{\phi} Q^{*}\big(s', \mathbf{a}_\theta(s'), \mathbf{b}_\phi(s')\big), \qquad (2)$$
where $s' = s^{t+1}$ is determined by $T(s, \mathbf{a}, \mathbf{b})$. $Q^{*}(s, \mathbf{a}, \mathbf{b})$ is the optimal action-state value function, which follows the Bellman optimality equation. Here we propose to use a deterministic policy $\mathbf{a}_\theta: \mathcal{S} \to \mathcal{A}^N$ for the controlled agents and a deterministic policy (Silver et al. 2014) $\mathbf{b}_\phi: \mathcal{S} \to \mathcal{B}^M$
²With our framework, heterogeneous agents can also be trained using different parameters and action spaces.
of the enemies. In small-scale MARL problems, a common solution is to employ Minimax Q-learning (Littman 1994). However, minimax Q-learning is generally intractable to ap- ply in complex games. For simplicity, we consider the case that the policy of enemies is fixed, while leaving dedicated opponent modelling for future work. Then, SG defined in Eq. (2) effectively turns into an MDP problem (He et al. 2016):
$$Q^{*}(s, \mathbf{a}) = r(s, \mathbf{a}) + \lambda \max_{\mathbf{a}_\theta} Q^{*}\big(s', \mathbf{a}_\theta(s')\big), \qquad (3)$$
where we drop the notation $\mathbf{b}_\phi$ for brevity.
# Local, Individual Rewards
A potential drawback of only using the global reward in Eq. (1) and its resulting zero-sum game is that it ignores the fact that a team collaboration typically consists of local collaborations, and the reward function would normally include certain internal structure. Moreover, in practice, each agent tends to have its own objective which drives the collaboration. To model this, we extend the formulation in the previous section by replacing Eq. (1) with each agent's local reward, including the evaluation of its attribution to the other agents that it has been interacting with (either competing or collaborating), i.e.,
$$r_i(s, \mathbf{a}, \mathbf{b}) = \frac{1}{|\text{top-}K\text{-}u(i)|}\sum_{j \in \text{top-}K\text{-}u(i)} \Delta R_j^{t}(s, \mathbf{a}, \mathbf{b}) \;-\; \frac{1}{|\text{top-}K\text{-}e(i)|}\sum_{i' \in \text{top-}K\text{-}e(i)} \Delta R_{i'}^{t}(s, \mathbf{a}, \mathbf{b}), \qquad (4)$$
where each agent $i$ maintains top-$K$-$u(i)$ and top-$K$-$e(i)$, the top-$K$ lists of other agents and enemies that it is currently interacting with.
Qi(s,a) =ri(s,a) + AmaxQj(s',ag(sâ)). â )
# Communication w/ Bidirectional Backpropagation
Although Eq. (5) makes single-agent methods, such as deter- ministic policy gradient (Silver et al. 2014; Mnih et al. 2016), immediately applicable for learning individual actions, those approaches, however, lacks a principled mechanism to foster team-level collaboration. In this paper, we allow communica- tions between agents (right before taking individual actions) by proposing a bidirectionally-coordinated net (BiCNet).
Overall, BiCNet consists of a multiagent actor network and a multiagent critic network as illustrated in Fig.(1). Both of the policy network (actor) and the Q-network (critic) are based on the bi-directional RNN structure (Schuster and Pali- wal 1997). The policy network, which takes in a shared ob- servation together with a local view, returns the action for each individual agent. As the bi-directional recurrent struc- ture could serve not only as a communication channel but also as a local memory saver, each individual agent is able to maintain its own internal states, as well as to share the information with its collaborators.
For the learning over BiCNet, intuitively, we can think of computing the backward gradients by unfolding the network of length N (the number of controlled agents) and then ap- plying backpropagation through time (BPTT) (Werbos 1990).
The gradients pass to both the individual Q,; function and the policy function. They are aggregated from all the agents and their actions. In other words, the gradients from all agents rewards are first propagated to influence each of agents ac- tions, and the resulting gradients are further propagated back to updating the parameters.
To see this mathematically, we denote the objective of a single agent i by J; (0); that is to maximise its expected cumu- lative individual reward r; as J;(9) = Eswpr [ri(s, ao(s))], ag where pL, (s) is the discounted state distribution correspond- ing to the policy ag under the transition T, i.e., pZ,(s) := Js De A *pi(s)1 (8! = Tey, py, (S))4s 5 it can also be cho- sen as the stationary distribution of an ergodic MDP. So, we can write the objective of N agents denoted by J(@) as follows:
N J(8) =Eswor [D> ri(s,ae(s))]- 6) i=1
Next, we introduce a multiagent analogue to the deterministic policy gradient theorem. The proof, which we give in the Supplementary Material, follows a similar scheme to both (Silver et al. 2014) and (Sutton et al. 2000).
Theorem 1 (Multiagent Deterministic PG Theorem) Given N agents which are collectively represented in a policy parameterised with 0, the discounted state distribution pi, (s), and the objective function J(0) defined in Eq.(6), we have the policy gradient as follows:
VoJ(0) =
N N Es pF, (s) > > Voaj,(s) - Va;Q;*(s, aa(s)) i=l j=l (7)
where to ensure adequate exploration, we apply Ornstein- Uhlenbeck process to add noise on the output of actor net- work in each time step. Here we further consider the off- policy deterministic actor-critic algorithms (Lillicrap et al. 2015; Silver et al. 2014) to reduce the variance. In particular, we employ a critic function in Eq. (7) to estimate the action- value Q?? where off-policy explorations can be conducted. In training the critic, we use the sum of square loss and have the following gradient for the parametrised critic Q§(s, a), where ⬠is the parameter for the Q network:
N VeL(S) = Borys c)| Do(rl5sa0(s)) + AQE!-a0(s') i=1 ~Q§(s, av(s))) Vac (s.a0(8) . (8)
Note that the gradient is also aggregated from multiple agents as the policy network would do. With Eqs. (7) and Eqs. (8), we apply Stochastic Gradient Descent (SGD) to op- timise both the actor and the critic networks. The pseudocode of the over algorithm is given in the Supplementary Material.
BiCNet is markedly different from greedy MDP approach as the dependency of agents are embedded in the latent lay- ers, rather than directly on the actions. While simple, our
approach allow full dependency among agents because the gradients from all the actions in Eq.(7) are efficiently prop- agated through the entire networks. Yet, unlike CommNet (Sukhbaatar, Fergus, and others 2016), our communication is not fully symmetric, and we maintain certain social conven- tions and roles by fixing the order of the agents that join the RNN. This would help solving any possible tie between multi- ple optimal joint actions (Busoniu, Babuska, and De Schutter 2008; Spaan et al. 2002).
Across different agents, the parameters are shared so that the number of parameters is independent of the number of agents (analogous to the shared parameters across the time domain in vanilla RNN). Parameter sharing results in the compactness of the model which could speed up the learning process. Moreover, this could also enable the domain adap- tion where the network trained on the small team of of agents (typically three) effectively scales up to larger team of agents during the test under different combat scenarios.
# Experiments
# Experimental Setup
To understand the properties of our proposed BiCNet and its performance, we conducted the experiments on the Star- Craft combats with different settings . Following similar ex- periment set-up as Sukhbaatar, Fergus, and others, BiCNet controls a group of agents trying to defeat the enemy units controlled by the built-in AI.
The level of combat difficulties can be adjusted by vary- ing the unit types and the number of units in both sides. We measured the winning rates, and compared it with the state-of- the-art approaches. The comparative baselines consist of both the rule-based approaches, and deep reinforcement learning approaches. Our setting is summarised as follows where BiC- Net controls the former units and the built-in AI controls the latter. We categorize the settings into three types: 1) easy combats {3 Marines vs. 1 Super Zergling, and 3 Wraiths vs. 3 Mutalisks}; 2) hard combats {5 Marines vs. 5 Marines, 15 Marines vs. 16 Marines, 20 Marines vs. 30 Zerglings, 10 Marines vs. 13 Zerglings, and 15 Wraiths vs. 17 Wraiths.}; 3) heterogeneous combats { 2 Dropships and 2 Tanks vs. 1 Ultralisk }.
The rule-based approaches allow us to have a reference point that we could make sense of. Here we adopted three rule-based baselines: StarCraft built-in AI, Attack the Weakest, Attack the Closest.
For the deep reinforcement learning approaches, we con- sidered the following work as the baselines:
Independent controller (IND): We trained the model for single agent and control each agent individually in the com- bats. Note that there is no information sharing among differ- ent agents even though such method is easily adaptable to all kinds of multiagent combats.
Fully-connected (FC): We trained the model for all agents in a multiagent setting and control them collectively in the com- bats. The communication between agents are fully-connected. Note that it is needed to re-train a different model when the number of units at either side changes.
CommNet: CommNet (Sukhbaatar, Fergus, and others 2016) is a multiagent network designed to learning to communicate among multiple agents. To make a fair comparison, we im- plemented both the CommNet and the BiCNet on the same (state, action) spaces and follow the same training processes.
[Plots of winning rate and mean Q value over training and testing for batch sizes 16, 32, 64, and 128.]
Figure 2: The impact of batch_size in combat 2 Marines vs. 1 Super Zergling.
GMEZO: GreedyMDP with Episodic Zero-Order Optimisa- tion (GMEZO) (Usunier et al. 2016) was proposed particu- larly to solve StarCraft micromanagement tasks. Two novel ideas are introduced: conducting collaborations through a greedy update over MDP agents, as well as adding episodic noises in the parameter space for explorations. To focus on the comparison with these two ideas, we replaced our bi- directional formulation with the greedy MDP approach, and employed episodic zero-order optimisation with noise over the parameter space in the last layer of Q networks in our BiCNet. We keep the rest of the settings exactly the same.
BiCNet: In BiCNet, we defined the action space differently from Sukhbaatar, Fergus, and others. Specifically, the ac- tion space of each individual agent is represented as a 3- dimensional real vector, i.e., continuous action space. The first dimension corresponds to the probability of attack, ac- cording to which we sample a value from [0,1]. If the sampled value is 1, then the agent attacks; otherwise, the agent moves. The second and the third dimension correspond to the degree and the distance of where to attack. With the above three quantities, BiCNet can precisely order an agent to attack a certain location. Note that this is different from executing high-level commands such as âattack enemy_idâ, in other words, how to effectively output the damage is itself a form of intelligence.
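For concreteness, the 3-dimensional continuous action described above can be turned into a game command as in the sketch below; the coordinate convention, range scaling, and command tuple format are hypothetical illustrations, not the BWAPI/TorchCraft interface.

```python
# Interpreting BiCNet's 3-D action (attack probability, angle, distance).
import math, random

def to_command(unit_x, unit_y, action, max_range=8.0):
    p_attack, angle, dist = action                 # each assumed to lie in [0, 1]
    theta = 2.0 * math.pi * angle                  # degree of where to attack/move
    dx = dist * max_range * math.cos(theta)
    dy = dist * max_range * math.sin(theta)
    if random.random() < p_attack:                 # sample attack vs. move
        return ("attack", unit_x + dx, unit_y + dy)
    return ("move", unit_x + dx, unit_y + dy)
```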
# Parameter Tuning
In our training, Adam (Kingma and Ba 2014) is set as the optimiser with learning rate equal to 0.002 and the other arguments set by default values. We set the maximum steps of each episode as 800.
We study the impact of the batch size and the results are shown in Figure 2 in the â2 Marines vs. 1 Super Zerglingâ combat. The two metrics, the winning rate and the Q value, are given. We fine-tune the batch_size by selecting the best BiCNet model which are trained on 800 episodes (more than 700k steps) and then tested on 100 independent games. The model with batch_size 32 achieves both the highest winning rate and the highest mean Q-value after 600k training steps. We also observed that skip frame 2 gave the highest mean Q-value between 300k and 600k training steps. We fix this parameter with the learned optimal value in the remaining part of our test.
In Fig. 3, we also compare the convergence speed of pa- rameter learning by plotting the winning rate against the number of training episodes. It shows that the proposed BiC- Net model has a much quicker convergence than the two main StarCraft baselines.
# Performance Comparison
Table 1 compares our proposed BiCNet model against the baselines under multiple combat scenarios. In each scenario,
[Winning rate vs. number of training episodes for BiCNet, CommNet, and GMEZO.]
Figure 3: Learning curves in combat "10 Marines vs. 13 Zerglings".
Table 1: Performance comparison. M: Marine, Z: Zergling, W: Wraith.
Combat          | Rule Based                   | RL Based
                | Built-in  Weakest  Closest   | IND    FC     GMEZO  CommNet  BiCNet
20 M vs. 30 Z   | 1.00      .000     .870      | .940   .001   .880   1.00     1.00
5 M vs. 5 M     | .720      .900     .700      | .310   .080   .910   .950     .920
15 M vs. 16 M   | .610      .000     .670      | .590   .440   .630   .680     .710
10 M vs. 13 Z   | .550      .230     .410      | .522   .430   .570   .440     .640
15 W vs. 17 W   | .440      .000     .300      | .310   .460   .420   .470     .530
BiCNet is trained over 100k steps, and we measure the per- formance as the average winning rate on 100 test games. The winning rate of the built-in AI is also provided as an indicator of the level of difficulty of the combats.
As illustrated in Table 1, in 4/5 of the scenarios, BiCNet outperforms the other baseline models. In particular, when the number of agents goes beyond 10, where cohesive col- laborations are required, the margin of the performance gap between BiCNet and the second best starts to increase.
In the combat â5 M vs. 5 Mâ, where the key factor to win is to âfocus fireâ on the weak, the IND and the FC models have relatively poorer performance. We believe it is because both of the models do not come with an explicit collaboration mechanism between agents in the training stage; coordinating the attacks towards one single enemy is even challenging. On the contrary, GMEZO, CommNet, and BiCNet, which are explicitly or implicitly designed for multiagent collaboration, can grasp and master this simple strategy, thus enjoying bet- ter performances. Furthermore, it is worth mentioning that despite the second best performance on â5 Marines vs. 5 Marinesâ, our BiCNet only needs 10 combats before learn- ing the idea of âfocus fireâ, and achieves over 85% win rate, whereas CommNet needs more than 50 episodes to grasp the skill of âfocus fireâ with a much lower winning rate.
Note that the order of which side starts the first attack will influence the combat. This explains why in the combat â5 M vs. 5 Mâ, the built-in AI on the left (as the first to attack) has more advantages on the winning rate 0.720 over the built-in AI on the right, even though the number of marines at both sides is the same.
# How It Works
To further understand how BiCNet works, we conduct two more studies. We first examine whether a higher Q-value would represent a more optimal join actions among agents.
Hidden states have high Q value - 100 Hidden states have low Q value Q Value Figure 4: Visualisation for 3 Marines vs. 1 Super Zergling combat. Upper Left: State with high Q value; Lower Left: State with low Q value; Right: Visualisation of hidden layer outputs for each step using TSNE, coloured by Q values. We visualise the model outputs when the coordinated cover at- tack is learned in Figure 4. The values in the last hidden layer of the critic network over 10k steps are collected and then em- beded in 2-dimensional space using t-SNE algorithm (Maaten and Hinton 2008). We observe that the steps with high Q- values are aggregated in the same area in the embedding space. For example, Figure 4 Upper Left shows that the agents attack the enemy in far distance when the enemy can- not attack the agents, and in this status, the model predicts high Q values. By contrast, in Figure 4 Lower Left, the agents suffer the damages from the enemy when it closes, which leads to low Q-values. Our next aim is to examine whether there is any semantic meaning of the information exchanged among agents be- fore their actions. However, due to the high variability of the StarCraft game, so far we have not observed any con- crete meaning yet. We instead only focus on bidirectioinal communications by considering a simpler game, where the sophistications that are not related to communications are removed. Specifically, this simpler game consists of n agents, At each round, each agent observes a randomly generated number (sampled in range [â10, 10] under truncated Gaus- sian) as its input, and nothing else. The goal for each agent is to output the sum over the inputs that all the agents observed. Each agent receives reward based on the difference between the sum and their prediction (action output). In the setting of three agents guessing the sum with one Bi-RNN communication layer (the hidden state size is 1) followed by a MLP layer, Figure 5 displays the values that have been transferred among three agents. As shown, Agent 1 passes a high value to Agent 2 when it observes a high ob- servation number. When Agent 2 communicates with Agent 3, it tends to output an âadditiveâ value between its own and previously communicated agent, i.e., agent 1. In other words, the hidden state value is increasing when the sum of Agents 1 and 2âs observations goes high. Both senders have learned to make the other receiver obtain a helpful message in order to predict the target sum over all agentsâ observations. We further set the game with num. of agents n = 5, 10, or 20. Apart from the four baselines tested previously, we also implement a supervised MLP with 10 hidden nodes as additional (predicting the sum based on the inputs given to agents). The results are compared in Table 2. The metric is the absolute value of the difference between each agentâs action and target. We see our method significantly outperform others. The second best is CommNet. Possible explanation is âAgent 2 Observation Figure 5: Left: The hidden state value passed by Agent | to Agent 2 in three agent guessing number game; Middle: The hidden state value passed by Agent | and Agent 2 to Agent 3 in three agent guessing number game; Right: Colour bar. Table 2: Performance comparison in the guessing game with different agent numbers. Results are given as average laction_value â target_value| in 10,000 testing steps and its standard deviation; A smaller value means a better per- formance. SL-MLP is a supervised MLP as an additional baseline. 
t-test is conducted, and the significant ones (p-value < 0.05) compared to the second best are marked as *. gent Number SL-MLP IND CommNet GMEZO BiCNet 5 2.824238 13.92£12.0 0.57£0.4T 5.92£7.62. *0.5250.51 10 4.3143.67 15.32+13.90 1.18+0.90 9.21+8.22 *0.97+0.91 20 6.7145.31 19.67414.61 3.8843.03 13.65£11.74 *3.1242.93 that it takes an average as the message, and thus naturally fits the problem, while ours have to learn the additives implicitly. Emerged Human-level Coordination With adequate trainings from scratch, BiCNet would be able to discover several effective collaboration strategies. In this section, we conduct a qualitative analysis on the learned col- laboration policies from BiCNet. We refer the demonstration video to the Supplementary Material and the experimental configurations to Section Experiments. Coordinated moves without collision. We observe that, in the initial stages of learning, in Fig. 6 (a) and (b), the agents move in a rather uncoordinated way. In particular, when two agents are close to each other, one agent often unintentionally blocks the otherâs path. With the increasing rounds of train- ing (typically over 40k steps in near 50 episodes in the â3 (a) Early stage (b) Early stage (c) Well-trained (d) Well-trained of training of training Figure 6: Coordinated moves without collision in combat 3 Marines (ours) vs. 1 Super Zergling (enemy). The yellow line points out the direction each agent is going to move.
gent Number SL-MLP IND CommNet GMEZO BiCNet 5 2.824238 13.92£12.0 0.57£0.4T 5.92£7.62. *0.5250.51 10 4.3143.67 15.32+13.90 1.18+0.90 9.21+8.22 *0.97+0.91 20 6.7145.31 19.67414.61 3.8843.03 13.65£11.74 *3.1242.93
(a) time step 1 (b) time step 2 (c) time step 3 (d) time step 4 Figure 7: Hit and Run tactics in combat 3 Marines (ours) vs. I Zealot (enemy). (a) time step 1 (b) time step 2 (c) time step 3 (d) time step 4 Figure 8: Coordinated cover attacks in combat 4 Dragoons (ours) vs. 1 Ultralisks (enemy) Marines vs. 1 Super Zerglingâ combat setting), the number of collisions reduces dramatically. Finally, when the training be- comes stable, the coordinated moves emerge, as illustrated in Fig. 6 (c) and (d). Such coordinated moves become important in large-scale combats as shown in Fig. 9 (a) and (b). Hit and Run tactics. For human players, a common tactic of controlling agents in StarCraft combat is Hit and Run, i.e., moving the agents away if they are under attack, and fighting back again when agents stay safe. We find that BiCNet can rapidly grasp the tactic of Hit and Run, either in the case of single agent or multiple agents settings. We illustrate four consecutive movements of Hit and Run in Fig. 7. Despite the simplicity, Hit and Run serves as the basis for more advanced and sophisticated collaboration tactics. Coordinated cover attack. Cover attack is a high-level collaborative strategy that is often used on the real battlefield. The essence of cover attack is to let one agent draw fire or attentions from the enemies, meanwhile, other agents take advantage of this time period or distance gap to output more harms. The difficulty of conducting cover attack lies in how to arrange the sequential moves of multiple agents in a coor- dinated hit and run way. As shown in Figs. 8, BiCNet can master it well. Starting from Fig. 8(a), BiCNet controls the bottom two Dragoons to run away from the enemy Ultralisk, while the one in the upper-right corner immediately starts to attack the enemy Ultralisk to cover them up. As a response, the enemy starts to attack the top one in time step 2. The bottom two Dragoons fight back and form another cover-up. By continuously looping this strategy over, the team of Dra- goons guarantees consecutive attack outputs to the enemy while minimising the team-level damages (because the en- emy wastes time in targeting different Dragoons) until the enemy is killed. Focus fire without overkill. As the number of agents in- creases, how to efficiently allocate the attacking resources becomes important. Neither scattering over all enemies nor focusing on one enemy (wasting attacking fires is also called overkill) are desired. We observe that BiCNet learns to con- trol each agent to focus their fires on particular enemies, and (a) time step 1 (b) time step 2 (c) timestep 3 (d) time step 4 Figure 9: focus fireâ in combat /5 Marines (ours) vs. 16 Marines (enemy). (a) time step 1 (b) time step 2 Figure 10: Coordinated heterogeneous agents in combat 2 Dropships and 2 tanks vs. | Ultralisk. different agents tend to move to the sides to spread the fire and avoid overkill. An example could be found in Fig.(9) Collaborations between heterogeneous agents. In Star- Craft, there are tens of types of agent units, each with unique functionalities, action space, strength, and weakness. For combats with different types of units involved, we expect the agents to reach win-win situations through the collaborations. In fact, heterogeneous collaborations can be easily imple- mented in our framework by limiting the parameter sharing only to the same types of the units. In this paper, we study a simple case where two Dropships and two tanks collaborate to fight against an Ultralisk. 
A Dropship cannot attack, but it can carry at most two ground units in the air. As shown in Fig. 10, when the Ultralisk attacks one of the tanks, a Dropship escorts that tank to escape from the attack. At the same time, the other Dropship unloads its tank to the ground so it can attack the Ultralisk. On each side, the collaboration between the Dropship and the tank keeps iterating until the Ultralisk is destroyed.

# Conclusions

In this paper, we have introduced a new deep multiagent reinforcement learning approach. The action is learned by constructing a vectorised actor-critic framework, where each dimension corresponds to an agent. The coordination is done by bidirectional recurrent communication in the internal layers. Through end-to-end learning, our BiCNet is able to learn several effective coordination strategies. Our experiments have demonstrated its ability to collaborate and master diverse combats in StarCraft combat games. We have also shown five human-level coordination strategies that BiCNet can grasp from playing StarCraft combat games. Admittedly, quantifying the sophistication of the collaborations in games is challenging in general, and our analysis here is qualitative in nature.

As a next step, we plan to carry out experiments in which the machine competes with human players at different levels. We also plan to further investigate how the policies are communicated over the networks among agents in more complicated settings, and whether a specific language may have emerged in StarCraft (Lazaridou, Peysakhovich, and Baroni 2016; Mordatch and Abbeel 2017).
# References
[Brown and Sandholm 2017] Brown, N., and Sandholm, T. 2017. Safe and nested endgame solving for imperfect-information games. AAAI/IAAI.
[Busoniu, Babuska, and De Schutter 2008] Busoniu, L.; Babuska, R.; and De Schutter, B. 2008. A comprehensive survey of multiagent reinforcement learning. IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews 38(2):156.
[Deboeck 1994] Deboeck, G. 1994. Trading on the edge: neural, genetic, and fuzzy systems for chaotic financial markets, volume 39. John Wiley & Sons.
[Foerster et al. 2016] Foerster, J.; Assael, Y. M.; de Freitas, N.; and Whiteson, S. 2016. Learning to communicate with deep multi-agent reinforcement learning. In NIPS, 2137-2145.
[Foerster et al. 2017a] Foerster, J.; Farquhar, G.; Afouras, T.; Nardelli, N.; and Whiteson, S. 2017a. Counterfactual multi-agent policy gradients. arXiv preprint arXiv:1705.08926.
[Foerster et al. 2017b] Foerster, J.; Nardelli, N.; Farquhar, G.; Torr, P.; Kohli, P.; Whiteson, S.; et al. 2017b. Stabilising experience replay for deep multi-agent reinforcement learning. arXiv preprint arXiv:1702.08887.
[Goertzel and Pennachin 2007] Goertzel, B., and Pennachin, C. 2007. Artificial general intelligence, volume 2. Springer.
[He et al. 2016] He, H.; Boyd-Graber, J.; Kwok, K.; and Daumé III, H. 2016. Opponent modeling in deep reinforcement learning. In ICML, 1804-1813.
[Kapetanakis and Kudenko 2002] Kapetanakis, S., and Kudenko, D. 2002. Reinforcement learning of coordination in cooperative multi-agent systems. AAAI/IAAI 2002:326-331.
[Kingma and Ba 2014] Kingma, D., and Ba, J. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
[Lauer and Riedmiller 2000] Lauer, M., and Riedmiller, M. 2000. An algorithm for distributed reinforcement learning in cooperative multi-agent systems. In ICML.
[Lauer and Riedmiller 2004] Lauer, M., and Riedmiller, M. 2004. Reinforcement learning for stochastic cooperative multi-agent systems. In AAMAS.
[Lazaridou, Peysakhovich, and Baroni 2016] Lazaridou, A.; Peysakhovich, A.; and Baroni, M. 2016. Multi-agent cooperation and the emergence of (natural) language. arXiv preprint arXiv:1612.07182.
[Lillicrap et al. 2015] Lillicrap, T. P.; Hunt, J. J.; Pritzel, A.; Heess, N.; Erez, T.; Tassa, Y.; Silver, D.; and Wierstra, D. 2015. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971.
[Littman 1994] Littman, M. L. 1994. Markov games as a framework for multi-agent reinforcement learning. In ICML.
[Lowe et al. 2017] Lowe, R.; Wu, Y.; Tamar, A.; Harb, J.; Abbeel, P.; and Mordatch, I. 2017. Multi-agent actor-critic for mixed cooperative-competitive environments. arXiv preprint arXiv:1706.02275.
[Maaten and Hinton 2008] Maaten, L. v. d., and Hinton, G. 2008. Visualizing data using t-SNE. Journal of Machine Learning Research 9(Nov):2579-2605.
[Mnih et al. 2015] Mnih, V.; Kavukcuoglu, K.; Silver, D.; Rusu, A. A.; Veness, J.; Bellemare, M. G.; Graves, A.; Riedmiller, M.; Fidjeland, A. K.; Ostrovski, G.; et al. 2015. Human-level control through deep reinforcement learning. Nature 518(7540):529-533.
[Mnih et al. 2016] Mnih, V.; Badia, A. P.; Mirza, M.; Graves, A.; Lillicrap, T. P.; Harley, T.; Silver, D.; and Kavukcuoglu, K. 2016. Asynchronous methods for deep reinforcement learning. In International Conference on Machine Learning.
[Mordatch and Abbeel 2017] Mordatch, I., and Abbeel, P. 2017. Emergence of grounded compositional language in multi-agent populations. arXiv preprint arXiv:1703.04908.
[Owen 1995] Owen, G. 1995. Game theory. Academic Press.
[Schafer, Konstan, and Riedl 1999] Schafer, J. B.; Konstan, J.; and Riedl, J. 1999. Recommender systems in e-commerce. In ACM EC.
[Schmidhuber 1996] Schmidhuber, J. 1996. A general method for multi-agent reinforcement learning in unrestricted environments. In Adaptation, Coevolution and Learning in Multiagent Systems: Papers from the 1996 AAAI Spring Symposium, 84-87.
[Schuster and Paliwal 1997] Schuster, M., and Paliwal, K. K. 1997. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing 45(11):2673-2681.
[Silver et al. 2014] Silver, D.; Lever, G.; Heess, N.; Degris, T.; Wierstra, D.; and Riedmiller, M. 2014. Deterministic policy gradient algorithms. In ICML.
[Silver et al. 2016] Silver, D.; Huang, A.; Maddison, C. J.; Guez, A.; Sifre, L.; Van Den Driessche, G.; Schrittwieser, J.; Antonoglou, I.; Panneershelvam, V.; Lanctot, M.; et al. 2016. Mastering the game of Go with deep neural networks and tree search. Nature 529(7587):484-489.
[Spaan et al. 2002] Spaan, M. T.; Vlassis, N.; Groen, F. C.; et al. 2002. High level coordination of agents based on multiagent Markov decision processes with roles. In IROS, volume 2, 66-73.
[Sukhbaatar, Fergus, and others 2016] Sukhbaatar, S.; Fergus, R.; et al. 2016. Learning multiagent communication with backpropagation. In NIPS, 2244-2252.
[Sutton et al. 2000] Sutton, R. S.; McAllester, D. A.; Singh, S. P.; and Mansour, Y. 2000. Policy gradient methods for reinforcement learning with function approximation. In NIPS, 1057-1063.
[Synnaeve et al. 2016] Synnaeve, G.; Nardelli, N.; Auvolat, A.; Chintala, S.; Lacroix, T.; Lin, Z.; Richoux, F.; and Usunier, N. 2016. TorchCraft: a library for machine learning research on real-time strategy games. arXiv preprint arXiv:1611.00625.
[Usunier et al. 2016] Usunier, N.; Synnaeve, G.; Lin, Z.; and Chintala, S. 2016. Episodic exploration for deep deterministic policies: An application to StarCraft micromanagement tasks. arXiv preprint arXiv:1609.02993.
[Vinyals et al. 2017] Vinyals, O.; Ewalds, T.; Bartunov, S.; Georgiev, P.; Vezhnevets, A. S.; Yeo, M.; Makhzani, A.; Küttler, H.; Agapiou, J.; Schrittwieser, J.; et al. 2017. StarCraft II: A new challenge for reinforcement learning. arXiv preprint arXiv:1708.04782.
[Wang, Zhang, and Yuan 2017] Wang, J.; Zhang, W.; and Yuan, S. 2017. Display advertising with real-time bidding (RTB) and behavioural targeting. Foundations and Trends in Information Retrieval, Now Publishers.
[Werbos 1990] Werbos, P. J. 1990. Backpropagation through time: what it does and how to do it. Proceedings of the IEEE 78(10):1550-1560.
# Supplementary Material
# Proof of Theorem 1
Following the regularity conditions mentioned in (Silver et al. 2014), we know that the suprema of $\frac{\partial Q_i^{a_\theta}(s,a)}{\partial a}\big|_{a=a_\theta(s)}$ and $\frac{\partial a_{i,\theta}(s)}{\partial \theta}$ for each agent $i$ are bounded functions of $s$. Based on the regularity and the boundedness, we can use the Leibniz integral rule and Fubini's theorem, respectively. Note that as the policy $a_\theta$ and the transition function $T$ of the environment are both considered deterministic, the expectation is only taken over the initial state, which is different from the original deterministic policy gradient theorem. According to the definition of $Q_i^{a_\theta}(s,a)$ and our objective function in Eq.(6), we derive the multiagent deterministic policy gradient theorem, which mostly follows the line of (Sutton et al. 2000):

$$
\begin{aligned}
\frac{\partial J(a_\theta)}{\partial \theta}
&= \frac{\partial}{\partial\theta}\, \mathbb{E}_{s\sim p_1}\Big[\sum_{i=1}^{N} Q_i^{a_\theta}\big(s, a_\theta(s)\big)\Big] && (9)\\
&= \int_S p_1(s) \sum_{i=1}^{N} \frac{\partial}{\partial\theta} Q_i^{a_\theta}\big(s, a_\theta(s)\big)\, ds && (10)\\
&= \int_S p_1(s) \sum_{i=1}^{N} \frac{\partial}{\partial\theta} \Big( r_i\big(s, a_\theta(s)\big) + \int_S \gamma\, \mathbb{1}\big(s' = T(s, a_\theta(s))\big)\, Q_i^{a_\theta}\big(s', a_\theta(s')\big)\, ds' \Big) ds && (11)\\
&= \int_S p_1(s) \sum_{i=1}^{N} \frac{\partial a_\theta(s)}{\partial\theta}\, \frac{\partial r_i(s, a)}{\partial a}\Big|_{a=a_\theta(s)} ds
   + \int_S p_1(s) \sum_{i=1}^{N} \int_S \gamma\, \frac{\partial a_\theta(s)}{\partial\theta}\, \frac{\partial \mathbb{1}\big(s' = T(s, a)\big)}{\partial a}\Big|_{a=a_\theta(s)} Q_i^{a_\theta}\big(s', a_\theta(s')\big)\, ds'\, ds \\
&\qquad + \int_S p_1(s) \sum_{i=1}^{N} \int_S \gamma\, \mathbb{1}\big(s' = T(s, a_\theta(s))\big)\, \frac{\partial}{\partial\theta} Q_i^{a_\theta}\big(s', a_\theta(s')\big)\, ds'\, ds && (12)\\
&= \int_S p_1(s) \sum_{i=1}^{N} \Bigg[ \frac{\partial a_\theta(s)}{\partial\theta}\, \frac{\partial Q_i^{a_\theta}(s, a)}{\partial a}\Big|_{a=a_\theta(s)}
   + \underbrace{\int_S \gamma\, \mathbb{1}\big(s' = T(s, a_\theta(s))\big)\, \frac{\partial}{\partial\theta} Q_i^{a_\theta}\big(s', a_\theta(s')\big)\, ds'}_{\text{iterate as Eq.(10) to Eq.(11)}} \Bigg] ds && (13)\\
&= \int_S \int_S \sum_{t=0}^{\infty} \gamma^{t}\, p_1(s)\, \mathbb{1}\big(s' = T^{t}(s; a_\theta)\big) \sum_{i=1}^{N} \frac{\partial a_\theta(s')}{\partial\theta}\, \frac{\partial Q_i^{a_\theta}(s', a)}{\partial a}\Big|_{a=a_\theta(s')} ds'\, ds && (14)\\
&= \int_S \underbrace{\Big( \int_S \sum_{t=0}^{\infty} \gamma^{t}\, p_1(s)\, \mathbb{1}\big(s' = T^{t}(s; a_\theta)\big)\, ds \Big)}_{\rho^{a_\theta}(s')} \sum_{i=1}^{N} \frac{\partial a_\theta(s')}{\partial\theta}\, \frac{\partial Q_i^{a_\theta}(s', a)}{\partial a}\Big|_{a=a_\theta(s')} ds' && (15)\\
&= \mathbb{E}_{s\sim\rho^{a_\theta}} \Bigg[ \sum_{i=1}^{N} \frac{\partial a_\theta(s)}{\partial\theta}\, \frac{\partial Q_i^{a_\theta}(s, a)}{\partial a}\Big|_{a=a_\theta(s)} \Bigg] && (16)\\
&= \mathbb{E}_{s\sim\rho^{a_\theta}} \Bigg[ \sum_{i=1}^{N} \sum_{j=1}^{N} \nabla_\theta\, a_{j,\theta}(s) \cdot \nabla_{a_j} Q_i^{a_\theta}\big(s, a_\theta(s)\big) \Bigg], && (17)
\end{aligned}
$$

where $T^{t}(s; a_\theta)$ denotes the state reached after $t$ transitions from $s$ under policy $a_\theta$, with $T^{0}(s; a_\theta) = s$. In Eq.(10), the Leibniz integral rule is used to exchange derivative and integral since $Q^{a_\theta}(s, a_\theta(s))$ is continuous. For Eq.(11), we used the definition of the Q-value. Then, we take derivatives of each term in Eq.(11) to get Eq.(12). Afterwards, we combine the first and the second term in Eq.(12) to get the first term in Eq.(13), while noticing that we can iterate Eq.(10) and Eq.(11) to expand the second term in Eq.(13). By summing up the iterated terms, we get Eq.(14), which implies Eq.(15) by using Fubini's theorem to exchange the order of integration. Writing Eq.(15) as an expectation, we derive Eq.(16). Finally, we get Eq.(17) and the proof is done.
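The final expression in Eq.(17) is exactly what reverse-mode automatic differentiation computes when the summed per-agent critic values are backpropagated through a shared actor. The following sketch (not from the paper; the tiny actor/critic modules, dimensions, and batch size are made up for illustration) shows this correspondence in PyTorch:

```python
import torch
import torch.nn as nn

N_AGENTS, STATE_DIM, ACT_DIM = 3, 8, 2

# Hypothetical stand-ins: a shared actor a_theta(s) emitting all agents' actions,
# and one critic per agent Q_i(s, a) scoring the joint action.
actor = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.Tanh(), nn.Linear(64, N_AGENTS * ACT_DIM))
critics = nn.ModuleList(
    [nn.Sequential(nn.Linear(STATE_DIM + N_AGENTS * ACT_DIM, 64), nn.Tanh(), nn.Linear(64, 1))
     for _ in range(N_AGENTS)]
)

states = torch.randn(32, STATE_DIM)      # minibatch of states, standing in for s ~ rho^{a_theta}
actions = actor(states)                  # a_theta(s): all agents' actions, differentiable in theta
q_sum = sum(c(torch.cat([states, actions], dim=1)).mean() for c in critics)

# Backpropagating sum_i Q_i(s, a_theta(s)) through the actor applies the chain rule over
# every action dimension j, i.e. sum_i sum_j grad_theta a_{j,theta}(s) * grad_{a_j} Q_i,
# which is the estimator of Theorem 1; one would ascend (add) this gradient to maximise J.
actor.zero_grad()
q_sum.backward()
actor_grad = [p.grad.clone() for p in actor.parameters()]
```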
# Pseudocode
# Algorithm 1 BiCNet algorithm
Initialise actor network and critic network with θ and ξ
Initialise target actor network and target critic network with θ' ← θ and ξ' ← ξ
Initialise replay buffer R
for episode = 1, E do
    initialise a random process U for action exploration
    receive initial observation state s^1
    for t = 1, T do
        for each agent i, select and execute action a_i^t = a_{i,θ}(s^t) + U_t
        receive reward [r_i^t]_{i=1}^N and observe new state s^{t+1}
        store transition {s^t, [a_i^t, r_i^t]_{i=1}^N, s^{t+1}} in R
        sample a random minibatch of M transitions {s_m, [a_{m,i}, r_{m,i}]_{i=1}^N, s'_m}_{m=1}^M from R
        compute target value for each agent in each transition using the Bi-RNN:
        for m = 1, M do
            Q̂_{m,i} = r_{m,i} + γ Q_i^{ξ'}(s'_m, a_{θ'}(s'_m)) for each agent i
        end for
        compute critic gradient estimation according to Eq.(8):
            Δξ = (1/M) Σ_{m=1}^M Σ_{i=1}^N ( Q̂_{m,i} - Q_i^{ξ}(s_m, a_θ(s_m)) ) ∇_ξ Q_i^{ξ}(s_m, a_θ(s_m))
        compute actor gradient estimation according to Eq.(7), replacing the Q-value with the critic estimation:
            Δθ = (1/M) Σ_{m=1}^M Σ_{i=1}^N Σ_{j=1}^N [ ∇_θ a_{j,θ}(s_m) · ∇_{a_j} Q_i^{ξ}(s_m, a_θ(s_m)) ]
        update the networks based on Adam using the above gradient estimators
        update the target networks:
            ξ' ← τξ + (1-τ)ξ',  θ' ← τθ + (1-τ)θ'
    end for
end for | {
"id": "1609.02993"
} |
1703.06585 | Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning | We introduce the first goal-driven training for visual question answering and
dialog agents. Specifically, we pose a cooperative 'image guessing' game
between two agents -- Qbot and Abot -- who communicate in natural language
dialog so that Qbot can select an unseen image from a lineup of images. We use
deep reinforcement learning (RL) to learn the policies of these agents
end-to-end -- from pixels to multi-agent multi-round dialog to game reward.
We demonstrate two experimental results.
First, as a 'sanity check' demonstration of pure RL (from scratch), we show
results on a synthetic world, where the agents communicate in ungrounded
vocabulary, i.e., symbols with no pre-specified meanings (X, Y, Z). We find
that two bots invent their own communication protocol and start using certain
symbols to ask/answer about certain visual attributes (shape/color/style).
Thus, we demonstrate the emergence of grounded language and communication among
'visual' dialog agents with no human supervision.
Second, we conduct large-scale real-image experiments on the VisDial dataset,
where we pretrain with supervised dialog data and show that the RL 'fine-tuned'
agents significantly outperform SL agents. Interestingly, the RL Qbot learns to
ask questions that Abot is good at, ultimately resulting in more informative
dialog and a better team. | http://arxiv.org/pdf/1703.06585 | Abhishek Das, Satwik Kottur, José M. F. Moura, Stefan Lee, Dhruv Batra | cs.CV, cs.AI, cs.CL, cs.LG | 11 pages, 4 figures, 2 tables, webpage: http://visualdialog.org/ | null | cs.CV | 20170320 | 20170321 |
# Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning
# Abhishek Das1*, Satwik Kottur2*, José M.F. Moura2, Stefan Lee3, Dhruv Batra1
# 1Georgia Institute of Technology, 2Carnegie Mellon University, 3Virginia Tech visualdialog.org
# Abstract
We introduce the ï¬rst goal-driven training for visual ques- tion answering and dialog agents. Speciï¬cally, we pose a cooperative âimage guessingâ game between two agents â Q-BOT and A-BOTâ who communicate in natural language dialog so that Q-BOT can select an unseen image from a lineup of images. We use deep reinforcement learning (RL) to learn the policies of these agents end-to-end â from pixels to multi-agent multi-round dialog to game reward. We demonstrate two experimental results. First, as a âsanity checkâ demonstration of pure RL (from scratch), we show results on a synthetic world, where the agents communicate in ungrounded vocabulary, i.e., sym- bols with no pre-speciï¬ed meanings (X, Y, Z). We ï¬nd that two bots invent their own communication protocol and start using certain symbols to ask/answer about certain vi- sual attributes (shape/color/style). Thus, we demonstrate the emergence of grounded language and communication among âvisualâ dialog agents with no human supervision. Second, we conduct large-scale real-image experiments on the VisDial dataset [4], where we pretrain with supervised dialog data and show that the RL âï¬ne-tunedâ agents signif- icantly outperform SL agents. Interestingly, the RL Q-BOT learns to ask questions that A-BOT is good at, ultimately resulting in more informative dialog and a better team.
[Figure 1 illustration: the Questioner and Answerer exchange a dialog grounded in the caption "Two zebra are walking around their pen at the zoo.", e.g., Q1: "Any people in the shot?" A1: "No, there aren't any." ... Q10: "Are they facing each other?" A10: "They aren't.", after which the questioner guesses: "I think we were talking about this image!"]
Figure 1: We propose a cooperative image guessing game between two agents, Q-BOT and A-BOT, who communicate through a natural language dialog so that Q-BOT can select a particular unseen image from a lineup. We model these agents as deep neural networks and train them end-to-end with reinforcement learning.
# 1. Introduction
The focus of this paper is visually-grounded conversational artiï¬cial intelligence (AI). Speciï¬cally, we would like to de- velop agents that can âseeâ (i.e., understand the contents of an image) and âcommunicateâ that understanding in natu- ral language (i.e., hold a dialog involving questions and an- swers about that image). We believe the next generation of intelligent systems will need to posses this ability to hold a dialog about visual content for a variety of applications: e.g., helping visually impaired users understand their sur- roundings [2] or social media content [36] (âWho is in the photo? Dave. What is he doing?â), enabling analysts to
sift through large quantities of surveillance data (âDid any- one enter the vault in the last month? Yes, there are 103 recorded instances. Did any of them pick something up?â), and enabling users to interact naturally with intelligent as- sistants (either embodied as a robot or not) (âDid I leave my phone on my desk? Yes, itâs here. Did I miss any calls?â). Despite rapid progress at the intersection of vision and lan- guage, in particular, in image/video captioning [3, 12, 32â 34, 37] and question answering [1, 21, 24, 30, 31], it is clear we are far from this grand goal of a visual dialog agent. Two recent works [4, 5] have proposed studying this task of visually-grounded dialog. Perhaps somewhat counter- intuitively, both these works treat dialog as a static super- vised learning problem, rather than an interactive agent learning problem that it naturally is. Speciï¬cally, both
*The ï¬rst two authors (AD, SK) contributed equally.
works [4, 5] ï¬rst collect a dataset of human-human dia- log, i.e., a sequence of question-answer pairs about an im- age (q1, a1), . . . , (qT , aT ). Next, a machine (a deep neu- ral network) is provided with the image I, the human dia- log recorded till round t â 1, (q1, a1), . . . , (qtâ1, atâ1), the follow-up question qt, and is supervised to generate the hu- man response at. Essentially, at each round t, the machine is artiï¬cially âinjectedâ into the conversation between two humans and asked to answer the question qt; but the ma- chineâs answer Ëat is thrown away, because at the next round t + 1, the machine is again provided with the âground-truthâ human-human dialog that includes the human response at and not the machine response Ëat. Thus, the machine is never allowed to steer the conversation because that would take the dialog out of the dataset, making it non-evaluable. In this paper, we generalize the task of Visual Dialog be- yond the necessary ï¬rst stage of supervised learning â by posing it as a cooperative âimage guessingâ game between two dialog agents. We use deep reinforcement learning (RL) to learn the policies of these agents end-to-end â from pixels to multi-agent multi-round dialog to the game reward. Our setup is illustrated in Fig. 1. We formulate a game be- tween a questioner bot (Q-BOT) and an answerer bot (A- BOT). Q-BOT is shown a 1-sentence description (a caption) of an unseen image, and is allowed to communicate in natu- ral language (discrete symbols) with the answering bot (A- BOT), who is shown the image. The objective of this fully- cooperative game is for Q-BOT to build a mental model of the unseen image purely from the natural language dialog, and then retrieve that image from a lineup of images. Notice that this is a challenging game. Q-BOT must ground the words mentioned in the provided caption (âTwo zebra are walking around their pen at the zoo.â), estimate which images from the provided pool contain this content (there will typically be many such images since captions describe only the salient entities), and ask follow-up questions (âAny people in the shot? Are there clouds in the sky? Are they facing each other?â) that help it identify the correct image. Analogously, A-BOT must build a mental model of what Q- BOT understands, and answer questions (âNo, there arenât any. I canât see the sky. They arenât.â) in a precise enough way to allow discrimination between similar images from a pool (that A-BOT does not have access to) while being concise enough to not confuse the imperfect Q-BOT. At every round of dialog, Q-BOT listens to the answer pro- vided by A-BOT, updates its beliefs, and makes a prediction about the visual representation of the unseen image (specif- ically, the fc7 vector of I), and receives a reward from the environment based on how close Q-BOTâs prediction is to the true fc7 representation of I. The goal of Q-BOT and A-BOT is to communicate to maximize this reward. One critical issue is that both the agents are imperfect and noisy â both âforgetâ things in the past, sometimes repeat them-
selves, may not stay consistent in their responses, A-BOT does not have access to an external knowledge-base so it cannot answer all questions, etc. Thus, to succeed at the task, they must learn to play to each otherâs strengths. An important question to ask is â why force the two agents to communicate in discrete symbols (English words) as op- posed to continuous vectors? The reason is twofold. First, discrete symbols and natural language is interpretable. By forcing the two agents to communicate and understand nat- ural language, we ensure that humans can not only inspect the conversation logs between two agents, but more im- portantly, communicate with them. After the two bots are trained, we can pair a human questioner with A-BOT to ac- complish the goals of visual dialog (aiding visually/situa- tionally impaired users), and pair a human answerer with Q-BOT to play a visual 20-questions game. The second reason to communicate in discrete symbols is to prevent cheating â if Q-BOT and A-BOT are allowed to exchange continuous vectors, then the trivial solution is for A-BOT to ignore Q-BOTâs question and directly convey the fc7 vec- tor for I, allowing Q-BOT to make a perfect prediction. In essence, discrete natural language is an interpretable low- dimensional âbottleneckâ layer between these two agents. Contributions. We introduce a novel goal-driven training for visual question answering and dialog agents. Despite signiï¬cant popular interest in VQA (over 200 works citing [1] since 2015), all previous approaches have been based on supervised learning, making this the ï¬rst instance of goal- driven training for visual question answering / dialog. We demonstrate two experimental results. First, as a âsanity checkâ demonstration of pure RL (from scratch), we show results on a diagnostic task where per- ception is perfect â a synthetic world with âimagesâ con- taining a single object deï¬ned by three attributes (shape/- color/style). In this synthetic world, for Q-BOT to identify an image, it must learn about these attributes. The two bots communicate via an ungrounded vocabulary, i.e., symbols with no pre-speciï¬ed human-interpretable meanings (âXâ, âYâ, â1â, â2â). When trained end-to-end with RL on this task, we ï¬nd that the two bots invent their own communica- tion protocol â Q-BOT starts using certain symbols to query for speciï¬c attributes (âXâ for color), and A-BOT starts re- sponding with speciï¬c symbols indicating the value of that attribute (â1â for red). Essentially, we demonstrate the auto- matic emergence of grounded language and communication among âvisualâ dialog agents with no human supervision! Second, we conduct large-scale real-image experiments on the VisDial dataset [4]. With imperfect perception on real images, discovering a human-interpretable language and communication strategy from scratch is both tremendously difï¬cult and an unnecessary re-invention of English. Thus, we pretrain with supervised dialog data in VisDial before âï¬ne tuningâ with RL; this alleviates a number of challenges
in making deep RL converge to something meaningful. We show that these RL ï¬ne-tuned bots signiï¬cantly outperform the supervised bots. Most interestingly, while the super- vised Q-BOT attempts to mimic how humans ask questions, the RL trained Q-BOT shifts strategies and asks questions that the A-BOT is better at answering, ultimately resulting in more informative dialog and a better team.
# 2. Related Work
Vision and Language. A number of problems at the inter- section of vision and language have recently gained promi- nence, e.g., image captioning [6, 7, 13, 34], and visual ques- tion answering (VQA) [1, 9, 20, 21, 24]. Most related to this paper are two recent works on visually-grounded dia- log [4, 5]. Das et al. [4] proposed the task of Visual Di- alog, collected the VisDial dataset by pairing two subjects on Amazon Mechanical Turk to chat about an image (with assigned roles of âQuestionerâ and âAnswererâ), and trained neural visual dialog answering models. De Vries et al. [5] extended the Referit game [14] to a âGuessWhatâ game, where one person asks questions about an image to guess which object has been âselectedâ, and the second person answers questions in âyesâ/ânoâ/NA (natural language an- swers are disallowed). One disadvantage of GuessWhat is that it requires bounding box annotations for objects; our image guessing game does not need any such annotations and thus an unlimited number of game plays may be sim- ulated. Moreover, as described in Sec. 1, both these works unnaturally treat dialog as a static supervised learning prob- lem. Although both datasets contain thousands of human dialogs, they still only represent an incredibly sparse sam- ple of the vast space of visually-grounded questions and an- swers. Training robust, visually-grounded dialog agents via supervised techniques is still a challenging task. In our work, we take inspiration from the AlphaGo [27] ap- proach of supervision from human-expert games and rein- forcement learning from self-play. Similarly, we perform supervised pretraining on human dialog data and ï¬ne-tune in an end-to-end goal-driven manner with deep RL. 20 Questions and Lewis Signaling Game. Our proposed image-guessing game is naturally the visual analog of the popular 20-questions game. More formally, it is a general- ization of the Lewis Signaling (LS) [17] game, widely stud- ied in economics and game theory. LS is a cooperative game between two players â a sender and a receiver. In the clas- sical setting, the world can be in a number of ï¬nite discrete states {1, 2, . . . , N }, which is known to the sender but not the receiver. The sender can send one of N discrete sym- bols/signals to the receiver, who upon receiving the signal must take one of N discrete actions. The game is perfectly cooperative, and one simple (though not unique) Nash Equi- librium is the âidentity mappingâ, where the sender encodes each world state with a bijective signal, and similarly the
receiver has a bijective mapping from a signal to an action. Our proposed âimage guessingâ game is a generalization of LS with Q-BOT being the receiver and A-BOT the sender. However, in our proposed game, the receiver (Q-BOT) is not passive. It actively solicits information by asking ques- tions. Moreover, the signaling process is not âsingle shotâ, but proceeds over multiple rounds of conversation. Text-only or Classical Dialog. Li et al. [18] have pro- posed using RL for training dialog systems. However, they hand-deï¬ne what a âgoodâ utterance/dialog looks like (non- repetition, coherence, continuity, etc.). In contrast, taking a cue from adversarial learning [10, 19], we set up a cooper- ative game between two agents, such that we do not need to hand-deï¬ne what a âgoodâ dialog looks like â a âgoodâ dialog is one that leads to a successful image-guessing play. Emergence of Language. There is a long history of work on language emergence in multi-agent systems [23]. The more recent resurgence has focused on deep RL [8, 11, 16, 22]. The high-level ideas of these concurrent works are sim- ilar to our synthetic experiments. For our large-scale real- image results, we do not want our bots to invent their own uninterpretable language and use pretraining on VisDial [4] to achieve âalignmentâ with English.
# 3. Cooperative Image Guessing Game: In Full Generality and a Specific Instantiation
Players and Roles. The game involves two collaborative agents, a questioner bot (Q-BOT) and an answerer bot (A-BOT), with an information asymmetry. A-BOT sees an image I, Q-BOT does not. Q-BOT is primed with a 1-sentence description c of the unseen image and asks "questions" (sequences of discrete symbols over a vocabulary V), which A-BOT answers with another sequence of symbols. The communication occurs for a fixed number of rounds.

Game Objective in General. At each round, in addition to communicating, Q-BOT must provide a "description" ŷ of the unknown image I based only on the dialog history, and both players receive a reward from the environment inversely proportional to the error in this description under some metric ℓ(ŷ, y^gt). We note that this is a general setting where the "description" ŷ can take on varying levels of specificity: from image embeddings (or fc7 vectors of I) to textual descriptions to pixel-level image generations.

Specific Instantiation. In our experiments, we focus on the setting where Q-BOT is tasked with estimating a vector embedding of the image I. Given some feature extractor (i.e., a pretrained CNN model, say VGG-16), no human annotation is required to produce the target "description" y^gt (simply forward-prop the image through the CNN). Reward/error can be measured by simple Euclidean distance, and any image may be used as the visual grounding for a dialog. Thus, an unlimited number of "game plays" may be simulated.
# 4. Reinforcement Learning for Dialog Agents
In this section, we formalize the training of two visual dialog agents (Q-BOT and A-BOT) with Reinforcement Learning (RL), describing formally the action, state, environment, reward, policy, and training procedure. We begin by noting that although there are two agents (Q-BOT, A-BOT), since the game is perfectly cooperative, we can without loss of generality view this as a single-agent RL setup where the single "meta-agent" comprises two "constituent agents" communicating via a natural language bottleneck layer.

Action. Both agents share a common action space consisting of all possible output sequences under a token vocabulary V. This action space is discrete and, in principle, infinitely large since arbitrary-length sequences q_t, a_t may be produced and the dialog may go on forever. In our synthetic experiment, the two agents are given different vocabularies to coax a certain behavior to emerge (details in Sec. 5). In our VisDial experiments, the two agents share a common vocabulary of English tokens. In addition, at each round t of the dialog, Q-BOT also predicts y_t, its current guess about the visual representation of the unseen image. This component of Q-BOT's action space is continuous.

State. Since there is information asymmetry (A-BOT can see the image I, Q-BOT cannot), each agent has its own observed state. For a dialog grounded in image I with caption c, the state of Q-BOT at round t is the caption and dialog history so far, s_t^Q = [c, q_1, a_1, ..., q_{t-1}, a_{t-1}], and the state of A-BOT also includes the image and the current question, s_t^A = [I, c, q_1, a_1, ..., q_{t-1}, a_{t-1}, q_t].

Policy. We model Q-BOT and A-BOT operating under stochastic policies π_Q(q_t | s_{t-1}^Q; θ_Q) and π_A(a_t | s_t^A; θ_A), such that questions and answers may be sampled from these policies conditioned on the dialog/state history. These policies will be learned by two separate deep neural networks parameterized by θ_Q and θ_A. In addition, Q-BOT includes a feature regression network f(·) that produces an image representation prediction after listening to the answer at round t, i.e., ŷ_t = f(s_{t-1}^Q, q_t, a_t; θ_f) = f(s_t^Q; θ_f). Thus, the goal of policy learning is to estimate the parameters θ_Q, θ_A, θ_f.

Environment and Reward. The environment is the image I upon which the dialog is grounded. Since this is a purely cooperative setting, both agents receive the same reward. Let ℓ(·, ·) be a distance metric on image representations (Euclidean distance in our experiments). At each round t, we define the reward for a state-action pair as:
$$r_t\big(\underbrace{s_t^Q}_{\text{state}},\ \underbrace{(q_t, a_t, y_t)}_{\text{action}}\big) \;=\; \underbrace{\ell\big(\hat{y}_{t-1}, y^{gt}\big)}_{\text{distance at } t-1} \;-\; \underbrace{\ell\big(\hat{y}_{t}, y^{gt}\big)}_{\text{distance at } t} \qquad (1)$$
i.e., the change in distance to the true representation be- fore and after a round of dialog. In this way, we consider a question-answer pair to be low quality (i.e., have a negative reward) if it leads the questioner to make a worse estimate of
the target image representation than if the dialog had ended. Note that the total reward summed over all time steps of a dialog is a function of only the initial and ï¬nal states due to the cancellation of intermediate terms, i.e.,
$$\sum_{t=1}^{T} r_t\big(s_t^Q, (q_t, a_t, y_t)\big) \;=\; \underbrace{\ell\big(\hat{y}_{0}, y^{gt}\big) - \ell\big(\hat{y}_{T}, y^{gt}\big)}_{\text{overall improvement due to dialog}} \qquad (2)$$
This is again intuitive: "How much do the feature predictions of Q-BOT improve due to the dialog?" The details of policy learning are described in Sec. 4.2, but before that, let us describe the inner working of the two agents.
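Before describing the agents, here is a minimal sketch of this per-round reward (not the paper's code; the feature dimension and the use of plain NumPy vectors with squared Euclidean distance as ℓ are assumptions for illustration):

```python
import numpy as np

def per_round_reward(y_gt, y_prev, y_curr):
    """r_t = l(y_prev, y_gt) - l(y_curr, y_gt): positive if the new guess moved closer."""
    dist = lambda a, b: np.sum((a - b) ** 2)  # squared Euclidean distance as l(., .)
    return dist(y_prev, y_gt) - dist(y_curr, y_gt)

# Toy usage: per-round rewards telescope to the overall improvement of Eq. (2).
y_gt = np.random.randn(4096)                           # ground-truth fc7 of the unseen image
guesses = [np.random.randn(4096) for _ in range(11)]   # y_hat_0 ... y_hat_10
rewards = [per_round_reward(y_gt, guesses[t - 1], guesses[t]) for t in range(1, 11)]
assert np.isclose(sum(rewards),
                  np.sum((guesses[0] - y_gt) ** 2) - np.sum((guesses[-1] - y_gt) ** 2))
```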
# 4.1. Policy Networks for Q-BOT and A-BOT
Fig. 2 shows an overview of our policy networks for Q-BOT and A-BOT and their interaction within a single round of dialog. Both the agent policies are modeled via Hierarchical Recurrent Encoder-Decoder neural networks, which have recently been proposed for dialog modeling [4, 25, 26]. Q-BOT consists of the following four components:
- Fact Encoder: Q-BOT asks a question q_t: "Are there any animals?" and receives an answer a_t: "Yes, there are two elephants.". Q-BOT treats this concatenated (q_t, a_t)-pair as a "fact" it now knows about the unseen image. The fact encoder is an LSTM whose final hidden state F^Q_t ∈ R^512 is used as an embedding of (q_t, a_t).

- State/History Encoder is an LSTM that takes the encoded fact F^Q_t at each time step to produce an encoding of the prior dialog including time t as S^Q_t ∈ R^512. Notice that this results in a two-level hierarchical encoding of the dialog: (q_t, a_t) → F^Q_t and (F^Q_1, ..., F^Q_t) → S^Q_t.

- Question Decoder is an LSTM that takes the state/history encoding from the previous round S^Q_{t-1} and generates question q_t by sequentially sampling words.

- Feature Regression Network f(·) is a single fully-connected layer that produces an image representation prediction ŷ_t from the current encoded state, ŷ_t = f(S^Q_t).

Each of these components and their relation to each other are shown on the left side of Fig. 2. We collectively refer to the parameters of the three LSTM models as θ_Q and those of the feature regression network as θ_f. A-BOT has a similar structure to Q-BOT with slight differences since it also models the image I via a CNN:

- Question Encoder: A-BOT receives a question q_t from Q-BOT and encodes it via an LSTM to get Q^A_t ∈ R^512.

- Fact Encoder: Similar to Q-BOT, A-BOT also encodes the (q_t, a_t)-pairs via an LSTM to get F^A_t ∈ R^512. The purpose of this encoder is for A-BOT to remember what it has already told Q-BOT and be able to understand references to entities already mentioned.
Figure 2: Policy networks for Q-BOT and A-BOT. At each round t of dialog, (1) Q-BOT generates a question q_t from its question decoder conditioned on its state encoding S^Q_{t-1}, (2) A-BOT encodes the question, updates its state encoding S^A_t, and generates an answer a_t, (3) both encode the completed exchange as F^Q_t and F^A_t, and (4) Q-BOT updates its state to S^Q_t, predicts an image representation ŷ_t, and receives a reward.

- State/History Encoder is an LSTM that takes as input at each round t the encoded question Q^A_t, the image features from VGG [28] y, and the previous fact encoding F^A_{t-1}, to produce a state encoding S^A_t, allowing the model to contextualize the current question w.r.t. the history while looking at the image to seek an answer.

- Answer Decoder is an LSTM that takes the state encoding S^A_t and generates a_t by sequentially sampling words.

Our code will be publicly available. To recap, a dialog round at time t consists of 1) Q-BOT generating a question q_t conditioned on its state encoding S^Q_{t-1}, 2) A-BOT encoding q_t, updating its state encoding S^A_t, and generating an answer a_t, 3) Q-BOT and A-BOT both encoding the completed exchange as F^Q_t and F^A_t, and 4) Q-BOT updating its state to S^Q_t based on F^Q_t and making an image representation prediction ŷ_t for the unseen image.

# 4.2. Joint Training with Policy Gradients

In order to train these agents, we use the REINFORCE [35] algorithm that updates policy parameters (θ_Q, θ_A, θ_f) in response to experienced rewards. In this section, we derive the expressions for the parameter gradients for our setup. Recall that our agents take actions, communication (q_t, a_t) and feature prediction ŷ_t, and our objective is to maximize the expected reward under the agents' policies, summed over the entire dialog:

$$\min_{\theta_A,\,\theta_Q,\,\theta_g}\; J(\theta_A, \theta_Q, \theta_g), \quad \text{where} \qquad (3)$$

$$J(\theta_A, \theta_Q, \theta_g) = \mathbb{E}_{\pi_Q, \pi_A}\Big[\sum_{t=1}^{T} r_t\big(s^Q_t, (q_t, a_t, y_t)\big)\Big] \qquad (4)$$

While the above is a natural objective, we find that considering the entire dialog as a single RL episode does not differentiate between individual good or bad exchanges within it. Thus, we update our model based on per-round rewards,

$$J(\theta_A, \theta_Q, \theta_g) = \mathbb{E}_{\pi_Q, \pi_A}\Big[ r_t\big(s^Q_t, (q_t, a_t, y_t)\big)\Big] \qquad (5)$$

Following the REINFORCE algorithm, we can write the gradient of this expectation as an expectation of a quantity related to the gradient. For θ_Q, we derive this explicitly:

$$
\begin{aligned}
\nabla_{\theta_Q} J &= \nabla_{\theta_Q}\, \mathbb{E}_{\pi_Q, \pi_A}\big[r_t(\cdot)\big] \qquad (r_t \text{ inputs hidden to avoid clutter})\\
&= \nabla_{\theta_Q} \Big[\sum_{q_t, a_t} \pi_Q\big(q_t \mid s^Q_{t-1}\big)\, \pi_A\big(a_t \mid s^A_t\big)\, r_t(\cdot)\Big]\\
&= \sum_{q_t, a_t} \pi_Q\big(q_t \mid s^Q_{t-1}\big)\, \nabla_{\theta_Q} \log \pi_Q\big(q_t \mid s^Q_{t-1}\big)\, \pi_A\big(a_t \mid s^A_t\big)\, r_t(\cdot)\\
&= \mathbb{E}_{\pi_Q, \pi_A}\big[r_t(\cdot)\, \nabla_{\theta_Q} \log \pi_Q\big(q_t \mid s^Q_{t-1}\big)\big] \qquad (6)
\end{aligned}
$$

Similarly, the gradient w.r.t. θ_A, i.e., ∇_{θ_A} J, can be derived as

$$\nabla_{\theta_A} J = \mathbb{E}_{\pi_Q, \pi_A}\big[r_t(\cdot)\, \nabla_{\theta_A} \log \pi_A\big(a_t \mid s^A_t\big)\big]. \qquad (7)$$
As is standard practice, we estimate these expectations with sample averages. Speciï¬cally, we sample a question from Q-BOT (by sequentially sampling words from the question decoder LSTM till a stop token is produced), sample its an- swer from A-BOT, compute the scalar reward for this round, multiply that scalar reward to gradient of log-probability of this exchange, propagate backward to compute gradients w.r.t. all parameters θQ, θA. This update has an intuitive interpretation â if a particular (qt, at) is informative (i.e., leads to positive reward), its probabilities will be pushed up (positive gradient). Conversely, a poor exchange leading to negative reward will be pushed down (negative gradient).
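A minimal sketch of this sample-based REINFORCE update (illustrative only; the single-token categorical policies, hidden size, and variable names are assumptions, not the paper's code):

```python
import torch
import torch.nn as nn

vocab_size, hid = 100, 32
q_policy = nn.Linear(hid, vocab_size)   # stand-in for Q-BOT's question decoder logits
a_policy = nn.Linear(hid, vocab_size)   # stand-in for A-BOT's answer decoder logits
opt = torch.optim.Adam(list(q_policy.parameters()) + list(a_policy.parameters()), lr=1e-3)

sQ, sA = torch.randn(1, hid), torch.randn(1, hid)        # state encodings (assumed given)
q_dist = torch.distributions.Categorical(logits=q_policy(sQ))
a_dist = torch.distributions.Categorical(logits=a_policy(sA))
q_tok, a_tok = q_dist.sample(), a_dist.sample()          # sample one question / answer token

r_t = torch.tensor(0.37)                                 # scalar per-round reward (Eq. 1)
# REINFORCE: scale the log-probabilities of the sampled exchange by the reward, so that
# informative exchanges are pushed up and poor ones pushed down.
loss = -r_t * (q_dist.log_prob(q_tok) + a_dist.log_prob(a_tok))
opt.zero_grad()
loss.sum().backward()
opt.step()
```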
Figure 3: Emergence of grounded dialog: (a) Each âimageâ has three attributes, and there are six tasks for Q-BOT (ordered pairs of attributes). (b) Both agents interact for two rounds followed by attribute pair prediction by Q-BOT. (c) Example 2-round dialog where grounding emerges: color, shape, style have been encoded as X, Y, Z respectively. (d) Improvement in reward while policy learning.
Finally, since the feature regression network f(·) forms a deterministic policy, its parameters θ_f receive "supervised" gradient updates for differentiable ℓ(·, ·).
# 5. Emergence of Grounded Dialog
To succeed at our image guessing game, Q-BOT and A-BOT need to accomplish a number of challenging sub-tasks â they must learn a common language (do you understand what I mean when I say âpersonâ?) and develop map- pings between symbols and image representations (what does âpersonâ look like?), i.e., A-BOT must learn to ground language in visual perception to answer questions and Q- BOT must learn to predict plausible image representations â all in an end-to-end manner from a distant reward func- tion. Before diving in to the full task on real images, we conduct a âsanity checkâ on a synthetic dataset with perfect perception to ask â is this even possible? Setup. As shown in Fig. 3, we consider a synthetic world with âimagesâ represented as a triplet of attributes â 4 shapes, 4 colors, 4 styles â for a total of 64 unique images. A-BOT has perfect perception and is given direct access to this representation for an image. Q-BOT is tasked with de- ducing two attributes of the image in a particular order â e.g., if the task is (shape, color), Q-BOT would need to out- put (square, purple) for a (purple, square, ï¬lled) image seen by A-BOT (see Fig. 3b). We form all 6 such tasks per image. Vocabulary. We conducted a series of pilot experiments and found the choice of the vocabulary size to be crucial for coaxing non-trivial ânon-cheatingâ behavior to emerge. For instance, we found that if the A-BOT vocabulary VA is large enough, say |VA| ⥠64 (#images), the optimal policy learnt simply ignores what Q-BOT asks and A-BOT conveys the entire image in a single token (e.g. token 1 â¡ (red, square, ï¬lled)). As with human communication, an impoverished vocabulary that cannot possibly encode the richness of the visual sensor is necessary for non-trivial dialog to emerge. To ensure at least 2 rounds of dialog, we restrict each agent to only produce a single symbol utterance per round from âminimalâ vocabularies VA = {1, 2, 3, 4} for A-BOT and VQ = {X, Y, Z} for Q-BOT. Since |VA|#rounds < #images,
a non-trivial dialog is necessary to succeed at the task.

Policy Learning. Since the action space is discrete and small, we instantiate Q-BOT and A-BOT as fully specified tables of Q-values (state, action, future reward estimate) and apply tabular Q-learning with Monte Carlo estimation over 10k episodes to learn the policies. Updates are done alternately, where one bot is frozen while the other is updated. During training, we use ε-greedy policies [29], ensuring an action probability of 0.6 for the greedy action and splitting the remaining probability uniformly across the other actions. At test time, we default to the greedy, deterministic policy obtained from these ε-greedy policies. The task requires outputting the correct attribute value pair based on the task and image. Since there are a total of 4 + 4 + 4 = 12 unique values across the 3 attributes, Q-BOT's final action selects one of 12 x 12 = 144 attribute-pairs. We use +1 and -1 as rewards for right and wrong predictions.

Results. Fig. 3d shows the reward achieved by the agents' policies vs. number of RL iterations (each with 10k episodes/dialogs). We can see that the two quickly learn the optimal policy. Fig. 3b,c show some example exchanges between the trained bots. We find that the two invent their own communication protocol: Q-BOT consistently uses specific symbols to query for specific attributes (X → color, Y → shape, Z → style), and A-BOT consistently responds with specific symbols to indicate the inquired attribute, e.g., if Q-BOT emits X (asks for color), A-BOT responds with 1 → purple, 2 → green, 3 → blue, 4 → red. Similar mappings exist for responses to other attributes. Essentially, we find the automatic emergence of grounded language and a communication protocol among "visual" dialog agents without any human supervision!
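As a sketch of the tabular setup described above (not the paper's code; the table keys, return estimate, and action count are illustrative assumptions), the ε-greedy action selection and a Monte Carlo update of one Q-table entry might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

def epsilon_greedy(q_row, greedy_prob=0.6):
    """Pick the greedy action with probability 0.6; spread the rest uniformly, as in the paper."""
    n = len(q_row)
    probs = np.full(n, (1.0 - greedy_prob) / (n - 1))
    probs[int(np.argmax(q_row))] = greedy_prob
    return rng.choice(n, p=probs)

def mc_update(Q, state, action, G):
    """Monte Carlo running-average update of a tabular Q entry after observing return G."""
    value, count = Q.get((state, action), [0.0, 0])
    count += 1
    Q[(state, action)] = [value + (G - value) / count, count]

Q = {}
a = epsilon_greedy(np.zeros(3))                  # e.g. Q-BOT choosing among V_Q = {X, Y, Z}
mc_update(Q, state=("task", "round1"), action=a, G=+1.0)
```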
# 6. Experiments
Our synthetic experiments in the previous section establish that when faced with a cooperative task where information must be exchanged, two agents with perfect perception are capable of developing a complex communication protocol. In general, with imperfect perception on real images, dis- covering human-interpretable language and communication
Image + Caption Human-Human dialog [4] 1 man 1 woman What are their genders? Yes Are they both adults? Looks like sunglasses Do they wear goggles? Man does Do they have hats on? No Are there any other people? Black What color is manâs hat? Is it snowing now? No What is woman wearing? Blue jacket and black pants Yes Are they smiling? Yes Do you see trees? SL-pretrained Q-BOT-A-BOT dialog 2 How many people? Male or female? I canât tell I canât tell, they are far away What are they wearing? I canât tell What color are skis? Yes Are they wearing goggles? Yes Are they wearing goggles? Yes Are they wearing goggles? Yes Are they wearing goggles? Yes Are they wearing hats? Yes Are they wearing goggles? RL-full-QAf Q-BOT-A-BOT dialog Are people male or female or both? What are they wearing? Is it snowing? Can you see any trees? Can you see any buildings in background? Can you see any trees? Can you see any buildings in background? Can you see any trees? Can you see any buildings in background? Does it look like they are in park? I think 1 is male, canât see other 1 ... They are all wearing snow pants and jackets It does not appear to be snowing at moment Yes, there are lot of trees in background No, I can not Yes, there are lot of trees in background No, I do not see any buildings Yes, there are lot of trees in background No , I do not see any buildings It does not appear to be A couple of people in the snow on skis. They look to be 30âs How old are men? I canât tell, its close up Are they in restaurant? Maybe Do men look they might be related? Are they each holding slice of pizza in 1 hand and giving thumbs up with other? 1 holding slice, other holding box... cheese What kind of pizza do they have? No Does box have pizza company name on it? No Are there any pictures or graphics on box? Slender Are men slender or heavy set? Yes Do either of men wear glasses? 1 1 or both? How old are men? Are they male or female? What are they wearing? What kind of pizza is it? Does it look good? Can you tell what kind of pizza it is? Are there any toppings on pizza? Are there any toppings on pizza? Are there any toppings on pizza? Are there any other toppings on pizza? They look like they are in their 20âs 1 male and 1 female T shirt and shorts Looks like pepperoni Yes it does Looks like pepperoni Pepperoni Pepperoni Looks like cheese, but I canât tell... No How old are men? Are they wearing chefâs hat? Do they have long hair? What kind of pizza is it? Does it look good? Are there any other people around? Are they inside or outside? Can you tell what time of day it is? Are there any other people around? What color are walls? They look like they are in their 20âs No No Looks like pepperoni Yes No Inside No No White Two men giving thumbs up holding large slices of pizza. How old is man? Is he wearing helmet? Is he wearing any protective gear? What color is his hair? What color is his shirt? Is he wearing shorts? What color are his shorts? Are there other people visible? What color are his shoes? Is this at skate park? In his teens No No Dirty blonde or dark brown Black with red letters Yes Denim No I canât see them Yes How old is man? What is he wearing? What color is his shirt? what color are his pants? what color is his shirt? What color are his shoes? What color are his shoes? What color are his shoes? What color are his shoes? What color are his shoes? I canât see his face, I donât know shorts and t shirt white black blue Black Black Black Black Black What color is skateboard? Is he wearing helmet? 
Is he wearing knee pads? Is it day or night? Is it sunny? Can you see any other people? Is it day or night? Is it sunny? What color is skateboard? Is he wearing helmet? It is hard to tell but I think itâs black Yes No, no knee pads It is daytime It is bright out, but I see no shadows, so I can not There are 2 other people in background It is daytime Yes, it is sunny It is hard to tell but I think itâs black Yes, he is wearing helmet A man riding a skateboard down the side of a ramp. What color is plane? Is there writing on plane? What color is ground? Any people? Can you see airport? What color is it? Is there sunshine? Any other planes? Any grass? Any trees? White and blue Yes it says lufthansa Grey tarmac No people visible Yes Grey No itâs kind of overcast No other planes No grass No tree What color is plane? Is it large or small? Any people? Any writing on plane? Any writing on plane? Any writing on plane? Any writing on plane? What does it say? What color is plane? Any writing on it? White Large No Yes Yes Yes Yes UNK White Yes What color is airplane? Can you tell what airline it is? What time of day is it? Can you see any people? Can you see any buildings? Is it day or night time? Can you see sky? Are there clouds in sky? Does it look like itâs going to rain? Does it look like itâs going to rain? White with red and blue stripes No, itâs too far away It is daytime No Yes, in background It is daytime Yes Yes, few Yes it does Yes it does An airplane sitting on an asphalt landing strip. What color is dog? What breed is dog? Is dog wearing collar? Can you tell what is cooking? Is oven stainless steel? Does it have see through glass? How many racks in oven? Can you see knobs? Does oven have wooden handle? Are beans in pot? Brown and white Look like brittany spaniel Not able to see Look like beans Yes Yes 2 No Not able to see Glass dish What kind of dog is it? What color is it? Is it large or small? What kind of food is it? Is it large or small? What color is it? Is it large or small? What color are walls? What kind of ï¬oor? What kind of ï¬oor? I canât tell White Large I canât tell Small White Large White Canât see ï¬oor Canât see ï¬oor What kind of dog is it? What color is it? Can you tell what kind of food it is? Are there people around? Is it inside or outside? Can you tell what time of day it is? What color is room? Do you see any windows? Do you see any windows? Do you see any windows? Terrier of some kind White and grey No No people Inside No I canât I canât see walls No No No A dog watching food cook in an oven. Are there more people in this photo or just man? No Just tennis player man in this photo? Yes Canât tell, itâs close up Is it outdoor court? White shirt and shorts What is man wearing? No Can you see net? Yes Can you see bottom of court? Yes Can you see ball in photo? Yes Is it close to manâs racquet? Yes Is man about to take swing? Right Is he left handed or right handed? How old is man? What color is his hair? What is he wearing? What color is his shirt? What color is his shirt? What color is his shirt? What color is his shirt? What color is his shirt? What color is his shirt? What color is his shirt? I canât see his face, I donât know Brown Shorts and t shirt White White White White White White White How old is man? What color is his hair? Is he wearing glasses? Is he wearing hat? Is he wearing glasses? Are there other people around? Can you see net? What color is court? Is it sunny? Do you see any trees? 
He looks to be in his 20âs His hair is brown No he is not No he is not No he is not No itâs just him No I canât Court is green Yes it is No I do not A man prepares to swing at the tennis ball. Is giraffe adult? Is grass at higher altitude than its feet? Is rock big? How is weather? What color is rock? Is grass green or dry? Does giraffe look healthy? Does giraffe look happy? Is giraffe laying down? Is giraffeâs mouth closed? I cannot tell, I only see face No Yes Sunny Tan color Green but looks little dry Yes It looks tired Yes Yes Is there only 1 giraffe? Are there any other animals? Are there any people? Are there any other animals? Is it day or night? Is it sunny? What time of day it is? Are there any other animals? Are there any trees? Are there any other animals? Yes No No No Day Yes I canât tell No No No Is there only 1 giraffe? Are there any other animals? Are there any people? Does this look like zoo? Is there fence? Is it day or night? Is it sunny? What time of day is it? Are there any trees? Can you see sky? Yes No No Canât tell No Day Yes Morning No No A giraffe is resting its head in the grass near a rock. Are there any people in picture? people Is cheese white? Are crackers white? Are there lots of greens? Was picture taken in kitchen? zoomed in How many crackers are there? Is there design on plate? Is there good lighting? Is plate on table? background Does cracker look delicious? No, there are no Yes, cheese is white No, cracker is brownish No, only 1 sprig I cannot tell, it is too There is 1 No, plate is plane white Yes, lighting is decent I cannot tell I can only see plate as Are there any people? What color is plate? Is plate on table? What color is plate? Is plate on table? Is there anything else on table? Is there anything else on table? Is there anything else on table? Is there anything else on table? Is there anything else on table? No White I canât tell White I canât tell No No No No No Is plate large or small? Is it on table? What kind of donut? What kind of chips? Any people? Is this inside or outside? What color is table? Any chairs? What color is table? Is it large or small? Large I canât tell Plain with sugar on it Potato chips No I canât tell, itâs close up White No I canât tell I canât see plate Absolutely A cracker with shredded cheese and leafy greens on a white plate.
He looks like late teens How old is this man? Medium Does he have short hair or long hair? Brown What color is his hair? No Is he wearing glasses? No Does he have facial hair? How is he dressed? Heâs white dress shirt with red tie and matching red vest over it looks like school uniform Is he larger man or is he skinny? Is this picture taken outside? Can you tell if itâs daytime? Can you tell what room he is standing in? hallway
Heâs skinny Inside Iâm not sure Heâs in
How old is man? What is he wearing? What color is his hair? What is he wearing? What color is his hair? What is he wearing? What color is his hair? Is it short or long? What color is his hair? Is it short or long?
He looks like heâs in his 20s T shirt and jeans Black T shirt and jeans Black T shirt and jeans Black Short Black Short
Where is man located? What is man wearing? Are there any other people in picture? What color is table? Is there anything else on table? What are other people doing? they Are there any windows? What color are walls? What time of day do you think it is? What is man wearing?
Looks like classroom of some sort Black t shirt and jeans Yes, quite few people in background Picture is black and white, but itâs wood table Not that I can see They are standing in front of him, but I donât know what
Not that I can see I canât see walls I canât tell Black t shirt and jeans
A man making the live long and prosper sign from star trek.
Table 1: Selected examples of Q-BOT-A-BOT interactions for SL-pretrained and RL-full-QAf. RL-full-QAf interactions are diverse, less prone to repetitive and safe exchanges ("can't tell", "don't know", etc.), and more image-discriminative.
strategy from scratch is both tremendously difficult and an unnecessary re-invention of English. We leverage the recently introduced VisDial dataset [4] that contains (as of the publicly released v0.5) human dialogs (10 rounds of question-answer pairs) on 68k images from the COCO dataset, for a total of 680k QA-pairs. Example dialogs from the VisDial dataset are shown in Tab. 1. Image Feature Regression. We consider a specific in- stantiation of the visual guessing game described in Sec. 3 â specifically at each round t, Q-BOT needs to regress to the vector embedding %, of image J corresponding to the fc7 (penultimate fully-connected layer) output from VGG- 16 [28]. The distance metric used in the reward computation . : to» 2 tn 2 is C2, ie. r4(-) = llyâ â Grills â lly â dello- Training Strategies. We found two training strategies to be crucial to ensure/improve the convergence of the RL frame- work described in Sec. 4, to produce any meaningful dialog exchanges, and to ground the agents in natural language. 1) Supervised Pretraining. We first train both agents in a supervised manner on the train split of VisDial [4] v0.5 under an MLE objective. Thus, conditioned on human di- alog history, Q-BOT is trained to generate the follow-up question by human1, A-BOT is trained to generate the re- sponse by human2, and the feature network f(-) is opti- mized to regress to y. The CNN in A-BOT is pretrained on ImageNet. This pretraining ensures that the agents can generally recognize some objects/scenes and emit English questions/answers. The space of possible (q;, a,) is ttemen- dously large and without pretraining most exchanges result in no information gain about the image. 2) Curriculum Learning. After supervised pretraining, we âsmoothlyâ transition the agents to RL training accord- ing to a curriculum. Specifically, we continue supervised training for the first (say 9) rounds of dialog and tran- sition to policy-gradient updates for the remaining 10 â rounds. We start at A = 9 and gradually anneal to 0. This curriculum ensures that the agent team does not suddenly diverge off policy, if one incorrect q or a is generated. Models are pretrained for 15 epochs on VisDial, after which we transition to policy-gradient training by annealing K down by 1 every epoch. All LSTMs are 2-layered with 512- d hidden states. We use Adam [15] with a learning rate of 10-3, and clamp gradients to [â5,5] to avoid explosion. All our code will be made publicly available. There is no explicit state-dependent baseline in our training as we ini- tialize from supervised pretraining and have zero-centered reward, which ensures a good proportion of random sam- ples are both positively and negatively reinforced. Model Ablations. We compare to a few natural ablations of our full model, denoted RL-ful1-QAf. First, we evaluate the purely supervised agents (denoted SL-pret rained), i.e., trained only on VisDial data (no RL). Comparison to these agents establishes how much RL helps over super-
vised learning. Second, we ï¬x one of Q-BOT or A-BOT to the supervised pretrained initialization and train the other agent (and the regression network f ) with RL; we label these as Frozen-Q or Frozen-A respectively. Compar- ing to these partially frozen agents tell us the importance of coordinated communication. Finally, we freeze the regres- sion network f to the supervised pretrained initialization while training Q-BOT and A-BOT with RL. This measures improvements from language adaptation alone. We quantify performance of these agents along two dimen- sions â how well they perform on the image guessing task (i.e. image retrieval) and how closely they emulate human dialogs (i.e. performance on VisDial dataset [4]). Evaluation: Guessing Game. To assess how well the agents have learned to cooperate at the image guessing task, we setup an image retrieval experiment based on the test split of VisDial v0.5 (â¼9.5k images), which were never seen by the agents in RL training. We present each im- age + an automatically generated caption [13] to the agents, and allow them to communicate over 10 rounds of dialog. After each round, Q-BOT predicts a feature representation Ëyt. We sort the entire test set in ascending distance to this prediction and compute the rank of the source image. Fig. 4a shows the mean percentile rank of the source im- age for our method and the baselines across the rounds (shaded region indicates standard error). A percentile rank of 95% means that the source image is closer to the predic- tion than 95% of the images in the set. Tab. 1 shows ex- ample exchanges between two humans (from VisDial), the SL-pretrained and the RL-full-QAf agents. We make a few observations:
• RL-full-QAf outperforms SL-pretrained and all other ablations (e.g., improving the percentile rank by over 3% at round 10), indicating that our training framework is indeed effective at training these agents for image guessing.
• All agents "forget"; RL agents forget less. One interesting trend we note in Fig. 4a is that all methods significantly improve from round 0 (caption-based retrieval) to rounds 2 or 3, but beyond that all methods with the exception of RL-full-QAf get worse, even though they have strictly more information. As shown in Tab. 1, agents will often get stuck in infinite repeating loops, but this is much rarer for RL agents. Moreover, even when RL agents repeat themselves, it is after longer gaps (2-5 rounds). We conjecture that the goal of helping a partner over multiple rounds encourages longer term memory retention.
• RL leads to more informative dialog. SL A-BOT tends to produce "safe" generic responses ("I don't know", "I can't see") but RL A-BOT responses are
| Model | MRR | R@5 | R@10 | Mean Rank |
|---|---|---|---|---|
| SL-pretrain | 0.436 | 53.41 | 60.09 | 21.83 |
| Frozen-Q | 0.428 | 53.12 | 60.19 | 21.52 |
| Frozen-f | 0.432 | 53.28 | 60.11 | 21.54 |
| RL-full-QAf | 0.428 | 53.08 | 60.22 | 21.54 |
| Frozen-Q-multi | 0.437 | 53.67 | 60.48 | 21.13 |
(a) Guessing Game Evaluation. (b) Visual Dialog Answerer Evaluation.
(c) Qualitative Retrieval Results.
Figure 4: a) Guessing Game Evaluation. Plot shows the rank in percentile (higher is better) of the "ground truth" image (shown to A-BOT) as retrieved using fc7 predictions of Q-BOT vs. rounds of dialog. Round 0 corresponds to image guessing based on the caption alone. We can see that the RL-full-QAf bots significantly outperform the SL-pretrained bots (and other ablations). Error bars show standard error of means. (c) shows qualitative results on this predicted fc7-based image retrieval. Left column shows true image and caption, right column shows dialog exchange, and a list of images sorted by their distance to the ground-truth image. The image predicted by Q-BOT is highlighted in red. We can see that the predicted image is often semantically quite similar. b) VisDial Evaluation. Performance of A-BOT on VisDial v0.5 test, under mean reciprocal rank (MRR), recall@k for k = {5, 10} and mean rank metrics. Higher is better for MRR and recall@k, while lower is better for mean rank. We see that our proposed Frozen-Q-multi outperforms all other models on VisDial metrics by 3% relative gain. This improvement is entirely "for free" since no additional annotations were required for RL.
much more detailed ("It is hard to tell but I think it's black"). These observations are consistent with recent literature in text-only dialog [18]. Our hypothesis for this improvement is that human responses are diverse and SL trained agents tend to "hedge their bets" and achieve a reasonable log-likelihood by being non-committal. In contrast, such "safe" responses do not help Q-BOT in picking the correct image, thus encouraging an informative RL A-BOT.
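As a concrete illustration of the percentile-rank metric used in the guessing-game evaluation above, the following sketch (our own NumPy code, using one reasonable definition of percentile rank) sorts the test pool by distance to Q-BOT's predicted feature vector:

```python
import numpy as np

def percentile_rank(y_pred, pool_feats, gt_index):
    """Percentile rank (higher is better) of the ground-truth image among the
    pool, when images are sorted by distance to the predicted fc7 vector."""
    dists = np.linalg.norm(pool_feats - y_pred, axis=1)
    rank = int(np.argsort(np.argsort(dists))[gt_index])  # 0 = closest image
    return 100.0 * (1.0 - rank / (len(pool_feats) - 1))

# Toy usage: a pool of 9,500 images with 4096-d features.
rng = np.random.default_rng(0)
pool = rng.normal(size=(9500, 4096))
pred = pool[42] + 0.1 * rng.normal(size=4096)
print(percentile_rank(pred, pool, gt_index=42))  # close to 100 for a good guess
```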
Evaluation: Emulating Human Dialogs. To quantify how well the agents emulate human dialog, we evaluate A-BOT on the retrieval metrics proposed by Das et al. [4].
Specifically, every question in VisDial is accompanied by 100 candidate responses. We use the log-likelihood assigned by the A-BOT answer decoder to sort these candidates and report the results in Tab. 4b. We find that despite the RL A-BOT's answers being more informative, the improvements on VisDial metrics are minor. We believe this is because while the answers are correct, they may not necessarily mimic human responses (which is what the answer retrieval metrics check for). In order to dig deeper, we train a variant of Frozen-Q with a multi-task objective – simultaneous (1) ground truth answer supervision and (2) image guessing reward – to keep A-BOT close to human-like responses. We use a weight of 1.0 for the SL loss and 10.0 for RL. This model, denoted Frozen-Q-multi, performs better than all other approaches on VisDial answering metrics, improving the best reported result on VisDial by 0.7 mean rank (a relative improvement of 3%). Note that this gain is entirely "free" since no additional annotations were required for RL.

Human Study. We conducted a human interpretability study to measure (1) whether humans can easily understand the Q-BOT–A-BOT dialog, and (2) how image-discriminative the interactions are. We show human subjects a pool of 16 images, the agent dialog (10 rounds), and ask humans to pick their top-5 guesses for the image the two agents are talking about. We find that the mean rank of the ground-truth image for SL-pretrained agent dialog is 3.70 vs. 2.73 for RL-full-QAf dialog. In terms of MRR, the comparison is 0.518 vs. 0.622 respectively. Thus, under both metrics, humans find it easier to guess the unseen image based on RL-full-QAf dialog exchanges, which shows that agents trained within our framework (1) successfully develop image-discriminative language, and (2) this language is interpretable; they do not deviate off English.
# 7. Conclusions
To summarize, we introduce a novel training framework for visually-grounded dialog agents by posing a cooperative "image guessing" game between two agents. We use deep reinforcement learning to learn the policies of these agents end-to-end – from pixels to multi-agent multi-round dialog to game reward. We demonstrate the power of this framework in a completely ungrounded synthetic world, where the agents communicate via symbols with no pre-specified meanings (X, Y, Z). We find that two bots invent their own communication protocol without any human supervision. We go on to instantiate this game on the VisDial [4] dataset, where we pretrain with supervised dialog data. We find that the RL "fine-tuned" agents not only significantly outperform SL agents, but learn to play to each other's strengths, all the while remaining interpretable to outside human observers.
Acknowledgements. We thank Devi Parikh for helpful discussions. This work was funded in part by the following
awards to DB – NSF CAREER award, ONR YIP award, ONR Grant N00014-14-1-0679, ARO YIP award, ICTAS Junior Faculty award, Google Faculty Research Award, Amazon Academic Research Award, AWS Cloud Credits for Research, and NVIDIA GPU donations. SK was supported by ONR Grant N00014-12-1-0903, and SL was partially supported by the Bradley Postdoctoral Fellowship. Views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the U.S. Government, or any sponsor.
# References
[1] S. Antol, A. Agrawal, J. Lu, M. Mitchell, D. Batra, C. L. Zitnick, and D. Parikh. VQA: Visual Question Answering. In ICCV, 2015. 1, 2, 3
[2] J. P. Bigham, C. Jayant, H. Ji, G. Little, A. Miller, R. C. Miller, R. Miller, A. Tatarowicz, B. White, S. White, and T. Yeh. VizWiz: Nearly Real-time Answers to Visual Ques- tions. In UIST, 2010. 1
[3] X. Chen and C. L. Zitnick. Mindâs Eye: A Recurrent Vi- sual Representation for Image Caption Generation. In CVPR, 2015. 1
[4] A. Das, S. Kottur, K. Gupta, A. Singh, D. Yadav, J. M. Moura, D. Parikh, and D. Batra. Visual Dialog. In CVPR, 2017. 1, 2, 3, 4, 7, 8, 9, 10
[5] H. de Vries, F. Strub, S. Chandar, O. Pietquin, H. Larochelle, and A. Courville. GuessWhat?! Visual object discovery through multi-modal dialogue. In CVPR, 2017. 1, 2, 3

[6] J. Donahue, L. A. Hendricks, S. Guadarrama, M. Rohrbach, S. Venugopalan, K. Saenko, and T. Darrell. Long-term Recurrent Convolutional Networks for Visual Recognition and Description. In CVPR, 2015. 3
[7] H. Fang, S. Gupta, F. N. Iandola, R. K. Srivastava, L. Deng, P. Dollár, J. Gao, X. He, M. Mitchell, J. C. Platt, C. L. Zit- nick, and G. Zweig. From Captions to Visual Concepts and Back. In CVPR, 2015. 3
[8] J. Foerster, Y. M. Assael, N. de Freitas, and S. Whiteson. Learning to communicate with deep multi-agent reinforce- ment learning. In Advances in Neural Information Process- ing Systems, 2016. 3
[9] H. Gao, J. Mao, J. Zhou, Z. Huang, L. Wang, and W. Xu. Are You Talking to a Machine? Dataset and Methods for Multilingual Image Question Answering. In NIPS, 2015. 3

[10] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative Adversarial Nets. In NIPS, 2014. 3
[11] S. Havrylov and I. Titov. Emergence of language with multi- agent games: Learning to communicate with sequences of symbols. In ICLR Workshop, 2017. 3
[12] J. Johnson, A. Karpathy, and L. Fei-Fei. DenseCap: Fully Convolutional Localization Networks for Dense Captioning. In CVPR, 2016. 1
[13] A. Karpathy and L. Fei-Fei. Deep visual-semantic align- In CVPR, 2015. ments for generating image descriptions. 3, 8
[14] S. Kazemzadeh, V. Ordonez, M. Matten, and T. L. Berg. ReferItGame: Referring to Objects in Photographs of Nat-
ural Scenes. In EMNLP, 2014. 3
[15] D. Kingma and J. Ba. Adam: A Method for Stochastic Opti- mization. In ICLR, 2015. 8
[16] A. Lazaridou, A. Peysakhovich, and M. Baroni. Multi-agent cooperation and the emergence of (natural) language. In ICLR, 2017. 3
[17] D. Lewis. Convention: A philosophical study. John Wiley & Sons, 2008. 3
[18] J. Li, W. Monroe, A. Ritter, M. Galley, J. Gao, and D. Juraf- sky. Deep Reinforcement Learning for Dialogue Generation. In EMNLP, 2016. 3, 9
[19] J. Li, W. Monroe, T. Shi, A. Ritter, and D. Jurafsky. Adver- sarial learning for neural dialogue generation. arXiv preprint arXiv:1701.06547, 2017. 3
[20] M. Malinowski and M. Fritz. A Multi-World Approach to Question Answering about Real-World Scenes based on Un- certain Input. In NIPS, 2014. 3
[21] M. Malinowski, M. Rohrbach, and M. Fritz. Ask your neu- rons: A neural-based approach to answering questions about images. In ICCV, 2015. 1, 3
[22] I. Mordatch and P. Abbeel. Emergence of grounded compo- sitional language in multi-agent populations. arXiv preprint arXiv:1703.04908, 2017. 3
[23] S. Nolï¬ and M. Mirolli. Evolution of Communication and Language in Embodied Agents. Springer Publishing Com- pany, Incorporated, 1st edition, 2009. 3
[24] M. Ren, R. Kiros, and R. Zemel. Exploring Models and Data for Image Question Answering. In NIPS, 2015. 1, 3
[25] I. V. Serban, A. Sordoni, Y. Bengio, A. Courville, and J. Pineau. Building End-To-End Dialogue Systems Using Generative Hierarchical Neural Network Models. In AAAI, 2016. 4
[26] I. V. Serban, A. Sordoni, R. Lowe, L. Charlin, J. Pineau, A. Courville, and Y. Bengio. A Hierarchical Latent Variable Encoder-Decoder Model for Generating Dialogues. arXiv preprint arXiv:1605.06069, 2016. 4
[27] D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, I. Antonoglou,
V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis. Mastering the game of go with deep neural networks and tree search. Nature, 2016. 3
[28] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015. 5, 8
[29] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998. 6
[30] M. Tapaswi, Y. Zhu, R. Stiefelhagen, A. Torralba, R. Urtasun, and S. Fidler. MovieQA: Understanding Stories in Movies through Question-Answering. In CVPR, 2016. 1

[31] K. Tu, M. Meng, M. W. Lee, T. E. Choe, and S. C. Zhu. Joint Video and Text Parsing for Understanding Events and Answering Queries. IEEE MultiMedia, 2014. 1
[32] S. Venugopalan, M. Rohrbach, J. Donahue, R. J. Mooney, T. Darrell, and K. Saenko. Sequence to Sequence - Video to Text. In ICCV, 2015. 1
[33] S. Venugopalan, H. Xu, J. Donahue, M. Rohrbach, R. J. Mooney, and K. Saenko. Translating Videos to Natural Lan- guage Using Deep Recurrent Neural Networks. In NAACL HLT, 2015. 1
[34] O. Vinyals, A. Toshev, S. Bengio, and D. Erhan. Show and tell: A neural image caption generator. In CVPR, 2015. 1, 3

[35] R. J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229–256, 1992. 5
[36] S. Wu, H. Pique, and J. Wieland. Using artificial intelligence to help blind people "see" Facebook. http://newsroom.fb.com/news/2016/04/using-artificial-intelligence-to-help-blind-people-see-facebook/, 1 2016.
[37] K. Xu, J. Ba, R. Kiros, K. Cho, A. C. Courville, R. Salakhut- dinov, R. S. Zemel, and Y. Bengio. Show, Attend and Tell: Neural Image Caption Generation with Visual Attention. In ICML, 2015. 1 | {
"id": "1605.06069"
} |
1703.05175 | Prototypical Networks for Few-shot Learning | We propose prototypical networks for the problem of few-shot classification,
where a classifier must generalize to new classes not seen in the training set,
given only a small number of examples of each new class. Prototypical networks
learn a metric space in which classification can be performed by computing
distances to prototype representations of each class. Compared to recent
approaches for few-shot learning, they reflect a simpler inductive bias that is
beneficial in this limited-data regime, and achieve excellent results. We
provide an analysis showing that some simple design decisions can yield
substantial improvements over recent approaches involving complicated
architectural choices and meta-learning. We further extend prototypical
networks to zero-shot learning and achieve state-of-the-art results on the
CU-Birds dataset. | http://arxiv.org/pdf/1703.05175 | Jake Snell, Kevin Swersky, Richard S. Zemel | cs.LG, stat.ML | null | null | cs.LG | 20170315 | 20170619 | arXiv:1703.05175v2 [cs.LG] 19 Jun 2017
# Prototypical Networks for Few-shot Learning
# Jake Snell University of Toronto*
Kevin Swersky Twitter
# Richard S. Zemel University of Toronto, Vector Institute
# Abstract
We propose prototypical networks for the problem of few-shot classiï¬cation, where a classiï¬er must generalize to new classes not seen in the training set, given only a small number of examples of each new class. Prototypical networks learn a metric space in which classiï¬cation can be performed by computing distances to prototype representations of each class. Compared to recent approaches for few-shot learning, they reï¬ect a simpler inductive bias that is beneï¬cial in this limited-data regime, and achieve excellent results. We provide an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning. We further extend prototypical networks to zero-shot learning and achieve state-of-the- art results on the CU-Birds dataset.
# 1 Introduction
Few-shot classiï¬cation [20, 16, 13] is a task in which a classiï¬er must be adapted to accommodate new classes not seen in training, given only a few examples of each of these classes. A naive approach, such as re-training the model on the new data, would severely overï¬t. While the problem is quite difï¬cult, it has been demonstrated that humans have the ability to perform even one-shot classiï¬cation, where only a single example of each new class is given, with a high degree of accuracy [16].
Two recent approaches have made signiï¬cant progress in few-shot learning. Vinyals et al. [29] proposed matching networks, which uses an attention mechanism over a learned embedding of the labeled set of examples (the support set) to predict classes for the unlabeled points (the query set). Matching networks can be interpreted as a weighted nearest-neighbor classiï¬er applied within an embedding space. Notably, this model utilizes sampled mini-batches called episodes during training, where each episode is designed to mimic the few-shot task by subsampling classes as well as data points. The use of episodes makes the training problem more faithful to the test environment and thereby improves generalization. Ravi and Larochelle [22] take the episodic training idea further and propose a meta-learning approach to few-shot learning. Their approach involves training an LSTM [9] to produce the updates to a classiï¬er, given an episode, such that it will generalize well to a test-set. Here, rather than training a single model over multiple episodes, the LSTM meta-learner learns to train a custom model for each episode.
We attack the problem of few-shot learning by addressing the key issue of overï¬tting. Since data is severely limited, we work under the assumption that a classiï¬er should have a very simple inductive bias. Our approach, prototypical networks, is based on the idea that there exists an embedding in which points cluster around a single prototype representation for each class. In order to do this, we learn a non-linear mapping of the input into an embedding space using a neural network and take a classâs prototype to be the mean of its support set in the embedding space. Classiï¬cation is then performed for an embedded query point by simply ï¬nding the nearest class prototype. We follow the same approach to tackle zero-shot learning; here each class comes with meta-data giving a high-level description of the class rather than a small number of labeled examples. We therefore learn an embedding of the meta-data into a shared space to serve as the prototype for each class.
*Initial work by ï¬rst author done while at Twitter.
(a) Few-shot (b) Zero-shot
Figure 1: Prototypical networks in the few-shot and zero-shot scenarios. Left: Few-shot prototypes $c_k$ are computed as the mean of embedded support examples for each class. Right: Zero-shot prototypes $c_k$ are produced by embedding class meta-data $v_k$. In either case, embedded query points are classified via a softmax over distances to class prototypes: $p_\phi(y = k \mid x) \propto \exp(-d(f_\phi(x), c_k))$.
Classiï¬cation is performed, as in the few-shot scenario, by ï¬nding the nearest class prototype for an embedded query point.
In this paper, we formulate prototypical networks for both the few-shot and zero-shot settings. We draw connections to matching networks in the one-shot setting, and analyze the underlying distance function used in the model. In particular, we relate prototypical networks to clustering [4] in order to justify the use of class means as prototypes when distances are computed with a Bregman divergence, such as squared Euclidean distance. We ï¬nd empirically that the choice of distance is vital, as Euclidean distance greatly outperforms the more commonly used cosine similarity. On several benchmark tasks, we achieve state-of-the-art performance. Prototypical networks are simpler and more efï¬cient than recent meta-learning algorithms, making them an appealing approach to few-shot and zero-shot learning.
# 2 Prototypical Networks
# 2.1 Notation
In few-shot classification we are given a small support set of $N$ labeled examples $S = \{(x_1, y_1), \ldots, (x_N, y_N)\}$ where each $x_i \in \mathbb{R}^D$ is the $D$-dimensional feature vector of an example and $y_i \in \{1, \ldots, K\}$ is the corresponding label. $S_k$ denotes the set of examples labeled with class $k$.
# 2.2 Model
Prototypical networks compute an $M$-dimensional representation $c_k \in \mathbb{R}^M$, or prototype, of each class through an embedding function $f_\phi : \mathbb{R}^D \to \mathbb{R}^M$ with learnable parameters $\phi$. Each prototype is the mean vector of the embedded support points belonging to its class:
$$c_k = \frac{1}{|S_k|} \sum_{(x_i, y_i) \in S_k} f_\phi(x_i) \qquad (1)$$
Given a distance function $d : \mathbb{R}^M \times \mathbb{R}^M \to [0, +\infty)$, prototypical networks produce a distribution over classes for a query point $x$ based on a softmax over distances to the prototypes in the embedding space:
$$p_\phi(y = k \mid x) = \frac{\exp(-d(f_\phi(x), c_k))}{\sum_{k'} \exp(-d(f_\phi(x), c_{k'}))} \qquad (2)$$
Learning proceeds by minimizing the negative log-probability $J(\phi) = -\log p_\phi(y = k \mid x)$ of the true class $k$ via SGD. Training episodes are formed by randomly selecting a subset of classes from the training set, then choosing a subset of examples within each class to act as the support set and a subset of the remainder to serve as query points. Pseudocode to compute the loss $J(\phi)$ for a training episode is provided in Algorithm 1.
Algorithm 1 Training episode loss computation for prototypical networks. $N$ is the number of examples in the training set, $K$ is the number of classes in the training set, $N_C \le K$ is the number of classes per episode, $N_S$ is the number of support examples per class, $N_Q$ is the number of query examples per class. RANDOMSAMPLE($S$, $N$) denotes a set of $N$ elements chosen uniformly at random from set $S$, without replacement.

Input: Training set $D = \{(x_1, y_1), \ldots, (x_N, y_N)\}$, where each $y_i \in \{1, \ldots, K\}$. $D_k$ denotes the subset of $D$ containing all elements $(x_i, y_i)$ such that $y_i = k$.
Output: The loss $J$ for a randomly generated training episode.

$V \leftarrow$ RANDOMSAMPLE($\{1, \ldots, K\}$, $N_C$)  ▷ Select class indices for episode
for $k$ in $\{1, \ldots, N_C\}$ do
    $S_k \leftarrow$ RANDOMSAMPLE($D_{V_k}$, $N_S$)  ▷ Select support examples
    $Q_k \leftarrow$ RANDOMSAMPLE($D_{V_k} \setminus S_k$, $N_Q$)  ▷ Select query examples
    $c_k \leftarrow \frac{1}{N_S} \sum_{(x_i, y_i) \in S_k} f_\phi(x_i)$  ▷ Compute prototype from support examples
end for
$J \leftarrow 0$  ▷ Initialize loss
for $k$ in $\{1, \ldots, N_C\}$ do
    for $(x, y)$ in $Q_k$ do
        $J \leftarrow J + \frac{1}{N_C N_Q} \Big[ d(f_\phi(x), c_k) + \log \sum_{k'} \exp(-d(f_\phi(x), c_{k'})) \Big]$  ▷ Update loss
    end for
end for
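A minimal NumPy sketch of the loss in Algorithm 1 with squared Euclidean distance (our own illustrative code, not the authors' implementation; embed, support, and query are our own names, with embed standing in for the learned embedding function $f_\phi$):

```python
import numpy as np

def episode_loss(embed, support, query):
    """support / query: dicts mapping class id -> array of raw examples,
    of shape (N_S, D) and (N_Q, D); embed maps (N, D) -> (N, M)."""
    classes = sorted(support)
    protos = np.stack([embed(support[k]).mean(axis=0) for k in classes])    # (N_C, M)
    loss, count = 0.0, 0
    for i, k in enumerate(classes):
        z = embed(query[k])                                                 # (N_Q, M)
        neg_d2 = -((z[:, None, :] - protos[None, :, :]) ** 2).sum(-1)       # (N_Q, N_C)
        neg_d2 -= neg_d2.max(axis=1, keepdims=True)                         # stabilize softmax
        log_p = neg_d2 - np.log(np.exp(neg_d2).sum(axis=1, keepdims=True))  # log-softmax
        loss -= log_p[:, i].sum()                                           # -log p(y = k | x)
        count += len(z)
    return loss / count
```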
# 2.3 Prototypical Networks as Mixture Density Estimation
For a particular class of distance functions, known as regular Bregman divergences [4], the prototypi- cal networks algorithm is equivalent to performing mixture density estimation on the support set with an exponential family density. A regular Bregman divergence dÏ is deï¬ned as:
$$d_\varphi(z, z') = \varphi(z) - \varphi(z') - (z - z')^T \nabla \varphi(z') \qquad (3)$$

where $\varphi$ is a differentiable, strictly convex function of the Legendre type. Examples of Bregman divergences include squared Euclidean distance $\|z - z'\|^2$ and Mahalanobis distance.
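As a quick check (not in the original text), taking $\varphi(z) = \|z\|^2$, so that $\nabla \varphi(z') = 2z'$, recovers squared Euclidean distance as a regular Bregman divergence:

$$d_\varphi(z, z') = \|z\|^2 - \|z'\|^2 - (z - z')^T (2z') = \|z\|^2 - 2 z^T z' + \|z'\|^2 = \|z - z'\|^2.$$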
Prototype computation can be viewed in terms of hard clustering on the support set, with one cluster per class and each support point assigned to its corresponding class cluster. It has been shown [4] for Bregman divergences that the cluster representative achieving minimal distance to its assigned points is the cluster mean. Thus the prototype computation in Equation (1) yields optimal cluster representatives given the support set labels when a Bregman divergence is used.
Moreover, any regular exponential family distribution $p_\psi(z \mid \theta)$ with parameters $\theta$ and cumulant function $\psi$ can be written in terms of a uniquely determined regular Bregman divergence [4]:

$$p_\psi(z \mid \theta) = \exp\{z^T \theta - \psi(\theta) - g_\psi(z)\} = \exp\{-d_\varphi(z, \mu(\theta)) - g_\varphi(z)\} \qquad (4)$$
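For example (our own worked case, using the standard spherical-Gaussian identities), the unit-variance spherical Gaussian corresponds to half the squared Euclidean distance: with $\varphi(z) = \tfrac{1}{2}\|z\|^2$, natural parameter $\theta = \mu$, and cumulant $\psi(\theta) = \tfrac{1}{2}\|\theta\|^2$,

$$p_\psi(z \mid \theta) = \exp\Big\{z^T\theta - \tfrac{1}{2}\|\theta\|^2 - \tfrac{1}{2}\|z\|^2 - \tfrac{M}{2}\log 2\pi\Big\} = \exp\Big\{-\tfrac{1}{2}\|z - \mu\|^2 - \tfrac{M}{2}\log 2\pi\Big\} = \mathcal{N}(z; \mu, I),$$

so that $d_\varphi(z, \mu) = \tfrac{1}{2}\|z - \mu\|^2$ and $g_\varphi(z) = \tfrac{M}{2}\log 2\pi$.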
Consider now a regular exponential family mixture model with parameters $\Gamma = \{\theta_k, \pi_k\}_{k=1}^{K}$:
$$p(z \mid \Gamma) = \sum_{k=1}^{K} \pi_k\, p_\psi(z \mid \theta_k) = \sum_{k=1}^{K} \pi_k \exp\big(-d_\varphi(z, \mu(\theta_k)) - g_\varphi(z)\big) \qquad (5)$$
Given $\Gamma$, inference of the cluster assignment $y$ for an unlabeled point $z$ becomes:
$$p(y = k \mid z) = \frac{\pi_k \exp(-d_\varphi(z, \mu(\theta_k)))}{\sum_{k'} \pi_{k'} \exp(-d_\varphi(z, \mu(\theta_{k'})))} \qquad (6)$$
For an equally-weighted mixture model with one cluster per class, cluster assignment inference (6) is equivalent to query class prediction (2) with $f_\phi(x) = z$ and $c_k = \mu(\theta_k)$. In this case, prototypical networks are effectively performing mixture density estimation with an exponential family distribution determined by $d_\varphi$. The choice of distance therefore specifies modeling assumptions about the class-conditional data distribution in the embedding space.
# 2.4 Reinterpretation as a Linear Model
A simple analysis is useful in gaining insight into the nature of the learned classifier. When we use Euclidean distance $d(z, z') = \|z - z'\|^2$, the model in Equation (2) is equivalent to a linear model with a particular parameterization [19]. To see this, expand the term in the exponent:

$$-\|f_\phi(x) - c_k\|^2 = -f_\phi(x)^T f_\phi(x) + 2 c_k^T f_\phi(x) - c_k^T c_k \qquad (7)$$
The ï¬rst term in Equation (7) is constant with respect to the class k, so it does not affect the softmax probabilities. We can write the remaining terms as a linear model as follows:
$$2 c_k^T f_\phi(x) - c_k^T c_k = w_k^T f_\phi(x) + b_k, \quad \text{where } w_k = 2 c_k \text{ and } b_k = -c_k^T c_k \qquad (8)$$
We focus primarily on squared Euclidean distance (corresponding to spherical Gaussian densities) in this work. Our results indicate that Euclidean distance is an effective choice despite the equivalence to a linear model. We hypothesize this is because all of the required non-linearity can be learned within the embedding function. Indeed, this is the approach that modern neural network classiï¬cation systems currently use, e.g., [14, 28].
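A small numerical check of this equivalence (our own illustrative code): the softmax over negative squared distances matches the softmax over the induced linear scores $w_k^T f_\phi(x) + b_k$, since the query-dependent first term in (7) is shared across classes:

```python
import numpy as np

rng = np.random.default_rng(0)
M, K = 16, 5
z = rng.normal(size=M)               # embedded query point f_phi(x)
c = rng.normal(size=(K, M))          # class prototypes c_1, ..., c_K

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

p_dist = softmax(-((z - c) ** 2).sum(axis=1))    # softmax over -||f_phi(x) - c_k||^2
w, b = 2 * c, -(c ** 2).sum(axis=1)              # w_k = 2 c_k,  b_k = -c_k^T c_k
p_linear = softmax(w @ z + b)                    # softmax over w_k^T f_phi(x) + b_k
assert np.allclose(p_dist, p_linear)
```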
# 2.5 Comparison to Matching Networks
Prototypical networks differ from matching networks in the few-shot case with equivalence in the one-shot scenario. Matching networks [29] produce a weighted nearest neighbor classiï¬er given the support set, while prototypical networks produce a linear classiï¬er when squared Euclidean distance is used. In the case of one-shot learning, ck = xk since there is only one support point per class, and matching networks and prototypical networks become equivalent.
A natural question is whether it makes sense to use multiple prototypes per class instead of just one. If the number of prototypes per class is ï¬xed and greater than 1, then this would require a partitioning scheme to further cluster the support points within a class. This has been proposed in Mensink et al. [19] and Rippel et al. [25]; however both methods require a separate partitioning phase that is decoupled from the weight updates, while our approach is simple to learn with ordinary gradient descent methods.
Vinyals et al. [29] propose a number of extensions, including decoupling the embedding functions of the support and query points, and using a second-level, fully-conditional embedding (FCE) that takes into account speciï¬c points in each episode. These could likewise be incorporated into prototypical networks, however they increase the number of learnable parameters, and FCE imposes an arbitrary ordering on the support set using a bi-directional LSTM. Instead, we show that it is possible to achieve the same level of performance using simple design choices, which we outline next.
# 2.6 Design Choices
Distance metric Vinyals et al. [29] and Ravi and Larochelle [22] apply matching networks using cosine distance. However for both prototypical and matching networks any distance is permissible, and we found that using squared Euclidean distance can greatly improve results for both. We conjecture this is primarily due to cosine distance not being a Bregman divergence, and thus the equivalence to mixture density estimation discussed in Section 2.3 does not hold.
Episode composition A straightforward way to construct episodes, used in Vinyals et al. [29] and Ravi and Larochelle [22], is to choose Nc classes and NS support points per class in order to match the expected situation at test-time. That is, if we expect at test-time to perform 5-way classiï¬cation and 1-shot learning, then training episodes could be comprised of Nc = 5, NS = 1. We have found, however, that it can be extremely beneï¬cial to train with a higher Nc, or âwayâ, than will be used at test-time. In our experiments, we tune the training Nc on a held-out validation set. Another consideration is whether to match NS, or âshotâ, at train and test-time. For prototypical networks, we found that it is usually best to train and test with the same âshotâ number.
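A sketch of episode construction under these design choices (our own code; labels is assumed to be an array with one integer class label per training example):

```python
import numpy as np

def sample_episode(labels, n_way, n_shot, n_query, rng):
    """Sample n_way classes, then disjoint support/query index sets per class."""
    classes = rng.choice(np.unique(labels), size=n_way, replace=False)
    support, query = {}, {}
    for k in classes:
        idx = rng.permutation(np.flatnonzero(labels == k))
        support[k] = idx[:n_shot]
        query[k] = idx[n_shot:n_shot + n_query]
    return support, query

# e.g. train with 20-way 5-shot episodes and 15 query points per class,
# even if the test-time task is only 5-way.
rng = np.random.default_rng(0)
labels = rng.integers(0, 64, size=64 * 600)   # toy 64-class training set
support_idx, query_idx = sample_episode(labels, n_way=20, n_shot=5, n_query=15, rng=rng)
```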
# 2.7 Zero-Shot Learning
Zero-shot learning differs from few-shot learning in that instead of being given a support set of training points, we are given a class meta-data vector vk for each class. These could be determined
Table 1: Few-shot classiï¬cation accuracies on Omniglot.
| Model | Dist. | Fine Tune | 5-way 1-shot | 5-way 5-shot | 20-way 1-shot | 20-way 5-shot |
|---|---|---|---|---|---|---|
| MATCHING NETWORKS [29] | Cosine | N | 98.1% | 98.9% | 93.8% | 98.5% |
| MATCHING NETWORKS [29] | Cosine | Y | 97.9% | 98.7% | 93.5% | 98.7% |
| NEURAL STATISTICIAN [6] | - | N | 98.1% | 99.5% | 93.2% | 98.1% |
| PROTOTYPICAL NETWORKS (OURS) | Euclid. | N | 98.8% | 99.7% | 96.0% | 98.9% |
in advance, or they could be learned from e.g., raw text [7]. Modifying prototypical networks to deal with the zero-shot case is straightforward: we simply define $c_k = g_\vartheta(v_k)$ to be a separate embedding of the meta-data vector. An illustration of the zero-shot procedure for prototypical networks as it relates to the few-shot procedure is shown in Figure 1. Since the meta-data vector and query point come from different input domains, we found it was helpful empirically to fix the prototype embedding $g$ to have unit length, however we do not constrain the query embedding $f$.
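A minimal sketch of the zero-shot scoring rule (our own code; embed_meta and embed_image stand in for the learned embeddings $g_\vartheta$ and $f_\phi$, and squared Euclidean distance is assumed):

```python
import numpy as np

def zero_shot_scores(x, meta_vectors, embed_image, embed_meta):
    """Class scores for a query x when classes are described only by meta-data.
    Prototypes are unit-normalized meta-data embeddings; the query embedding is
    left unconstrained. Returns negative squared distances (higher = better)."""
    protos = np.stack([embed_meta(v) for v in meta_vectors])      # (K, M)
    protos /= np.linalg.norm(protos, axis=1, keepdims=True)       # fix prototypes to unit length
    z = embed_image(x)                                            # (M,)
    return -((z - protos) ** 2).sum(axis=1)                       # (K,)
```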
# 3 Experiments
For few-shot learning, we performed experiments on Omniglot [16] and the miniImageNet version of ILSVRC-2012 [26] with the splits proposed by Ravi and Larochelle [22]. We perform zero-shot experiments on the 2011 version of the Caltech UCSD bird dataset (CUB-200 2011) [31].
# 3.1 Omniglot Few-shot Classiï¬cation
Omniglot [16] is a dataset of 1623 handwritten characters collected from 50 alphabets. There are 20 examples associated with each character, where each example is drawn by a different human subject. We follow the procedure of Vinyals et al. [29] by resizing the grayscale images to 28 × 28 and augmenting the character classes with rotations in multiples of 90 degrees. We use 1200 characters plus rotations for training (4,800 classes in total) and the remaining classes, including rotations, for test. Our embedding architecture mirrors that used by Vinyals et al. [29] and is composed of four convolutional blocks. Each block comprises a 64-filter 3 × 3 convolution, batch normalization layer [10], a ReLU nonlinearity and a 2 × 2 max-pooling layer. When applied to the 28 × 28 Omniglot images this architecture results in a 64-dimensional output space. We use the same encoder for embedding both support and query points. All of our models were trained via SGD with Adam [11]. We used an initial learning rate of 10^-3 and cut the learning rate in half every 2000 episodes. No regularization was used other than batch normalization.
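A hedged PyTorch sketch of this four-block embedding architecture (our re-implementation from the description above, not the authors' released code; padding=1 in each convolution is an assumption so that pooling alone reduces the spatial size):

```python
import torch.nn as nn

def conv_block(in_channels, out_channels):
    # 3x3 convolution -> batch norm -> ReLU -> 2x2 max-pooling, as described above
    return nn.Sequential(
        nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_channels),
        nn.ReLU(),
        nn.MaxPool2d(2),
    )

class ProtoNetEncoder(nn.Module):
    """Four conv blocks with 64 filters each; a 1x28x28 Omniglot image maps to a
    64-d embedding (spatial size 28 -> 14 -> 7 -> 3 -> 1)."""
    def __init__(self, in_channels=1, hidden=64):
        super().__init__()
        self.net = nn.Sequential(*[conv_block(in_channels if i == 0 else hidden, hidden)
                                   for i in range(4)])

    def forward(self, x):
        return self.net(x).flatten(start_dim=1)
```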
We trained prototypical networks using Euclidean distance in the 1-shot and 5-shot scenarios with training episodes containing 60 classes and 5 query points per class. We found that it is advantageous to match the training-shot with the test-shot, and to use more classes (higher âwayâ) per training episode rather than fewer. We compare against various baselines, including the neural statistician [6] and both the ï¬ne-tuned and non-ï¬ne-tuned versions of matching networks [29]. We computed classiï¬cation accuracy for our models averaged over 1000 randomly generated episodes from the test set. The results are shown in Table 1 and to our knowledge they represent the state-of-the-art on this dataset.
# 3.2 miniImageNet Few-shot Classiï¬cation
The miniImageNet dataset, originally proposed by Vinyals et al. [29], is derived from the larger ILSVRC-12 dataset [26]. The splits used by Vinyals et al. [29] consist of 60,000 color images of size 84 × 84 divided into 100 classes with 600 examples each. For our experiments, we use the splits introduced by Ravi and Larochelle [22] in order to directly compare with state-of-the-art algorithms for few-shot learning. Their splits use a different set of 100 classes, divided into 64 training, 16 validation, and 20 test classes. We follow their procedure by training on the 64 training classes and using the 16 validation classes for monitoring generalization performance only.
We use the same four-block embedding architecture as in our Omniglot experiments, though here it results in a 1600-dimensional output space due to the increased size of the images. We also
Table 2: Few-shot classification accuracies on miniImageNet. All accuracy results are averaged over 600 test episodes and are reported with 95% confidence intervals. *Results reported by [22].

| Model | Dist. | Fine Tune | 5-way 1-shot | 5-way 5-shot |
|---|---|---|---|---|
| BASELINE NEAREST NEIGHBORS* | Cosine | N | 28.86 ± 0.54% | 49.79 ± 0.79% |
| MATCHING NETWORKS [29]* | Cosine | N | 43.40 ± 0.78% | 51.09 ± 0.71% |
| MATCHING NETWORKS FCE [29]* | Cosine | N | 43.56 ± 0.84% | 55.31 ± 0.73% |
| META-LEARNER LSTM [22]* | - | N | 43.44 ± 0.77% | 60.60 ± 0.71% |
| PROTOTYPICAL NETWORKS (OURS) | Euclid. | N | 49.42 ± 0.78% | 68.20 ± 0.66% |
Figure 2: Comparison showing the effect of distance metric and number of classes per training episode on 5-way classiï¬cation accuracy for both matching and prototypical networks on miniImageNet. The x-axis indicates conï¬guration of the training episodes (way, distance, and shot), and the y-axis indicates 5-way test accuracy for the corresponding shot. Error bars indicate 95% conï¬dence intervals as computed over 600 test episodes. Note that matching networks and prototypical networks are identical in the 1-shot case.
use the same learning rate schedule as in our Omniglot experiments and train until validation loss stops improving. We train using 30-way episodes for 1-shot classiï¬cation and 20-way episodes for 5-shot classiï¬cation. We match train shot to test shot and each class contains 15 query points per episode. We compare to the baselines as reported by Ravi and Larochelle [22], which include a simple nearest neighbor approach on top of features learned by a classiï¬cation network on the 64 training classes. The other baselines are two non-ï¬ne-tuned variants of matching networks (both ordinary and FCE) and the Meta-Learner LSTM. As can be seen in Table 2, prototypical networks achieves state-of-the-art here by a wide margin.
We conducted further analysis, to determine the effect of distance metric and the number of training classes per episode on the performance of prototypical networks and matching networks. To make the methods comparable, we use our own implementation of matching networks that utilizes the same embedding architecture as our prototypical networks. In Figure 2 we compare cosine vs. Euclidean distance and 5-way vs. 20-way training episodes in the 1-shot and 5-shot scenarios, with 15 query points per class per episode. We note that 20-way achieves higher accuracy than 5-way and conjecture that the increased difï¬culty of 20-way classiï¬cation helps the network to generalize better, because it forces the model to make more ï¬ne-grained decisions in the embedding space. Also, using Euclidean distance improves performance substantially over cosine distance. This effect is even more pronounced for prototypical networks, in which computing the class prototype as the mean of embedded support points is more naturally suited to Euclidean distances since cosine distance is not a Bregman divergence.
# 3.3 CUB Zero-shot Classiï¬cation
In order to assess the suitability of our approach for zero-shot learning, we also run experiments on the Caltech-UCSD Birds (CUB) 200-2011 dataset [31]. The CUB dataset contains 11,788 images of 200 bird species. We closely follow the procedure of Reed et al. [23] in preparing the data. We use
Table 3: Zero-shot classiï¬cation accuracies on CUB-200.
| Model | Image Features | 50-way Acc. 0-shot |
|---|---|---|
| ALE [1] | Fisher | 26.9% |
| SJE [2] | AlexNet | 40.3% |
| SAMPLE CLUSTERING [17] | AlexNet | 44.3% |
| SJE [2] | GoogLeNet | 50.1% |
| DS-SJE [23] | GoogLeNet | 50.4% |
| DA-SJE [23] | GoogLeNet | 50.9% |
| PROTO. NETS (OURS) | GoogLeNet | 54.6% |
their splits to divide the classes into 100 training, 50 validation, and 50 test. For images we use 1,024- dimensional features extracted by applying GoogLeNet [28] to middle, upper left, upper right, lower left, and lower right crops of the original and horizontally-ï¬ipped image2. At test time we use only the middle crop of the original image. For class meta-data we use the 312-dimensional continuous attribute vectors provided with the CUB dataset. These attributes encode various characteristics of the bird species such as their color, shape, and feather patterns.
We learned a simple linear mapping on top of both the 1,024-dimensional image features and the 312-dimensional attribute vectors to produce a 1,024-dimensional output space. For this dataset we found it helpful to normalize the class prototypes (embedded attribute vectors) to be of unit length, since the attribute vectors come from a different domain than the images. Training episodes were constructed with 50 classes and 10 query images per class. The embeddings were optimized via SGD with Adam at a fixed learning rate of 10^-4 and weight decay of 10^-5. Early stopping on validation loss was used to determine the optimal number of epochs for retraining on the training plus validation set.
Table 3 shows that we achieve state-of-the-art results by a large margin when compared to methods utilizing attributes as class meta-data. We compare our method to other embedding approaches, such as ALE [1], SJE [2], and DS-SJE/DA-SJE [23]. We also compare to a recent clustering approach [17] which trains an SVM on a learned feature space obtained by ï¬ne-tuning AlexNet [14]. These zero-shot classiï¬cation results demonstrate that our approach is general enough to be applied even when the data points (images) are from a different domain relative to the classes (attributes).
# 4 Related Work
The literature on metric learning is vast [15, 5]; we summarize here the work most relevant to our proposed method. Neighborhood Components Analysis (NCA) [8] learns a Mahalanobis distance to maximize K-nearest-neighborâs (KNN) leave-one-out accuracy in the transformed space. Salakhutdi- nov and Hinton [27] extend NCA by using a neural network to perform the transformation. Large margin nearest neighbor (LMNN) classiï¬cation [30] also attempts to optimize KNN accuracy but does so using a hinge loss that encourages the local neighborhood of a point to contain other points with the same label. The DNet-KNN [21] is another margin-based method that improves upon LMNN by utilizing a neural network to perform the embedding instead of a simple linear transformation. Of these, our method is most similar to the non-linear extension of NCA [27] because we use a neural network to perform the embedding and we optimize a softmax based on Euclidean distances in the transformed space, as opposed to a margin loss. A key distinction between our approach and non-linear NCA is that we form a softmax directly over classes, rather than individual points, computed from distances to each classâs prototype representation. This allows each class to have a concise representation independent of the number of data points and obviates the need to store the entire support set to make predictions.
Our approach is also similar to the nearest class mean approach [19], where each class is represented by the mean of its examples. This approach was developed to rapidly incorporate new classes into a classiï¬er without retraining, however it relies on a linear embedding and was designed to handle
2Features downloaded from https://github.com/reedscot/cvpr2016.
the case where the novel classes come with a large number of examples. In contrast, our approach utilizes neural networks to non-linearly embed points and we couple this with episodic training in order to handle the few-shot scenario. Mensink et al. attempt to extend their approach to also perform non-linear classiï¬cation, but they do so by allowing classes to have multiple prototypes. They ï¬nd these prototypes in a pre-processing step by using k-means on the input space and then perform a multi-modal variant of their linear embedding. Prototypical networks, on the other hand, learn a non-linear embedding in an end-to-end manner with no such pre-processing, producing a non-linear classiï¬er that still only requires one prototype per class. In addition, our approach naturally generalizes to other distance functions, particularly Bregman divergences.
Another relevant few-shot learning method is the meta-learning approach proposed in Ravi and Larochelle [22]. The key insight here is that LSTM dynamics and gradient descent can be written in effectively the same way. An LSTM can then be trained to itself train a model from a given episode, with the performance goal of generalizing well on the query points. Matching networks and prototypical networks can also be seen as forms of meta-learning, in the sense that they produce simple classiï¬ers dynamically from new training episodes; however the core embeddings they rely on are ï¬xed after training. The FCE extension to matching nets involves a secondary embedding that depends on the support set. However, in the few-shot scenario the amount of data is so small that a simple inductive bias seems to work well, without the need to learn a custom embedding for each episode.
Prototypical networks are also related to the neural statistician [6] from the generative modeling literature, which extends the variational autoencoder [12, 24] to learn generative models of datasets rather than individual points. One component of the neural statistician is the âstatistic networkâ which summarizes a set of data points into a statistic vector. It does this by encoding each point within a dataset, taking a sample mean, and applying a post-processing network to obtain an approximate posterior over the statistic vector. Edwards and Storkey test their model for one-shot classiï¬cation on the Omniglot dataset by considering each character to be a separate dataset and making predictions based on the class whose approximate posterior over the statistic vector has minimal KL-divergence from the posterior inferred by the test point. Like the neural statistician, we also produce a summary statistic for each class. However, ours is a discriminative model, as beï¬ts our discriminative task of few-shot classiï¬cation.
With respect to zero-shot learning, the use of embedded meta-data in prototypical networks resembles the method of [3] in that both predict the weights of a linear classiï¬er. The DS-SJE and DA-SJE approach of [23] also learns deep multimodal embedding functions for images and class meta-data. Unlike ours, they learn using an empirical risk loss. Neither [3] nor [23] uses episodic training, which allows us to help speed up training and regularize the model.
# 5 Conclusion
We have proposed a simple method called prototypical networks for few-shot learning based on the idea that we can represent each class by the mean of its examples in a representation space learned by a neural network. We train these networks to speciï¬cally perform well in the few-shot setting by using episodic training. The approach is far simpler and more efï¬cient than recent meta-learning approaches, and produces state-of-the-art results even without sophisticated extensions developed for matching networks (although these can be applied to prototypical nets as well). We show how performance can be greatly improved by carefully considering the chosen distance metric, and by modifying the episodic learning procedure. We further demonstrate how to generalize prototypical networks to the zero-shot setting, and achieve state-of-the-art results on the CUB-200 dataset. A natural direction for future work is to utilize Bregman divergences other than squared Euclidean distance, corresponding to class-conditional distributions beyond spherical Gaussians. We conducted preliminary explorations of this, including learning a variance per dimension for each class. This did not lead to any empirical gains, suggesting that the embedding network has enough ï¬exibility on its own without requiring additional ï¬tted parameters per class. Overall, the simplicity and effectiveness of prototypical networks makes it a promising approach for few-shot learning.
# Acknowledgements
We would like to thank Marc Law, Sachin Ravi, Hugo Larochelle, Renjie Liao, and Oriol Vinyals for helpful discussions. This work was supported by the Samsung GRP project and the Canadian Institute for Advanced Research.
# References
[1] Zeynep Akata, Florent Perronnin, Zaid Harchaoui, and Cordelia Schmid. Label-embedding for attribute- based classiï¬cation. In Computer Vision and Pattern Recognition, pages 819â826, 2013.
[2] Zeynep Akata, Scott Reed, Daniel Walter, Honglak Lee, and Bernt Schiele. Evaluation of output embed- dings for ï¬ne-grained image classiï¬cation. In Computer Vision and Pattern Recognition, pages 2927â2936, 2015.
[3] Jimmy Ba, Kevin Swersky, Sanja Fidler, and Ruslan Salakhutdinov. Predicting deep zero-shot convolutional neural networks using textual descriptions. In International Conference on Computer Vision, pages 4247â 4255, 2015.
[4] Arindam Banerjee, Srujana Merugu, Inderjit S Dhillon, and Joydeep Ghosh. Clustering with bregman divergences. Journal of machine learning research, 6(Oct):1705â1749, 2005.
[5] Aurélien Bellet, Amaury Habrard, and Marc Sebban. A survey on metric learning for feature vectors and structured data. arXiv preprint arXiv:1306.6709, 2013.
[6] Harrison Edwards and Amos Storkey. Towards a neural statistician. International Conference on Learning Representations, 2017.
[7] Mohamed Elhoseiny, Babak Saleh, and Ahmed Elgammal. Write a classiï¬er: Zero-shot learning using purely textual descriptions. In International Conference on Computer Vision, pages 2584â2591, 2013.
[8] Jacob Goldberger, Geoffrey E. Hinton, Sam T. Roweis, and Ruslan Salakhutdinov. Neighbourhood components analysis. In Advances in Neural Information Processing Systems, pages 513â520, 2004.
[9] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735â1780, 1997.
[10] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
[11] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[12] Diederik P. Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
[13] Gregory Koch. Siamese neural networks for one-shot image recognition. Masterâs thesis, University of Toronto, 2015.
[14] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classiï¬cation with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097â1105, 2012.
[15] Brian Kulis. Metric learning: A survey. Foundations and Trends in Machine Learning, 5(4):287â364, 2012.
[16] Brenden M. Lake, Ruslan Salakhutdinov, Jason Gross, and Joshua B. Tenenbaum. One shot learning of simple visual concepts. In CogSci, 2011.
[17] Renjie Liao, Alexander Schwing, Richard Zemel, and Raquel Urtasun. Learning deep parsimonious representations. Advances in Neural Information Processing Systems, 2016.
[18] Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. Journal of Machine Learning Research, 9(Nov):2579â2605, 2008.
[19] Thomas Mensink, Jakob Verbeek, Florent Perronnin, and Gabriela Csurka. Distance-based image classiï¬- cation: Generalizing to new classes at near-zero cost. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(11):2624â2637, 2013.
[20] Erik G Miller, Nicholas E Matsakis, and Paul A Viola. Learning from one example through shared densities on transforms. In CVPR, volume 1, pages 464â471, 2000.
[21] Renqiang Min, David A Stanley, Zineng Yuan, Anthony Bonner, and Zhaolei Zhang. A deep non-linear feature mapping for large-margin knn classiï¬cation. In IEEE International Conference on Data Mining, pages 357â366, 2009.
[22] Sachin Ravi and Hugo Larochelle. Optimization as a model for few-shot learning. International Conference on Learning Representations, 2017.
[23] Scott Reed, Zeynep Akata, Bernt Schiele, and Honglak Lee. Learning deep representations of ï¬ne-grained visual descriptions. arXiv preprint arXiv:1605.05395, 2016.
[24] Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approxi- mate inference in deep generative models. arXiv preprint arXiv:1401.4082, 2014.
[25] Oren Rippel, Manohar Paluri, Piotr Dollar, and Lubomir Bourdev. Metric learning with adaptive density discrimination. International Conference on Learning Representations, 2016.
[26] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. Imagenet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211â252, 2015.
[27] Ruslan Salakhutdinov and Geoffrey E. Hinton. Learning a nonlinear embedding by preserving class neighbourhood structure. In AISTATS, pages 412â419, 2007.
[28] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1â9, 2015.
[29] Oriol Vinyals, Charles Blundell, Tim Lillicrap, Daan Wierstra, et al. Matching networks for one shot learning. In Advances in Neural Information Processing Systems, pages 3630â3638, 2016.
[30] Kilian Q Weinberger, John Blitzer, and Lawrence K Saul. Distance metric learning for large margin nearest neighbor classiï¬cation. In Advances in Neural Information Processing Systems, pages 1473â1480, 2005.
[31] P. Welinder, S. Branson, T. Mita, C. Wah, F. Schroff, S. Belongie, and P. Perona. Caltech-UCSD Birds 200. Technical Report CNS-TR-2010-001, California Institute of Technology, 2010.
# A Additional Omniglot Results
In Table 4 we show test classiï¬cation accuracy for prototypical networks using Euclidean distance trained with 5, 20, and 60 classes per episode.
Table 4: Additional classification accuracy results for prototypical networks on Omniglot. Configuration of training episodes is indicated by number of classes per episode ("way"), number of support points per class ("shot") and number of query points per class ("query"). Classification accuracy was averaged over 1,000 randomly generated episodes from the test set.
| Model | Dist. | Shot | Query | Way | 5-way 1-shot | 5-way 5-shot | 20-way 1-shot | 20-way 5-shot |
|---|---|---|---|---|---|---|---|---|
| PROTONETS | Euclid. | 1 | 15 | 5 | 97.4% | 99.3% | 92.0% | 97.8% |
| PROTONETS | Euclid. | 1 | 15 | 20 | 98.7% | 99.6% | 95.4% | 98.8% |
| PROTONETS | Euclid. | 1 | 5 | 60 | 98.8% | 99.7% | 96.0% | 99.0% |
| PROTONETS | Euclid. | 5 | 15 | 5 | 96.9% | 99.3% | 90.7% | 97.8% |
| PROTONETS | Euclid. | 5 | 15 | 20 | 98.1% | 99.6% | 94.1% | 98.7% |
| PROTONETS | Euclid. | 5 | 5 | 60 | 98.5% | 99.7% | 94.7% | 98.9% |
Figure 3 shows a sample t-SNE visualization [18] of the embeddings learned by prototypical networks. We visualize a subset of test characters from the same alphabet in order to gain better insight, despite the fact that classes in actual test episodes are likely to come from different alphabets. Even though the visualized characters are minor variations of each other, the network is able to cluster the hand-drawn characters closely around the class prototypes.
# B Additional miniImageNet Results
In Table 5 we show the full results for the comparison of training episode conï¬guration in Figure 2 of the main paper.
We also compared Euclidean-distance prototypical networks trained with a different number of classes per episode. Here we vary the classes per training episode from 5 up to 30 while keeping the number of query points per class ï¬xed at 15. The results are shown in Figure 4. Our ï¬ndings indicate that construction of training episodes is an important consideration in order to achieve good results for few-shot classiï¬cation. Table 6 contains the full results for this set of experiments.
Figure 3: A t-SNE visualization of the embeddings learned by prototypical networks on the Omniglot dataset. A subset of the Tengwar script is shown (an alphabet in the test set). Class prototypes are indicated in black. Several misclassiï¬ed characters are highlighted in red along with arrows pointing to the correct prototype.
Figure 4: Comparison of the effect of training "way" (number of classes per episode) for prototypical networks trained on miniImageNet. Each training episode contains 15 query points per class. Error bars indicate 95% confidence intervals as computed over 600 test episodes.
Table 5: Comparison of matching and prototypical networks on miniImageNet under cosine vs. Euclidean distance, 5-way vs. 20-way, and 1-shot vs. 5-shot. All experiments use a shared encoder for both support and query points with embedding dimension 1,600 (architecture and training details are provided in Section 3.2 of the main paper). Classiï¬cation accuracy is averaged over 600 randomly generated episodes from the test set and 95% conï¬dence intervals are shown.
| Model | Dist. | Shot | Query | Way | 5-way 1-shot | 5-way 5-shot |
|---|---|---|---|---|---|---|
| MATCHING NETS / PROTONETS | Cosine | 1 | 15 | 5 | 38.82 ± 0.69% | 44.54 ± 0.56% |
| MATCHING NETS / PROTONETS | Euclid. | 1 | 15 | 5 | 46.61 ± 0.78% | 59.84 ± 0.64% |
| MATCHING NETS / PROTONETS | Cosine | 1 | 15 | 20 | 43.63 ± 0.76% | 51.34 ± 0.64% |
| MATCHING NETS / PROTONETS | Euclid. | 1 | 15 | 20 | 49.17 ± 0.83% | 62.66 ± 0.71% |
| MATCHING NETS | Cosine | 5 | 15 | 5 | 46.43 ± 0.74% | 54.60 ± 0.62% |
| MATCHING NETS | Euclid. | 5 | 15 | 5 | 46.43 ± 0.78% | 60.97 ± 0.67% |
| MATCHING NETS | Cosine | 5 | 15 | 20 | 46.46 ± 0.79% | 55.77 ± 0.69% |
| MATCHING NETS | Euclid. | 5 | 15 | 20 | 47.99 ± 0.79% | 63.66 ± 0.68% |
| PROTONETS | Cosine | 5 | 15 | 5 | 42.48 ± 0.74% | 51.23 ± 0.63% |
| PROTONETS | Euclid. | 5 | 15 | 5 | 44.53 ± 0.76% | 65.77 ± 0.70% |
| PROTONETS | Cosine | 5 | 15 | 20 | 42.45 ± 0.73% | 51.48 ± 0.70% |
| PROTONETS | Euclid. | 5 | 15 | 20 | 43.57 ± 0.82% | 68.20 ± 0.66% |
Table 6: Effect of training "way" (number of classes per training episode) for prototypical networks with Euclidean distance on miniImageNet. The number of query points per class in training episodes was fixed at 15. Classification accuracy is averaged over 600 randomly generated episodes from the test set and 95% confidence intervals are shown.
| Model | Dist. | Shot | Query | Way | 5-way 1-shot | 5-way 5-shot |
|---|---|---|---|---|---|---|
| PROTONETS | Euclid. | 1 | 15 | 5 | 46.14 ± 0.77% | 61.36 ± 0.68% |
| PROTONETS | Euclid. | 1 | 15 | 10 | 48.27 ± 0.79% | 64.18 ± 0.68% |
| PROTONETS | Euclid. | 1 | 15 | 15 | 48.60 ± 0.76% | 64.62 ± 0.66% |
| PROTONETS | Euclid. | 1 | 15 | 20 | 48.57 ± 0.79% | 65.04 ± 0.69% |
| PROTONETS | Euclid. | 1 | 15 | 25 | 48.51 ± 0.83% | 64.63 ± 0.69% |
| PROTONETS | Euclid. | 1 | 15 | 30 | 49.42 ± 0.78% | 65.38 ± 0.68% |
| PROTONETS | Euclid. | 5 | 15 | 5 | 44.53 ± 0.76% | 65.77 ± 0.70% |
| PROTONETS | Euclid. | 5 | 15 | 10 | 45.09 ± 0.79% | 67.49 ± 0.70% |
| PROTONETS | Euclid. | 5 | 15 | 15 | 44.07 ± 0.80% | 68.03 ± 0.66% |
| PROTONETS | Euclid. | 5 | 15 | 20 | 43.57 ± 0.82% | 68.20 ± 0.66% |
| PROTONETS | Euclid. | 5 | 15 | 25 | 43.32 ± 0.79% | 67.66 ± 0.68% |
| PROTONETS | Euclid. | 5 | 15 | 30 | 41.38 ± 0.81% | 66.79 ± 0.66% |
| {
"id": "1605.05395"
} |